Edited from a sharply-insightful Comment by NLR at Francis Berger's blog:
There are plenty of reasons to disagree with this, but I want to discuss the assumptions associated with this view of creativity. The core point is that the assumptions underlying mechanistic creativity are false at a fundamental level.
The idea that the virtual world would become a world unto itself has been presented over and over again for decades, in both fiction and nonfiction.
And yet how can that be true when the virtual world takes everything it has from the real world?
What actually happened is that the virtual world has shrunk more and more over the years, and so-called AI is only accelerating this.
Since AI works by copying what's on the Internet, its products are a copy of a copy, creating a world of virtual virtuality.
7 comments:
I tend to think of AI, the high-tech kind, as just a glorified, upgraded version of conventional low-tech "artificial intelligence" in the form of fake research, plagiarism, and (your term) parroting.
A copy of a copy sums it up well. AI then is a copy of secondary thinking, making it . . . tertiary-level thinking?
@Frank: "tertiary-level thinking?" More like tertiary-level Not-thinking! After all, AI is just applying human-programmed algorithmic procedures (selection, combination, extrapolation, etc.) to human-selected and human-categorized databases.
Yes, but I mean the humans who use the third-level AI stuff. However, in the end, that is also not thinking.
A commenter in the IT industry summed it up thus:
"I would take this further by emphasizing that LLMs have no ability to falsify hypotheses. This is why a recent LLM-generated X-ray included three arm bones where only the ulna and radius exist.
LLMs don’t 'know' anything — they are not sentient and do not interact with the world. This makes them dangerous and untrustworthy. And what’s arguably worse, they have already started to cut humans off from the vital process of knowledge accrual and assimilation by providing us with false data that we cannot know is false unless we could review all of the inputs and processing the AI performs to arrive at its false conclusions.
It is a self-assured nonsense generating machine that gives just enough accurate information to make it dangerous."
The IT sector is extremely proud of its crypto and AI toys. And all I can think is, if that's your cutting-edge frontier, then computer science has gone as far as it can go. You can always build a marginally better mousetrap, but creatively you have reached an end and are headed down a funnel into esoterica, like being the ten-thousandth PhD in literary criticism to deconstruct The Iliad. As the old saying goes, people know more and more about less and less. I run out of banal clichés to describe the modern world; so tragic it's actually amusing.
@A-G - Well said!
I find this whole thing weird. And a no-brainer that AI is both fake, and dangerous when not recognized as fake.
And the push for it, all over the place... It's rather like normies are, enthusiastically, working on a plan to install autistic psychopaths to rule themselves, in every respect!
But it's of a piece with many other aspects of Western life, that depict a covert self-loathing and suicidal urge, personal and social; just below the surface but getting ever closer to awareness.
I believe you are right, but I see also another dimension. Even the simplest tool embodies a spirit, which is given energy by human use. There is no logical basis to suppose that an increase in complexity arrests this process; in fact, the opposite. The more complex the tool, and the more human attention it draws, the more amplified the effect. AI appears to me more like summoning Cthulhu. Not that we haven't been trying, and succeeding, to summon him already with the internet.
@Laeth - Agreed.
I also see that people have been trained, over several decades, to accept incomprehensibly complex and irrational statistical procedures as the "gold standard" of truth, against which individual human intelligence, experience, and values are judged as having literally zero validity.
This happened in medicine with respect to "evidence based medicine" embedded in an increasingly powerful managerial structure.
In science, it was more a matter of multiple layers of black-box "peer review"/ committee consensus that was ultimately driven by massive funding (and status attached to receiving massive funding).
Now, this is being generalized to "society" as a whole.
The unasked question is Who or What exactly stands behind the acceptable-mandatory "AI" algorithms: Who, at the very top of the hierarchy of functionaries, gives the nod to what gets implemented and enforced?
That's where "Cthulhu" comes in.
Yet, the thing about AI is that the plans will Not Actually Happen, because AI is in reality a system of chaotic destruction *disguised as* a system of totalitarian control.
The destructive chaos will supervene long before totalitarian control eventuates.