Edited from a sharply-insightful Comment by NLR at Francis Berger's blog:
There are plenty of reasons to disagree with this, but I want to discuss the assumptions associated with this view of creativity. The core point is that the assumptions of mechanistic creativity are false at a fundamental level.
The idea that the virtual world would become a world unto itself has been presented over and over again for decades, in both fiction and nonfiction.
And yet how can that be true when the virtual world takes everything it has from the real world?
What has actually happened is that the virtual world has shrunk more and more over the years, and so-called AI is only accelerating this.
Since AI works by copying what's on the Internet, its products are a copy of a copy, creating a world of virtual virtuality.
18 comments:
I tend to think of AI, the high-tech kind, as just a glorified, upgraded version of conventional low-tech "artificial intelligence" in the form of fake research, plagiarism, and (your term) parroting.
A copy of a copy sums it up well. AI then is a copy of secondary thinking, making it . . . tertiary-level thinking?
@Frank: "tertiary-level thinking?" More like tertiary-level Not-thinking! After all, AI applies human-programmed algorithmic procedures - selection, combination, extrapolation, etc. - to human-selected and human-categorized databases.
Yes, but I mean the humans who use the third-level AI stuff. However, in the end, that's also not thinking.
A commenter in the IT industry summed it up thus:
"I would take this further by emphasizing LLMs have no ability to falsify hypotheses. This is why a recent LLM generated x-ray included three arm bones where only the ulna and radius exist.
LLMs don’t 'know' anything — they are not sentient and do not interact with the world. This makes them dangerous and untrustworthy. And what’s arguably worse, they have already started to cut humans off from the vital process of knowledge accrual and assimilation by providing us with false data that we cannot know is false unless we could review all of the inputs and processing the AI performs to arrive at its false conclusions.
It is a self-assured nonsense generating machine that gives just enough accurate information to make it dangerous."
The IT sector is extremely proud of its crypto and AI toys. And all I can think is: if that's your cutting-edge frontier, then computer science has gone as far as it can go. You can always build a marginally better mousetrap, but creatively you have reached an end and are headed down a funnel into esoterica, like being the ten-thousandth PhD in literary criticism to deconstruct The Iliad. As the old saying goes, people know more and more about less and less. I run out of banal cliches to describe the modern world; so tragic it's actually amusing.
@A-G - Well said!
I find this whole thing weird. And a no-brainer that AI is both fake, and dangerous when not recognized as fake.
And the push for it, all over the place... It's rather like normies are, enthusiastically, working on a plan to install autistic psychopaths to rule themselves, in every respect!
But it's of a piece with many other aspects of Western life, that depict a covert self-loathing and suicidal urge, personal and social; just below the surface but getting ever closer to awareness.
i believe you are right. but i also see another dimension. even the simplest tool embodies a spirit, which is given energy by human use. there is no logical basis to suppose that an increase in complexity arrests this process. in fact, the opposite: the more complex, and the more human attention, the more amplified the effect. AI appears to me more like summoning cthulhu. not that we haven't already been trying, and succeeding, to summon him with the internet.
@Laeth - Agreed.
I also see that people have been trained, over several decades, to accept incomprehensibly complex and irrational statistical procedures as the "gold standard of truth" - against which individual human intelligence, experience, and values are judged as having literally zero validity.
This happened in medicine with respect to "evidence based medicine" embedded in an increasingly powerful managerial structure.
In science, it was more a matter of multiple layers of black-box "peer review"/ committee consensus that was ultimately driven by massive funding (and status attached to receiving massive funding).
Now, this is being generalized to "society" as a whole.
The unasked question is Who or What exactly stands behind the acceptable-and-mandatory "AI" algorithms - Who, at the very top of the hierarchy of functionaries, gives the nod to what gets implemented and enforced?
That's where "Cthulhu" comes in.
Yet, the thing about AI is that the plans will Not Actually Happen; because AI is in reality a system of chaotic destruction *disguised as* a system of totalitarian control.
The destructive chaos will supervene long before totalitarian control eventuates.
as i see it, and in short, ahriman built it, but sorath rules it
There are two main sources of the huge push for AI just now, I think.
- The Totalitarians, as you said.
- Bureaucrats - the system is infected with managerialism, which says that if you are a good manager, then you don't have to understand the workings of what you're managing. A corollary is that your biggest problems are labor & the expense thereof, so anything that cuts labor is good. AI is the greatest labor-cutter since offshoring, so it will be implemented wherever they can.
They will have to find out the hard way.
@Phil - I agree; but I regard the bureaucrats as just a subset of the totalitarians.
A further aspect of bureaucratic managerialism is that - so long as you adhere to currently accepted managerial "principles" (i.e. fashions) - you are not to blame for adverse consequences. Anyway, if you are "successful" as a manager, you will have moved onward and upward by the time the consequences are apparent.
(Managers get appointed/ promoted on the basis of their track record of having implemented organizational change - Not on the basis of performing a useful function.)
Had the most bizarre experience of my career recently. An email arrived from someone who was hired to review and summarize a model.
“Can you review this? I still don’t understand the model, but I put it into AI and it gave me this. Does this make sense?”
This is lower-tier cheating than even copying someone else’s homework. It’s more like just stealing it and turning it in as your own!
@Mia - Stealing someone else's homework - and you don't even know who you stole it from!
Related to this, I've noticed that when immersive virtual worlds are depicted in fiction, no one goes into a virtual world to go into a virtual world. No one goes into a virtual world to log onto social media for instance. It's usually that someone goes into a virtual world to do things different from typical modern life.
Likewise, people don't like interacting with bots on social media. If you're going to take part in a virtual society, you don't want to do virtual socializing within a virtual society.
People do not want to experience or depict situations that are clearly doubly virtual. I think this shows that, subconsciously, people know that you lose something when you move from the real world into a virtual world. It's only when the doubly virtual nature is hidden that people accept it.
@NLR - It seems like that to me, although I don't participate. It seems that the designers of what is currently called AI have decided to engineer something specifically designed to pass the Turing Test; including by creating such a stripped-down and dehumanized environment, and such a trained human user, that it becomes almost impossible to detect human nature in the way that we do in the real world. So, interactions are decontextualized, brief, unimodal, unnatural in form (e.g. written text, 2D imagery on screen) and so forth; and users are in the habit of interacting with uninterested strangers whose responses follow flow-charts.
I recently had a thought on the Mars colonisation thing, and I came to the conclusion that neural networks can be used for tasks such as landscape analysis. While "AI" lacks logical understanding of what it's doing, it can do basic routine stuff that doesn't require thinking.
@Ap - Well, people have been saying this for the past 35 years to my certain knowledge; and yet navigation through an open landscape by means of computer visuals is still extremely unreliable.
It isn't a matter of computing power, or algorithms. The basic model by which machines "see" and interpret is qualitatively different from the way people (and other broadly similar animals) navigate.
I suspect that this problem will be "solved" by the usual modern means of redefining failure as success - e.g. changing (grossly simplifying) the operating environment to enable the vehicle to work, or ignoring errors and disasters by dishonestly attributing them to other causes.
If we talk about colonising other planets, sending humans to them is a 100% death guarantee. So the colonisation must, at least in its first stages, be done and tested by robots. They cannot be controlled directly by operators because of the huge ping, so they have to have some kind of autonomy and AI - i.e. operational freedom in the context of a given task. By AI I mean not only neural networks; I have other ideas on how to make a machine "think", and this can be combined with algorithmic presets as in regular programming. While I don't believe that a computer mirrors or equals human intelligence, it's still an instrument that has to be used, not much different from a hammer.
@Ap - I don't think you have grasped what I have been writing about the "AI" impositions of the past couple of years especially - its deep motivations, why it is suddenly "everywhere" etc. This kind of stuff about potential tool-like uses is a distraction - and deliberately so.