Sunday 24 September 2023

So-called AI represents a judgement on the degradation of post-millennial life

I recently watched yet another (this time video) presentation on the wonders and dangers of the "new" kinds of AI ("Artificial Intelligence") - and yet again felt that this phenomenon mostly reveals how degraded, and unhuman, our world has become over the past thirty years...

Degraded, bureaucratized, dehumanized, and de-skilled to such an extent that mediocre simulations can 'replace' people without very much loss of discernible quality...

Which just goes to show that when creativity and judgment are systemically excluded, personnel are systematically degraded, real human contact is displaced by the virtual, and quality is driven sufficiently far down - then computers can (almost undetectably) substitute-for the resulting output. 


13 comments:

WJT said...

At least people still recognize that there are some things only a real human being can do. I refer, of course, to the art of selecting all squares with motorcycles.

Bruce Charlton said...

@WJT - Actually, it seems I am not very good at that; despite (supposedly) being a human... In other words, it is the wrong kind of 'training' task altogether, based on a false understanding of how humans see. This is not a new problem. To my personal knowledge, computer learning (neural nets) has been working on the same wrong lines for at least 35 years.

Crick saw this a long while ago: https://apsc450computationalneuroscience.files.wordpress.com/2019/01/crick1989.pdf

But Crick's own suggestions about what to do instead haven't worked out either; I think mostly due to the death of science - as evident in the decades of high-status/ highly-rewarded drivel that masqueraded as Functional Brain Imaging.

https://iqpersonalitygenius.blogspot.com/2015/06/a-critique-of-functional-brain-imaging.html

Jeff Z said...

AI can certainly do that. The real reason it is there is for Google to get your IP address on third-party blogs. It doesn't stop AIs.

Bruce Charlton said...

@JZ - That is my point, expressed inversely.

"AI" can do that functionally-futile task, but cannot navigate an open and changing environment - which is something all seeing animals can do; often from an early age and without any training.

When you find something that AI is doing, as with this example of detecting motorcycle bits in still photos, you will probably be observing the Texas Sharpshooter mindset at work: find out "something", pretend that was exactly what you were looking for, then praise yourself for solving this important problem...

https://charltonteaching.blogspot.com/search?q=texas+sharpshooter

No Longer Reading said...

Not to mention, so-called AI is not some neutral technology that just inevitably emerged from bottom-up developments. I suspect that people think it is because that idea has been repeated again and again in books, movies, television shows, news media, etc.


But it's nothing of the sort: AI is an establishment project, reflecting positivist/materialist thinking and specifically designed to replace people and centralize control.

GunnerQ said...

I plan to snark a lot about how NPCs can't tell the difference between a chatbot and their roommate.

Me being a blogger, I know there's no way I, or any producer of original thought, could possibly be replaced by AI. Our blogs could not possibly be reproduced by any pastiche of other people's blogs.

The people's desire for oblivion is astonishing. They really do think there's nothing special about themselves. I am reminded of a Dilbert comic: "Originality is randomness. Otherwise, we would have figured out the algorithm by now." They cannot imagine the existence of something neither procedural nor random... which is the ultimate indictment of materialism.

Bruce Charlton said...

@Joel - You are right - a mixture of being entrained by computer usage and actual training (which is often about making a job auditable by managers) has made people increasingly think like (bad) computers.

A further element of this is that there is no priority given, in the first place, to employing those who do the job best.

Nor is there function-orientated management, which is prepared to pay what is necessary (typically somewhat more than the bare minimum to get a warm body) to get someone who can do a job sufficiently well to be effective at the proper tasks.

Bruce Charlton said...

@Gunner Q - Excellent comment!

"Our blogs could not possibly be reproduced by any pastiche of other peoples' blogs."

True - at least not the good posts that make blogs worth reading, but (and this is the thing!) perhaps the off-day posts could be simulated, so as to be not-significantly-different to an inattentive reader.

And there seem to be plenty of celebrity blogs (or more likely Twitter/X posts) that could easily be done indistinguishably by some kind of automatic algorithm - indeed, they probably have been for many years.

"The peoples' desire for oblivion is astonishing. "

I noticed this when I was writing the blog posts that later became Thought Prison - there is a covertly self-loathing, suicidal nihilism that underlies Western modernity

- which was described early and well by Eugene (later Father Seraphim) Rose -

https://www.oodegr.com/english/filosofia/nihilism_root_modern_age.htm

Michael Dyer said...

It’s funny: recently I had a conversation with a friend of mine, an academic in English, who suggested the idea that it’s analogous to the calculator. I expressed my horror at the suggestion - math is math, numbers and formulas working inside a system, but language can express the human soul. Outsourcing that to a computer is different in kind. I’m kind of surprised more people aren’t basically horrified by this, on an instinctual level - like building a machine to simulate your mother’s love so you don’t need your mother. I shouldn’t have to explain why that belongs in an episode of The Twilight Zone, not in real life.

Alexey said...

Actually, talking to someone who holds very tightly to their (reality-distorting) views, and tries to downplay or reinterpret everything in favor of their arguments without any sign of doubt, feels like talking to a pre-biased neural network. Except that the neural network doesn't try to refute everything you say, and can provide useful information.

Alexey said...

I came to the idea that neural networks are indeed very complex calculators too, which is why I don't fear that they will replace humans.

Alexeyprofi said...

Neural networks are not AI because they do not learn consciously. In order to mirror human intelligence, an artificial intelligence system must be able to mindfully analyse why it failed and then change itself so as not to make the same mistake again. Neural networks, by contrast, just do the same thing with slight differences over and over again until they get the best possible result.
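
As a rough illustration of that point, here is a minimal sketch in plain Python (hypothetical, not from any commenter): a single weight is nudged by the same mechanical rule, over and over, until the error is as small as it will get - with no analysis of why earlier guesses failed.

def train(weight=0.0, target=3.0, learning_rate=0.1, steps=100):
    # Repeat the same adjustment, with slight differences, each time.
    for _ in range(steps):
        error = weight - target          # how wrong the current guess is
        weight -= learning_rate * error  # small mechanical correction
    return weight                        # ends up close to the target

print(train())  # roughly 3.0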

WanderingGondola said...

IP addresses can easily be fetched without a CAPTCHA. Human verification was always the primary purpose, however much bots can do the task now. Google's version also helps train their image recognition and generation models, though.
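
(A minimal sketch of the IP point, using only the Python standard library - hypothetical, not anything Google actually runs: any web server already sees the client's IP address on every connection, with no CAPTCHA involved.)

from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoIP(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]  # the IP arrives with the TCP connection itself
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"Your IP is {ip}\n".encode())

HTTPServer(("", 8080), EchoIP).serve_forever()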

I don't know whether non-Goog CAPTCHAs are similarly multi-purposed for their creators. Were I to start a website, I'd try to avoid implementing the damn things at all.

FWIW, my stance on AI is complex. As a sort-of computer nerd, I recognise the technological value and human effort involved, while frowning upon the Big Tech that largely made it happen. At the same time, I certainly see and understand what Bruce has been discussing. Knowing that current "AI" (as the public knows it) isn't truly intelligent but an advanced tool working with given data, the intentions of developers and users seem the important thing. I personally have little problem eschewing the ChatGPT and image-gen that most are focused on; of the myriad other AI types, language translation, optical character recognition and speech-to-text conversion are a few to consider useful for the Good.