Monday 23 September 2024

"What have you got against AI?"

"AI" (Artificial Intelligence) has done a lot of harm by making it seem "necessary" for us to defend not interacting personally with AI entities and AI products! 


We need to notice and think about this. It reveals that an open-ended acceptance of "AI" into human society is already the default.

We got here step-by-step over many decades, and centuries; and indeed all online communities (such as you, reading this) are already many steps along the way -- but general, consensus, default AI-acceptance brings us all the way.

We Have Arrived.


What this means is that there is here-and-now a consensus (which may itself be mostly artificial) that it already is (and indeed should be) fine and normal for individual human beings to interact (inevitably personally) indiscriminately with some unknown and unknowable mixture of other humans, AI entities, and AI products.

The key concept is "interact". 


From my perspective, this implicitly amounts to a denial of the ultimate reality and existential significance of this mortal life. 

Which is, of course, a great victory for the powers of purposive evil. 

And such a situation is not some future possible dystopian bogeyman state that may happen "if we are not careful" - for many, many people this is how things already are. 


For some of these people (and I suspect this has always been the case), I think there never has been any genuine inner distinction between interacting with other real beings, and with simulated-Beings (with projections, for instance) - because such people are ultimately solipsistic: they only really "believe in" themselves. 

But there seem to be more and more such people; and they are now numerous and strong enough to demand that anyone who does not want to become one of them must justify "why not".

When you are in the situation of trying to convince others why you will not go along with their plans for you - you have already and decisively lost the rhetorical war; the battle of Public Relations. 


...At which point you either give-up and join the downslide to damnation; or else effectively fight-back: maybe materially, certainly spiritually.

  

12 comments:

Laeth said...

i wrote this almost a year ago when image generators started to become common, and as far as i can see, the situation has only become worse (as i of course expected it to):

'the saddest and scariest part of all this is that clearly many have lost all instinct and feel for the 'uncanny valley' – for what these machines produce is often eerie, off in a way that is hard to describe. One gets a visceral feeling, bypassing rationality, a blood memory or an ingrained intuition, when one encounters its products. Or at least, some do, ourselves included. The fact that many do not, and are happy to ingratiate themselves to these machines and the beings behind them, seems to us a great and terrible sign of the times. '

Bruce Charlton said...

@Laeth "clearly many have lost all instinct and feel for the 'uncanny valley' – for what these machines produce is often eerie, off in a way that is hard to describe. "

Yes, that's exactly how it seems to me. So many years of frequent (and, during the birdemic, almost exclusive) online interactions - increasingly many of which are computer-generated/ edited - have numbed our ability to discriminate language.

Then the "AI Art" we have been deluged with in the past couple of years ,is doing the same about visual material, about which people seem more naturally discriminating. Quite a few supposedly anti-Left and/or self-identified Christian bloggers and essayists are using these regularly.

Attitude towards AI is another Litmus Test, exactly on the basis of whether people recognize the demonic strategy behind it - or not. Clearly most do not acknowledge the agenda, or don't care because they have implicitly internalized it already.

Crosbie said...

The 'technological singularity' concept betrays a desire, *glee* almost, for self-annihilation.

William Arthurs said...

'Artificial intelligence', like 'mental health', is a shoddy metaphor -- a cliched phrase that comes weighted with mostly-unexamined assumptions. I don't often meet people who understand what I mean when I say this.

Bruce Charlton said...

@WA - Oh yes indeed. It assumes the answer to all the most fundamental questions. I regard intelligence as an attribute of "a being" - not a thing that can be detached or made.

Francis Berger said...

I have lost count of the people in my life who have gleefully informed me about how AI is going to change our lives, revolutionize the world, create great art, music, and novels, make life more efficient, make everyone healthier, saner, and happier - and that I had no option but to get with the program.

No option? Really?

I beg to differ.

NLR said...

More generally than just AI, there are multiple false assumptions about technology and humanity that have become prevalent, and which impede real discussion.

Such as: that technology is inherently good, or at least neutral. That is not empirical; it is a metaphysical belief. In fact, there is plenty of empirical evidence against it (if it were allowed to count as evidence).

Or, that technological changes are inevitable, rather than a product of the beliefs and goals of the human beings who invent technologies.

Or the idea that technology must *always* act in the manner of creative destruction, never destructive destruction, so that even if developments ruin or destroy good things, we will *always* be able to find other equal things to replace them, if only we could just see how.

Or that human labor has no value in itself, so if the niches in which good work could exist are destroyed, then it wasn't any big loss.

Or that, even if there are drawbacks to any technology now, it will always be possible for all those drawbacks to be fixed eventually.

There are reasons to disbelieve all of these; but much of the time they are not even acknowledged as assumptions.

Andrew said...

I’ve been using “language models” and noticed that a great many Reddit comments are now, quite obviously, generated by early/simplistic ones (such as can be downloaded and easily run on a home computer or with a cheap graphics card). Many people reply as if they’re real. This will become exponentially worse very rapidly.

Laeth said...

"Quite a few supposedly anti-Left and/or self-identified Christian bloggers and essayists are using these regularly."

i've noticed this and, especially the christian ones, i do not understand the phenomenon, at all. i understand the pull of politics, for example, even though i don't agree with it and think it's a big mistake. but the gleeful use of 'ai' by these trad types just seems so absurd on so many levels. even i wasn't pessimistic enough to imagine this type of capitulation.

Andrew said...

Another observation on what seems the obvious long-term direction.

These models are trained on massive amounts of data to give realistic-sounding and knowledgeable-sounding replies.

The currently trained models, whether accessed through other organizations or downloaded, are already heavily censored - mostly after the training data is processed.

In the near future this can become quite circular: by censoring the training data at its source, the "misinformation" effectively never exists.

All consensus could quite easily be formed or forced by simply applying active automated censoring (e.g. to comments on Blogger), with the results reported back to the authorities.

This will be applied to all social media in a much more effective way than current censoring.

The push is that we will all use AI, including at the professional level - for medical care, news, education, everything. Any flip-flops and reversals could be implemented universally and instantly, to support whatever "the current thing" is.

Crosbie said...

@Andrew -

> Many people reply as if they’re real.

It turns out the Turing test *does* test for the presence of real intelligence. But not on the side of the screen envisaged by Turing.

Bruce Charlton said...

@Crosbie - The Turing Test is so absurd that the fact it was taken seriously is an indictment! It takes the actual (multi-faceted, through time) relationships of living-Beings - and then excludes everything about the situation except the abstraction of verbal interaction.

Owen Barfield pointed out that there are many things that can be distinguished with validity and usefully - but not divided; and intelligence is one of them.

Thus we can distinguish intelligence as a more or less useful abstraction from the attributes of a Being (of which there are many other attributes such as purpose, life, growth, self-sustaining...) - but intelligence cannot be divided or detached from that Being, nor from any number of Beings.

This becomes clear when one tries to define and analyse human intelligence. Intelligence can be measured and distinguished in a relativistic way; but intelligence cannot actually be divided from other attributes of human behaviour - such as perception, movement, personality, or types of permanent or temporary brain function/ impairment (all of which influence measures of intelligence).