Friday, 11 April 2025

The "AI" Litmus Test Fail: Is it caused by a kind of Stockholm Syndrome? Projection? Managerialism?

I am intrigued by the Litmus Test failure of so many self-identified Christians to discern the grossly net-evil intent driving the current (post November 2022) wave of so-called AI. I want to understand it. 

"It's just a tool" - they say. Yes, and so are all the instruments of mass surveillance and population control "just tools"; from secret police and death squads, to smear campaigns, covert propaganda, and infiltration/ subversion. 

Of course, there are always potentially useful aspects to any tool, bureaucracy, technology - but so what? You can use thumbscrews to hold the door open, or a cosh as a paperweight. You need to ask - what are they designed for, and how will they be used?


The real questions at issue are things like: why did the global Establishment spend trillions of dollars on developing, launching, and implementing these "AI" technologies; what results are they intending to achieve from their investment; and what functions will they actually be deployed for, in the world as it actually is? 

Clearly, it is very important to some people with a lot of power and money that these AI technologies be adopted and used very widely - whether they work well or not, whether we want them or not.

Then, from our spiritual and Christian point of view we need to ask - honestly, and by learning from past experience - what will be the overall effect of mass usage of AI-technologies: what will they (on average) do to the way people think, Western society and its institutions, our attitude to the world, our aspirations?

Does the spread of AI technologies lead towards a more spiritual and creative, personal, loving and Christian perspective - or towards ever-more this-worldly manipulative materialism? 

To ask is to answer. 

 

The only honest conclusion from such questions regarding the evil provenance of AI, the fundamentally untruthful propaganda surrounding its emergence and spread (e.g. the word-concept "intelligence"!), and the coercive and totalitarian implementation - is that this kind of AI is a massive strategy designed to do harm of many kinds.

Whether you personally believe you are exempted from the general harm, and that you can surf this wave of evil to your own advantage - well, this is another matter altogether.

But even if you can, and even if you actually do make the best of a societal state of waxing corruption (gaining more money, prestige, or power for yourself, perhaps?) - this does not excuse you when you argue in favour of what should be recognized as a malign plan.


Otherwise you are no better than a stereotypical war profiteer; one who uses his influence (false information, bribery, blackmail etc.) to cause, expand, and continue destructive wars - so that he personally can do well out of them. That some war is good for you here-and-now does not mean that war is good in itself - and you ought not to believe or say that it is. 


We cannot defend ourselves against evil unless we recognize evil. Apologists for "AI" are harming not just themselves but also others, by failing to acknowledge an obvious and major demonic scheme. 

Why, then, do they do this? I think the reasons are psychological - not spiritual. 

There is, I think, a kind of Stockholm Syndrome at work. 


I discern that some of the most vocal advocates of AI themselves actually fear AI; AI makes them afraid - and they respond by trying to make friends with AI - they take the side of AI, defend it against its critics. 

I think this befriending, like the Stockholm Syndrome it so much resembles, is fear induced - a response to a threat they perceive to be potentially deadly, and inescapable. 

(You cannot beat 'em, so you might as well join Them.)  


One reason I think pro-AI advocacy is often fear-induced is that such people project their fear onto others - inappropriately, wrongly. They taunt that those who do not embrace AI are afraid of AI!

But this is patent nonsense in general and specifically. Fear of AI is far from normal - which is why so much propaganda must be expended on trying to generate fear (via innumerable mass media fictions about evil AIs, and AI dystopias).

In real life the overwhelming response to AI is on a spectrum from moderate irritation and boredom, to mainstream everyday careerist attempts at exploitation of "the latest trendy thing". 


Therefore such an accusation of fear is a dead giveaway, a projection onto others of something within oneself - often emanating from those who really feel, personally, threatened by replacement or oppression by AI technologies. 

For instance, those who hope (like generations before them) to escape this fate by vaulting-over the threat into a managerial situation: to position themselves as expert and enthusiastic "AI managers" in their particular field. 

To adopt an accusatory or "therapeutic" stance towards those who see the evil motivation behind AI is classic managerialism! 


The managerial way of dealing with dissent is to reframe the real-world (and spiritual) problem as a matter of emotions...

The problem is not the global Establishment-driven mandatory AI take-over;  the "real" problem is those who criticize or resist AI, or who decline to engage with it. "They must have something wrong with them". 

AI-resisters are assumed to be ignorant, weak, or frightened - and the managerial answer is they need to be educated, soothed, or mocked and shamed - until they fall in line, and do what is good for them.


If you regard yourself as a Christian, and are currently an advocate of AI - you are Missing The Obvious; and it is time to take a step back. It's never too late to repent, and all spiritual learning from experience is a positive gain. It's what we are here for, after all.     

 


23 comments:

Michael Coulin said...

Marshall McLuhan put this mindless 'technology is neutral/just a tool' argument to bed 60+ years ago, yet very few people seem to have got the memo - including well-known 'Christian' blogger VD - who has become an ardent ambassador for it.

Bruce Charlton said...

@Michael C - Agreed. There is no such thing as "neutrality" with respect to value - https://charltonteaching.blogspot.com/search?q=neutrality

McLuhan was actually rather difficult to unravel on these matters - personally he was a devout Catholic, and much concerned by values - and that was how he saw his own work. But he often seems to become fascinated by phenomena in a cold, analytic sort of way. Plus, after his brain tumor and surgery, he was never the same again - it's the early work that I have found to be valuable.

William Wildblood said...

Nothing is 'just a tool'. The tools you use affect your state of mind, sometimes dramatically so. AI being materialistic to the nth degree will alter and desensitise the minds of those who use it.

Derek Ramsey said...

You've already written on this topic on many occasions. Here you sort-of admit that AI is a tool, but then disregard it. I admit to being perplexed.

For probably a decade, I've said that AI is just a tool, but that it will be largely harnessed for evil purposes. I said that when China rolled out its AI-driven, facial-recognition mass surveillance program. But as evil as all of this is, AI itself is just a tool. Evil is in the domain of mankind. Men are evil, tools are just tools, even if 95% of the tool use is evil. If not AI, then some other tool would be used. The tool doesn't matter except in the scale to which it applies.

I'm an Anabaptist. I shun all violence in favor of nonresistance. I will never own a weapon. Yet, even I do not believe that guns are anything more than a tool, despite the mass damage that they do. For it is the people who wield them, not the tools themselves, that are evil.

I understand that you take a different stance, but what I genuinely can't see is why. I don't understand why this isn't an argument for something like extreme asceticism and avoidance of all technology (including the internet). Or, to put it another way, why have you not joined the Amish?

If you've explained it, it's gone over my head.

Bruce Charlton said...

@Derek - I'm puzzled by your puzzlement! - I think it is because you seem to believe that "a tool" can be *just* "a tool" - and I cannot imagine what valid analytical work that "just" is doing in that phrase.

I think you are perhaps captured by the categories you are employing to discuss this - and probably you have absorbed this analytic - unconsciously - from mainstream propaganda. As we agree, in such matters, assumptions are almost-everything.

Nothing is "just" anything, but especially not when that something is a vast top-down global strategy. And when the global totalitarians who devise and deploy these vast strategies are (perhaps we would agree) on the side of evil, ultimately in service to the agenda of Satan.

You could as well say "just" a war, or "just" a program of extermination, or "just" brainwashing people to despise God and divine creation.

Your distinction seems bizarre and obtuse to me. With AI we are not talking about a thing that might conceivably be left harmlessly lying around in the middle of a desert. We are instead talking about an enormous, very-expensive, and coordinated program - involving many millions of people.

With AI we are talking about something dynamic and purposive, not a static and inert object.

In other words AI is exactly the sort of value-driven, value-laden strategy that Christians must evaluate and discern as a motivated strategy. To assert neutrality about AI is to pretend it is possible to be neutral about the spiritual war of this world.

Ron Tomlinson said...

If I understand you correctly, Bruce, you would say that self-driving cars may be a convenience, but that they will inevitably be used to track us and control us, and that the spiritually corrupting aspect is that *we will therefore accept this as normal*.

Or a radiologist who relies on a Deep Learning Model for the speedy analysis of medical images to detect cancer cells is going to lose competence as a result? Perhaps falling into similar errors as Evidence-based Medicine?

Bruce Charlton said...

@William - "alter and desensitise the minds of those who use it."

I believe I have seen this happen. Embracing AI seems to act like an "entry drug" to the hard-stuff of this-worldly materialism.

Bruce Charlton said...

@Ron - Much more than what you say.

We need to recognize these are elements in a massive, global agenda - and we know where (ultimately) that agenda comes from, and what it aims at.

As I keep saying (!), unless we allow ourselves to be confused by this-worldly expediencies, false assumptions, or psychological reactions - this fact ought to be very obvious to Christians.

Derek Ramsey said...

I remain deeply puzzled. Are you not making a simple category error?

The things you mention—war, programs of extermination, and brainwashing—are not tools. They are not even things. Rather, they are actions that people do: fight, exterminate, lie. They are not like tools. I can hand you a hammer, but I can't hand you war, extermination, or brainwashing. I absolutely agree with you that the different human deeds you have listed are fundamentally evil. Similarly, it is impossible to talk about good adultery, good dishonesty, etc.

But we are talking about things that are "just" tools, like self-driving cars, video cameras, hammers, or guns. They are not in the category of sinful deeds; they are things. AI is not a strategy, it is a piece of software and data.

There is something that cannot be put to any good use: pornography. Is AI like a video camera that can produce both good and bad results depending on its context? Or is it like pornography, where it has no valid use at all, where even its existence is inherently evil?

I would argue the former. AI can produce objectively good results, even if it more typically does not. I'm not arguing that it is neutral! I merely fail to see how the motivations of the creators of AI necessarily and inherently apply to the product itself.

You talk about AI as an expensive, coordinated, enormous program involving a motivated strategy, but so is the internet. Why do you have a blog at all? It has been obvious to me that the internet has altered and desensitized the minds of those who have used it. The same arguments against AI apply here. But here we are.

Bruce Charlton said...

@Derek - All I can say is think about what I'm saying. It is not a difficult argument in the slightest degree!

I agree that the internet is evil, in the same sense as AI; and I have often said so:

https://charltonteaching.blogspot.com/search?q=internet

Of course nobody (at least in The West) can opt-out of the evil that is endemic and pervasive in this most evil of times. Jesus knew it was impossible to cease from sinning, which is why he did not ask the impossible; He came to save *sinners*.

But that must not stop us from recognizing and repenting the evil that we must do - every hour of every day. What is vital is that we are aware of these matters. If we aren't aware of huge long term pervasive evils, if we don't acknowledge their evil - then how can we avoid getting drawn into supporting the wrong side in the spiritual war?

(That's what this mortal living is about, after all.)

Jesus made it very simple for us, and possible for everyone. Repentance is simple but, apparently, difficult - because people are so very reluctant to do it - even when repentance is (apparently) so easy and quick - and no power on earth can prevent us!

Derek Ramsey said...

But I don't disagree with any of that! Therefore, I conclude that what you are saying must have some orthogonality with what I'm saying, we're just unable (at this point) to communicate that fact.

As I've said before to you, I apparently fail this Litmus test, but perhaps not in actuality.

Perhaps you should avoid calling AI a "program" because that makes it seem like you are talking about a program as merely "a piece of software" or "website" (i.e. just a tool) instead of a set of coordinated plans and activities. Perhaps refer to it as "AI-ism" (or something more clever) that more clearly implies and includes the motivations behind it.

In any case, I'll keep reading what you have to say.

Bruce Charlton said...

@Derek - It strikes me that the problem may be in relation to what I regard as evil. I regard Good as the "side" of those aligned with God and divine creation, and evil as the "side" comprising those who are opposed to God and divine creation.

It's all about which side you are on - not about your innate qualities of good or evil.

So many nice, kind, etc. people are against God, and some nasty people are on God's side.

In other words, the situation is analogous to a war - the spiritual war of this world.

I also don't believe that there are any "things" - that everything in creation is ultimately a Being (alive, purposive, conscious etc) or part of a Being; and Beings are on one side or another in the spiritual war (at any given time).

Those on the side of God are good, those against the side of God are evil. It's the choice of side that matters, not the quality of the Being.

Hagel said...

What's a better name for AI than artificial intelligence?
Amalgamated interslop?
Please reply with suggestions.

Francis Berger said...

Failing the AI Litmus test (or any LT) is one thing. Doubling down on that failure by rationalizing and justifying it is another thing entirely. I only say this because I see a lot of doubling down on AI these days.

No Longer Reading said...

There's an idea that you need to have written the Summa Technologica to criticize technology. But it's a lot more basic than that. If it has bad effects or it's based on a bad ideology then it's bad. Techno-totalitarian mass surveillance has obviously had bad effects. There's no need to debate whether "the true mass surveillance has never been tried". It's also based on a bad ideology. So, it's bad.

Also, there's a mixing up of two separate things. On the one hand there's the fact that the universe is set up such that a particular technology can be invented. On the other hand is the technology itself.

Nature works how it works, but then humans choose what to investigate and what to try to invent within the bounds set by nature. As for whether the universe being set up in a particular way is neutral - well, no living human being knows that, so it's a moot point.

But technology is a human endeavor. If religion doesn't get a free pass that it's always good or at least neutral, why are we supposed to give such a free pass to technology, which is every bit as human as religion?

Wm Jas Tychonievich said...

I had some fun with "AI" when it was just getting started. It stopped being fun as soon as people started taking it seriously. That so many people *are* taking it seriously boggles the mind. I haven't been this embarrassed on behalf of my species since the birdemic hysteria.

Arguing the point with people who don't immediately and intuitively get it unfortunately seems to be a waste of time. Let the dead bury their dead.

Frank, I agree that the doubling down is particularly painful to watch. VD with AI is like Scott Adams with the peck. He's making a laughingstock of himself, and I don't see how he ever recovers from it.

Bruce Charlton said...

@NLR - What an excellent comment!

That expresses very well how I also understand such things.

Bruce Charlton said...

@Frank and Wm - re: "doubling down" - I suppose this is what happens when you continue down the path.

Having failed the initial - very easy, very obvious - discernment of evil (people who don't immediately and intuitively get it), people get further and further enmeshed in false perspectives and irrelevant notions.

This is probably a very general phenomenon in our society. It's why I keep harping on about "Litmus Tests" - because it seems that just one LT Fail is enough to set people on that path to the dark side.

It is (apparently) so easy to stop and turn back - but people don't do it - often enough (it seems) simply because they do not want to admit they made a mistake, from fear of "losing face" or something.

Joseph Ellsworth said...

@Derek

This may be more intuitively obvious for those with a background in the arts, where the formal elements (how something is expressed, the technique) are seen as the most essential, perhaps even taking precedence over content (what is being expressed)—so essential that ultimately such a distinction between form and content breaks down.

We cannot consider “AI” as a tool in the sense that a hammer is a tool; it has definite structural/technical commitments (in the manner in which the computation occurs) which are by no means natural or basic in the way one can consider a hammer or a wheel to be basic and natural. And then “AI” purports to be modeled after the human mind--so its creation is based on, and its usage and success must propagate, certain false assumptions about the nature of mind and thought; it confirms the sinister bias towards abstraction, digitization, and quantification, which undermine spirituality more than anything these days.

McLuhan attributed his insights to his study of modernist and symbolist art, saying of Wyndham Lewis (and I think this quote explains well what is going on with “AI” and its evil effects):

“It was Lewis who put me on to all this study of the environment as an educational—as a teaching machine. To use our more recent terminology, Lewis was the person who showed me that the manmade environment was a teaching machine—a programmed teaching machine. Earlier, you see, the Symbolists had discovered that the work of art is a programmed teaching machine. It’s a mechanism for shaping sensibility. Well, Lewis simply extended this private art activity into the corporate activity of the whole society in making environments that basically were artifacts or works of art and that acted as teaching machines upon the whole population.”

Bruce Charlton said...

@Joseph Ellsworth - Reading that McLuhan quote makes me want to go back and re-read - https://charltonteaching.blogspot.com/search?q=mcluhan - my favourite is Gutenberg Galaxy.

Dennis Heretz said...

I’ve been in software for 35 years and have never seen anything like this AI project. We developers are under tremendous pressure to use the AI tools to “assist” in our coding. I have refused and expect to lose my job, possibly my livelihood, as a result.

We are told to program by feeding the AI ordinary words and hoping it responds with code that does what we want. My colleagues report that 80% of the output of the AI is rejected by them—it doesn’t compile, has errors, or just doesn’t work. The remaining 20% is possibly useful, or at least compiles, but no one seems to take much time to analyze the output to confirm it is correct.

What sort of tool is this that fails 80% of the time? Imagine a hammer that 1 stroke out of 5 sort of hits the nail, but the rest of the time the head flies off to the side, or turns to jello. The experts insist the quality will only get better, but offer no indication of how this will occur, beyond their own hopes.

A “computer scientist” has coined the term “vibe coding” to describe this process of “conversing” with the AI to coax it to produce useful output. Within a few weeks of his posting this term on X there was a wiki page for it, which suggests it is being pushed from the top. It reminds me of other grotesque attempts to reframe highly negative concepts as breezy and cool (e.g. “gig economy”).

As best I can tell, the purpose of this project is to lessen our humanity, by attacking one of our key gifts: our reason. We are expected to discard our training, experience, and rationality, and hand the task of creativity over to a gibbering idiot, hoping for something useful. It’s sad to see many people going down this path, but I am also hopeful in that the uptake of these tools is lower than expected.

Bruce Charlton said...

@DH - Thanks for that insider view.

I don't think many people realize how "normal" it is for destructive, functionally harmful, anti-human ideas/ strategies to be pushed-through the Western bureaucracies.

Emanating from the top level (which is at a multi-national level) and very quickly (as quickly as a few weeks) pressure-permeating down to the "Capillary" level of almost every person who works in any part of the system.

People cannot grasp that by 2025 (after several decades of increase) the normal managerial mentality is absolutely unconcerned with whether these strategies Really have any benefit, or whether they actually do serious harm. The managers' *core* job is to implement these strategies.

All systemic discussion and debate excludes, blocks, trivializes ("just your opinion") and punishes even the acknowledgment of such matters.

From the POV of The System, this global Giga-policy of AI is indeed "just another" in a sequence of incremental steps of functional destruction and human exclusion/ enslavement.

That's why AI is a significant Litmus Test. "But *I* am having fun with AI technologies!" may of course be true, but that does not refute the evil nature and purpose of the global AI-strategy.



Derek Ramsey said...

@Joseph Ellsworth

Thank you, this is a good explanation.

@Dennis Heretz

This has not been my experience.