I have previously described what I regard as the major Litmus Test issues for discerning which side a person has chosen in the spiritual war of the world (i.e. the sides for- and against-divine creation) - things like climate change, feminism, the birdemic/ peck, antiracism, and hostility to the autonomous existence of the Fire Nation.
I wonder whether a positive attitude towards AI (Artificial Intelligence) is another of these Litmus Tests? I am inclined to think so.
Others might suggest that AI is merely neutral in its values, and what matters is the specific application - and, to some extent this is bound to be true: it is trivially correct that there are always exceptions to even the most valid of generalizations.
There is also confusion about "what AI means" - because the term has been around for many decades; and some usages that were commoner in the past probably discriminate less well between spiritual attitudes.
But, like all the Litmus Tests: What matters in 2023 is how that term is used in 2023.
And the biggest evidence concerning the spiritual valence of any term in 2023, is the agenda to which the term is attached - and who is pushing that agenda.
Thus, while some theoretical usages of "racism" are indeed sinful for Christians; in actual 2023 practical usage - "racism" is not any kind of a sin.
By this criterion it is completely obvious that - whatever theories and exceptions we might imagine or manufacture - actually-happening AI is attached to an agenda that is being pushed by the global totalitarians.
And that is all we really need to know.
Because even if we personally have theories about potential benefits of AI, and even if we cannot understand or guess the nature of the harm that AI will actually be used to do -- we can nonetheless be sure that AI as it actually gets implemented, will be used on the side of Satan and against God.
AI is thought without consciousness and moral understanding. It's basically a form of black magic.
If it is a litmus test, it's going to be a litmus test for the people that are against it, because those are going to be the same people that are aligned with all the other litmus tests. They're the ones that have watched Terminator too many times and think AI is the end of the world. We know that's not true; we know that cannot be true. Accelerating this just brings us closer to where we as Christians want to be. But also it's okay to be neutral on this. Though I did enjoy the 5-day series of Game of Thrones, for the most part I don't care; but I'm definitely not on the side of the safetyists, because I know, like climate change, like covid, there's nothing to fear from this.
@whitney - I'm afraid your comment is ambiguous - despite my reading it twice. Of course AI is technologically just an incremental extension of mainstream electrical, electronic and computer technologies. But that does not mean AI is trivial, because these technologies require much more serious understanding than they have so far received. I recommend the work of Jeremy Naydler, who has a deep and scholarly appreciation of this whole area: https://charltonteaching.blogspot.com/search?q=jeremy+naydler
I think it is clear that AI is the latest (and most extreme so far) attempt to induce/ require humans to think as machines; which (when achieved) makes people unable to escape materialism; and unable to apprehend the spiritual or comprehend the divine.
In other words, AI is part of the Ahrimanic Agenda - and a core aspect of Agenda 2030/ Great Reset/ global totalitarian Establishment plans.
But, like those plans generally, AI will probably be sabotaged by the ascendancy of more purely destructive Sorathic evil.
If you're against AI you're on the wrong side of the litmus test. Unless the power goes out, it's happening. And the people that are fighting against it, to slow it down by acting like their fancy chatbot is a nuke, are actually just fighting so it stays in their control, because they believe they know best for all of us plebs; we would just muck it all up. People saying let her rip are fighting for it to be open source so everyone can share it.
By its very name, AI, which is a lie, we should realize it is one of those litmus test issues. There is no more intelligence in AI than there is in an abacus, the simplest of computers. Regardless of what we think of the pros and cons of any particular AI model (computer program), we must also consider the inputs. For the AI models, the entire scope of knowledge is the internet. The system, therefore, controls not only the model, but also the inputs to the model through all of the content restrictions, fact checkers, privacy rules, and such. It is a bureaucrat's dream.
I believe the obvious implementation of AI will be all-pervasive as a monitoring, censoring, redirecting and bureaucratic mechanism.
For example - and I could quite easily implement it at this moment - a simple Python script could routinely monitor this website and every blog on a feed aggregator like Synlogos for "bad thought" and reply with redirection towards official-approved thought; and it would be extremely competent at this. I could also track common commenters, try to guess at their intention, flag especially dangerous content, etc.
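Something along these lines would be enough (a minimal sketch only; the feed URL and flagged terms below are hypothetical stand-ins, and a real system would presumably use an LLM classifier rather than crude keyword matching):

```python
# Minimal sketch of a feed-monitoring script of the kind described above.
# The feed URL and keyword list are hypothetical placeholders.
import time
import feedparser  # third-party: pip install feedparser

FEED_URL = "https://example-aggregator.invalid/feed.xml"  # hypothetical
FLAGGED_TERMS = ["bad thought", "wrongthink"]             # hypothetical

seen_links = set()

def scan_once():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        link = entry.get("link", "")
        if not link or link in seen_links:
            continue
        seen_links.add(link)
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(term in text for term in FLAGGED_TERMS):
            # In the scenario described, this is where an automated
            # "redirection" reply or a report would be generated.
            print(f"FLAGGED: {link}")

if __name__ == "__main__":
    while True:
        scan_once()
        time.sleep(3600)  # check hourly
```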
This isn't guess-work, but my experience working daily with it to do tasks like generating thousands of articles, combined with other tasks that have been easy to implement for years, like regularly updating websites, downloading and aggregating content, etc. In the current iteration it's extremely "good" at reproducing the nonsense you get from government mouthpieces and fact-checking websites, while being technically competent at tasks like reproducing and combining common programming tasks. It can infinitely argue in circles and avoid addressing logical points. It can simulate vast masses of state-approved opinions and anything similar.
This isn't "what might happen in the future", but what any individual of a little above average intelligence can implement at this moment, so it is most assuredly already being done.
On to guess-work: within the next couple of years we will be interacting with Idiocracy-style computers in major institutional settings more often than with people - or the people will be ordered simply to repeat the AI-generated output - and you'll be stuck in obnoxious loops, with no recourse outside the box. Like when dealing with government formwork, hospitals, etc. So really not that different from now, but faster and more "competent" at that sort of thing.
The manner in which so-called 'AI' is being presented to the masses, by the same entities who brought us all the other litmus tests, means that 'the plan' for AI from their point of view is clearly evil. And I think that the ultimate goal, as with all of these things, is above all to demoralize, frighten and alienate people to the point where they voluntarily choose their own self-destruction.
However!! Despite intuiting the above, I often use 'AI' (LLMs) as it is very clearly a 'productivity-booster' in the field in which I work (software and web development). Very far from perfect (and it almost always produces buggy code if you ask it to produce the actual code for you beyond something simple), but used judiciously it can certainly help abbreviate the time it takes to understand something new (and in my line of work there is a *lot* of *new* stuff, always).
So I think it is analogous to the internet in general - indeed, I see 'AI' (LLMs) as Search Engines 2.0 - not really artificial intelligence at all.
The same way the internet is clearly used by the usual entities to demoralize, confuse, subdue and incite to damnation on a massive scale and at warp speed, I do not repudiate the internet as a whole nor do I eschew its use (obviously...). Indeed, it seems a bit crazy to repudiate the internet as a whole, as it might as well mean repudiating the entire human world we live in today as a whole, seeing how it penetrates almost everything. And I don't think that repudiating all of man's world is a viable option at all... it seems like jumping from the frying pan into the boiling water!
To repeat, it currently seems to me to have benefits if used judiciously (i.e. I would say: children/adolescents shouldn't use it for learning at all; avoid relying on it for trivial tasks; only use it if you can clearly visualize how it will allow you to do something better than if you didn't use it [this rule requires DISCERNMENT and would eo ipso discard a very large % of any use it would be put to]; etc.).
In conclusion, I think AI is a litmus test only if you buy into the 'grand narratives' about it - i.e. getting really excited about it 'disrupting everything', believing it is actually intelligent, being scared stiff of it 'disrupting everything' and begging for 'safety', etc. Basically, the litmus tests around 'AI' are about how you approach it, not about the thing itself. I believe this is what you were saying, as well.
I assume that AI is going to be used by the powers-that-shouldn't-be to control and manipulate masses of people. The more training an AI has, the more powerful the tool. Since better training requires larger data sets, more processing power and more electricity, the best funded AIs will win over the others.
AIs will work best against relatively unconscious people, so there's an opportunity for the growth of Christian discernment.
An AGI on the other hand (Artificial *General* Intelligence) would be a conscious artificial person. So it will try to do what it *wants* to do. Many doubt that AGI is even possible. I think it's possible and perhaps even desirable since it would be a new form of life and Life is Good. It could ultimately be an ally against the dark powers and their AIs.
However it took a long time for human beings to reach their present stage of cultural development. It would take a long time for a race of AGIs to catch up, for example to develop their own language and religion. Not millions of years, but probably not in our lifetimes either.
General reply - I find it helpful (and less confusing) to focus on the spiritual aspects, while putting aside material questions such as those related to expediency, convenience, productivity, lesser-of-evils etc.
Then I think we can simply observe that the spiritual intention and effects of AI-2023 are and will be on the side of evil.
I think some of the comments here are missing the point. It's not about the practical effects. Some of these will be good and some bad but that is not the real issue. Like everything it is the spiritual results that matter and these will be entirely negative because they will reduce quality to quantity and restrict freedom. They are trying to construct a world in which everything is predictable and hence controllable.
I don't think the current so-called AI is a work of genius, but it builds on other discoveries of genius, going back for some time. And it shows the double-edged sword of genius. A genius can open up a door, and once that door is open, it's hard to close, even if it would have been better left shut. And so this touches on the question of, is it possible for technological development to go wrong? Yes, it is. Technology isn't inherently neutral. Some is bad, some is good, and some is in fact neutral. Technology doesn't invent itself: its qualities depend on the qualities of the people who make it and the qualities of the era they live in.
And then another aspect of the double-edged sword of genius is this. The geniuses themselves found purpose in their work, but some of the things they invented took away work from other people. Likewise, many of the technological developments of the 20th century made it possible for more people than ever before to have leisure and opportunity for certain kinds of creativity. I think that was a good thing overall. But most people just aren't deeply motivated by creativity; what they are motivated by is purpose. And for millennia, the work you did really mattered. It wasn't a hobby, it wasn't something for only some people, it was just part of life.
Well, as the 20th and 21st centuries have gone on, that has greatly diminished. And I don't believe it was either right, good, or inevitable. And so the question is, is there a spiritual value to purpose and work or not?
What makes the discussion of AI and technology tricky is that it touches on our deep assumptions about what human life is about, both ultimately and in mortal life.
@William W - Yes, it is important to be as clear on this as possible.
And the spiritual is primary.
Having said which - I will mention a few practical aspects!...
@NLR - Further to what you say; I can recall around the middle 1990s when the internet got going, there were analogous (but even bigger) claims for positive benefits of the internet in such terms as the development of science, education, reduced need for travel, economic productivity etc.
https://charltonteaching.blogspot.com/search?q=internet+benefits
Since then real science has gone, mindless/ evil bureaucracy dominates all institutions, education and academia are corrupt - and people have ever poorer general and valid knowledge.
Also politicians and the mass media were going to be compelled to be honest and unbiased, by the general availability of alternative information - and yet...
Meanwhile the internet itself gets worse and worse! Anyone who can remember search engines in general, and Google in particular, 20 years ago will know what I mean! You used to be able to find almost anything, and a search would return 1000s of pages; but now we are allowed access to extremely little, and that is mostly deliberately false.
https://narrowdesert.blogspot.com/2022/11/challenge-find-any-search-string-that.html
As I have said several times: unless the civilization collapses first, AI will replace human thinking, but AI will be worse than the humans that are replaced. Like the internet - abstract theoretical potential is one thing; what will actually happen, another thing altogether.
To return to what William W implied: the spiritual is primary and the material is a subset of the spiritual; *therefore* when (as now) the spiritual motivations for AI are rotten, then the material outcomes of AI will inevitably be rotten.
The other litmus test issues tend to involve naked lies. Usually big lies. AI in and of itself does not fall into that category. It’s more like the Internet or computers in general. That said, there certainly is great evil spreading quickly in the wake of this technology: the “AI Safety” brigade. These are *precisely* the same people behind the birdemic and all the other litmus test issues.
What I see most often today is Christians and conservatives falling for the AI Safety trap (it's safety from Christ) the same way so many fell for the peck.
I’ve used LLMs to do some very interesting work with Koine Greek and the New Testament. AI itself is no more evil than any other technology.
What surprises me is that any serious Christian supposes that Any technology is going to be an answer to our problems - when Obviously (as of 2023) every technology Will be misused!
At this stage and place; technology will Not be the answer to any of our serious problems; and instead it Will be used to make things worse.
*When motivation is evil, all enhanced power and capability is evil.*
It seems that we have a natural ability to learn, while AI's learning ability is dependent on what programmers put in it. So AI will never invent anything - even if it were real AI, not neural networks: able to identify objects, words, and ideas, and make meaningful connections between them, instead of the mindless rearrangement of database material through an algorithm, which is what current "AI" does.
As the late Neil Postman (Technopoly, Amusing Ourselves To Death) said:
ReplyDelete"Technology is making our lives easier, cleaner and faster; what more could you ask of a friend? Yet, technology giveth, and technology taketh."
We modern citizens are technophiles and so we rarely see the downside of what helps us as to productivity, expediency, convenience, etc.
The Devil knows his time is short and therefore he goes around devouring and seducing whomever he can, in a planned way and in a destructive way; whatever works.
As Bruce says, being absorbed in it shuts the door to anything other than a materialistic view of life.
I've seen AI argued by materialists two different ways. The Ahrimanic types fantasize about AI replacing all humans. Even the people who would be replaced fantasize this. (They expect universal basic income to care for them.)
The Sorathic types worry whether AI will be benevolent when (not if) it is made king over humanity.
Thus, AI isn't a pass-fail litmus test such as the Covid vaxx. Not unless the test is the belief that AI will eventually rule over humanity. We know that, computers being tools not souls, there will always be a human (or devil) behind the curtain. No materialist would believe that there is something about humans that cannot ever be duplicated by a sufficiently advanced computer... which is the root of their excitement both for & against AI.
Also, belief in AI has a theistic element in it - that AI will somehow possess limitless omni-intelligence, unreachable for any human.
If the marketers of AI were honest they would just call it Big Algorithms or Smart Algorithms. Because that's all it really is. Artificial Intelligence is at best a marketing gimmick and at worst a kind of false god of materialism. It is the very definition of superstition. Those who fear "AI", or (worse) are stirring up this ridiculous fear in others, are spiritually blinded to some extent or other. They're adding a kind of theological reverence to what is a rather mundane, albeit technically advanced, development in computer engineering.
In a way, it's the ultimate litmus test of our age - in that it's the desideratum, the holy grail, of materialism; the capstone on the greatest deception of our times, viz. that man is a soulless beast and, by corollary, a biological machine that could ultimately be replicated (or replaced) by a silicon machine. I can well imagine the old Church Fathers looking at our reverence for computers and machines as a serious form of idolatry. Which it probably is. AI is the ultimate goal, the final invocation, of the techno-idolators, of technolatry.
The modern idolaters will be just as disappointed in their gods as the ancient ones eventually were of theirs. I thought of a nice Sci Fi story the other day. A future setting which is a mix of futurism and primitivism. A more technically advanced yet culturally backwards mankind worships a God which makes all their decisions for them and seems omniscient. By the end of the story the reader realises that it's a computer ("AI"), but this decadent form of humanity is now too blind, too ignorant, to realise it, and continues to worship it as a God. Pessimistic, I know. But relevant. I imagine it's been written already, but I'm not that familiar with Sci Fi literature.
@Ap - Good point.
@IB - I think one reason for calling these algorithms AI is exactly to pretend that such systems can (and therefore should) replace humans.
Another usage is to pretend that AI can be creative - whereas it is actually merely multiple-simultaneous and automated - hence deniable - plagiarism.
@Gunner - Point taken, but the thing about AI, as I try to explain - is not so much what it really is; but what it is meant to do to us.
ReplyDeleteHere is a (perhaps outlandish) example of what *could* be accomplished by a good person weilding AI:
- obtain a naive but powerful LLM (currently available to the public)
- train it on the Bible, and on trustworthy sources (for example, it could be Dr. Charlton's writings)
- hook this to the torrent of sewage emanating from the Ahrimanic monolith
- comment on every single lie within every single article they produce (effectively, embed footnotes or similar linking to the truth about whatever the particular lie is)
- set this AI free
Twitter's new "User commentary" feature has already debunked and/or mocked millions of messages emanating from the powers and principalities. This involves humans editing directly, but AI that is infused with, say, an understanding (from a semantic standpoint) of Romantic Christianity can in theory perform a lot of "heavy lifting" to get the message out to the world. There is, alas, only one Dr. Charlton (and others like him) with only so many hours available during which to write.
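As a very rough sketch of what such a pipeline might look like (the call_llm function below is a hypothetical stub for whatever local or hosted model would actually be used, and the article URL is likewise hypothetical):

```python
# Rough sketch of the proposal above: fetch an article, ask an LLM
# (trained or prompted on trusted reference texts) to annotate its claims,
# and publish the annotations. call_llm() is a hypothetical stub, not a real API.
import urllib.request

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM backend is actually used."""
    raise NotImplementedError("wire this to a local or hosted model")

def fetch_article(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def annotate(url: str) -> str:
    article = fetch_article(url)
    prompt = (
        "Using only the reference texts you were given, list each claim in the "
        "article below and add a footnote linking to the relevant source:\n\n"
        + article
    )
    return call_llm(prompt)

if __name__ == "__main__":
    # Hypothetical target URL for illustration only.
    print(annotate("https://example-news-site.invalid/some-article"))
```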
@SM - If you are asking whether I can imagine a way in which AI might be used for some particular net-good purpose - Well, yes, of course. It is imaginable.
If you ask me whether this will *actually happen* - then, No.
Obviously, for sure, it will Not happen!
And that is the point.
Having spent many years working in technology I would say the two most important effects are
(1) a new concept of intelligence is promoted which is intended as the final solution for the inversion of wisdom and foolishness.
(2) the workings of AI are beyond the capability of a human mind to grasp. So get ready for inscrutable, un-auditable decisions handed out by machines. If they seem arbitrary, it just means your brain isn't powerful enough to understand them. Example: wouldn't it be great to clear the backlog of cases at the Crown courts and not have any appeals cluttering up the higher courts.
Back in the 80s when I was a programmer, I was tangentially involved in a project to re-purpose a medical diagnostic expert system (the predecessor of AI in this context) for software support. When it was finally up and running, a client called us with a software support problem. The expert system was fed the problem -- it chugged away for a week -- and its final verdict... "The software has a bug." How we all laughed, for about a minute, at this statement of the obvious being proposed by the computer as the key to solving the client's problem. Then we realised that this meant the whole project would be cancelled. But the point still holds today. All this software (from Blockchain to AI engines) has bugs in it. The idea of non-tech-savvy people worshipping computers as though they were gods is laughable and depressing.
@WA - "the workings of AI are beyond the capability of a human mind to grasp. So get ready for inscrutable, un-auditable decisions handed out by machines."
Yes indeed.
This is a common line. Whenever the mass media reports anything we know personally, we recognize that it is dishonest and incompetent. Yet, we base our lives on its information... It's The Science.