The current wave of Establishment-styled AI ("Artificial Intelligence") is a fake and a cheat, designed and implemented for the malefit of Men - as must be the case given its provenance.
(i.e. Given the sources that have funded, devised, advertised and imposed these technologies).
Current-wave AI is not, of course, intelligence; but what is its relationship to intelligence?
Current-wave AI is a simulation of human intelligence: it is a set of computer programming technologies specifically designed to mislead human beings into supposing they are dealing with another human being.
Current AI is, in other words, designed to be a way of successfully cheating at the Turing Test.
But for current-AI to be an effective simulation, for it to fool humans as successfully as it does, entails that humans are operating in such a pervasively reductionistic and computerised environment that this cheating becomes almost undetectable; and that such humans have habitually assimilated computer-like patterns of thinking*.
The success of AI entails that the full scope of the natural human world has become stripped-down to such a minimal level - for instance, that the fullness of Man's sociality has become brief and hurried interactions with de facto strangers on restricted themes, via type-written media.
So impoverished, so unnatural, have become our worlds, our relationships with other people and with nature and the cosmos, that within-this-context it is possible for the outputs of computers to be indistinguishable from real people who are hamstrung in multiple directions by the compulsion to operate in such constricted and artificial situations.
As a thought experiment, imagine current AI technologies being introduced to historical societies - even just a few generations ago, but more obviously some centuries in the past - and then ask yourself whether, in such a total-context, AI technologies would have been able to pass for human beings.
Of course they would not; and considering this fact brings to the surface the extreme reductionism of the culture within which current AI is making its claim of intelligence.
*See Jeremy Naydler's In the Shadow of the Machine.
10 comments:
"The success of AI entails that the full scope of the natural human world has become stripped-down to such a minimal level - for instance; that the fullness of Man's sociality has become brief and hurried interactions with de facto strangers on restricted themes, via type-written media. "
If I can go off on a bit of a tangent, this reminds me of what I have always seen as a limitation of mathematical descriptions of reality, where all phenomena are understood to be reducible to numbers. No matter how many places come after the decimal, there is always a certain arbitrariness in the value we assign to a thing. There is always some trimming at the furthest edges that we accept for the convenience of being able to describe reality in mathematical terms.
I think video games have a related problem: your courses of action are always limited to a certain number of options and you have to, so to speak, "click" from one to the other. There is no fluidity as we experience in unmediated, or "analog," reality. You can hit the target or miss it or come extremely close, but your point of impact must always be one of a perhaps vast but always finite prescribed set. This prescribed set may be purposefully arranged by a human or be the result of an algorithm. But it's always like a "multiple choice" test as opposed to an essay. It allows the designer of the "environment" or the algorithm to prescribe the reality of those who submit to it. Broadcast TV is another way we have been trained to select from a limited set of carefully crafted prescribed objects.
Everything "digital" is like this. Digits are a useful way of modeling reality, but they are not reality. Increasingly this habitual resort to digital modes of conceiving reality has grown into an all-enveloping mind maze, at the heart of which, I intuit, lives the Minotaur. If we wish to escape (assuming we can even imagine doing so), like Theseus we have to carefully and vigilantly lay down a thread that connects us to the outside, that is, the truth.
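To put a concrete edge on that "trimming at the furthest edges": even simple decimal fractions cannot be stored exactly in the binary floating-point arithmetic that underlies nearly all digital computation, so every value the machine holds is already one member of a finite prescribed set. A minimal sketch in Python (assuming Python 3.9 or later for math.nextafter; the particular numbers are only illustrative):

import math

# Decimal fractions like 0.1 have no exact binary representation, so the
# machine stores the nearest member of a finite set of representable values.
print(0.1 + 0.2)            # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)     # False

# Even the apparently continuous interval between 1 and 2 is, to the machine,
# a finite grid: there is a definite "next" representable number after 1.0.
print(math.nextafter(1.0, 2.0))  # 1.0000000000000002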
@Avro - Good points.
"But for current-AI to be an effective simulation, for it to fool humans as successfully as it does; entails that humans are operating in such a pervasively reductionistic and computerised environment"
That's a good point.
Another reason is that people *want* to believe. If AI is really thinking, then soon it should become intelligent enough to solve all our problems (or so some people think).
People assume that technology is inherently good (or at least neutral). And they also assume that technology must always open up new opportunities; it has to lead to creative destruction, not destructive destruction. And technology must do this regardless of the values and goals of the people developing it. If it looks like destructive destruction, you're just not looking hard enough.
But over the past 25 years, these ideas have come to appear less and less reasonable. Not only are we not where people predicted we would be back then, our society hasn't even conserved the good things from 25 years ago.
But people don't want to give up those assumptions. Much of the time, they aren't even acknowledged as assumptions. And so people tell themselves that we're just a little bit off the track, rather than recognizing that the assumptions are false. They think that AI is the thing that will get us back on track to the glorious future of inevitable technological progress. (Or at least that's how it seems from how people talk about it).
@NLR Agreed.
"People assume that technology is inherently good (or at least neutral)."
Nothing is neutral - but even if AI was neutral, it is clear that those who are implementing it have malign intent.
Also, the standard view of technological development, while influential, is not correct. Technological development is viewed as similar to a road, so you can make a linear comparison between technologies and societies as more or less advanced. There's only one direction to go in, and if a society goes down it, it will invent certain things.
I view technology as more like a tree, branching out. There are many more possibilities for potential technologies than is generally imagined. We can model some phenomena of nature in certain ways, but we don't know all that there is in nature. We don't even know everything about the phenomena we can model. If we go down one path, then that influences what happens next, but there isn't only one path.
If there was a civilization on another planet, I would speculate that it wouldn't make sense to say they were more or less advanced than us; they would probably have followed a completely different path. Many of their technologies wouldn't be directly comparable to ours. And this is what we see in historical societies to a lesser extent. There are different kinds of inventions, different emphases and for that matter different values and goals. Just making a simple comparison of more or less advanced leaves out much of the picture.
The restricted domain of machine algorithms in the past could be good at times: think CAT scans, or auto-pilots on aircraft, etc. But even then, the computer algorithms could be (and were) put to evil use, e.g., guided missiles, etc. Now, people are insanely ok with the idea that opaque, black-box algorithms that model a restricted, artificially constrained segment of "reality" should actually have power over their thinking, their information, and even medicine and law. It won't end well. Civilizations are built by actual, creative, soulful thinking. As the sources of the AI's data become corrupted by its own output, it will degenerate quite rapidly.
This is incisive and brilliant as always. I think interacting with AI is so remarkably like encountering our government bureaucracy that it becomes immediately imbued with the status of a principality by people with limited imagination.
Thankfully these fancy new word calculators are eminently steerable. The models are just this—word calculators. It’s the programs that wrap them up that fulfill some particular group’s vision of what it means to be intelligent (and intelligence is, sadly, almost definitive of being a person to many of these folks).
There is an interesting discussion to have here about the Imago Dei. Historically many theologians have relied on the idea that humans are unique in God’s creation because of our faculty for language. This idea is now possibly problematized. My own opinion is like yours: current AI is simply a projection of true human and, by extension, divine intelligence. This is what makes it so impressive, but it is also what makes it so typical of our current consciousness.
If you read Turing's paper on defining Artificial Intelligence ("Computing Machinery and Intelligence" - it's a short & easy read), you'll see he sweeps all ontological (substantial) notions of Intelligence under the rug as theology or metaphysics, and proceeds to define intelligence as that which we take to be intelligent in our dealings with it.
Hence the aim of the Turing test is just the building of a machine that will fool the average educated man into thinking it's an intelligence he's dealing with; and, for Turing, that very act of convincing ipso facto makes it an intelligence.
Besides the definition being viciously circular, what stands out to me is how much this whole affair is basically a magic trick: if you can delude people enough into making them think that an intelligence is present, well then you've created an intelligence. This is the core of the magician's (in the bad sense of magician that C S Lewis uses) worldview — the world is all an illusion, so if I can create a magnificent enough illusion to be accepted by all, then I'm a creator on par with God.
It's both disturbing and silly.
Imagine leading a drunk person into a dark, smoke filled room with a crude robot and a tape recorder. If the drunk person believes it's an intelligent being, is it therefore so? The only difference is that Turing wants to deceive the entire modern educated world in broad daylight, not just a drunkard in a smoke filled room.
What strikes me also is how much it parallels certain aspects of ancient idolatry. The founder of Hermeticism, the Egyptian Hermes Trismegistus, records that the magicians of Egypt would bind spirits to the statues of the gods by incantation. Surely that's the ancient form of Artificial Intelligence — intelligence bound to an artefact. It's the same trick being carried out, just under a different form of incantation.
Besides the idolatry of creating minds, there's the apparently more prosaic aim of wanting to make more and more efficient algorithms for speeding up production, etc. I wonder how much these are really intertwined. The god of efficiency has become a Moloch. You don't heal a person with lung cancer by helping them to smoke more efficiently. Our society has lung cancer.
I think you're exactly right about this whole charade relying on the reduction of all our mental faculties to essentially computer-istic ones. So-called "IQ" for example only tests the mind in its most externalised, computational aspects. A more ancient definition of intelligence would be the awareness of being, and the more deeply aware of the mysteries and structural intricacies of being you are, the greater your intelligence (cf. Aquinas' account of angelic intelligence).
I'm an AI engineer. It has many pragmatic uses in research, medicine and other fields. In one of my projects we allowed engineers to find references to various topics across 190,000 pages of documentation for a nuclear power plant. Whereas they were limited to exact match searches (or slightly better "fuzzy" searches) or the old-school index and table of contents, today they can get a list of areas in the documentation where certain topics are dealt with *semantically*. There may be no overlap in the words used at all, yet the AI is able to discern that two paragraphs are discussing a similar general topic. This has been extremely useful to the staff at the plant, and was relatively cheap to implement.
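To make the contrast with exact-match search concrete, a minimal sketch of this kind of semantic (embedding-based) retrieval might look like the following. It assumes the open-source sentence-transformers library and a small general-purpose embedding model (all-MiniLM-L6-v2); the paragraphs and query are invented for illustration and are not the actual plant system.

from sentence_transformers import SentenceTransformer, util

# A few invented documentation snippets; the first and third concern the same
# topic as the query while sharing almost no vocabulary with it.
paragraphs = [
    "Coolant circulation shall be verified before restarting the primary loop.",
    "Visitors must sign in at the security gate and wear badges at all times.",
    "Pump flow rates are to be checked prior to bringing the reactor back online.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(paragraphs, convert_to_tensor=True)

query = "procedure for restoring water flow after a shutdown"
query_vector = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every paragraph; the two maintenance
# snippets rank far above the visitor-policy one despite little word overlap.
scores = util.cos_sim(query_vector, doc_vectors)[0]
for score, text in sorted(zip(scores.tolist(), paragraphs), reverse=True):
    print(f"{score:.3f}  {text}")

An exact-match or fuzzy search for "water flow" would return nothing here, which is the limitation described above.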
AI is not conscious because it is not alive (notwithstanding the residual consciousness present in all things). Even more importantly, AI lacks any form of intentionality or agency, and we haven't the faintest clue how to ever achieve something like that in a non-living entity. Whenever AI seems as though it is doing something, there is *always* a human in the loop somewhere.
It is certainly possible that Ahrimanic human forces will use AI for evil -- indeed it is highly likely. However this is no different than their use of all other technologies. In theory it is possible to create "white knight" AI agents that would specifically seek out and destroy malicious AI entities (but this assumes good people have at least some degree of power and control).
@Stephen M - You need to consider the actual argument and the points I am making, because they are unaffected by the points that you make.
"this is no different than their use of all other technologies" - Yes, in a general sense, But *This* is a type of technology that has been launched upon the world in the past two-plus years in a colossal multinational, multi-institutional strategy - something that must have cost vast sums and required immense organization and central direction.
The implications of the ways "AI" is Actually being implemented *in bad ways for wrong purposes* are correspondingly vast.
Think about it.