If Google's 2K search engine was far superior to any so-called AI, why are people in 2025 so awed and submissive about AI?
It's an interesting question.
There was certainly a vast degree of hype about The Internet in the 1990s, as well as many serious and thoughtful attempts to predict its consequences (which were spectacularly wrong in almost every instance!); and people who used search engines recognized that Google was a qualitative improvement on what had existed before (I recall that AltaVista was the best engine just before Google emerged in 1997).
And indeed the Google search engine rapidly became a superb tool; far superior to anything that has existed since around 2010. Since then, Google has almost completely destroyed its own search value, and nothing significantly superior has emerged - or, probably, nothing superior has been allowed to emerge.
But there was never anything like the quasi-worship and sheer mental derangement that is at present rampant on the subject of AI; where people whom I would have (until recently) regarded as sensible and decent are spouting incoherent-impossible nonsense about what AI can/ should/ will be used for, and apparently being taken seriously.
Worst of all - they are looking forward to it! This reminds me of the 2020 totalitarian globalist who said that in his future we will "own nothing - and be happy" - but even worse: this time we shall instead (or as well) "be ruled, monitored and controlled by robot computers - and be happy".
I take this as strong and solid evidence of the astonishing degree of spiritual corruption to which people in general have sunk. Not just the rulers, who have led and managed the avalanche of nihilism; but the masses as well.
Common sense has gone. Simple two-step reasoning has gone. And the reason is that almost everybody, including almost all self-identified religious or spiritual people, has adopted the pervasive assumptions of inhabiting a purpose-less, meaning-less world from-which we are alienated, and with which we engage only materially.
Inside people there is, apparently... nothing - a void; continuously filled-up from outside by the media and institutions and superficial "cause and effect" interactions of "social life".
And it turns-out that it makes no significant difference whether this inner void is filled by political ideology or church doctrine and guidance - these are all substantially converged, and it all comes to the same in the end.
Of course there is not really a void inside us, but an eternal self; which is a part of God's living creation.
But we have chosen to deny, disbelieve and ignore this in favour of mechanism, passivity, and ultimate, existential irresponsibility.
That's why people are so very impressed by the real-world-useless, destructive, literally soul-destroying toy-master that is called AI.
11 comments:
"evidence of the astonishing degree of spiritual corruption to which people in general have sunk. Not just the rulers, who have led and managed the avalanche of nihilism; but the masses as well. " And not just these people but many supposedly spiritual people. But then that reminds me of when computers became something everyone could use and many supposedly spiritual people welcomed that as indicative of a new and higher consciousness. Plus ca change.
@William - I wouldn't say plus ca change - I think things have become much worse. In earlier phases there was considerable resistance, even at the institutional level.
For instance, initially the Mormon church resisted social media and smartphones, but (when they began to converge with left-materialism) they officially, top-down decided to embrace and use these new technologies - encouraging people to use them for supposedly religious purposes.
Now there seems to be no resistance, but immediate embrace - and mandatory imposition is widespread.
I quite agree that things have got worse. I just meant the plus ça change in the context of spiritual people being all too easily seduced by the shiny products of materialistic thinking.
I have never taken a smartphone into a church. It just seemed self-evidently inappropriate to do so. When I recently revisited a CJCLDS church, though, I found almost no one had physical scriptures with them but read everything from their phones. Even a church is no longer a refuge from that stuff.
Large Language models are powerful tools -- like a spreadsheet. I've built a language model from scratch. When one goes through this exercise and actually has hands-on experience with the inner workings of this technology, the mystical nonsense evaporates quickly. Nevertheless, it is quite shocking what LLMs are capable of, given their extremely simple internal structure (that is replicated billions of times).
LLMs are genuinely useful in tasks such as summarization. They can with fairly good reliability (getting steadily better) ingest a 300 page legal brief and reduce it to a good 2 page summary.
Apparently the public came to believe that LLMs can be used as knowledge retrieval systems. This flies in the face of how LLMs work. They do not encode knowledge in the way Wikipedia stores text about e.g., the actinides and lanthanides in a document database. LLMs recursively apply a simple algorithm that attempts to predict the next "token" (part of a word) within the context of a sequence of tokens. LLMs can have no connection to Truth, in principle. A true database can have a connection to Truth -- Dr. Charlton's corpus on this blog is stored by Blogger -- unaltered as far as we know -- in a database. A database is comparable to a book. A large language model is not comparable to a book, and should never be treated as a source of truth but rather only as a tool for the transformation of text for technical purposes.
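To make that concrete, here is a toy sketch in Python of the next-token loop just described. The tiny bigram table is an invented stand-in for the billions of learned weights in a real model - nothing here corresponds to any actual system - but the overall shape (score candidate next tokens, pick one, append it, repeat) is essentially the whole algorithm.

```python
import random

# Stand-in "weights": for each token, plausible next tokens with scores.
# A real LLM learns these scores from data; this table is purely illustrative.
bigram_scores = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def predict_next(context):
    """Pick the next token given the context (here, only the last token matters)."""
    candidates = bigram_scores.get(context[-1], {})
    if not candidates:
        return None  # nothing plausible to say next
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt_tokens, max_new_tokens=5):
    """Recursively apply the same prediction step, appending each new token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = predict_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate(["the"]))  # e.g. "the cat sat down"
```

Note that nothing in this loop consults a store of facts: it only continues the sequence in a statistically plausible way, which is why output can read well while being untrue.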
@Stephen
You need to join up your knowledge: e.g.
"LLMs are genuinely useful in tasks such as summarization. They can with fairly good reliability (getting steadily better) ingest a 300 page legal brief and reduce it to a good 2 page summary"....
"LLMs can have no connection to Truth, in principle. "
Surely you can see that something that has no connection to truth Cannot provide a "good" summary?
Or:
"A true database can have a connection to Truth "
Wrong. Data has zero connection to truth. Truth is in the interpretation of data, and only in that.
Consider the parable of "A Canticle for Leibowitz"
https://medicalhypotheses.blogspot.com/2010/03/after-science-has-tradition-been-broken.html
Or, in more depth, Barfield's Saving the Appearances. At present you are editing human consciousness out from knowledge - and this leads to incoherence.
This is a very common, indeed civilizational, error - but it is demonstrably an error. What is needed is a better understanding of the role of consciousness.
If this reality (the necessity for consciousness in all possible knowledge) can be grasped - everything looks different, and we realize the depth of error under which we live and are governed.
Wm Jas Tychonievich has left a comment:
"Just ran across this by chance: “Google intentionally crippled search to increase ad revenue.”"
(...Ref to screenshot followed by screeds of anonymous posturing)
A misleading half-truth - the kind that gets told to middle managers as a pseudo-rationalization: i.e. the first half of the sentence is obviously correct, the second half deliberately deceptive - pseudo cynicism.
It's not done to increase ad revenue - the strategy is much bigger than that, comes from much higher-up, and Google is merely a means to an end.
I disagree completely with commenter Stephen Macdonald that LLMs provide "good summaries". My current browser now displays an "AI overview" (which I never asked for) when I search something, showing an AI summary of the subject researched, and several times the summaries have been factually wrong. Many websites, including the supposedly alternative news site the Unz Review, now provide "AI summaries", which are, again, full of obvious errors.
I don't even understand why we need "summaries" now, even if they were accurate, which they are not. Are people so lazy that they can't read the actual article?
Another thing that irks me is so-called "AI art". Now whenever I look for an image, even if I search for "classic painting" or "real photograph", AI images come up in the search. It has completely contaminated image search, rendering it virtually useless.
Only nerds think that LLMs are impressive, powerful or reliable.
@Zeno - I would say that the important thing is not so much that LLMs provide Bad summaries (the promoters would assert these can be improved), but that LLMs Could Not (except by accident) ever provide Good summaries.
A good summary requires understanding, LLMs do not understand, therefore...
The *imposition* of AI images by search engines is a good example of the top-down and malign intent behind AI. As you say, AI search results are currently prioritized by the innate preferences of the software. Nobody asked for this, but we get it - across several search platforms - which I regard as a sure sign of globalist totalitarian control.
Although I suspect that the only genuine ("grassroots") demand for AI is perhaps related to imagery - i.e. the pornographic; where creative quality does not much matter, and constant Novelty (but Not originality) is required to overcome tolerance and habituation. AI could, I suppose, be trained to cater for personal kinks etc. Probably LLMs and verbally-interactive software would sufficiently satisfy demand in this area too.
That's really the only domain in which I can imagine that AI would be supplying a (ahem) "bottom-up" mass demand.
@Bruce
>At present you are editing human consciousness out from knowledge - and this leads to incoherence. This is a very common, indeed civilizational, error - but it is demonstrably an error. What is needed is a better understanding of the role of consciousness.
I think your first sentence above goes a long way towards supplying that understanding. The role of consciousness is to increase coherence!
@Ron - Consciousness is an essential part of knowledge.
The trouble is that the philosophical tradition presents two choices - empiricism and idealism; neither of which makes sense.
The mainstream modern versions are scientism and relativism - either truth is out there and humans are merely inessential observers and commenters; or else truth is a product of each human mind, and each person lives in a solipsistic dream world.
People find it nigh impossible to model the world in any other way than these, usually oscillating between them in accordance with expediency. I have often read and heard people arguing that morality and truth are arbitrary and purely a matter of power; then, with their next breath, moralizing zealously and spitefully in the typical leftist way.
It seems absurd and unsustainable, yet that is how our world functions; and has done for generations (e.g. see After Virtue by Alasdair MacIntyre).
It was the insight of the early Rudolf Steiner (in his PhD thesis and other works of the 1890s) that such a situation would lead to the intractable problems of modernity.