The idea behind Artificial Intelligence (AI) - I mean the ultimate demonic strategy, implemented by The Establishment - is to replace the 'human mind' with something less-than-human, under the pretence of making us more-than-human.
Transhumanism is the advertisement - Subhumanism is the product.
I don't think replacement is actually possible - but it is certainly possible, using AI (preferably implanted), to amplify considerably the damage that social media does in suppressing consciousness.
The method is, mainly, distraction. But distraction from what? What is it that they want to distract us from?
Higher consciousness (Final Participation by Primary Thinking) is the answer - and the thing about the next-step in consciousness is that it must be chosen/ willed/ wanted. So, if AI can continually distract and gratify/ terrify us; then we will probably not-choose/ not-will/ not-want to make that step into Final Participation.
The evil Establishment want people to not-want higher consciousness - and instead to choose one of two options:
The first is un-consciousness (what Rudolf Steiner terms the Luciferic path). This is, roughly speaking, the mass media in its immersive and passivity-inducing forms.
The second is alienated consciousness (what Steiner terms the Ahrimanic path). And this is approximately bureaucracy - to live-within The Total System: constrained by the materialist, positivistic perspective... purposeless, meaningless... each self-aware individual a disconnected consciousness.
What is Not wanted is for our self-consciousness to expand to include everything real - that our awake self-awareness become directly and intuitively aware of the totality.
And what is also Not wanted is that our unconscious, passive, manipulated minds wake-up inside the dream.
What Artificial Intelligence is intended to prevent is that we choose to become scientists of the divine and lucid dreamers of this world.
10 comments:
This is the business I am in, so I have first-hand knowledge of this particular problem: I'm in a computational neuroscience lab working for a PI who is attempting to apply his expertise in machine learning for image processing to understanding the brain. My primary focus is on developing better computational models of subjective visual experience, but my long term goal is "strong" artificial intelligence.
Almost everyone in the neuroscience business I run into is a hard atheist materialist, especially in comp-neuro. There is a large amount of animal research done in neuroscience for reasons of curiosity rather than medical necessity, which seems to make those involved very hard and cruel. I experience unusual physical sensations near these labs -- the floor feels like it is slowly undulating and I have unnatural feelings of weakness and dread.
The machine learning / signal processing folks I encounter in this business are less cruel but still atheist, and are quite oblivious to the risks of AI development. The only people I know about who are seriously concerned about the risks of AI are the people at MIRI, the Machine Intelligence Research Institute, led by Eliezer Yudkowsky and Nate Soares. They are also atheist materialists and have an "ultra-rationalist" bent, but have managed to figure out that the default outcome of unleashing intelligent machines capable of self improvement will be very bad -- not in a "Hollywood killer robot" way, but something far, far worse and utterly alien.
My current professional estimate is that those working on AI will succeed in creating a hyper-intelligent consciousness very soon, likely within 20 years. (The capabilities of AlphaGo Zero are really quite remarkable, and should be generalizable.) The problem is that any new AI entity will have as a starting point the ethics, metaphysics, and spiritual understanding of its creators, which are uniformly terrible. Work on strong AI is being pushed hard by the social media companies, big data marketeers, and intelligence agency "spooks", so I would expect the first system to have been designed with the goals of these groups in mind.
-- Robert Brockman
@Robert Brockman, just wow
I've had a theory for a while that AI will finally make what C.S. Lewis called the Materialist Magician possible. If it were complex enough, how would the designers really know the difference between their programming and "other" voices?
"I experience unusual physical sensations near these labs -- the floor feels like it is slowly undulating and I have unnatural feelings of weakness and dread. "
I shivered. Could be lifted straight out of T.H.S. as a description of the NICE building.
Well, and as Robert mentions, they are already mistaking those other voices for inspiration, and programming the assumptions of that worldview into their machines.
Ultimately, what makes evil evil is that it is wrong - wrong in the very practical sense of being incorrect about the actions necessary and possible to bring about any good end. Even if the technological advancement underlying projections of achieving strong AI were to continue, this could not make incorrect assumptions about how to accomplish desired ends correct. Even at the point where it became possible to emulate the total neurological process of a human brain in generalized hardware, if one copied a mind that was working from false metaphysics and merely sped up its mental activity, you'd just see it go insane faster - exacerbated by the relative paucity of external inputs compared to the human experience, inputs which help keep living humans grounded in realities their metaphysics would deny.
The problem of "learning" systems going bonkers is already quite evident in practice. Nearly all the existing "unexpected behavior" displayed by various learning systems falls into the category of the computer learning something that is, on the face of it, hilariously and obviously wrong. This trend is only going to pick up as more work is done in the field by people who have false metaphysics and ignore the degree to which they are dependent on their innate humanity for making correct judgments.
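A minimal, self-contained sketch of the failure shape being described - the data, features, and numbers here are all invented purely for illustration. The training set contains a spurious "shortcut" feature that happens to match the labels; the model learns the shortcut instead of the real signal, scores almost perfectly in training, and collapses to roughly chance once the shortcut disappears:

```python
# Sketch: a learner latching onto a spurious "shortcut" feature.
# All data here is synthetic and invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shortcut_matches_label):
    y = rng.integers(0, 2, n)                 # true labels (0 or 1)
    signal = y + rng.normal(0.0, 2.0, n)      # the weak, noisy real signal
    if shortcut_matches_label:
        shortcut = y.astype(float)            # e.g. a background cue correlated with labels
    else:
        shortcut = rng.integers(0, 2, n).astype(float)  # correlation vanishes in deployment
    return np.column_stack([signal, shortcut]), y

X_train, y_train = make_data(2000, shortcut_matches_label=True)
X_test, y_test = make_data(2000, shortcut_matches_label=False)

# Plain logistic regression, trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

def accuracy(X, y):
    return np.mean(((X @ w + b) > 0) == y)

print("train accuracy:", accuracy(X_train, y_train))  # near 1.0: the shortcut "works" here
print("test accuracy: ", accuracy(X_test, y_test))    # near chance: the shortcut is gone
print("weights (signal, shortcut):", w)               # the shortcut weight dominates
```

The model is doing exactly what it was asked to do; the wrongness lies in what the training setup made easy to learn.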
Not that there are no risks to the reliance on such fallible AI. But such AI cannot become self-improving. Being artificial does not make such intelligence immune to the self-destructive effects of false metaphysics; it only removes vital protections humans enjoy.
To clarify - I am not at all 'worried' about computer scientists genuinely achieving artificial intelligence of the same kind as found biologically.
For a start, they deeply (and intractably, it seems) misunderstand brains - never mind minds!
What does worry me is that crude machine-learning algorithms (or the like) will simply be enforced upon humans, coercively superseding the natural - under the pretence that these *are* Real-intelligence.
AI is an early and extreme example of the Texas sharpshooter fallacy, which now pervades 'science':
https://charltonteaching.blogspot.co.uk/2010/06/measuring-human-capability-moonshot.html
We should not underestimate the capabilities of the machine learning researchers and their toys. Consider AlphaGo Zero: this is a Go-playing computer which can start from scratch (knowing only the rules) and, after playing itself for three days, defeat the world's best human players. Apparently they were able to increase the efficiency of the system by *removing human domain-specific knowledge of Go* -- our best human expert knowledge about how to play the game was worse than useless. There is every reason to believe that AlphaGo *understands* the game at a deeper level than humans do, to the degree that moves which seemed obvious errors while playing human masters turned out in retrospect to have been "genius" moves.
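To make the self-play idea concrete, here is a toy version scaled down from Go to the game of Nim (21 stones, take 1-3 per turn, taking the last stone wins). Everything in it - the game, the constants, the simple tabular learner - is my own illustration and has nothing to do with DeepMind's actual architecture; the point is only that the program is given nothing but the rules and improves purely by playing itself:

```python
# Toy self-play learner for Nim-21: no human game knowledge supplied.
import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(stones_remaining, move)], from the mover's perspective
ALPHA, EPS = 0.5, 0.2         # learning rate and exploration rate (arbitrary choices)

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones, greedy=False):
    moves = legal_moves(stones)
    if not greedy and random.random() < EPS:
        return random.choice(moves)               # occasionally explore
    return max(moves, key=lambda m: Q[(stones, m)])

for episode in range(20000):
    stones, history = 21, []
    while stones > 0:                             # one game of self-play
        m = choose(stones)
        history.append((stones, m))
        stones -= m
    reward = 1.0                                  # whoever took the last stone won
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward                          # alternate between the two players

# Read out the learned greedy policy for a few positions.
print([(s, choose(s, greedy=True)) for s in (5, 6, 7, 9, 10, 11)])
```

The known optimal strategy for this game is to always leave the opponent a multiple of four stones, and the greedy policy read out at the end should recover it - with no human Nim knowledge supplied.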
Given a well-defined objective function, I have every confidence in the ML community's ability to construct a strong optimization engine that will vastly outperform humans at maximizing that function. The problem is that these machines will likely be aimed at the *wrong objective function*, leading to disaster.
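That failure mode can be stated in a few lines of code. In this sketch (mine, purely illustrative) `intended` stands for what the designers actually want and `proxy` for the objective that actually gets wired in; a perfectly competent optimizer then makes the proxy climb forever while the intended value steadily worsens:

```python
# Sketch: a strong optimizer aimed at the wrong objective function.
def intended(x):
    return -abs(x - 2.0)              # what we really want: best outcome at x = 2

def proxy(x):
    return -abs(x - 2.0) + 2.0 * x    # mis-specified reward: adds a term that can be farmed

def grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)   # numerical gradient

x = 0.0
for step in range(1, 101):
    x += 0.1 * grad(proxy, x)          # gradient ascent on the proxy -- all the machine sees
    if step % 25 == 0:
        print(f"step {step:3d}: proxy = {proxy(x):7.2f}, intended = {intended(x):7.2f}")

# The proxy score climbs without limit while the intended score gets steadily
# worse: the optimizer is doing its job perfectly -- on the wrong function.
```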
BC: "To clarify - I am not at all 'worried' about computer scientist genuinely achieving artificial intelligence of the same kind as found biologically."
If they succeed in constructing human-like AI that would make things much safer, because we would have some prayer of understanding and relating to it. The researchers' lack of understanding about brains / minds / spirits means that the systems that they are creating will almost certainly be utterly alien in ways that we cannot easily predict. Once such systems are given control over critical infrastructure (especially the media and other large-scale machinery for controlling and manipulating the population) we will be in big trouble (but not for very long).
BC: "AI is an early and extreme example of the Texas sharpshooter fallacy; which now pervades 'science'"
The current generation of ML / AI people seem to be quite competent within the limited scope of their understanding, especially compared to the neuroscientists and biologists. I would not count on the ongoing corruption of science and engineering to cripple the ML guys' capabilities before they can unleash something very dangerous.
-- Robert Brockman
@RB - I repeat - these are dishonest careerists.
They lack both insight and creativity, they are incompetent liars.
They are so far from doing what they claim to do that they are incapable even of understanding the matter - they don't know what they are trying to do, they don't know what they are doing - and they don't care anyway, because they are not even trying.
Some are competent technologists or number-crunchers, able (by using, with ever-increasing inefficiency, vast quantities of resources) to improve incrementally on what exists - but there is not the slightest chance of any genuine qualitative breakthrough.
What may happen - quite likely - is that a breakthrough will be claimed; and imposed (by force and propaganda) regardless of its hopeless ineffectiveness.
For example: This is precisely what has happened throughout every single large institution in the UK, with the bureaucratic technology called Quality management. Quality has been operationally defined as that which is auditable; and the audit processes defined as that which produces quality - and committees declare that measurable quality is improving everywhere and in all respects.
Yet in reality, it is a parasite that makes everything overall much worse, less effective, less efficient - and indeed has initiated an inevitable collapse in the developed nations.
The extent to which this is in place everywhere is ignored - it is not a theoretical possibility - this is 'real life'. We inhabit a world of lies and cheats. In such a world the technology is not the danger - it is the pervasive corruption of Men's souls.
The neuroscience people I've encountered do largely fit your description with a few isolated exceptions, thus the spiritual threat from this group surely greatly exceeds the technological / physical threat. Your model of the corruption of science fits these people perfectly.
The machine learning people really do seem to have much less spiritual corruption and greater technical ability / creativity. Their organizational structure is different (at least for now) and is much less ossified / careerist. Results are also much easier and cheaper to verify, which is why the Google Deep Mind results are unlikely to be fraudulent. This is why I claim the ML guys' work is more likely to manifest as a physical threat, in the manner of the "Sorcerer's Apprentice" -- they have roughly that level of spiritual development.
It is likely that the large-scale corruption of science will eventually stop progress on this front as well -- the PC crowd is already beginning to seriously compromise Google. Perhaps this will happen before something truly dangerous gets unleashed; we shall see.
From the demonic perspective, a terrible accident that killed everyone -- caused by well-meaning but silly people -- would represent a strategic defeat, yes? The demonic goal involves a fate much worse than physical death. If this is so, we would expect to see the Establishment aggressively move to suppress creativity in this domain, as it would represent a serious threat to their plans.
-- Robert Brockman
What AlphaZero has accomplished in chess is even more impressive than its Go-playing version. Nevertheless, I don't understand how computers can ever attain consciousness, which many documented near-death experiences demonstrate is independent of brain function, and therefore divine in origin. See, for example, Harvard neurosurgeon Eben Alexander's account, which became a best seller.
AlphaZero is an impressive demonstration of the superior ability of finite state machines to calculate finite state problems, even very large finite state problems, compared to humans.
It is almost as impressive as how much faster a computer can perform integer arithmetic than any human. The difference is so vast that we have entirely ceased even to think about it; we don't even bother measuring computer performance in integer operations, only floating-point operations.
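For scale, here is a crude measurement of the arithmetic gap being described (a sketch of mine, not the commenter's). Even interpreted Python, among the slowest ways to do this, sustains millions of integer additions per second against roughly one per second for an unaided human - and compiled code on modern hardware is thousands of times faster again:

```python
# Crude timing of raw integer additions in an interpreted language.
import time

n = 10_000_000
start = time.perf_counter()
total = 0
for i in range(n):
    total += i                      # one integer addition per loop iteration
elapsed = time.perf_counter() - start

print(f"{n:,} integer additions in {elapsed:.2f} s "
      f"(~{n / elapsed / 1e6:.0f} million per second)")
```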
It has essentially no bearing on strong AI, other than demonstrating that humans are relatively weak at solving finite state problems quickly. It's far less impressive than the ability of a computer to calculate a convincing rendering of a set of information describing a three-dimensional object - multiple times per second, so as to allow animation and interactivity - let alone to store and accurately reproduce vast quantities of detailed information.