Searching for oracles to take the responsibility of judgment off our shoulders is as old as humankind itself. Whether it is the ergot-infused Oracle at Delphi, reading the tea leaves, the stars and their astrologies, the Tarot, ayahuasca and a tribe’s totem animal guides, or the preachings of teachers such as Nostradamus, our obsession with knowing the future appears innate.

While the 20th century saw many black swans, such as the atom bomb, the fall of the Soviet Union, 9/11, and the rise of the internet, the tendency to extrapolate the future from the past is always challenged by oracles, who regularly predict discontinuities. The past century turned to ‘Great Men’ – mainly captains of industry and economists – to set the tone for future developments. We drummed up a sort of hero-worship that made certain savvy business gurus into actual gurus – imagined authorities who could do our thinking for us.

Such outsourcing of responsibility is understandable, especially when societal change accelerates and uncertainty mounts. If stability is said to be the highest priority for businesses and governments (not to mention us regular mortals), then knowing what is coming down the pipeline, when increasingly weird surprises accumulate rapidly, is a boon. Yet these oracles, like many oracles of yore, didn’t have the best track record. Like stock price predictions, most advice, even from the Warren Buffetts of the world, usually only becomes actionable after the action. That is to say, any advice worth taking which is publicly available will already have been acted upon by the time it reaches you. Any algorithm useful to your future will already have been seized upon by those who control the computers and have faster processors. Any AI capability has already been exploited before being released (‘beta’ or otherwise) to the masses. The human need to feel like an ‘insider,’ which comes from evolutionary anthropology and the perks of being part of the ‘inner circle,’ drives us in a commercial age to irrationally hold onto the belief that ‘exclusive’ groups we can buy our way into may provide us with specialized knowledge that we can then exploit for personal gain, over and against those poor dupes who weren’t as quick to the draw.

Enter AI. Rather than human oracles (the ‘Great Men’), or natural ones, like the winds or the stars, we now turn to oracles of our own making – a poor man’s God. Generative AI – giving us answers to medical problems, social problems, educational problems, even hazarding answers to science and politics – has been given carte blanche in our current globalized tech-hype milieu. Tech has become the driver of norms ever since the internet, web 2.0, and smartphones became the new compasses by which we navigate, and the tech companies controlling AI are now happy to lay down their own agency as developers, marketers, and manufacturers, and declare that AI is running the show.

All hail our new AI overlords! If they weren’t of human creation, would we so enthusiastically embrace them? If AI were alien, would we still let it be our oracle?

Of course, we know that these same companies are actively researching and developing AI capacity (with or without ‘alignment’) as fast as they can to secure patents and first-to-market advantage. And, in our bankrupt competitive way of doing everything, from making bombs to entrancing kids on social media, why wouldn’t they also join the fray? After all, maybe AI can help drown out the genuine things that help people, leaving us so disoriented that we have no more resistance against the shoddy lands of AI non-information.

The problem is taking the AI oracle seriously, at face value, or even as an oracle at all, rather than as yet another tool in the tool belt. Just as ‘the wisdom of the crowd’ can be easily exploited by trolls, why shouldn’t the ‘wisdom of machine learning’ be taken with equal skepticism? We tend to regard AI as Johnny Depp’s character Will Caster uploaded to the internet in Transcendence – the familiar trope of a unified intelligence, the familiar god figure. In fact, AI is mostly just snippets of human knowledge remixed. The AI oracle is less a Bach and more a DJ scratching (the surface) and remixing pat responses to our increasingly complex problems.

So, now that WordPress has a built-in AI blog generator (I guess we’re all just reading gobbledygook now, apparently), I asked it to describe the premise of this article, which it got completely wrong:

The Fallacy of AI as an Oracle

Treating AI as an oracle, capable of providing infallible guidance or absolute truth, is indeed an idea fraught with inherent fallacies and limitations. While AI has made significant strides in recent years, it is crucial to acknowledge its strengths and limitations to avoid falling into the trap of unwarranted certainty and overreliance.

1. Unpredictability: AI systems, including language models, operate based on patterns and data they have been trained on. They do not possess genuine understanding or consciousness. As a result, relying solely on AI for decision-making can lead to unpredictable and potentially flawed outcomes.

2. Lack of Contextual Understanding: AI lacks the ability to comprehend complex real-world nuances, cultural context, and emotions. This limitation can hinder its capacity to provide meaningful and holistic guidance in many situations that require subjective or contextual understanding.

3. Ethical Considerations: Treating AI as an oracle raises challenging ethical questions. AI systems are only as unbiased as the data they are trained on. If not carefully designed and trained, AI can perpetuate biases, exacerbate societal inequalities, or even make morally questionable decisions.

4. Limited Perspective: AI lacks the human capability of experience, intuition, and moral reasoning. It cannot fully understand the intricacies of human life, emotions, and values. Hence, it may provide solutions that are logical but fail to consider the broader implications or ethical dimensions of a situation.

5. No Guarantee of Accuracy: AI systems are not infallible. They can make mistakes, generate inaccurate information, or provide flawed advice. Relying solely on the output of an AI system without critical thinking and human judgment can lead to misguided decisions.

While AI can be a valuable tool for generating insights, providing suggestions, or assisting in decision-making, it is essential to recognize its limitations. Combining AI’s capabilities with human judgment and critical thinking is the key to harnessing its potential in a responsible and meaningful way.

NOTE: The above discussion refers to current capabilities of AI and may not encompass any hypothetical future advancements.

This AI-generated self-reflection fails to point out the problem of oracles in general, or the human need for someone to make decisions for us. It gives a good left-brain answer while being unable to step outside and examine the right-brain holism. Even though it does mention the holism problem, it frames it as a limitation of capability rather than as an issue of judgment. Judgment is the fundamental faculty humans wish to outsource (see Eichmann in Jerusalem) – and the one we never can.

By relying on yet another oracle – and given the dizzying rapidity with which many institutions are jumping with both feet into this bottomless experiment – the feedback loops of such reliance, if not tempered by wisdom, may lead humanity even further astray.

The deus ex machina solution may never become what we want it to be. But for those for whom the burden of judgment and the weight of responsibility are more than they can handle, it will surely be an expedient for not exercising judgment.

As AI researcher Stuart Russell has remarked: “AI researchers must, like nuclear physicists and genetic engineers before them, take seriously the possibility that their research might actually succeed and do their utmost to ensure that their work benefits rather than endangers their own species.”