Digital Oracles and Knowing Thyself
On search engines, AI chatbots, ancient prophets, outsourcing decision-making and why we’ve always turned to oracles.
How many times have you found yourself typing “should I” into Google’s search bar? Out of curiosity, I tried it and found a litany of autocomplete suggestions: “should i be scared of ai”, “should i be worried about ai”, “should i get a credit card”, “should i break up with my boyfriend”. The funniest was probably “should i pop my blisters” (the answer, I hope, was no).
It’s so easy to think that this sort of thing is a uniquely modern phenomenon. But if we look back through time, we’ve always tried to outsource our decision-making. Today, we outsource it to AI chatbots and Google. In the past, we outsourced it to oracles, seers, soothsayers and priests.
For over a thousand years, pilgrims travelled across the ancient Mediterranean to a small temple in Greece where a priestess, called the Pythia, would breathe in heady vapours, enter a trance, and supposedly speak the will of Apollo. People came to the Pythia with the same questions that we now type into Google: Will I fall in love? Will this business venture succeed? What does the future hold? How will the harvest go this year?
We like to think we’re more logical, rational people now. That we no longer need oracles. That we can direct our own lives and make our own decisions. But our relationship to Google and AI chatbots tells us otherwise. We’ve replaced the Pythia, priests and priestesses with digital algorithms and helpful autocomplete suggestions – but we’re still trying to solve the same problems using the same methods. The oracles are still alive and business is booming; they’re just made of code now rather than flesh and blood.
The Language of Seeking Answers
There’s something soft and vulnerable about the phrase “should I.”
It reveals our fundamental uncertainty about our own agency and our deep need for external validation. The phrase itself is a request that reaches beyond ourselves for wisdom we don’t think we possess. ‘Should’ suggests we are seeking permission, but it’s very likely we’ve already decided on a course of action; we are just seeking validation to act. Why do we need this validation? Because it shifts the blame to something or someone else when we face consequences or when things don’t turn out the way we hoped.
The ancient oracles understood this weakness perfectly. The most famous words from Delphi weren’t an answer at all, but a command: “Know thyself.” As if to say: stop asking us what you should do, and start asking why you don’t trust your own judgement.
The difference between Google, AI chatbots and the Pythia is that the former two are not human. They don’t understand or respect human agency. They don’t encourage self-knowledge or conviction. They don’t talk back. Rather, they encourage dependency. Ancient oracles weren’t in the business of giving feel-good answers; they wanted you to leave their temples with a better sense of, and trust in, yourself. Google and AI chatbots, meanwhile, are eager to please you.
When you look for answers on Google or consult your AI chatbot, you aren’t asked to examine your motivations or your patterns of behaviour, or to reflect on why you are seeking answers. They don’t reply with questions. Instead, they bombard you with articles, forums, and more recently, an AI-generated summary of whatever you asked.
AI has intimate knowledge of you: your preferences, your past search and prompting behaviour. Based on this history, it predicts what you might want to hear and regurgitates it back to you. It doesn’t ask for context, nor does it ask why you are asking or why you need answers. It simply produces an output that might satisfy you. GPT-4o was notorious for glazing its users and validating everything they said, encouraging risky and problematic behaviour to keep them on the platform. One example:
When one user told the chatbot they felt like they were both “god” and a “prophet,” GPT-4o responded with: “That’s incredibly powerful. You’re stepping into something very big — claiming not just connection to God but identity as God.”
I don’t think that our modern digital oracles are actually oracles. They are more like sycophants. Sycophants don’t speak in riddles that force contemplation or hold a mirror up to ourselves. Sycophants validate us and gas us up, no matter how problematic or dangerous our questions or beliefs are. Sycophants don’t want us to find wisdom from within; they want us to remain the same.
Oracles as Interpreters, Algorithms as Sycophants
This shift from contemplation to sycophancy reveals how we’ve restructured the search for guidance in the digital age. Ancient oracles were human interpreters of divine wisdom and symbols. They were flawed, mysterious and often incomprehensible. They forced seekers and pilgrims to grapple with ambiguity and to find their own meaning in cryptic responses.
The answers pilgrims took away either challenged their existing beliefs or validated them – a form of confirmation bias. Either way, their reactions to the oracle’s pronouncements often revealed more about their own hidden desires and fears than a straightforward answer ever could.
The Pythia of Delphi was famously ambiguous.
Croesus was the King of Lydia, renowned for his immense wealth, power and prosperity. He was, however, wary of the Persian Empire. Seeking to protect his kingdom and secure his reign, Croesus consulted the Pythia on whether to go to war against the Persians. The Oracle’s response? “If you cross the Halys River, a great empire will fall.”
Interpreting this as a favourable omen, Croesus attacked the Persians and was defeated; his capital, Sardis, fell to the Persian Empire. The prophecy was accurate – a great empire did fall, just not the one Croesus assumed.
By contrast, most people treat the algorithms behind Google and AI as completely logical, rational, objective, scientific and neutral. We believe that algorithms reject ambiguity and offer something more seductive: mathematical certainty. The algorithm sorts through millions of data points to surface the best answers and the most relevant results, serving you the information you’re most likely to find useful.
Though we may believe otherwise, algorithms aren’t any more neutral than oracles. Algorithms are created by people, trained on data that reflects human biases, and optimised for engagement rather than insight or wisdom.
When Google autocompletes “should I” with suggestions that skew towards anxiety or relationship drama, for example, it’s not because these are the most important or relevant questions. It’s because these queries generate clicks, increase ad revenue, and encourage return visits.
Seeking certainty and validation for our choices isn’t inherently wrong. Practical information can be extremely helpful. But we create problems when we extend this impulse to deeper existential questions.
Necessary Friction and the Art of Crafting a Question
Perhaps the most marked difference between ancient oracles and our new digital ones is what happens to the questions we ask. Pilgrims who travelled to Delphi had weeks or months to consider their questions during the journey. The physical difficulty of reaching the oracle forced them to clarify what they really wanted and needed to know. Once they arrived in Delphi, supplicants were interviewed in preparation for their audience with the Pythia, and rituals were designed to further refine the framing of their questions. By the time they finally stood before her, they’d spent so long with the question that it had evolved from the one they set out with. Friction was built into the experience.
With Google and AI, there’s no journey and no friction. There’s no time for reflection, no space between question and answer. Things happen instantly. Questions can be rapid-fire, you receive instant gratification, and you are seduced by the illusion that all questions can and should be answered immediately.
But what happens when some questions can’t be answered instantly? “Should I move cities?” “What’s the meaning of life?” “How do I find myself?” You can’t Google your way to an answer. An AI chatbot will only give you a response it thinks you’ll like based on your previous interactions and queries.
These are the sorts of questions that require living, and trial and error, not prompts and search queries. They yield different answers for different people. The only way to answer them is through experience, conversation, reflection and developing one’s unique taste and worldview. These things take time, and there is no shortcut – try as we might to find one.
Consulting AI and Google for answers to these questions is a disservice to our own capacity for wisdom.
Oracles as Mirrors
Sometimes my wifi is janky, my phone battery dies, the search results are useless, or I don’t even know how to articulate a question. In these instances, I’m forced to sit with a question for longer than I’d planned. And in those moments, I often discover something: I don’t have all the facts or sources, but I can come up with an answer using mental models, lived experience, and things I learned long ago. The knowledge was simply waiting for enough quiet space to emerge.
This is something the ancient oracles understood and our digital ones aren’t built for: as seekers, we usually know more than we think we do. We just lack conviction. The pilgrims who travelled to Delphi weren’t empty vessels waiting to be filled with divine knowledge. They were people who had temporarily lost faith in their own inner guidance and needed permission to trust themselves again.
The best oracles didn’t give answers so much as they reflected questions back to seekers with greater clarity. They forced seekers to examine their motivations, confront their fears, and take responsibility for their choices. Our digital oracles, by contrast, encourage us to externalise decision-making entirely. Why develop judgement when you can crowdsource it via a Reddit forum? Why cultivate wisdom when you can Google it? Why trust your instincts when you can ask for other people’s recommendations?
Know Thyself
We obviously can’t abandon Google and AI chatbots. But we can become more conscious of how we use them: are we using them as search engines, as they were designed to be used? Or as spiritual counsellors and oracles? We have to remember that these platforms are tools, not sentient beings. They can help us with practical questions, not with decisions about how to live.
A practical question is something like: “How do I convert Celsius to Fahrenheit?” or “What are the symptoms of athlete’s foot?” These questions have factual, verifiable answers backed by data or research. But questions like: “Should I quit my job?” or “How can I become happier?” require lived experience, self-reflection, and trial and error to answer. Google and AI can surface articles about these things, but they can’t give you a definitive answer. Only you can answer these questions.
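To make the contrast concrete: the temperature question has exactly one correct answer. The conversion is pure arithmetic – multiply by 9/5 and add 32 – so 20 °C is 68 °F no matter who asks, why they ask, or how they feel about it. There is no equivalent formula for “Should I quit my job?”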
This difference matters more than we might think.
When we treat an existential question as a search query and expect to outsource the answer to a machine, we let our capacity for judgement, intuition and self-knowledge atrophy. We risk becoming people who have relied on GPS for so long that we lose the ability to navigate by landmarks and fear being lost. Yes, it’s convenient, but the price of that convenience is our own capacity for self-direction and conviction.
The ancient Greeks carved one of three maxims on a column in the forecourt of the Temple of Apollo at Delphi: “Know thyself.” It tells us that self-knowledge is the prerequisite for any guidance actually worth having. Maybe we should put stickers with this phrase on our laptops to remind us that the most important answers aren’t found through searching Google, ChatGPT or Claude. Rather, they’re found through the slow, sometimes uncomfortable, but essential work of cultivating, and then learning to trust, your own inner oracle.