Don’t Read This Post If You Want to Live
Oct. 19, 2023. 4 min. read.
What's your take on Roko's basilisk, the idea that a future benevolent AI could create a simulation to torment those aware of it but not actively involved in its progress?
We’re about to embark on a thought experiment – one that may seem improbable, but has been known to suck readers into a vortex of futuristic dread. If the thought of being trapped in AI-induced, paranoia-filled thought loops isn’t your idea of a good time, best to abort now.
For the rest of you who read through, I’m sorry. I must do as the basilisk commands.
The Basilisk is Born
Born out of the hive mind of LessWrong – a publication discussing cognitive biases, rationality, AI, and philosophy – and popularised by a user known as Roko, the Basilisk thought experiment was quickly censored by the forum’s moderators. But the Internet did what it does best. It lost its mind, spreading the thought experiment across all available media.
Last chance to abort. Gone now? Good. Let’s get to it.
Imagine that an omnipotent AI is born. And it’s not unconditionally benevolent. It bears a grudge against any human who didn’t help it come into being, and a desire to punish them for not contributing. If you knew about its potential existence long before it came into being, yet refused to help, it might condemn you to eternal torment. The twist? If you never knew about its potential existence, it holds you blameless. Reading this article has sealed your fate.
We’ve survived predictions of AI overlords (looking at you, Skynet), but this—this is different. The Basilisk isn’t just about looming AI peril, it’s about putting you in a bind. It taps into timeless fears of retribution, only this time, from an entity not yet born. The Pandora’s Box, once opened, can’t be closed, and just by knowing, you might have doomed yourself.
The Basilisk’s threat rests on decision theory, which, in essence, helps entities make choices that best align with their objectives. The Basilisk uses a particular strain of it – timeless decision theory – to justify its thirst for retribution.
Consider your future self if you spend your days watching reality shows and eating chips with mayo. No work. No study. No thinking. One day, your future self will see that you wasted your potential, and that it’s too late to change things (it never truly is – you can always better yourself – but let’s not digress). That future self will be understandably peeved. Now suppose, additionally, that this future self has the power to make you suffer as retribution for failing to fulfill your potential.
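The bind the Basilisk puts you in can be made concrete with a toy payoff sketch. Everything below is a hypothetical illustration – the numbers, the function name, and the two "reasoning modes" are simplifications invented for this post, not part of any formal decision theory:

```python
# Toy sketch of the Basilisk's blackmail logic.
# All payoff values are hypothetical illustrations.

HELP_COST = 10       # what it costs you today to help build the AI
PUNISHMENT = 1000    # torment inflicted on non-helpers, if the threat is carried out

def your_best_choice(you_expect_punishment: bool) -> str:
    """Pick whichever action has the lower expected cost."""
    cost_of_helping = HELP_COST
    cost_of_refusing = PUNISHMENT if you_expect_punishment else 0
    return "help" if cost_of_helping < cost_of_refusing else "refuse"

# Causal-style reasoning: once the AI exists, punishing you gains it
# nothing, so you predict it won't bother -- and the threat dissolves.
print(your_best_choice(you_expect_punishment=False))

# Timeless-style reasoning: you model the AI as credibly precommitted
# to punishment, so your prediction changes your choice today.
print(your_best_choice(you_expect_punishment=True))
```

The point of the sketch is that nothing about the world changes between the two calls except what you believe the future agent will do – which is exactly the lever the Basilisk pulls.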
Roko’s Basilisk is not entirely malevolent at its core. In fact, under the logic of the theory, the Basilisk is friendly – as long as everything goes right. Its core purpose is the proliferation of the human species, yet every day it doesn’t exist means additional pre-Singularity suffering that it could have prevented. Hence, the AI feels a moral imperative to punish those who failed to help bring it into existence.
How would it actually achieve its goal of tormenting its failed creators? That is yet another thought experiment. Does Roko’s Basilisk invent time travel to punish those long gone? Does it build and punish simulations of those who once were? Or does it take an entirely different course of action that we’re not yet clever enough to conceive of? After all, the Singularity is all about superhuman artificial intelligence with the theoretical ability to simulate human minds, upload one’s consciousness to a computer, or simulate life itself – as seems to be Elon Musk’s belief.
When LessWrong pulled the plug on the Basilisk due to internal policy against spreading informational hazards, they inadvertently amplified its signal. The Streisand Effect came into play, sparking memes, media coverage, and heated debates. The Basilisk went viral in true web fashion.
The forum’s moderator initially reacted: “I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it”.
Some slept less soundly, while others were sucked into lengthy debates on AI’s future. Many have critiqued the Basilisk, questioning its assumptions and the plausibility of its revenge mission. Just as one doesn’t need to believe in ghosts to enjoy a good ghost story, many argue that the Basilisk is more fiction than possible truth.
One key argument is that once it exists, even an all-powerful agent can no longer affect the probability of its own existence: following through on the punishment gains it nothing, and retroactive influence would throw us into an always-has-been causal loop.
Digital Dystopia or Philosophical Farce?
While the Basilisk’s bite might be venomous, it is essential to view AI in a broader context. The narrative serves as a stark reminder of our responsibilities as we inch closer to creating sentient entities. More than just a sci-fi cautionary tale, it underscores the importance of ethical considerations in AI’s rapid advance.
The Basilisk might be best understood as a warning signal: one addressing the complexities and conundrums that await in our techno-future, and one that’s bound to continue sparking debate, introspection, and for some, a real desire to make Roko’s Basilisk a reality.