If We’re Living in a Simulation, Can AI Help Set Us Free?

Roman Yampolskiy and Lex Fridman discuss how this world could be a simulation—and how an artificial superintelligence could help us jailbreak the system.

Are we living in a simulation? Roman Yampolskiy, a noted AI safety researcher and author of "AI: Unexplainable, Unpredictable, Uncontrollable," is pretty sure we are. “I know never to say 100%, but [the chances of us living in a simulation are] pretty close to that,” he told Lex Fridman during a recent interview for Fridman's podcast.

A superintelligent AI could reasonably be expected to help us know for sure, he said—and even help us escape. On the other hand, Yampolskiy put the chance that AI leads to human extinction at "99.9% within the next hundred years."

Yampolskiy’s arguments hinge on the rapidly advancing capabilities of artificial intelligence and its potential to reshape our understanding of existence. He contends that the development of superintelligence poses existential risks, but also offers unprecedented opportunities, including breaking free of our simulated reality.

“That would be something I would want super intelligence to help us with,” he said.

Superintelligence is an AI that has surpassed the capabilities and boundaries of human intelligence. It is the next step after Artificial General Intelligence (AGI), which can learn as a human mind does; a superintelligence would be exponentially more powerful still, unbound by the limits of biological brains.

A number of prominent researchers have hypothesized that we live in a simulated universe. The idea was famously posited in 2003 by Nick Bostrom of the University of Oxford, who argued in a paper (“Are You Living in a Computer Simulation?”) that a sufficiently advanced civilization could plausibly exist whose technology allows it to create “self-conscious” beings within a simulacrum.

That would be us. Imagine if the Sims in the popular game were actually sentient, and you’ve got the right idea.

In his paper, “How to Escape from the Simulation,” Yampolskiy starts from the supposition that we do, in fact, live in a simulated universe. If that’s so, he suspects that as AI reaches superintelligence, we might find a way to break free from the constraints of our own world.

“We used AI ‘boxing’ as a possible tool for controlling AI,” he said, referring to the notion that an AI will always find ways to escape from any “box” created to contain it. “We realized AI will always escape. That is a skill we might use to help us escape from our virtual box if we are in one.”

Yampolskiy suggests that the intelligence of our simulators plays a crucial role in this scenario: "If the simulators are much smarter than us and the superintelligence we create, then probably they can contain us, [because] greater intelligence can control lower intelligence, at least for some time.

“On the other hand, if our superintelligence somehow, for whatever reason—despite having only local resources—manages to ‘foom’ two levels beyond it, maybe it’ll succeed," he added. (“Foom” is AI safety shorthand for a sudden, runaway leap in capability through recursive self-improvement.)

So just as jailbreakers play a cat-and-mouse game with developers, we humans would use a superintelligent AI to try to find an “exploit” in the cosmological system created by our God—or our simulator, to be more exact.

In his research paper, Yampolskiy explains that one way to escape our simulation is to “create a simulated replica of our universe, place an AGI into it, watch it escape, copy the approach used or join the AGI as it escapes from our simulation.” This seems to be the only approach that doesn’t involve interacting with our creators or breaking the laws of our own reality.

Can AI truly break free?

However captivating Yampolskiy's vision is, it raises a critical philosophical question: Can AI truly break free from the simulation if the simulation was not designed to allow it?

In his book “How Reason Can Lead to God,” the philosopher Joshua Rasmussen argues that a creation cannot possess the same nature as its creator, which makes it all but impossible for a simulated AI to share the essence and nature of its simulator. Even if it surpassed humans in intelligence, then, it would still be bound by the laws of the simulated reality in which it was created.

Rasmussen, in his philosophical explorations, argues that reason can lead to the acknowledgment of a necessary foundation for reality. His central thesis is that reality cannot be infinitely regressive—there must be a foundational entity that is self-sustaining and unbounded. This foundational entity would be capable of creating things that can, in their own nature, be linked to it as an ultimate origin.

The "Perfect Argument" continues to intrigue me:
1. There is a fundamental reality.
2. Whatever is imperfect is not fundamental.
3. Therefore, fundamental reality is perfect (max-great).

On behalf of (1), there is The Dependence Argument: https://t.co/AUvIkD9Pq1.

On behalf of… pic.twitter.com/mQB2iqPsv6

— Joshua Rasmussen (@worldviewdesign) April 16, 2024
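
The tweet’s reasoning is a two-premise syllogism, and its structure is simple enough to machine-check. Below is a minimal sketch in Lean 4 (our own illustration, not Rasmussen’s), where the predicates `F` for “is fundamental” and `P` for “is perfect” are hypothetical placeholders, and premise (2) is read as “whatever is not perfect is not fundamental”:

```lean
-- Hypothetical formalization of the "Perfect Argument" syllogism.
-- F x: "x is fundamental"; P x: "x is perfect" (illustrative predicates only).
example {α : Type} (F P : α → Prop)
    (h1 : ∃ x, F x)              -- (1) there is a fundamental reality
    (h2 : ∀ x, ¬P x → ¬F x) :    -- (2) whatever is imperfect is not fundamental
    ∃ x, F x ∧ P x :=            -- (3) therefore, fundamental reality is perfect
  match h1 with
  | ⟨x, hF⟩ =>
    -- Contraposition: if x were imperfect, (2) would make it non-fundamental,
    -- contradicting hF; classical reasoning then yields P x.
    ⟨x, hF, Classical.byContradiction fun hP => h2 x hP hF⟩
```

The proof is just contraposition applied to a witness of premise (1); it says nothing, of course, about whether the premises themselves are true.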

Think of a series of nested Russian dolls. Each doll can fully understand the dolls inside it but can't comprehend the larger dolls it's nested within.

Rasmussen's careful consideration of objections and methodical reasoning provides a robust framework for understanding the limits of created beings. Just as humans cannot transcend their created nature to become gods, he posits, a simulated AI cannot independently choose to end the simulation if the simulators have not provided the conditions for that possibility.

On this view, the foundation is fundamentally different in kind from the beings it creates, leaving an unbridgeable gap between creator and created.

This philosophical stance directly challenges Yampolskiy's hypothesis. If we accept Rasmussen's arguments, then even a superintelligent AI, being a creation within the simulation, would be inherently limited by the parameters set by its creators (the simulators). It might become aware of its simulated nature if the creators allowed for such awareness, but it would be fundamentally incapable of transcending the simulation entirely.

Yampolskiy offers a response to this objection.

In his paper, he describes agents that first probe their world until they uncover information about, and exploitable glitches in, the nature of their reality. “Exploiting the glitch, agents can obtain information about the external world and maybe even meta-information about their simulation, perhaps even the source code behind the simulation and the agents themselves,” Yampolskiy wrote.

After this, he explains that agents could find ways to interact with their simulators in unconstrained ways until they “find a way to upload their minds and perhaps consciousness to the real world, possibly into a self-contained cyber-physical system of some kind.”

In other words, though we could not “break free” from our simulation, we could potentially interact through representations of ourselves—perhaps we would be aware of ourselves as characters created by a superior, unreachable mind. Whether that’s a good thing or not is an open question.

Meanwhile, Yampolskiy doesn’t give a specific reason why he believes AI will obliterate humanity. But he considers it inevitable as superintelligence becomes self-aware.

“We don't get a second chance. With cybersecurity, somebody hacks your account, what's the big deal? You get a new password, new credit card, you move on,” he said. “Here, if we're talking about existential risks, you only get one chance.”
