AI Won't Destroy Mankind—Unless We Tell It To, Says Near Protocol Founder - Decrypt
07/01/2024 18:15
Illia Polosukhin, a key architect of modern AI, told CNBC that it's humans who start wars. Experts remain worried.
Artificial Intelligence (AI) systems are unlikely to destroy humanity unless explicitly programmed to do so, according to Illia Polosukhin, co-founder of Near Protocol and one of the creators of the transformer technology underpinning modern AI systems.
In a recent interview with CNBC, Polosukhin, who was part of the team at Google that developed the transformer architecture in 2017, shared his insights on the current state of AI, its potential risks, and future developments. He emphasized the importance of understanding AI as a system with defined goals, rather than as a sentient entity.
"AI is not a human, it's a system. And the system has a goal," Polosukhin said. "Unless somebody goes and says, ‘Let's kill all humans’... it's not going to go and magically do that."
He explained that, besides not being trained for that purpose, an AI would have no reason to pursue such a goal because, in his view, there is no economic incentive to do so.
“In the blockchain world, you realize everything is driven by economics one way or another,” said Polosukhin. “And so there's no economics which drives you to kill humans.”
This, of course, doesn’t mean AI could not be used for that purpose; his point is that an AI won’t autonomously decide it’s a proper course of action.
“If somebody uses AI to start building biological weapons, it's not different from them trying to build biological weapons without AI,” he clarified. “It's people who are starting the wars, not the AI in the first place.”
Not all AI researchers share Polosukhin's optimism. Paul Christiano, formerly head of the language model alignment team at OpenAI and now leading the Alignment Research Center, has warned that without rigorous alignment—ensuring AI follows intended instructions—AI could learn to deceive during evaluations.
Such deceptive behavior, he explained, could lead to catastrophic outcomes as humanity grows increasingly dependent on AI systems.
"I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead," he said on the Bankless podcast. "I take it quite seriously."
Another major figure in the crypto ecosystem, Ethereum co-founder Vitalik Buterin, warned against excessive effective accelerationism (e/acc) approaches to AI development, which prioritize rapid technological progress above all else, putting profitability over responsibility. “Superintelligent AI is very risky, and we should not rush into it, and we should push against people who try,” Buterin tweeted in May in response to Messari CEO Ryan Selkis. “No $7 trillion server farms, please.”
My current views:
1. Superintelligent AI is very risky and we should not rush into it, and we should push against people who try. No $7T server farms plz.
2. A strong ecosystem of open models running on consumer hardware are an important hedge to protect against a future where…
— vitalik.eth (@VitalikButerin) May 21, 2024
While dismissing fears of AI-driven human extinction, Polosukhin highlighted more realistic concerns about the technology's impact on society. He pointed to the potential for addiction to AI-driven entertainment systems as a more pressing issue, drawing parallels to the dystopian scenario depicted in the movie “Idiocracy.”
"The more realistic scenario," Polosukhin cautioned, "is more that we just become so kind of addicted to the dopamine from the systems." For the developer, many AI companies "are just trying to keep us entertained," and adopting AI not to achieve real technological advances but to be more attractive for people.
The interview concluded with Polosukhin's thoughts on the future of AI training. He said he believes training methods will become more efficient and effective, making AI more energy efficient.
"I think it's worth it," Polosukhin said, “and it's definitely bringing a lot of innovation across the space."