Vitalik Buterin, co-founder of Ethereum, has voiced concerns about the rapid development of superintelligent AI and the concentration of power within the AI industry. In a recent statement, Buterin cautioned against rushing toward superintelligence and specifically opposed allocating $7 trillion to server farms, a reference to Sam Altman’s ambitious funding goals for AI chip fabrication.
“Superintelligent AI is very risky and we should not rush into it, and we should push against people who try. No $7T server farms plz.”
Buterin advocates for a decentralized AI ecosystem, highlighting the importance of open models that can run on consumer hardware. He argues that such models serve as a crucial hedge against a future where AI value is monopolized by a few central entities, potentially leading to a scenario where a limited number of powerful servers mediate most human thought. According to Buterin, this decentralized approach poses a lower risk of catastrophic outcomes than corporate or military control of AI.
Furthermore, Buterin supports a regulatory framework that distinguishes between “small” and “large” AI models, with the latter subject to more stringent regulations. He notes that while models with 405 billion parameters are beyond the reach of consumer hardware, models with 70 billion parameters are not; he runs such models himself. However, he expresses concern that many current regulatory proposals could, over time, effectively classify all models as “large,” stifling innovation and the development of smaller, open-source AI models.
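A rough back-of-the-envelope estimate helps illustrate where that dividing line falls. The sketch below is not from Buterin’s post; the quantization levels and the overhead multiplier are illustrative assumptions, but the linear scaling of weight memory with parameter count is the relevant point.

```python
# Back-of-the-envelope estimate of the memory needed just to hold model weights.
# The quantization levels and the ~10% runtime-overhead factor are illustrative
# assumptions, not figures from Buterin's statement.

GIB = 1024 ** 3

def weight_memory_gib(params: float, bits_per_param: float, overhead: float = 1.10) -> float:
    """Approximate GiB required to store `params` weights at `bits_per_param`,
    with a rough multiplier for runtime overhead."""
    return params * bits_per_param / 8 / GIB * overhead

for name, params in [("70B", 70e9), ("405B", 405e9)]:
    for label, bits in [("fp16", 16), ("4-bit", 4)]:
        print(f"{name} @ {label}: ~{weight_memory_gib(params, bits):.0f} GiB")

# 70B  @ fp16 : ~143 GiB  -> multi-GPU server territory
# 70B  @ 4-bit: ~36 GiB   -> feasible on a high-end workstation or dual consumer GPUs
# 405B @ fp16 : ~830 GiB  -> data-center hardware only
# 405B @ 4-bit: ~207 GiB  -> still well beyond typical consumer hardware
```

Under these assumptions, a quantized 70-billion-parameter model fits on hardware an individual can realistically own, while a 405-billion-parameter model does not, which is the practical distinction Buterin draws between the two tiers.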
Buterin’s comments come amid significant developments in the AI industry, including Altman’s controversial efforts to secure massive funding for AI chip projects and his temporary ouster from OpenAI, which exposed deep divisions within the AI community over the pace and direction of development. Altman’s push for rapid advancement has drawn both support and criticism, and several recent high-profile departures from OpenAI reflect broader debates about the future of AI regulation and governance.
According to Buterin, fostering a diverse and decentralized AI landscape is essential to mitigate risks and ensure that AI’s benefits are widely distributed rather than concentrated in the hands of a few powerful entities. This perspective aligns with his broader vision for technological progress, which prioritizes democratic decision-making and decentralization.