Leopold Aschenbrenner, a former OpenAI employee, recently released a document, "Situational Awareness: The Decade Ahead," that could shape our understanding of the future of Artificial General Intelligence (AGI). Aschenbrenner, who was dismissed after OpenAI alleged he had leaked internal information, shares his vision for the decade ahead, describing how the journey to AGI may unfold and what it means for the world.
Aschenbrenner argues that the race to AGI is intensifying, with compute clusters scaling from billions toward trillions of dollars. He predicts that AI systems will outpace many college graduates by 2025-2026 and become smarter than any human, true superintelligence, by the decade's end. This leap would not only transform industries but also trigger unprecedented national security measures.
The journey from GPT-4 to AGI is mapped out across a series of sections, each detailing steps toward superintelligence. Aschenbrenner believes that by 2027, models will be capable of doing the work of AI researchers and software engineers. He supports this prediction with a graph of the roughly exponential growth in effective compute from GPT-2 to GPT-4, extrapolated forward.
One of the most striking aspects of Aschenbrenner's document is the predicted pace of capability gains. From GPT-2's preschooler-level abilities to GPT-4's smart-high-schooler sophistication, the progression is clear. By 2027, AI systems could handle complex research and engineering tasks, setting the stage for recursive self-improvement and the emergence of superintelligence.
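To make the "counting the OOMs" logic concrete, here is a minimal Python sketch of the extrapolation. The growth rates are illustrative assumptions in the spirit of Aschenbrenner's estimates, not figures quoted from the document:

```python
# Toy "counting the OOMs" extrapolation. The rates below are assumed for
# illustration, not Aschenbrenner's exact figures: effective compute grows
# by a fixed number of orders of magnitude (OOMs) per year.

PHYSICAL_OOM_PER_YEAR = 0.5      # assumed hardware/cluster scale-up
ALGORITHMIC_OOM_PER_YEAR = 0.5   # assumed algorithmic efficiency gains

def effective_compute_ooms(years: float) -> float:
    """Total OOMs of effective compute gained after `years` years."""
    return years * (PHYSICAL_OOM_PER_YEAR + ALGORITHMIC_OOM_PER_YEAR)

# GPT-4 (2023) to a hypothetical 2027 model: four years of scaling.
gained = effective_compute_ooms(4)
print(f"OOMs gained by 2027: {gained:.1f}")    # 4.0 OOMs
print(f"Multiplier: {10 ** gained:,.0f}x")     # a 10,000x jump in effective compute
```

Under these assumptions, a 2027 model sits roughly four orders of magnitude of effective compute beyond GPT-4, which is the kind of GPT-2-to-GPT-4-sized jump the document argues we should expect again.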
Algorithmic efficiencies play a crucial role in this growth. Aschenbrenner highlights dramatic improvements in efficiency that have reduced the cost of reaching certain benchmarks by orders of magnitude. For instance, he notes that achieving roughly 50% accuracy on the MATH benchmark became about 1,000 times cheaper in less than two years.
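That headline number is easier to reason about as an annualized rate, as this small calculation shows (the 1,000x figure comes from the claim above; the conversion is plain arithmetic):

```python
import math

# Convert "1,000x cheaper in two years" into an annualized rate.
total_gain = 1_000    # cost-reduction factor cited above
years = 2

ooms = math.log10(total_gain)      # 3.0 orders of magnitude
ooms_per_year = ooms / years       # 1.5 OOM per year
annual_factor = total_gain ** (1 / years)

print(f"{ooms:.1f} OOMs over {years} years = {ooms_per_year:.1f} OOM/year")
print(f"Equivalent to roughly {annual_factor:.0f}x cheaper each year")  # ~32x
```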
Aschenbrenner also discusses "unhobbling" AI models: removing the constraints that hold current models back and unlocking their latent capability. He believes that future models, with longer context windows, better tool use, and post-training improvements, will exhibit capabilities far beyond current expectations.
A critical issue raised is the security of AGI research. Aschenbrenner warns that current AI labs do not treat security as a priority, leaving them vulnerable to espionage and the theft of model weights and algorithmic secrets. He stresses the need for robust security measures to protect these secrets, which he sees as crucial for maintaining a competitive edge and ensuring safe development.
Aschenbrenner outlines a path to superintelligence, starting with proto-automated researchers in 2026-2027 and culminating in fully autonomous AI researchers by 2029. Once AI systems can do AI research themselves, he argues, progress compounds: millions of automated researchers could make breakthroughs at an unprecedented pace.
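The dynamic behind that claim can be illustrated with a deliberately crude toy model. This is not Aschenbrenner's model; every parameter here is a hypothetical stand-in, chosen only to show how a feedback term makes progress compound:

```python
# Toy "intelligence explosion" dynamic: AI research output feeds back into
# the rate of AI research. All parameters are hypothetical illustrations.

def simulate(years: int, base_rate: float = 0.5, feedback: float = 0.3) -> None:
    """base_rate: OOMs of algorithmic progress per year from human researchers.
    feedback: extra rate per cumulative OOM, from AI systems aiding research."""
    progress = 0.0
    for year in range(1, years + 1):
        rate = base_rate + feedback * progress   # AI labor accelerates research
        progress += rate
        print(f"Year {year}: +{rate:.2f} OOM/yr, cumulative {progress:.2f} OOMs")

simulate(5)
```

Even with these modest assumed parameters, the yearly rate nearly triples within five years; the point is the shape of the curve, not the specific numbers.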
The document explores the profound implications of superintelligence, including potential military revolutions and economic transformations. Aschenbrenner warns that whoever controls superintelligence could achieve global dominance, making the race to AGI a matter of national security.
Ensuring the alignment of superintelligent systems is another major challenge. Aschenbrenner emphasizes that reliably controlling AI systems much smarter than humans is an unsolved problem, one where failure during a rapid intelligence explosion could be catastrophic.
Aschenbrenner's vision suggests that the next decade will be the final industrial revolution, with AGI transforming every aspect of society. The document serves as a call to action for governments, researchers, and the public to prepare for the profound changes ahead.
Leopold Aschenbrenner's insights into the future of AGI are both fascinating and alarming. As we move closer to superintelligence, strategic planning, robust security, and ethical consideration become more critical than ever. The next decade will shape the future of humanity, and the stakes have never been higher.