By Rich Washburn

Straight Lines on a Logarithmic Scale: All Signs Point to an Intelligence Explosion



In the world of artificial intelligence (AI), we're not just inching toward a revolution—we're accelerating at breakneck speed. Some would say we're about to hit an "intelligence explosion," where the growth of AI's capabilities takes off exponentially, leaving us scrambling to keep up. And while that sounds like the plot of a sci-fi thriller, the data suggests we might be living it soon. But what does the data actually say? What do the experts, like Sam Altman of OpenAI fame, predict about our future with AI? And, importantly, what constraints might stop this runaway train? Let's break it down.


Sam Altman, the CEO of OpenAI, recently dropped a blog post that feels equal parts visionary and vague. In typical fashion, Altman paints a picture of AI boosting human capabilities dramatically—think of it as a futuristic renaissance, where AI assistants are ubiquitous, personalized education reshapes learning, and "unprecedented prosperity" becomes the norm. Altman even suggests that we could reach superintelligence within a thousand days, give or take. For those keeping score, that’s anywhere from 2025 to 2027.


But it’s easy to get swept up in lofty promises. So, what are the tangible takeaways from Altman’s predictions?


They boil down to five key points:


  1. AI as Personal Assistants: Already in the works. Think ChatGPT, Siri, or Alexa, but supercharged.

  2. Personalized Education: AI-tailored learning programs are on the horizon. The edtech space is buzzing about this, and it could revolutionize learning at all levels.

  3. Job Transformation: Jobs will evolve, but perhaps more slowly than the alarmists predict.

  4. Infrastructure Demands: The need for vast AI infrastructure—compute power, energy, data centers—is going to skyrocket.

  5. Scaling Works: The boldest statement from Altman? “Deep learning worked.” It's clear the current AI scaling approach—throw more data and compute at the problem—keeps delivering results.


Altman’s blog has stirred the pot, especially with his timeline for superintelligence. But if he's right about the numbers, we could be knocking on the door of artificial general intelligence (AGI) within just a few years.


Data Doesn't Lie


Let’s pivot to the hard data from Epoch AI. Their research focuses on the raw numbers behind the AI explosion, and the graphs they’ve produced are eye-opening. Imagine a straight line on a logarithmic scale, steadily rising over time. That's what Epoch AI's research shows—training compute for AI models has been doubling every six months since 2021. To put that into perspective: from 1951 to 2010, compute for AI went up by 1.4x per year. Since the advent of deep learning, that rate has jumped to 4.1x per year.
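If you want to sanity-check those figures yourself, the arithmetic is simple. Here's a minimal sketch in Python, assuming only the rates quoted above (the numbers themselves are Epoch AI's, not mine):

    import math

    # A six-month doubling time implies two doublings per year, i.e. roughly 4x annual
    # growth, which lines up with the 4.1x-per-year figure for the deep learning era.
    doubling_time_months = 6
    annual_growth = 2 ** (12 / doubling_time_months)
    print(f"Implied annual growth: {annual_growth:.1f}x")   # -> 4.0x

    # The pre-deep-learning rate of 1.4x per year works out to a doubling time of
    # roughly 25 months, about four times slower than today.
    pre_dl_doubling = 12 * math.log(2) / math.log(1.4)
    print(f"Pre-deep-learning doubling time: {pre_dl_doubling:.0f} months")  # -> ~25

In other words, the jump from 1.4x to 4.1x per year isn't a small bump: at the old rate, compute grows about 30x per decade; at the new rate, it grows by a factor of over a million.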



For the data nerds out there, these straight lines tell a powerful story. On a logarithmic scale, a steady, straight line means exponential growth. The data Epoch AI presents isn't some fleeting trend; it's a durable signal that the AI train is far from slowing down. In fact, their numbers show (with a rough projection sketched out after the list):


  • Training compute is doubling every six months.

  • Training costs are doubling every nine months for the largest models, despite cost-saving measures from hardware scaling and energy efficiency.

  • Data used for training is doubling every eight months, though not as fast as compute.
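To make those doubling times concrete, here's a rough, purely illustrative projection. It simply assumes each trend continues unchanged for five years from an arbitrary baseline of 1.0; this is my back-of-the-envelope sketch, not Epoch AI's own forecast:

    # Illustrative only: compound each quantity by its reported doubling time for 5 years.
    doubling_months = {
        "training compute": 6,
        "training cost (largest models)": 9,
        "training data": 8,
    }

    years = 5
    for name, months in doubling_months.items():
        growth = 2 ** (12 * years / months)
        print(f"{name}: ~{growth:,.0f}x after {years} years")

    # -> training compute ~1,024x, training cost ~102x, training data ~181x

The gap between the roughly 1,000x compute curve and the roughly 100x cost curve is the efficiency gain discussed next: over that window, you're getting about an order of magnitude more compute per dollar.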


So why aren't costs climbing as fast as compute? As NVIDIA's CEO Jensen Huang has famously noted, economies of scale are kicking in: larger AI clusters and data centers are getting more efficient, trimming energy and per-unit computing costs.


If you're still unconvinced, consider this: language models are scaling faster than vision models. Why? Language has broader utility—it can write code, read logs, draft emails, and more. In contrast, vision models are a bit more niche, though still incredibly powerful. The takeaway? The explosion in AI isn't just happening—it's happening faster in areas with the greatest economic and practical value.


Constraints Beyond Intelligence: The Law of Limits


Now, before you start prepping for a superintelligence-led utopia, we need to talk about constraints. Even if we achieve AGI, AI isn't a magic wand that solves all of humanity's problems overnight. The primary constraints that hold back scientific and economic progress aren't always tied to intelligence—they’re often about time, money, and raw materials.


Take, for instance, the Large Hadron Collider (LHC) or the James Webb Space Telescope. These projects are frontier-pushing experiments in physics and astronomy, yet their main hurdles weren’t about intellectual power—they were about building enormous, costly, and energy-intensive infrastructure. More PhDs wouldn’t have sped up these projects dramatically; the bottleneck was materials, time, and cash.


The truth is, even if we reach superintelligence by 2027, we’re still going to face limits in every industry:


  • Energy and raw materials: Whether it’s solar panels, nuclear fusion, or the next generation of microchips, the need for physical resources won’t vanish.

  • Time and space: Building test reactors, launching satellites, and mining rare materials take time and physical space. AI might speed up design, but it won’t eliminate the laws of physics.

  • Entropy and physical limits: Biological processes take time, human labor is finite, and the speed of light isn't negotiable. AI can optimize, but it can't break the fundamental rules of the universe.


In short, while intelligence might soon cease to be the primary constraint on progress, there are plenty of other hurdles that will slow us down. The law of constraints is clear: as soon as you solve one bottleneck, another one appears.


The Automation Cliff


Lastly, let’s talk about jobs. Altman predicts that the shift in employment will be slower than expected, and I tend to agree. We may not see mass unemployment overnight, but the concept of an "automation cliff" looms. 


Imagine this: AI and robots take over 99% of human tasks in a given industry. Everything is automated, but there’s still that last 1%—the jobs humans still do. As long as that 1% exists, humans will be employed, working alongside machines. But what happens when AI and robots figure out that final 1%? The whole industry falls off the automation cliff, and suddenly, there's no work left for humans. Call centers, for instance, are nearing this edge.


The question isn’t if we’ll see job displacement—it’s when and how. The good news? New economic paradigms will likely emerge. The bad news? It’s unclear how fast we can adapt.


The Intelligence Explosion Is Inevitable, but It’s Not the End of the Story


The numbers don’t lie: AI is scaling faster than ever before, and an intelligence explosion seems all but inevitable. Whether it’s superintelligence in 2027 or AGI by 2025, the exponential trends in AI compute, data, and cost efficiency are painting a clear picture of our future. But, as Sam Altman points out—and as the raw data from Epoch AI confirms—there’s much more to this story than just intelligence. Material constraints, economic realities, and good old-fashioned time and space will continue to play a crucial role in shaping how this AI revolution unfolds.


One thing’s for sure: we’re on the brink of something massive. But whether it’s utopia, dystopia, or something in between will depend on how we navigate the complexities ahead.


