Ex-OpenAI Employee Leaked Doc to Congress! What’s Coming Next for AGI?



In a twist straight out of a tech thriller, a former OpenAI employee, William Saunders, has dropped a bombshell in front of a Senate subcommittee. If you haven't been paying attention to the AI world, get ready, because things are getting crazy… ok well, crazier. According to Saunders, OpenAI is on the brink of achieving Artificial General Intelligence (AGI)—and it might arrive much sooner than we think.


How soon, you ask? Three years. That’s right. AGI could be upon us by 2027. But before you start imagining an army of robots replacing human workers or a HAL 9000 scenario, let’s break down what AGI actually is, why it matters, and why Saunders felt the need to blow the whistle on the company trying to build it.


What Is AGI?


AGI, or Artificial General Intelligence, refers to an AI that can perform virtually any task a human can—and probably do it better. We’re talking about a highly autonomous system that’s not just crunching numbers or scheduling emails. AGI would have the capability to plan long-term, solve unexpected problems, and even outperform humans at most economically valuable work. From customer service to creative writing, coding, and more—AGI could take it all on.


For now, AGI’s potential is mostly envisioned in the realm of digital tasks—anything that can be done on a computer. But in the long run, physical labor might also be on the table with the right robotics support. Think of AGI as your ultimate digital co-worker, one who never clocks out, never makes mistakes (hopefully), and can learn just about any skill—fast.


What Did Saunders Reveal?


Saunders, a former OpenAI insider, didn’t just appear in front of Congress to talk tech theory. He’s sounding the alarm, suggesting that OpenAI’s newest model, known as "o1," is advancing much faster than anyone anticipated. And that’s not just for routine office work or basic automation.


In his testimony, Saunders shared that the new model had already achieved significant milestones, including outperforming him in a prestigious international competition he trained for in high school. This wasn’t just about beating humans at games—this was a test of advanced computational and problem-solving abilities, areas critical to fields like engineering and cybersecurity.


According to Saunders, AGI isn’t just about taking over office clerks’ jobs or replacing freelancers on Upwork. The implications stretch into national security: he says the model has demonstrated the ability to assist in planning the creation of biological weapons. Yes, you read that right—biological weapons.


Risks of AGI


If AGI can outsmart top mathematicians and handle complex tasks, it could revolutionize industries, economies, and everyday life. But with this power comes massive risk. Saunders is concerned that AGI could be misused, either intentionally or accidentally, to cause harm. The specter of AI conducting autonomous cyberattacks or aiding in the creation of dangerous biological weapons isn’t far-fetched in his view.


To make matters worse, Saunders emphasized that OpenAI has been more focused on rapid deployment than on rigorous safety checks. While the company has developed some safeguards, he warns that they may not be enough to prevent AGI from slipping into dangerous territory. And here’s the kicker: he claims there were vulnerabilities at OpenAI that could have allowed insiders to bypass security controls and access their most advanced AI systems, including GPT-4.


Can you imagine the consequences if someone, say, a rogue engineer or even a foreign adversary, gained control of an AGI system? The world might change overnight.


Threat of Economic Disruption


It’s not just the existential threats that worry Saunders. He’s also raising alarms about the economy. If AGI can perform most economically valuable tasks, what happens to the millions of jobs currently filled by humans?


Picture this: AGI systems that can code, write, design, and manage businesses. That’s not science fiction—it’s a future Saunders sees coming in as little as three years. And it’s not just digital jobs at risk. With robotics in the mix, AGI could eventually handle physical labor too, from construction to delivery services.


Are we ready for this shift? Saunders doesn’t think so, and frankly, neither do I.


We’ve all heard of Universal Basic Income (UBI)—a system where everyone gets a regular paycheck from the government, whether they work or not. Saunders hints that OpenAI’s CEO, Sam Altman, has floated a similar idea—a “Freedom Dividend” or “Nation’s Dividend.” But UBI experiments so far have had mixed results, and there’s no clear plan in place if millions of people suddenly find themselves out of work.


What Makes o1 Different?


OpenAI’s new model, o1, represents a leap forward in AI development. Unlike previous models, which improved mainly by scaling up training, o1 leans on a concept called test-time compute. In plain terms, it can spend more computation “thinking” before it answers a question, which makes it markedly more capable on hard reasoning problems.
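OpenAI hasn’t published o1’s internals, but the core intuition behind test-time compute—spend more inference-time effort, get a better answer—can be sketched with a toy best-of-n search. Everything below is a stand-in: the “model” is just a random guesser and the “scorer” a simple error function, not anything from OpenAI.

```python
import random

def solve(target, n_samples, seed=0):
    """Toy 'model': each sample is one reasoning attempt (a random guess
    at the square root of `target`); a scorer keeps the best attempt.
    More samples = more test-time compute spent on the same question."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_samples):
        guess = rng.uniform(0, 10)
        err = abs(guess * guess - target)  # score: how close is guess^2 to target?
        if err < best_err:
            best, best_err = guess, err
    return best, best_err

# Same "model", same question—only the inference-time budget differs:
_, err_small = solve(target=2.0, n_samples=10)
_, err_large = solve(target=2.0, n_samples=10_000)
assert err_large <= err_small  # more thinking time, better answer
```

The point isn’t the algorithm—real systems use far more sophisticated search and verification—it’s that answer quality becomes a knob you can turn at inference time, not just at training time.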


Saunders isn’t alone in raising concerns about AGI. Google’s DeepMind and other AI research labs are racing toward similar breakthroughs. But OpenAI’s o1 model seems to be setting the pace, and with it comes a slew of unanswered questions about safety, security, and societal readiness.


Are We Prepared?


The Senate subcommittee listening to Saunders is made up mostly of non-tech folks, many in their 60s and older. While there’s no doubt that the importance of AI is on their radar, it’s hard to imagine that policymakers are fully equipped to tackle the complexities of AGI. As one Reddit user quipped, "It’s all good, I’m sure the Senate, with an average age of 60, will totally wrap their brains around AGI and come up with relevant proposals."


Yeah, that’s probably sarcasm. And it highlights a bigger issue: Is anyone really ready for AGI? We don’t just need a plan; we need a plan that can evolve as fast as the technology itself.


AGI is no longer some distant dream of sci-fi writers—it’s on the horizon. According to whistleblower William Saunders, we could be facing the dawn of AGI within three years. But as exciting as that is, it also comes with monumental risks. From economic upheaval to security vulnerabilities, the world isn’t ready for the pace at which this technology is advancing.


What can we do? Saunders suggests stronger protections for whistleblowers, more rigorous safety testing, and greater oversight of AI development. But the bigger question remains: Can we control what we create? Or are we rushing headlong into a future we don’t fully understand?


Let me know what you think. Is AGI three years away, and if so, are we ready for it?
