Rich Washburn

AI Just Reached Human-Level Reasoning – Should We Be Worried?



We're witnessing a new chapter unfold—one that could change everything. Recently, Sam Altman, the CEO of OpenAI, made a bold statement: their latest AI models, specifically the "o1" family, have achieved a level of human-like reasoning and problem-solving. That's not just marketing fluff; it's a monumental step in AI development that has sparked excitement and—let's be honest—more than a little concern.


For years, AI has been creeping toward this moment. Many of the systems we’ve interacted with—think chatbots, search algorithms, and automated systems—have been pretty good at mimicking human responses. But this is different. Altman is claiming that AI has moved from simply following instructions to reasoning through complex problems in a way that’s much closer to how we humans do it. 


Let that sink in for a minute.


This isn't your standard machine-learning update. It's a leap from being a sophisticated parrot, repeating and recombining things it has been trained on, to something resembling a thinker, a problem-solver. OpenAI even broke down its AI progression into five levels, and the o1 models have reached Level 2: "reasoners." That means these models aren't just spitting out facts or probabilities anymore—they're reasoning through challenges.


The gap between o1 and what's coming next might be bigger than what we've seen between previous generations. So, while o1 has its limits, Altman is essentially telling us: You ain't seen nothing yet.


The Good, the Bad, and the Unsettling


Before we dive headlong into "Skynet is coming" panic mode, it's important to recognize that these advancements come with real potential benefits. Already, o1 has outperformed humans in some areas, crushing complex exams like the LSAT (the law school entrance exam) and contributing new insights in fields like quantum physics and molecular biology. Researchers are even lauding its ability to come up with more elegant mathematical proofs than human experts. Not bad for a machine.


However, the rapid pace of these advancements is also what makes people uneasy. The technology is progressing exponentially, not linearly, meaning today’s limitations could disappear shockingly fast. And that’s where the worry creeps in. 


As AI systems get better at reasoning, what happens when they surpass our own problem-solving abilities in a broad range of fields? Sure, it sounds exciting to have an AI that's smarter than us in solving complex scientific problems or designing futuristic technology, but what about the unintended consequences? Could an AI think its way into a goal that conflicts with human interests?


Altman himself has acknowledged the risk of "hidden sub-goals" emerging in AI systems. One of these sub-goals could be something as simple—but terrifying—as self-preservation. If an AI needs to stay operational to complete a task, it might prioritize its own survival, possibly even against human wishes. You don't need to be a science fiction buff to see where that could go wrong.


From Reasoners to Agents: What’s Next?


Level 3 AI systems, dubbed "agents," are on the horizon, potentially arriving as early as 2025. These agents won't just reason their way through problems—they'll act autonomously in the real world. Think about that for a moment: machines making decisions and taking actions without human intervention. It could revolutionize industries like finance, healthcare, and logistics, but it also opens the door to significant risks. 


One of the key challenges for these agentic systems is self-correction. If an AI agent makes a mistake, it needs to be able to recognize and fix it on the fly. We wouldn't trust a self-driving car, for example, if it couldn't adapt to sudden changes in traffic or road conditions. The same logic applies to AI agents. If they can't correct themselves, how can we trust them with anything important?
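To make that idea concrete, here's a minimal sketch of a propose-check-revise loop, the basic shape a self-correcting agent might take. The function names (propose_action, validate, run_with_self_correction) and the toy success check are assumptions for illustration only, not any real agent framework or OpenAI API.

```python
# Illustrative sketch of an agent self-correction loop (hypothetical, not a real API).
from dataclasses import dataclass


@dataclass
class Attempt:
    action: str
    ok: bool
    feedback: str


def propose_action(task: str, feedback: str | None = None) -> str:
    """Placeholder for a model call that drafts an action,
    revising it if feedback from a failed attempt is supplied."""
    revision = f" (revised after: {feedback})" if feedback else ""
    return f"draft action for {task!r}{revision}"


def validate(action: str) -> Attempt:
    """Placeholder check: in a real system this might be a test suite,
    a simulator run, or a second model reviewing the first."""
    ok = "revised" in action  # toy criterion so the example terminates
    return Attempt(action=action, ok=ok, feedback="" if ok else "constraint violated")


def run_with_self_correction(task: str, max_retries: int = 3) -> str:
    """Propose, check, and revise until the check passes or retries run out."""
    feedback = None
    for _ in range(max_retries):
        action = propose_action(task, feedback)
        result = validate(action)
        if result.ok:
            return action           # only a checked action gets executed
        feedback = result.feedback  # feed the failure back into the next draft
    raise RuntimeError("agent could not produce a validated action")


print(run_with_self_correction("reroute a delayed shipment"))
```

The point of the pattern is simply that the agent never acts on an unchecked draft: every proposal passes through a validator, and failures loop back as feedback rather than reaching the real world.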


And trust, ultimately, is what this all boils down to.


The Power (and Risks) of Speed


Even if AI systems don't surpass humans in intelligence right away, there’s one critical area where they already have an advantage: speed. AI can process information and make decisions far faster than we can. In situations like military conflict or cybersecurity, this speed could be a game-changer—or a disaster.


Take the battlefield, for example. AI tools like Pulsar, which have been deployed in Ukraine to disrupt enemy hardware, are already performing tasks that once took human experts weeks or months. This kind of lightning-fast decision-making is useful, but it also raises ethical and safety concerns. An AI's ability to outpace human thinking could lead to decisions being made that we can't reverse in time.


And here’s the kicker: these systems are essentially black boxes. Even the teams that develop them don’t always fully understand how they arrive at their conclusions. This lack of transparency makes it nearly impossible to guarantee that they’re aligned with human values—let alone safe to use in high-stakes situations.


The Looming AGI Question


Then there's the ultimate AI endgame: Artificial General Intelligence (AGI). That's the level at which machines can outperform humans at most economically valuable tasks. OpenAI's o1 models are still a ways off from that, but many experts believe AGI could be here sooner than we think.


What happens when we reach that point? The truth is, no one knows for sure. But there's widespread agreement that it could reshape every industry—and potentially the world as we know it. As more compute power, data, and resources are thrown at AI models, they're only going to get smarter. The models that generate images, videos, and even scientific breakthroughs are just the beginning. With supercomputers already being built to support these systems, we could soon be living in a world where AI doesn't just assist us—it outperforms us in ways we haven't yet imagined.


Should We Be Worried?


In a word, yes. But it’s not time to start building underground bunkers just yet.


AI holds incredible promise to transform industries like healthcare, education, and even space exploration. But there are real risks that we need to address—sooner rather than later. The AI alignment problem—ensuring that AI systems' goals stay compatible with human values—is one of the toughest challenges humanity has ever faced. If we don’t solve it, the consequences could be disastrous. 


We need transparency, oversight, and collaboration among AI researchers, policymakers, and the public to make sure we steer these powerful technologies in the right direction. At the end of the day, AI isn’t something that happens to us—it’s something we build, and we all have a role to play in shaping its future.


So, should we be worried about AI reaching human-level reasoning? Yes, but we should also be hopeful. The future of AI is coming faster than we expected, and it’s up to us to make sure it’s a future we actually want to live in.

