
Scaling the Heights of Intelligence: Strawberry, Orion, and the Future



Sam Altman, the CEO of OpenAI, has recently been dropping tantalizing hints about the next generation of large language models (LLMs) – codenamed Orion. This model, widely speculated to be the foundation of GPT-5, has created a flurry of excitement across the tech world. One cryptic tweet about enjoying the winter constellations set off a wave of speculation: Could Orion, the famous constellation in the night sky, also be the name of the next AI revolution?


Spoiler alert: The answer seems to be yes.


OpenAI's latest developments with Strawberry AI and Orion's early tests suggest that the future of artificial intelligence is brighter—and potentially scarier—than ever. Let’s dig into what this all means, including the jaw-dropping tests that claim to complete a year’s worth of PhD work in just an hour.


Orion: A New Star on the AI Horizon


Altman’s tweet may have been poetic, but the message behind it wasn’t lost on the AI community. Orion, like the winter constellation, is on the rise. And this isn’t just any LLM update—this is the model that could outshine everything that’s come before it.


Rumor has it that Orion is going to leverage the Strawberry AI model, which is currently being used to generate high-quality training data. This is important because the quality of data used in model training directly impacts the model’s performance. In other words, Strawberry is the “secret sauce” that could push Orion to new heights.


But here’s where things get mind-blowing: We’re already seeing how powerful these models can be with Strawberry alone, and Orion is still in development. One physicist, Dr. Kyle Kavasar, shared his experience feeding his own PhD code into an AI model—code that took him a full year to write. The AI not only replicated his work in less than an hour, but also did so with startling accuracy.


Imagine spending an entire year on complex astrophysics calculations, only to have a chatbot recreate it within the span of your lunch break. That’s not just a game-changer; it’s redefining the whole playing field.


Testing IQ: The AI Superintelligence Question


A lot of the buzz surrounding Orion isn’t just about speed—it’s about intelligence. A leaked image showing IQ test results across various AI models has sparked debates about AI’s future capabilities. On the bell curve of human IQ, most earlier models clocked in below average. But newer models like Gemini and the upcoming Orion are edging closer to, and potentially surpassing, human-level scores. Some speculate that Orion could reach an IQ score of 120—a figure higher than that of a significant portion of the human population.


Now, IQ tests aren’t the ultimate measure of intelligence, but the direction is clear: These models are not just getting faster; they’re getting smarter. And as they become more capable, they will be able to outperform humans in more fields—from math Olympiads to advanced scientific research.


The implications of AI exceeding human intelligence across various metrics raise a lot of questions. What happens when an AI can solve problems faster and more effectively than any human? Will engineers, scientists, and mathematicians be replaced, or will they simply gain a super-powered assistant to speed up their work? If a model can complete PhD-level work in an hour, what’s the future of higher education?


What Happens When AI Outsmarts Us?


The video featuring Dr. Kyle Kavasar isn’t just about the novelty of AI duplicating a year’s worth of his code—it’s a harbinger of the future. Dr. Kavasar, who works at the Bay Area Environmental Research Institute, was shocked when GPT-4 reproduced his code in less time than it takes to binge-watch a few episodes on Netflix. The kicker? He didn’t give the model any code samples—just the methods section of his paper.


After a few tweaks to iron out errors, the AI produced functional code that mimicked Kavasar’s original work almost perfectly. If this sounds like science fiction, brace yourself. This is just the beginning.


AI models are growing more capable not only at performing tasks faster but also at performing them with higher accuracy and better reasoning. OpenAI has demonstrated that these models can "think longer" when given more compute at inference time—and that their answers measurably improve the longer they are allowed to reason. This opens up possibilities not just in programming, but in fields like healthcare, finance, and beyond.


A recent evaluation on US Math Olympiad qualifier problems highlighted how these models scale their abilities. When given more time to "think" (i.e., more compute at inference time), Strawberry performed significantly better than its earlier iterations. The more resources we pour into these systems, the smarter and more accurate they become. We’re now entering an era where inference scaling—how much compute the model spends processing a problem—is becoming as important as the training of the model itself.
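One common way to spend extra inference compute is self-consistency: sample the model's answer many times and take a majority vote. The toy simulation below is a sketch of that idea, not OpenAI's actual method—the 70% per-sample accuracy and the two plausible wrong answers are assumptions chosen purely for illustration.

```python
# Toy illustration of inference-time scaling via self-consistency
# (majority voting over repeated samples). The per-sample accuracy
# p=0.7 and the two wrong answers are illustrative assumptions.
import random
from collections import Counter

def sample_answer(correct, p=0.7, rng=random):
    """One simulated model sample: the correct answer with probability p,
    otherwise one of two plausible wrong answers."""
    if rng.random() < p:
        return correct
    return rng.choice(["wrong_a", "wrong_b"])

def majority_vote_accuracy(correct, n_samples, trials=2000, seed=0):
    """Fraction of trials where the plurality answer over n_samples
    matches the correct answer."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes = Counter(sample_answer(correct, rng=rng)
                        for _ in range(n_samples))
        if votes.most_common(1)[0][0] == correct:
            wins += 1
    return wins / trials

if __name__ == "__main__":
    for n in (1, 5, 15):
        # Accuracy climbs toward 1.0 as n grows: more samples = more compute.
        print(f"{n:>2} samples -> accuracy {majority_vote_accuracy('42', n):.3f}")
```

The point of the sketch: with a fixed model, simply spending more compute at answer time (more samples, longer reasoning chains) raises accuracy—exactly the trade-off inference scaling describes.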


One AI researcher, Dr. Jim Fan, has called this shift the “most important figure in LLM research since the Chinchilla scaling laws of 2022.” The implication? The traditional view of AI capabilities as limited by training data and training compute is outdated. The more we allow these models to "think," the better they get—potentially with no upper limit in sight.


As Orion's capabilities come closer to reality, opinions in the AI community are splitting into three distinct camps:


  1. AI Optimists: These folks see AI as a path to a brighter future, where AI models like Orion will drive scientific breakthroughs, economic growth, and an era of abundance.

  2. AI Doomers: For them, this is the beginning of the end. Superintelligent AI could outthink us all and potentially steer us into existential disaster.

  3. AI Skeptics: This group still believes it’s all hype, though their numbers are dwindling. With each new AI milestone, skeptics are finding it harder to dismiss the significance of these developments.


Where do you stand? Personally, I lean toward optimism. But like any great technological leap, it’s crucial to approach AI with cautious enthusiasm, recognizing the risks without being paralyzed by them.


As we gaze into the night sky, the constellation Orion reminds us of the vast potential above—and within the world of artificial intelligence. Sam Altman’s cryptic messages about Orion may be playful, but the underlying message is clear: We’re on the verge of something big.


Whether you see this future as an opportunity or a threat, one thing is undeniable: The next generation of AI models, powered by breakthroughs like Strawberry, will change the world as we know it. And when that change arrives, it’s going to be faster, smarter, and more transformative than anything we’ve seen before.
