By Rich Washburn

OpenAI’s o1-Series: Strawberry, Orion, and the Future of AI – Should We Be Excited or Worried?


If you’ve been keeping up with AI rumors, you’ve probably heard whispers of OpenAI’s "Strawberry" model. For months, the AI community has speculated about what the codename could mean, and today we may have our answer. Enter the OpenAI o1-series, freshly dropped and packed with features that could mark the next evolution in artificial intelligence, while raising some eyebrows along the way.


Strawberry = o1-preview?



When I first wrote about the "Strawberry" model, we had only hints and speculation. Would it improve reasoning? Was it some next-gen beast designed to handle coding, science, or even more abstract challenges like human creativity? Fast forward to today: OpenAI has just unveiled its new o1-series models, and they sure sound like what we’ve been speculating about. While the company has ditched the fruit-inspired moniker for something more formal, the core idea seems intact.


The first model in the series, o1-preview, is a reasoning powerhouse. It spends more time thinking before it answers, making it capable of tackling problems that would make even PhD students break a sweat. Need help with quantum optics? No problem. Need code that not only works but also adapts to complex workflows? Done. In OpenAI’s testing, the model scored 83% on a qualifying exam for the International Mathematics Olympiad, compared with GPT-4o’s 13%. It’s clear we’ve leveled up.


The o1-preview also ships with a smaller, more efficient sibling: o1-mini. It’s a cost-effective model, designed to handle complex coding tasks with speed and precision while being 80% cheaper than its bigger counterpart. This leaves us wondering: is "Orion" simply another codename for the next big step in this series? Could we be seeing a full rollout of an Orion-branded model down the line, specifically designed to tackle reasoning at an even higher level?


Whatever the case, these names—Strawberry, Orion—are starting to feel more like milestones on the OpenAI roadmap, guiding us towards ever more specialized and advanced AI capabilities.


To their credit, OpenAI has been transparent about the risks that come with creating AI models this powerful. They’ve designed new safety protocols that tap into the reasoning ability of the o1 models, enabling them to better navigate tricky ethical situations. For instance, on one of their hardest tests for "jailbreaking" (where users try to get the AI to violate its own rules), the o1-preview model scored an impressive 84 out of 100, compared with GPT-4o’s dismal 22. That’s a serious improvement in safety, no doubt about it.


But will that be enough? AI models are getting smarter, yes, but they’re also becoming more integrated into critical industries, such as healthcare, finance, and cybersecurity, where one slip-up could have real-world consequences. OpenAI is working with AI safety institutes in the U.S. and U.K. to ensure these models don’t go rogue, but you can’t help but feel that with great power comes great risk.


Now, before we go full "doomsday prepper," let’s talk about what these new models can do—and why we should be excited. The potential applications are staggering.


Physicists could use the o1-preview to solve complex equations in quantum mechanics, while healthcare researchers might finally crack the code on genetic sequencing by feeding the AI massive datasets for analysis. Developers—especially those working with intricate, multi-step coding workflows—stand to gain the most from o1-mini, the stripped-down but still potent coding model.
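
To make that concrete, here’s a minimal sketch of how a developer might hand o1-mini a multi-step coding task through OpenAI’s Python SDK. The model name comes from OpenAI’s announcement, but the prompt is my own invention, and the launch-day constraints noted in the comments are assumptions worth checking against the current API docs.

```python
# Minimal sketch: giving o1-mini a multi-step coding task via the
# OpenAI Python SDK (pip install openai). Assumes OPENAI_API_KEY is
# set in the environment; the prompt below is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

# At launch, o1-series models accept plain user messages only (no
# system prompt, no temperature setting) and "think" before replying,
# so expect noticeably higher latency than a GPT-4o call.
response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "Refactor this function to read files lazily with a "
                "generator, then walk me through your reasoning:\n\n"
                "def load_all(paths):\n"
                "    return [open(p).read() for p in paths]"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

The call itself is ordinary chat-completions code; what’s new is that the model spends hidden "reasoning tokens" working through the problem before producing its visible answer, which is where that extra thinking time goes.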


Imagine an AI that doesn’t just understand your requests but can anticipate your actual needs, like a digital project manager. It could debug code, sequence DNA, or even predict financial market trends by reasoning through historical data in a way that’s almost human—but better. That's the real promise here.


So, should we be excited or worried? Honestly, both.


The OpenAI o1-series represents an incredible leap in AI’s ability to reason, making models more useful for complex problems in science, math, and coding. But with that leap comes a responsibility we’re still figuring out how to handle. OpenAI’s safety measures are promising, but as these models get smarter, so do the ways they can be misused, or can misinterpret tasks on their own.


It’s clear that this is just the beginning for the o1-series. Whether “Orion” or other codenames appear later remains to be seen, but one thing’s for sure: AI is about to get a lot more powerful, and how we handle that power will shape the future.


For now, I’ll be diving into o1-preview and keeping an eye out for what comes next. This is shaping up to be an exciting (and slightly unnerving) journey into the future of AI.

