Rich Washburn

OpenAI Releases New AI Agent, Google's Strawberry Model, and Sam Altman Predicts AGI Arrival



The world of artificial intelligence (AI) is moving at lightspeed, and if you blink, you'll miss the next groundbreaking innovation. The recent OpenAI Dev Day brought some monumental announcements, new AI capabilities, and even fresh competition from Google. From advanced AI voice features to Sam Altman's prediction of AGI (Artificial General Intelligence), we’re witnessing history unfold. Let’s break down the latest big AI news—because the future isn’t just near, it’s here.


OpenAI’s Advanced Voice Mode: A New Era of Conversation


Let’s start with the feature that has the internet buzzing: OpenAI's Advanced Voice Mode. Remember when Siri and Alexa were cool? Well, this new voice tech is ready to blow them out of the water. In the latest version of ChatGPT, OpenAI has rolled out a voice feature that is so realistic, it has people thinking they’re chatting with another human. Users can interact with AI in a low-latency, speech-to-speech conversation. This is not your typical robotic voice; the AI can change accents, mimic human inflections, and respond with almost eerie human-like fluency.


Take one viral case, for example: A user asked ChatGPT to role-play as an Indian scammer in a mock "tech support" call. What followed was a hilariously accurate performance—down to the accent and scammer lingo we’ve all heard in those sketchy phone calls. The post racked up over 3.5 million views on social media, proving that, while incredibly advanced, these AI interactions also offer endless entertainment value. 


But beyond the laughs, the implications are profound. OpenAI’s real-time voice API has been released in beta, meaning developers can now integrate this technology into their own apps. Picture the possibilities: AI customer service reps, automated tutoring, or even AI-driven negotiations. The future of voice interactions is officially up for grabs.
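For developers curious what integrating that beta API might look like, here is a minimal sketch. The endpoint, beta header, and event shape below reflect the Realtime API as announced at Dev Day and may change; treat the specifics (model name, field names) as assumptions rather than a definitive reference.

```python
import json

# Assumed beta endpoint and header from the Dev Day announcement;
# verify against the current OpenAI docs before using.
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
BETA_HEADERS = {"OpenAI-Beta": "realtime=v1"}


def build_response_event(instructions: str) -> str:
    """Build a 'response.create' event asking the model for a spoken reply."""
    event = {
        "type": "response.create",
        "response": {
            # Request speech plus a text transcript of what was said.
            "modalities": ["audio", "text"],
            "instructions": instructions,
        },
    }
    return json.dumps(event)


# In a real app you would open an authenticated WebSocket to REALTIME_URL,
# send this event, and stream the audio chunks that come back. The
# networking is omitted here to keep the sketch self-contained.
greeting_event = build_response_event("Greet the caller warmly.")
```

The low-latency feel comes from the event-driven design: audio streams both ways over one persistent connection instead of round-tripping through separate transcription and synthesis calls.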


Google’s “Strawberry Model”: A Silent Power Player


While OpenAI’s latest upgrades have stolen headlines, Google is quietly flexing its AI muscles too. At one point during Dev Day, OpenAI demoed an AI agent ordering 400 chocolate-covered strawberries, delivered within minutes. That is exactly the kind of task automation Google nailed years ago. In fact, Google’s “Strawberry Model” (not its official name, but a fitting one in this context) is designed to connect voice-based interactions to real-world tasks.


Years ago, Google introduced its AI system capable of making calls for things like scheduling a haircut—remember that mind-blowing 2018 demo where an AI booked an appointment at a salon? Well, Google’s been perfecting that technology for a while now, adding layers of reasoning that even surpass what OpenAI is currently doing. With projects like Gemini and Google's proprietary reasoning software, the tech giant is working to make AI agents more autonomous and capable of managing more complex, real-world tasks.


And don’t sleep on Google’s push for AI reasoning software. Their “chain of thought” prompting system allows AI to pause, think, and deliver more accurate responses. It’s AI that doesn’t just respond quickly, but thoughtfully, opening the door to systems that can solve complex problems, such as advanced math or computer programming. According to insiders, Google is positioning itself to leap ahead in the race to AGI.
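The “chain of thought” idea described above can be approximated from the outside with plain prompting: instead of asking for an answer directly, you instruct the model to reason step by step before concluding. A minimal sketch follows; the instruction wording is illustrative, not any official Google or OpenAI API.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction so the model
    writes out intermediate reasoning before its final answer."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step, then give the final answer "
        "on its own line, prefixed with 'Answer:'."
    )


# A direct prompt invites an immediate (and more error-prone) guess;
# the chain-of-thought variant nudges the model to pause and reason first.
question = "A train leaves at 3:40 and arrives at 5:15. How long is the trip?"
direct_prompt = f"Q: {question}\nA:"
cot_prompt = build_cot_prompt(question)
```

The reasoning systems the article describes build this pause-and-think behavior in natively, rather than relying on the prompt to request it.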


Sam Altman’s AGI Prediction: 2025 Could Be the Year of AI Agents


Speaking of AGI, no conversation about AI’s future would be complete without Sam Altman, OpenAI’s CEO, weighing in. During OpenAI's Dev Day, Altman dropped a major hint: we’re closer to AGI than most people think. For the uninitiated, AGI is the holy grail of AI—an intelligence system as capable as a human in virtually any task. 


Altman described a new framework for understanding AI’s development in stages. According to him, we're currently at "Level 2" of AI capabilities, where AI can reason and hold conversations. The next step, "Level 3," is when AI becomes more agentic—able to act autonomously without human intervention.


The kicker? Altman expects this to happen soon. He speculates that by 2025, AI agents will not only be executing basic tasks but will also drive innovation in scientific discovery, research, and technology development. Imagine an AI that can autonomously conduct research, solve complex problems, and even make new scientific breakthroughs. Altman believes that leap is not far off, and 2025 could be the year we begin to see AI agents acting much more like independent entities than simple tools.


Meta and Google Gear Up for the Future of AI Hardware


While AI software gets all the glory, AI hardware is also advancing in fascinating ways. Meta recently launched its Ray-Ban smart glasses, and despite their current limitations, they represent a future where AI and wearable tech become ubiquitous. These glasses are a stepping stone to a world where your primary computing device may no longer be your phone, but your glasses.


Meta CEO Mark Zuckerberg envisions a future where smart glasses replace smartphones by the 2030s. If that sounds far-fetched, consider how quickly mobile phones overtook desktop computers. With AI and Augmented Reality (AR) advancing in parallel, it’s not hard to imagine a world where your glasses handle calls, navigation, and real-time information overlays.


Google, too, has been investing in this future. They’ve got AI assistants like Astra, capable of understanding visual information, and hardware projects that are poised to revolutionize how we interact with AI in the real world. Whether it’s through smart glasses, pins, or home assistants, the race to integrate AI seamlessly into our daily lives is heating up.


The Takeaway: The Future is Happening Faster Than You Think


With companies like OpenAI and Google pushing boundaries daily, AI’s capabilities are expanding at a breakneck pace. Whether it's voice-activated agents, autonomous task management, or predictions of AGI by 2025, the AI revolution is already here, and it’s moving faster than most people realize.


So, here are a few things to watch out for:


  1. AI Agents Everywhere: Expect to see AI agents handling everything from customer service to scientific research in the next two years.


  2. Autonomous Reasoning: Both Google and OpenAI are pushing towards AI systems that can reason, solve complex problems, and engage in multi-step tasks autonomously.


  3. AI-Powered Hardware: From Meta’s Ray-Bans to future smart glasses, expect wearable tech to evolve rapidly, integrating AI in ways we can’t yet fully grasp.


  4. AGI on the Horizon: If Sam Altman’s prediction is correct, 2025 could be the year AI breaks into AGI territory, bringing us closer to machines that think and learn like humans.


With the AI landscape changing this rapidly, the next few years promise to be nothing short of transformative. The question is: are you ready for it?




