OpenAI debuts a speedier model for powering ChatGPT and more free features

OpenAI, led by Sam Altman, has introduced ‘GPT-4o’, an advanced version of the GPT-4 model capable of accepting and producing combinations of text, audio, and images. The new model can respond to audio inputs in as little as 232 milliseconds, with an average of around 320 milliseconds, comparable to human response times in conversation.

Voice Mode with GPT-4o

All ChatGPT users, including those on the free plan, now have access to GPT-4o’s text and image capabilities, while Plus subscribers get higher message limits than free users. OpenAI also plans to release an alpha version of Voice Mode powered by GPT-4o within ChatGPT soon.

GPT-4o for Everyone

“Our latest model, GPT-4o, is the finest we’ve created. It’s intelligent, swift, and supports multiple modes of communication,” Altman shared. Previously, GPT-4-level models were exclusive to paying subscribers; now they are available to everyone, in line with OpenAI’s stated goal of broadening access to powerful AI tools.

Quick Audio Response

GPT-4o is notably stronger at vision and audio understanding than OpenAI’s earlier models, and it responds to audio prompts at roughly the speed a person would in conversation.

Upcoming Features

OpenAI plans to introduce GPT-4o’s audio and video capabilities to a small group of API partners soon, while the model’s text and image inputs are already reachable through the standard chat API, as sketched below. GPT-4o is a single unified model trained to understand and generate text, images, and audio, offering a glimpse into both the potential and the limits of this approach.
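For readers who want to try the model programmatically, here is a minimal sketch of a text-plus-image request to GPT-4o using OpenAI’s Python SDK. It assumes the openai package (v1 or later) is installed, that OPENAI_API_KEY is set in the environment, and the image URL is a placeholder to replace with your own; audio and video inputs are not shown, since API access to those is limited to select partners at launch.

```python
# Minimal sketch: send a text + image prompt to GPT-4o via the chat API.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A single user turn can mix text and image parts.
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    # Placeholder URL; substitute a publicly reachable image.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

# The model's reply comes back as ordinary text.
print(response.choices[0].message.content)
```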

Additional Launches

Alongside the new model, OpenAI has released a Mac desktop app for ChatGPT and made the GPT Store, where users can create and share their own chatbots, free for everyone.
