OpenAI kicked off its inaugural “12 Days of OpenAI” media blitz on December 5, each day unveiling new features, models, subscription tiers, and capabilities for its growing ChatGPT product ecosystem during a series of live-stream events.
Here’s a quick rundown of everything the company announced.
Day 1: OpenAI unleashes its o1 reasoning model and introduces ChatGPT Pro
OpenAI kicked off the festivities with a couple of major announcements. First, the company revealed the full version of its o1 reasoning model and announced that it would be immediately available, albeit with usage limits, to its $20/month Plus tier subscribers. To get full use of the new model (as well as every other model OpenAI offers, plus unlimited access to Advanced Voice Mode), users will need to spring for OpenAI’s newest, and highest, subscription package: the $200/month Pro tier.
Day 2: OpenAI expands its Reinforcement Fine-Tuning Research program
On the event’s second day, the OpenAI development team announced that it is expanding its Reinforcement Fine-Tuning Research program, which allows developers to train the company’s models as subject matter experts that “excel at specific sets of complex, domain-specific tasks,” according to the program’s website. Though it is geared more toward institutes, universities, and enterprises than individual users, the company plans to make the program’s API available to the public early next year.
Day 3: OpenAI’s Sora video generator has finally arrived. Huzzah?
On the third day of OpenAI, Sam Altman gave to me: Sora video generation. Yeah, OK, so the cadence for that doesn’t quite work, but hey, neither does Sora. OpenAI’s long-awaited video generation model, which has been heavily hyped since its February preview, made its official debut on December 9 to middling reviews. Turns out that two years into the AI boom, being the leading company in the space and only rolling out 20-second clips at 1080p doesn’t really move the needle, especially when many of its competitors already offer similar performance without requiring a $20- or $200-per-month subscription.
Day 4: OpenAI expands its Canvas
OpenAI followed up its Sora revelations with a set of improvements to its recently released Canvas feature, the company’s answer to Anthropic’s Artifacts. During its Day 4 live stream, the OpenAI development team revealed that Canvas is now integrated directly into the GPT-4o model, making it natively available to users at every price tier, including free. You can now run Python code directly within the Canvas space, letting the chatbot analyze the code and suggest improvements, and the feature can also be used when building custom GPTs.
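To give a sense of scale, the sort of thing you can now execute inside Canvas is an ordinary, self-contained script; the function and sample values below are purely illustrative and aren’t tied to any OpenAI tooling.

```python
# A toy snippet of the kind you might run in Canvas and then ask the
# chatbot to review -- the function and test values are made up.
def moving_average(values, window):
    """Return the simple moving average of `values` over `window` points."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

if __name__ == "__main__":
    print(moving_average([3, 5, 8, 13, 21], window=3))  # [5.33, 8.67, 14.0]
```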
Day 5: ChatGPT teams up with Apple Intelligence
On day five, OpenAI announced that it is working with Apple to integrate ChatGPT into Apple Intelligence, specifically Siri, allowing users to invoke the chatbot directly through iOS. Apple had announced that this would be a thing back when it first unveiled Apple Intelligence but, with the release of iOS 18.2, that functionality is now a reality. If only Apple’s users actually wanted to use Apple’s AI.
Day 6: Advanced Voice Mode now has the power of sight and can speak Santa
2024 was the year that Advanced Voice Mode got its eyes. OpenAI announced on Day 6 of its live-stream event that its conversational chatbot can now view the world around it through a mobile device’s video camera or via screen sharing. This lets users ask the AI questions about their surroundings without having to describe the scene or upload a photo of what they’re looking at. The company also released a seasonal voice for Advanced Voice Mode that mimics Jolly Old St. Nick, just in case you don’t have time to drive your kids to the mall to meet the real one in person.
Day 7: OpenAI introduces Projects for ChatGPT
OpenAI closed out the first week of announcements with one that is sure to bring a smile to the face of every boy and girl: folders! Specifically, the company revealed its new smart folder system, dubbed “Projects,” which allows users to better organize their chat histories and uploaded documents by subject.
Day 8: ChatGPT Search is now available to everybody
OpenAI’s ChatGPT Search function, which debuted in October, is now available to all logged-in users, regardless of their subscription tier. The feature works by searching the internet for information about the user’s query, scraping the info it finds from relevant websites, and then synthesizing that data into a conversational answer. It essentially eliminates the need to click through a search results page and is functionally similar to what Perplexity AI offers, allowing ChatGPT to compete with the increasingly popular app. Be warned, however: a recent study has shown the feature to be “confidently wrong” in many of its answers.
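OpenAI hasn’t detailed how Search is wired up internally, but the search-then-synthesize pattern described above can be sketched roughly as follows; `web_search` and `fetch_page` are hypothetical stand-ins for whatever retrieval backend is actually in use, and the model name is just an example.

```python
# Illustrative sketch of a generic search-then-synthesize loop -- NOT OpenAI's
# actual implementation. The two helpers below are hypothetical stubs standing
# in for a real search API and page scraper.
from openai import OpenAI

def web_search(query: str, max_results: int = 5) -> list[str]:
    """Hypothetical: return result URLs from some web search backend."""
    raise NotImplementedError

def fetch_page(url: str) -> str:
    """Hypothetical: download a page and return its readable text."""
    raise NotImplementedError

def answer_with_search(query: str) -> str:
    # 1. Search the web for pages relevant to the query.
    urls = web_search(query)
    # 2. Scrape the text of each result.
    context = "\n\n".join(fetch_page(u) for u in urls)
    # 3. Ask a chat model to synthesize a conversational, source-grounded answer.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name, not a claim about what Search uses
        messages=[
            {"role": "system", "content": "Answer using only the provided sources."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content
```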
Day 9: the full o1 model comes to OpenAI’s API
Like being gifted a sweater from not one but two aunts, OpenAI revealed on day nine that it is allowing select developers to access the full version of its o1 reasoning model through the API. The company is also rolling out updates to its Realtime API, a new model customization technique called Preference Fine-Tuning, and new SDKs for Go and Java.
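For developers who are granted access, calling the full model should look like any other request through the official Python SDK; treat the “o1” identifier below as an assumption, since the exact model names exposed depend on your account.

```python
# Minimal sketch of calling the full o1 reasoning model through OpenAI's
# chat completions endpoint, assuming your API account has been granted access.
# The "o1" model name is an assumption -- check the model list for your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    messages=[
        {"role": "user", "content": "Outline a proof that the square root of 2 is irrational."},
    ],
)

print(response.choices[0].message.content)
```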
Day 10: 1-800-ChatGPT
In an effort to capture that final market segment that it couldn’t already reach — specifically, people without internet access — OpenAI has released a 1-800-ChatGPT (1-800-242-8478) chat line. Dial in from any landline or mobile number within the U.S. to speak with the AI’s Advanced Voice Mode for up to 15 minutes for free.
Day 11: ChatGPT now works with even more coding apps
Last month, OpenAI granted the Mac desktop version of ChatGPT the ability to interface directly with a number of popular coding applications, allowing its AI to pull snippets directly from them rather than requiring users to copy and paste code into the chatbot’s prompt window. On Thursday, the company announced that it is drastically expanding the number of apps and IDEs that ChatGPT can collaborate with. And it’s not just coding apps; ChatGPT now also works with conventional text programs like Apple Notes, Notion, and Quip. You can even launch Advanced Voice Mode in a separate window as you work, asking questions and getting suggestions from the AI about your current project.
Day 12: OpenAI teases its upcoming o3 and o3-mini reasoning models
For the 12th day of OpenAI’s live-stream event, CEO Sam Altman made a final appearance to discuss what the company has in store for the new year — specifically, its next-generation reasoning models, o3 and o3-mini. The naming scheme is a bit odd (and done to avoid trademark issues with the U.K. telecom O2), but the upcoming models reportedly offer superior performance on some of the industry’s most challenging math, science, and coding benchmark tests — even compared to o1, the full version of which was formally released less than a fortnight ago. The company is currently offering o3-mini as a preview to researchers for safety testing and red-teaming trials, though there’s no word yet on when everyday users will be able to try the models for themselves.