LIVE COVERAGE
Welcome to our live coverage of the “12 Days of OpenAI,” an exciting event where OpenAI unveils a series of innovations, updates, and surprises over the holiday season. This initiative showcases the latest in AI technology and engages the community in the journey of AI development. From new models to subscription tiers, each day promises something unique.
Follow along as we bring you the latest developments, community reactions, and expert analysis right here.
Timeline of “12 Days of OpenAI” Event
December 5, 2024
Day 1: Full Release of o1 Model and ChatGPT Pro
OpenAI launched the full version of the o1 reasoning model and introduced a new subscription tier, ChatGPT Pro.
December 6, 2024
Day 2: Reinforcement Fine-Tuning Advancements
Announcements included improvements in reinforcement fine-tuning for the o1 model, allowing for more personalized AI interactions.
December 9, 2024
Day 3: Sora-Turbo Unveiled
OpenAI revealed Sora, their text-to-video AI generator, promising to transform video content creation with AI.
December 10, 2024
Day 4: Canvas Expansion
Canvas, previously in beta, was made available to all users, with new features like Python integration and use in custom GPTs.
December 11, 2024
Day 5: ChatGPT with Apple Intelligence
Integration of ChatGPT with Apple’s ecosystem was announced, enhancing Siri’s capabilities on Mac and mobile.
December 12, 2024
Day 6: Advanced Voice Mode with Vision and Santa Mode
An update to Advanced Voice Mode added real-time video interaction, making conversations with AI more natural, and introduced a Santa voice for the holidays.
December 13, 2024
Day 7: Projects and Folders for ChatGPT
Introduction of Projects, a feature to organize chats, manage files, and set custom instructions within ChatGPT.
December 16, 2024
Day 8: Enhanced ChatGPT Search, AVM Integration, Free Access
Improvements to search functionality within ChatGPT, allowing for real-time information retrieval during conversations.
December 17, 2024
Day 9: Mini Dev Day – Holiday Edition
OpenAI celebrated “Mini Dev Day” with a focus on developer tools and enhancements.
December 18, 2024
Day 10: Enhancing ChatGPT Accessibility
OpenAI introduced voice access via a toll-free number and WhatsApp integration for broader AI interaction.
December 19, 2024
Day 11: Desktop App Enhancements
The ChatGPT desktop app gained new capabilities, including deeper interactions with macOS apps and a preview of agentic features planned for 2025.
December 20, 2024
Day 12: o3 Models Unveiled and macOS Integration
OpenAI introduced the o3 and o3 mini models, committed to external safety testing, and unveiled macOS app updates for deeper AI integration.
Day 1 – December 5, 2024
- OpenAI officially launches the “12 Days of OpenAI” with the much-anticipated full release of the o1 reasoning model. The model, which had been in limited preview since September, now offers enhanced capabilities in understanding and generating responses.
- Announcement of a new subscription tier, “ChatGPT Pro,” at $200/month, providing unlimited access to o1, GPT-4o, and Advanced Voice Mode. This tier targets heavy users and businesses looking for top-tier AI support.
Day 2 – December 6, 2024
OpenAI reveals advancements in reinforcement fine-tuning for the o1 model, allowing for more personalized AI responses. This could significantly impact sectors like customer service and content creation.
Community feedback starts to roll in. X posts under #OpenAI12Days show a split in opinion; some users are thrilled about the customization potential, while others question the cost-benefit ratio of the new subscription model.
Ongoing Live Coverage
Speculation is rife about upcoming days’ reveals. The community is buzzing with guesses about what might be next, including whispers of Sora, the text-to-video AI, making an appearance.
Sam Altman’s cryptic X posts add to the excitement, hinting at both major releases and minor updates, keeping the AI community on its toes.
General Insights
This event format by OpenAI shows a clever strategy to maintain engagement and demonstrate continuous innovation. The daily unveilings keep the audience coming back for more, creating a festive atmosphere around AI development.
The “12 Days” approach has not only sparked interest but also opened a dialogue about the direction of AI, its ethical implications, and its integration into daily life.
Upcoming Schedule
Next Live Stream: Tomorrow’s announcement is expected to be streamed live; keep your eyes peeled for more groundbreaking updates or perhaps some “AI holiday cheer.”
Join us back here for more updates as we dive deeper into the “12 Days of OpenAI.”
Day 3: December 9, 2024
RUMOR: There’s a rumor circulating on X that OpenAI might be releasing “Agents” today, as mentioned by Chris, a former Anthropic PM. However, this has not been confirmed by official sources.
No Official Announcements or Keynotes Today: As of 12:51 PM CET on December 9, 2024, there are no confirmed new announcements or keynotes from OpenAI for today in the “12 Days of OpenAI” event. Please keep an eye on official OpenAI channels or tech news outlets for any last-minute announcements.
Looking Forward: The event is scheduled to continue with daily announcements or demos through December 20, 2024. Given the nature of the event, expect a mix of significant updates and smaller “stocking stuffers.”
Community and Speculation: The AI community remains engaged, with speculation about what might be announced, including the much-anticipated Sora or further developments with the o1 model.
New Announcement
OpenAI has officially announced that today, December 9, 2024, they are unveiling Sora, their long-awaited text-to-video AI generator. The announcement came through their official channels, highlighting this as “Something you’ve been waiting for.” This is considered one of the major releases of the event.
Live Stream
The reveal of Sora is being streamed live, with the session titled “Something you’ve been waiting for.”
This is a significant update, marking a key moment in the “12 Days of OpenAI” series. Sora has been anticipated for its potential to revolutionize video content creation with AI.
Stay tuned for more details or any additional announcements that might follow today’s reveal.
Day 4: December 10, 2024
Announcement
On Day 4, OpenAI announced updates related to Canvas, their collaboration-focused interface for writing and code projects:
Canvas for All Users: The Canvas feature, which has been in beta for ChatGPT Plus members since October 2024, was rolled out to all users.
Python Integration: OpenAI introduced the ability to run Python code directly within Canvas, enhancing its utility for coding tasks (see the illustrative snippet after this list).
Canvas in Custom GPTs: Canvas functionality was extended to custom GPTs, allowing for more complex interactions and project management within personalized AI assistants.
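To make the Python integration concrete, here is a purely illustrative example (not taken from the livestream) of the kind of short script a user might now execute directly inside Canvas, for instance to sanity-check data before asking ChatGPT to build on it. The data and variable names are invented for illustration.

```python
# Illustrative only: a small, self-contained script of the sort a user might run
# inside Canvas. Nothing here depends on Canvas itself; it is plain Python.
monthly_signups = [120, 135, 160, 152, 180, 210]  # hypothetical data

total = sum(monthly_signups)
average = total / len(monthly_signups)
growth = (monthly_signups[-1] - monthly_signups[0]) / monthly_signups[0]

print(f"Total signups: {total}")
print(f"Average per month: {average:.1f}")
print(f"Growth over the period: {growth:.0%}")
```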
General Commentary
The expansion of Canvas to all users is a clear move to democratize access to advanced AI tools, potentially fostering a broader community of creators and developers.
The integration of Python within Canvas marks a significant enhancement, blending AI assistance with practical coding, likely to be well-received by the tech community.
Live Stream: These announcements were made live, with demonstrations showing how these new features work in practice, aiming to improve productivity and creativity for users of various skill levels.
This update emphasizes OpenAI’s focus on enhancing user interaction with AI, particularly in collaborative and development environments.
Day 5: December 11, 2024
On Day 5, OpenAI announced that ChatGPT is now integrated with Apple Intelligence. This integration allows ChatGPT to work seamlessly within Apple’s ecosystem, enhancing the capabilities of Siri on both Mac and mobile devices.
Siri and ChatGPT Integration: Users can now use Siri to interact with ChatGPT directly, asking questions or getting assistance on documents open on their devices. This includes the ability to ask Siri to review documents using ChatGPT’s capabilities, ask questions about them, or even send tasks to the ChatGPT desktop app for more complex operations.
Demonstration: A demo was shown where Siri interfaces with ChatGPT, showcasing how users can open a document, ask questions about its content, and use ChatGPT to perform tasks like writing code to visualize document data.
Impact: This integration is seen as a significant step towards making AI assistance more accessible and integrated into daily computing tasks, especially for those within the Apple ecosystem.
General Commentary
The integration of ChatGPT with Apple Intelligence marks a notable collaboration between two tech giants, potentially setting a new standard for how AI assistants can be utilized across different platforms.
This move also highlights OpenAI’s strategy to expand the reach and utility of ChatGPT beyond its standalone applications.
Stay Engaged
Check back here for further updates as we continue our coverage of the “12 Days of OpenAI” event. We’ll keep you informed on the latest developments and reactions from the tech community.
Day 6: December 12, 2024
OpenAI’s Day 6 announcement was centered on Advanced Voice with Vision, introducing real-time visual capabilities to voice interactions.
Announcement: OpenAI unveiled Advanced Voice Mode with Vision, allowing ChatGPT to see users in real time during voice conversations, making the interaction as natural as a video call with a human.
Live Demo
Real-Time Visual Interaction: The demo showcased ChatGPT being able to see the user while they spoke. This included recognizing the user’s movements and changes in their environment, providing feedback or responses based on what it observed. For example, if a user was holding an object or performing an action, the AI could comment or assist accordingly.
Memory and Contextual Understanding: The demonstration emphasized ChatGPT’s memory capabilities for video input, where it could remember and refer back to visual information throughout the conversation. This was shown by the AI recalling details from earlier in the demo when asked subsequent questions.
Natural Conversation: The interaction was highlighted by ChatGPT’s ability to maintain a natural, kind voice, adapting its tone and even laughing appropriately, showing a level of conversational engagement that mimics human interaction closely.
Santa Voice Feature: Additionally, a seasonal touch was added with the introduction of a Santa voice for Advanced Voice Mode, featuring a British accent, which was demonstrated to engage users in a festive manner.
Community Reaction
There’s excitement on X about the naturalness of the interaction, with some users already imagining various use cases. However, privacy concerns are also being voiced, considering the AI now has the ability to process video feeds.
No Major Technical Issues: The day’s announcements and demo went off without significant technical hitches, focusing purely on the new features.
General Commentary
This update significantly enhances the interactivity of AI, bringing it closer to real-time human-like engagement, which could have wide-ranging applications in education, entertainment, and customer service.
The privacy aspect of allowing AI to “see” users in real-time is a critical point of discussion, emphasizing the need for transparency and security in AI interactions.
Stay tuned for more insights and announcements in this dynamic series.
Day 7: December 13, 2024
Day 7 of the “12 Days of OpenAI” brings us Projects for ChatGPT, aimed at enhancing organization and customization within the platform.
Announcement: OpenAI introduces Projects, a new feature allowing users to group chats, upload files, and set custom instructions for specific projects within ChatGPT. This is designed to make managing multiple related conversations or tasks much more straightforward.
Live Demo
Organizing Chats: The demonstration showed how users can create projects like organizing a Secret Santa or planning a personal website. Projects can include multiple chats, each tailored with specific instructions that override general settings for more focused assistance.
File Upload and Integration: Users were shown uploading files like spreadsheets or images into a project, which could then be referenced in conversations. This was particularly useful for the Secret Santa example, where the AI could manage and update gift lists based on the data provided.
Custom Instructions: A key aspect of the demo was setting project-specific instructions, allowing ChatGPT to adapt its responses to the context of that project. For instance, in a coding project, specific guidelines could be set for code style or language preference.
Search Within Projects: The feature of searching within a project was also highlighted, allowing users to quickly find information or past conversations relevant to their current tasks.
Community Reaction
The introduction of Projects is well-received on X, with users appreciating the ability to tailor ChatGPT to their workflows. There are comments about how this could streamline project management and personal organization.
Rollout Information
The feature began rolling out immediately after the announcement to Plus, Pro, and Team users, with plans to extend it to free users “as soon as possible.”
General Commentary
The Projects feature signifies OpenAI’s push towards making ChatGPT not just a conversational tool but a comprehensive productivity aid, competing with traditional project management tools.
This could encourage more businesses and individuals to integrate AI into their daily workflows, potentially leading to more innovative uses of AI in project management.
Continue to follow our live coverage for more updates from the “12 Days of OpenAI” event. We’ll keep you updated on how these new features are being adopted and any further announcements.
Day 8: December 16, 2024
Day 8’s announcement from OpenAI centers on ChatGPT Search, enhancing the search capabilities within the ChatGPT platform.
Announcement
OpenAI introduces improvements to the ChatGPT Search feature, allowing users to search for information in real-time while conversing with the AI. This update aims to provide more accurate and up-to-date responses by integrating search capabilities directly into the chat interface.
Live Demo
Real-Time Search Integration: In a live demonstration, OpenAI’s product lead Kevin showed how users can now invoke search during a conversation with ChatGPT. For example, if a user asks about recent news events or specific data, ChatGPT can pull up the latest information from the web to inform its response.
Enhanced User Experience: It was shown how this feature could be used in practical scenarios, like planning a trip where ChatGPT could search for current travel advisories, weather updates, or local event calendars in real-time.
Voice Mode Integration: There was also a demonstration of how search functionality could be invoked through voice commands in ChatGPT’s Advanced Voice Mode, making the search process seamless and hands-free.
The enhanced ChatGPT Search feature is being made available globally on all platforms where ChatGPT is accessible, starting today for logged-in free users, as well as Plus and Pro subscribers.
Community Reaction
Posts on X reflect a positive reception to the enhanced search capabilities, with users anticipating easier access to information without having to leave the chat interface. However, there’s also a call for transparency on how search results are curated and presented.
General Commentary
- The integration of search directly into ChatGPT conversations is a logical evolution, aiming to make the AI more versatile and useful for real-world inquiries.
- This update also underscores the importance of real-time data in conversational AI, potentially setting a new standard for how AI assistants interact with users.
Join us back here for more live updates as we continue exploring the innovations from OpenAI.
Day 9: December 17, 2024
OpenAI declared Day 9 as a “Mini Dev Day,” focusing on updates and enhancements for developers:
o1 Model API Release: The o1 model was made available through the OpenAI API, allowing developers to leverage its reasoning capabilities in their applications (a brief illustrative sketch follows this list).
New API Features: Function Calling, Structured Outputs, and Developer Messages were added to the o1 API, providing developers with more tools for integration and customization.
Reasoning Effort Parameter: A new parameter was introduced to allow developers to adjust how much “thinking” effort the model applies before responding, offering more control over response quality versus speed.
Vision Inputs in API: Developers can now include vision inputs, expanding the capabilities of AI applications to process and respond to visual data.
Real-Time WebRTC Support: This addition enables real-time, interactive features in AI applications, enhancing user experiences in communications and beyond.
Preference Fine-Tuning: A method was introduced to fine-tune models more specifically to user preferences, potentially improving the personalization of AI responses.
This day was significant for developers, offering new tools and capabilities to build more sophisticated AI-driven solutions.
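As a minimal sketch of what these additions might look like in practice, the example below calls the o1 model through the chat completions endpoint, using a developer message and the new reasoning-effort control. It assumes the official openai Python SDK and an OPENAI_API_KEY environment variable; treat the parameter values as illustrative rather than canonical.

```python
# Minimal sketch (not an official OpenAI example) of using the Day 9 API additions:
# the o1 model in the API, developer messages, and the reasoning-effort control.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    reasoning_effort="medium",  # trade "thinking" time against speed and cost
    messages=[
        # Developer messages play the role that system prompts play for other models.
        {"role": "developer", "content": "You are a concise math tutor."},
        {"role": "user", "content": "How many prime numbers are there below 50?"},
    ],
)

print(response.choices[0].message.content)
```

Function calling, structured outputs, and vision inputs are exposed through the same chat-completions interface, so existing integrations should be able to adopt o1 with relatively small changes.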
Day 10: December 18, 2024
OpenAI announced several updates aimed at improving the accessibility of ChatGPT:
Voice Access: A new feature allowing users to interact with ChatGPT via voice commands, including the ability to call ChatGPT directly using a toll-free number (1-800-CHATGPT for US users) or through WhatsApp for international users. This feature aims to make AI assistance more accessible for those who might find typing challenging or prefer voice interaction.
WhatsApp Integration: Expanding on the voice access, users can now use ChatGPT on WhatsApp, making it easier to integrate AI assistance into daily communication, especially for users outside the US.
These updates signify OpenAI’s commitment to making AI more inclusive and accessible to a broader audience, particularly focusing on those with different needs or preferences in interacting with technology.
Day 11: December 19, 2024
OpenAI focused on enhancing the ChatGPT desktop experience:
Working with Apps on macOS: ChatGPT can now interact more dynamically with macOS applications, offering users the ability to automate tasks or get assistance within their desktop environment.
Agentic Features: The announcement hinted at future developments where ChatGPT could act more autonomously, in an “agentic” fashion, essentially performing tasks on behalf of the user, with a timeline set for 2025.
New Integration: Support for additional note-taking and coding apps was added, allowing for a more seamless integration with tools like Apple Notes, Quip, and Notion.
This day’s announcements were centered around making ChatGPT a more integral part of the user’s desktop workflow, promising increased productivity and a more interactive AI experience.
Day 12: December 20, 2024
The Grand Finale of “12 Days of OpenAI”
The final day of OpenAI’s “12 Days of OpenAI” event was met with high expectations. The livestream kicked off at 10 AM PT with hosts Sam Altman (CEO of OpenAI), Mark Chen (Senior Vice President of Research), and Hongyu Ren (Research Scientist). They were joined by a special guest, Greg Kamradt, President of the ARC Prize Foundation, adding an extra layer of significance to the announcements.
Introducing o3 and o3 Mini
Sam Altman took the lead to unveil the o3 and o3 mini models, describing them as pivotal advancements in AI reasoning. Hongyu Ren detailed, “With o3, we’re pushing the boundaries of what AI can achieve in terms of logical reasoning, especially in mathematics and coding.” Live demonstrations showed o3 tackling complex problems, impressing viewers with its human-like problem-solving skills.
Emphasis on Safety and Ethics
Mark Chen then shifted the focus to safety, emphasizing the importance of responsible AI development. “We’re inviting the community to help us ensure these models are safe for widespread use,” he announced. This included a call for external safety testing of the o3 and o3 mini models, reinforcing OpenAI’s commitment to transparency and ethical AI deployment.
New Alignment Strategy
A significant part of the discussion was the introduction of a new alignment strategy for the o-series models, presented by Greg Kamradt. He spoke about the collaboration between OpenAI and the ARC Prize Foundation to develop benchmarks that not only test AI capabilities but also align them more closely with human values and ethical standards. This strategy aims to ensure that as AI becomes more advanced, it remains beneficial and under control.
macOS App Enhancements
The day’s announcements also included updates to the macOS app, enhancing its integration with desktop applications. Although not the main focus, Mark Chen briefly showcased how these updates would allow for smoother, more intuitive interaction with tools like Notion and coding environments.
A Celebration of Innovation
In closing, Sam Altman reflected on the journey of the event, celebrating the innovations and the community’s engagement. “This event has shown us the power of AI when developed with care and collaboration,” he remarked. The hosts expressed gratitude to the viewers and participants, emphasizing that this was not an end but a stepping stone to further AI advancements.
Conclusion
As the stream ended, the excitement for the future of AI was palpable. The inclusion of Greg Kamradt and the focus on safety testing and new alignment strategies underscored OpenAI’s commitment to not just innovation but ethical AI progression. The screen faded out, leaving everyone eager for what’s next in the AI landscape.