Advanced Voice Mode Gains Visual Context in OpenAI Holiday Campaign

During the "12 Days of OpenAI" holiday campaign, the company is debuting a new feature each day via livestreams that began December 5. The campaign promises a mix of major launches and smaller updates.
On December 12, OpenAI announced a game-changing feature: Advanced Voice Mode now includes screen sharing and visual capabilities. ChatGPT can now assist users based on what it sees through the phone's camera or on a shared screen, making its guidance more contextually accurate and interactive. In one demo, ChatGPT walked a user through making coffee, giving real-time verbal instructions based on what the camera showed.
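For developers, a rough sense of this kind of vision-grounded assistance can be had through OpenAI's public API, which accepts images alongside text. The sketch below is a loose analog of the coffee-making demo rather than the Advanced Voice Mode feature itself; the model name, file name, and prompt are illustrative assumptions.

```python
# Illustrative sketch: send one camera frame to OpenAI's chat completions API
# and ask for vision-grounded guidance. This mirrors the idea of the demo
# above, not the consumer Advanced Voice Mode feature itself.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a snapshot from the phone camera (hypothetical local file).
with open("coffee_setup.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Here is my coffee setup. Walk me through the next step."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```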
OpenAI also introduced a Santa voice for the holiday season, accessible by tapping the snowflake icon in voice mode. The feature offers festive conversations and resets users' advanced voice usage limits the first time they chat with Santa. These updates are rolling out in the mobile apps for Plus, Pro, and Team subscribers this week, with access for Enterprise and European users expected in early 2025.
The day before, on December 11, the spotlight was on Apple's ChatGPT integrations in iOS 18.2, spanning Siri, Visual Intelligence, and Writing Tools. Siri now routes complex queries to ChatGPT, but only with the user's permission. Visual Intelligence lets users point the iPhone 16 camera at objects to identify, translate, or summarize them via ChatGPT. Writing Tools gains a "Compose" option that lets users draft text or generate images with ChatGPT directly within iOS.
OpenAI's livestreams have also addressed earlier technical hiccups, with the company assuring users that it is taking steps to improve reliability.
With these features, OpenAI continues to expand ChatGPT's capabilities, merging AI with intuitive, practical tools that enhance the user experience during the festive season.