OpenAI GPT-4o With Real-Time Responses and Video Interaction Announced, GPT-4 Features Now Available for Free

OpenAI held its highly anticipated update event on Monday, announcing a new desktop app for ChatGPT, minor interface changes to the ChatGPT web client, and a new flagship artificial intelligence (AI) model called GPT-4o. The event was streamed live on YouTube and held in front of a small in-person audience. During the event, the AI company also announced that all GPT-4 features previously reserved for premium subscribers will now be available to everyone for free.

Updated ChatGPT desktop app and web interface

Mira Murati, CTO of OpenAI, opened the event and introduced the new ChatGPT desktop app. The app is equipped with computer vision and can see the user’s screen; users can toggle the feature on and off, and the AI analyzes what it sees in order to help. The CTO also announced that the web version of ChatGPT has received minor interface updates. The new interface looks more minimalist, with suggestion cards appearing when users visit the site. Additionally, the icons are smaller and the side panel can be hidden entirely, leaving more screen real estate for conversations. Notably, ChatGPT can now access the web to provide real-time search results.

Features of GPT-4o

The highlight of the OpenAI event was the company’s latest flagship artificial intelligence model, GPT-4o (the “o” stands for “omni”). Murati emphasized that the new model is 2x faster, 50 percent cheaper, and has a 5x higher rate limit compared to the GPT-4 Turbo model.

GPT-4o also significantly improves response latency and can provide real-time responses even in voice mode. In a live demonstration, OpenAI showed the model conversing with and responding to users in real time. With GPT-4o, ChatGPT can now be interrupted mid-response so users can ask a different question, something that was not possible before. The biggest improvement in the announced model, however, is the inclusion of emotive voices.

When ChatGPT speaks, its responses incorporate various voice modulations to sound less robotic and more human. The demo showed that the AI can also recognize and respond to human emotions vocally; for example, if the user speaks in a panicked voice, the AI responds in a concerned tone.

Computer vision has also been improved, allowing the model to process and respond to a live video feed from the device’s camera. In the live demos, it watched a user solve an equation and offered step-by-step guidance, correcting mistakes in real time. Likewise, it can now take in large blocks of code, analyze them instantly, and suggest improvements. Finally, users can point the camera at their own face so the AI can recognize their emotions.
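Live video itself is handled inside the ChatGPT apps, but developers could approximate the same kind of visual reasoning by sending individual image frames to the model. The snippet below is only a minimal sketch, assuming the official openai Python SDK (v1+), the "gpt-4o" model identifier, and a hypothetical image URL standing in for a camera frame.

# Minimal sketch: asking GPT-4o to reason about a single image frame.
# Assumes the openai Python SDK (v1+), an OPENAI_API_KEY in the environment,
# the "gpt-4o" model identifier, and a hypothetical image URL.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Walk me through solving the equation in this photo, step by step.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/handwritten-equation.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)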

Finally, another live demo showed ChatGPT performing live language translation with the new model, even switching between multiple languages in a row. OpenAI did not mention a subscription price for access to GPT-4o, but emphasized that it will be available through the API in the coming weeks.
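Once the model reaches the API, a request would presumably look like any other Chat Completions call with the model name swapped in. The example below is a minimal sketch of the translation use case from the demo, again assuming the openai Python SDK and the "gpt-4o" model identifier.

# Minimal sketch: a plain text request to GPT-4o via the Chat Completions API.
# Assumes the openai Python SDK (v1+) and the "gpt-4o" model identifier.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a live interpreter. Translate everything the user says into Italian."},
        {"role": "user", "content": "Good morning, how is the weather today?"},
    ],
)

print(response.choices[0].message.content)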

GPT-4 is now available for free

In addition to all the new features, OpenAI has made the GPT-4 artificial intelligence model and its capabilities available for free. Users on the platform’s free tier can now access features such as GPTs (mini chatbots designed for specific use cases), the GPT Store, the Memory feature, and advanced data analysis without paying.


