Google’s Gemini models add native video understanding

Google has integrated native video understanding into its Gemini models, enabling users to analyze YouTube content through Google AI Studio. Simply paste a YouTube link into your prompt; the system then transcribes the audio and samples the video frames at one-second intervals. You can, for example, reference specific timestamps and extract summaries, translations, or visual descriptions. Currently in preview, the feature allows up to 8 hours of video processing per day, limited to one public video per request. Gemini Pro processes videos up to two hours in length, while Gemini Flash handles videos up to one hour. The update follows the rollout of native image generation in Gemini.
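For readers who prefer the API over the AI Studio interface, here is a minimal sketch using the google-genai Python SDK, which accepts a public YouTube URL as a file part in the prompt. The model name, API key placeholder, video URL, and prompt text are illustrative assumptions, not values from the announcement.

```python
# Minimal sketch: asking a Gemini model about a public YouTube video
# via the google-genai Python SDK. Model name and URL are placeholders.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumes a Google AI Studio API key

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed: any video-capable Gemini model
    contents=types.Content(
        parts=[
            # The YouTube link is passed as a file_data part; the service
            # transcribes the audio and samples frames at 1 fps, as described above.
            types.Part(
                file_data=types.FileData(
                    file_uri="https://www.youtube.com/watch?v=EXAMPLE_ID"
                )
            ),
            # Prompts can reference specific timestamps directly.
            types.Part(
                text="Summarize this video and describe what happens at 01:30."
            ),
        ]
    ),
)
print(response.text)
```

The preview limits described above (one public video per request, 8 hours of video per day) would apply to API usage of this kind as well.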

Video: via Logan Kilpatrick


Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
