Keynote: What’s new in Google AI Studio? From vibe coding to agentic AI
By Gerard Sans
Generative AI has shifted gears. In this talk we explore how Google AI Studio now supports multimodal generation (text, image, audio, video) and real-time agents. You’ll see how Studio connects the latest model families (such as Gemini 2.5 and the anticipated Gemini 3) into a unified “vibe coding” workflow that replaces hand-written boilerplate. We’ll also dive into the voice-first capabilities made possible by the Gemini Live API: real-time, bidirectional voice and video conversations, tone-aware responses, tool integration and session memory. Together we’ll look at prompt-to-code flows, media generation, voice-first use cases and the next generation of MCP-driven agentic AI.