Formula E and Google Cloud Launch AI-Powered Audio Podcast to Bring Racing to the Visually Impaired

Formula E, in partnership with Google Cloud, has unveiled a groundbreaking initiative to make electric racing more inclusive for blind and visually impaired fans. The new offering uses AI to create rich, multilingual audio race reports that vividly describe key moments—from overtakes and pit strategies to crowd reactions—within minutes of each race finish. These immersive podcasts will be available on platforms such as Spotify in more than 15 languages shortly after each E-Prix concludes.
The concept originated during a Google Cloud hackathon at the 2024 London E-Prix, and has since been refined with input from the Royal National Institute of Blind People (RNIB). Ongoing testing is planned at upcoming race weekends in Berlin and London, with a full rollout expected for Season 12.
Formula E CEO Jeff Dodds emphasized the championship's commitment to inclusivity, calling the AI audio initiative “a fantastic example of how technology can be used for good.” Google Cloud’s John Abel described the project as equipping visually impaired fans with a “digital storyteller,” enabling them to experience the thrill of racing through detailed, expressive narration.
Technically, the system uses a three-step process via Google Cloud’s Vertex AI: real-time transcription of live commentary, AI-driven analysis to identify and summarize pivotal race events, and natural-sounding text-to-speech synthesis in multiple languages. The result is a polished, shareable audio brief released shortly after the race concludes.
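The three-stage flow described above can be sketched as a simple pipeline. This is a minimal illustration, not the actual implementation: the real system relies on Google Cloud services (speech transcription, generative analysis on Vertex AI, and text-to-speech), whereas here each stage is a hypothetical stand-in so the data flow from commentary to audio brief is easy to follow.

```python
# Sketch of the three-stage race-brief pipeline. Each stage is a
# placeholder for the corresponding Google Cloud service.

from dataclasses import dataclass


@dataclass
class RaceEvent:
    lap: int
    description: str


def transcribe(audio_commentary: list[str]) -> list[str]:
    # Stage 1: real-time transcription of live commentary.
    # Stand-in: the "audio" is already text, so pass it through.
    return audio_commentary


def analyse(transcript: list[str]) -> list[RaceEvent]:
    # Stage 2: AI-driven analysis to identify pivotal race events.
    # Stand-in: a crude keyword filter instead of a generative model.
    keywords = ("overtake", "pit", "attack mode", "wins")
    events = []
    for lap, line in enumerate(transcript, start=1):
        if any(k in line.lower() for k in keywords):
            events.append(RaceEvent(lap=lap, description=line))
    return events


def synthesise_brief(events: list[RaceEvent], language: str = "en") -> str:
    # Stage 3: natural-sounding text-to-speech in the target language.
    # Stand-in: build the narration script that would be voiced.
    lines = [f"[{language}] Race brief:"]
    for ev in events:
        lines.append(f"Lap {ev.lap}: {ev.description}")
    return "\n".join(lines)


commentary = [
    "Lights out and away we go in Berlin.",
    "Brilliant overtake into turn six!",
    "The leader dives into the pit lane for attack mode.",
    "And he wins the E-Prix by two seconds!",
]
brief = synthesise_brief(analyse(transcribe(commentary)))
print(brief)
```

Chaining the stages this way mirrors the described architecture: each step consumes the previous step's output, so the summarization and voicing stages can be swapped for managed cloud services without changing the overall shape.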
By combining generative AI, global accessibility, and expert user feedback, the Formula E–Google Cloud audio podcast represents a bold step toward inclusive motorsport experiences—a model that could inspire other leagues to follow suit.