Speaker: Rachel Batish



How we used machine learning to define audio-talk listening experiences


For the past three-and-a-half years, Audioburst has been building the best search and discovery engine for audio-talk content. By analyzing and transcribing more than 5.5 million minutes of audio every month (we literally listen to hundreds of radio shows and thousands of podcasts every day!), we have built the largest database of broadcast audio-talk content in the world. In this talk, I will demonstrate how we leveraged machine learning to build this audio search database.

Who is this presentation for?
People who are interested in machine learning and audio search and discovery.

Prerequisite knowledge:
None needed


RACHEL BATISH is VP of Product at Audioburst, where she leads product roadmap and strategy, connecting unique audio and content listening experiences to consumers across cars, third-party speakers, headphones, and apps. Prior to Audioburst, Rachel founded a build-once-deploy-anywhere platform for voice and chat conversational applications, focused on non-developers and complex use cases. She held a similar role at Zuznow. Rachel is also the author of Voicebot and Chatbot Design and has founded and participated in a number of organizations that promote networking and education in the AI and voice space.