Are you the executive in charge of podcast curation for a library or a cultural, educational or civil institution?
Do you need to auto-catalog interviews, lectures, public speeches or civil proceedings?
Are you frustrated with the high cost of manually extracting explicit and inferred concepts from podcasts, MP3s and the audio tracks of videos?
Would you be curious to know how automated speech summarization along the timeline of digital audio files could unlock access to key passages of an audio track among thousands of hours of recordings?
Hi, my name is Jeh Daruvala. I am the CEO of a disruptive startup that enables low-cost summarization of human speech in videos and podcasts.
We strive to give media asset managers and curators greater control over audio content (class lectures, civil proceedings, conference sessions and speeches). This includes any file from a LiveScribe smart pen.
Our software extracts all meaningful concepts—called semantic “topics”—from podcast and video content, supporting wider dissemination of information and knowledge.
Many of our customers use these topics to tag otherwise “dark” multimedia files, supercharging those files for search engine optimization (Google, Bing, Ask), on-site search, and expanded discovery by your key stakeholders.
Our low-cost cloud-based service builds on useful but incomplete crowd-sourced blog data and user-generated folksonomies.
In more technical terms, our service indexes audio and video content (including podcasts and YouTube videos), creating speech-to-topic metadata. It will change everything you know about curation and speech-mining of video content. We call this service Speech2Topics™.
Our system processes audio-visual content using successive layers of speech recognition, natural language processing, semantic classification and time-stamping of discrete topics.
Our patent-pending Speech2Topics system then analyzes topic metadata for frequency, relevance, and other proprietary variables and metrics, building hyper-precise semantic signatures (in near real-time) of spoken-word content inside YouTube videos.
These semantic signatures represent the meanings and inferences of human speech. Our system lets you define your own topic taxonomy, giving you control over a flexible solution.
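To make the idea concrete, here is a minimal sketch of the kind of output such a pipeline produces. This is purely illustrative (it is not Yactraq's actual system): the taxonomy, keyword matching, and data shapes are all assumptions, standing in for the real speech recognition and semantic classification layers.

```python
# Illustrative sketch only -- a hypothetical, keyword-based stand-in for
# speech-to-topic tagging. A real system would use speech recognition,
# NLP and semantic classification rather than simple keyword overlap.
from collections import Counter

# Hypothetical user-configured taxonomy: topic -> trigger keywords
TAXONOMY = {
    "economics": {"inflation", "markets", "trade"},
    "education": {"lecture", "students", "curriculum"},
}

def tag_segments(segments, taxonomy=TAXONOMY):
    """Map (start_seconds, transcript_text) segments to
    time-stamped topic metadata."""
    tagged = []
    for start, text in segments:
        words = set(text.lower().split())
        for topic, keywords in taxonomy.items():
            if words & keywords:  # any trigger keyword present
                tagged.append({"time": start, "topic": topic})
    return tagged

def semantic_signature(tagged):
    """Topic-frequency profile of the whole recording."""
    return Counter(t["topic"] for t in tagged)

segments = [
    (0, "today's lecture covers trade and markets"),
    (95, "students asked about inflation"),
]
tags = tag_segments(segments)
print(tags)                      # time-stamped topic metadata
print(semantic_signature(tags))  # e.g. Counter({'economics': 2, ...})
```

The time-stamped records are what lets a curator jump straight to the passage where a topic is discussed; the frequency profile is a toy version of a semantic signature.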
This precise understanding of the meaning in a video creates the most value for your users.
Let's Do a Test
You have audio-visual content. We have a cloud service.
Let’s schedule a web conference and work out how best to conduct a free pilot.