Multilingual Access to Large Spoken Archives (Invited talk)

Spoken word collections promise access to unique and compelling content, and most of the technology needed to realize that promise is now in place. Decreasing storage costs, increasing network capacity, and the availability of software to encode and exchange digital audio make physical access to spoken word collections possible at a previously unimaginable scale. Effective support for intellectual access, the problem of finding what you are looking for, is much more challenging, however. In this talk I will briefly describe work that has been done on this problem at the Text Retrieval Conferences, in the Topic Detection and Tracking evaluations, and in individual research projects around the world.

I will then describe a unique resource: a collection of 116,000 hours of oral history interviews, recorded in 32 languages in 57 countries, that has been assembled by the Survivors of the Shoah Visual History Foundation. Nearly 10,000 hours of this audio has been manually segmented, summarized and indexed, making this an unrivaled resource with which we can explore a broad array of data-driven techniques. My main focus will be to explain how we are leveraging this exceptional resource to develop the ability to index similar materials automatically. The project, which we call MALACH (Multilingual Access to Large spoken ArCHives), builds on a long heritage of increasingly demanding applications for speech recognition technology. The accented, emotional and elderly speech in the Shoah Foundation's collection is so challenging that state-of-the-art systems initially yielded a 90% word error rate! We now have speech recognition systems that achieve better than half that error rate for two languages, English and Czech. That is nowhere near good enough to produce readable transcripts, but it is approaching a point where other language technologies can begin to make headway. I will illustrate that point with our latest results from across the project on speech recognition, natural language processing components, and information retrieval system design.

The scope of this one project is breathtaking, directly involving nine research teams from six institutions on two continents (Charles University, IBM T.J. Watson Research Lab, Johns Hopkins University, the Shoah Foundation, the University of Maryland, and the University of West Bohemia), with interests that range from the information needs of historians to the modeling of Czech colloquial pronunciation. Virtually every topic in computational linguistics finds expression in that range. We ultimately plan to build speech recognition systems in at least five languages (adding Russian, Polish and Slovak to what we have now), so morphology and language modeling are critical issues. The diverse range of languages in the collection makes