Long Audio Aligner Landed in Trunk

After three years of development we have finally merged an aligner for long audio files into trunk. The aligner takes an audio file and the corresponding text and outputs timestamps for every word in the audio. This functionality is useful for processing transcribed files like podcasts, with further applications such as better support for audio editing or automatic subtitle synchronization. Another important application is acoustic model training: with the new feature you can easily collect databases of thousands of hours for your native language from Internet data like news broadcasts, podcasts and audio books. With this new feature we expect the list of supported languages to grow very quickly.

To access the new feature, check out sphinx4 from Subversion or from our new repository on GitHub (http://github.com/cmusphinx/sphinx4) and build the code with Maven using "mvn install".

For the best accuracy, download the En-US generic acoustic model from the downloads section, as well as the g2p model for US English.

Then run the alignment:

java -cp sphinx4-samples/target/sphinx4-samples-1.0-SNAPSHOT-jar-with-dependencies.jar \
edu.cmu.sphinx.demo.aligner.AlignerDemo file.wav file.txt en-us-generic \
cmudict-5prealpha.dict cmudict-5prealpha.fst.ser

The result will look like this:

+ of                        [10110:10180]
  there                     [11470:11580] 
  are                       [11670:11710]
- missing

Here + denotes an inserted word and - marks a missing word. The numbers are start and end times in milliseconds.
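For post-processing, this output is easy to parse. Below is a minimal Python sketch; the exact output format may vary between versions, so treat the regular expression as an assumption:

```python
import re

# One output line looks like "+ of    [10110:10180]" or "- missing".
LINE_RE = re.compile(r'^([+-]?)\s*(\S+)(?:\s+\[(\d+):(\d+)\])?')

def parse_alignment(lines):
    """Turn aligner output lines into (status, word, start_ms, end_ms) tuples."""
    result = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue
        mark, word, start, end = m.groups()
        status = {'+': 'inserted', '-': 'missing'}.get(mark, 'aligned')
        result.append((status, word,
                       int(start) if start else None,
                       int(end) if end else None))
    return result

sample = [
    "+ of                        [10110:10180]",
    "  there                     [11470:11580]",
    "- missing",
]
print(parse_alignment(sample))
```

From tuples like these it is straightforward to generate subtitle files or training segments.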

Please remember that the input file must be 16 kHz, 16-bit, mono. The text must be preprocessed; the algorithm doesn't handle numbers yet.
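You can verify the audio format before running the aligner. A quick check with Python's standard wave module (just a sketch; the aligner itself is Java):

```python
import wave

def check_wav(path):
    """Return True if the file is 16 kHz, 16-bit, mono PCM."""
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == 16000 and
                w.getsampwidth() == 2 and      # 2 bytes per sample = 16 bit
                w.getnchannels() == 1)
```

If the check fails, tools like sox or ffmpeg can resample and downmix the file.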

The work on long alignment started during the 2011 GSoC, with a proposal from Bhiksha Raj and James Baker. Apurv Tiwari made a lot of progress on it; however, we were not able to produce a robust alignment algorithm: it failed in too many cases, and the failures were critical. Eventually we switched to multipass decoding, and the aligner started to work better and to degrade gracefully when the transcription contains errors. Alexander Solovets was responsible for the implementation. The algorithm still doesn't handle some important tokens like numbers or abbreviations, and the speed needs improvement; however, it is already useful, so we can proceed with the next steps of model training. We hope to improve the situation in the near future.

SOA Architecture For Speech Recognition

It's interesting that as speech recognition becomes widespread, the approach to the architecture of speech recognition systems changes significantly. When only a single application needed speech recognition, it was enough to provide a simple library for the speech recognition functions, like pocketsphinx, and link it into the application. That is still a valid approach for embedded devices and specialized deployments. However, the approach changes significantly when you start to plan a speech recognition framework for the desktop. There are many applications which require a voice interface, and we need to let all of them interact with the user. Each interaction requires time to load the models into memory, and memory to hold the models. Since the requirements are pretty high, it becomes obvious that the speech recognition service has to be placed into a centralized process. Naturally, the concept of a speech recognition server appears.

It's interesting that many speech recognition projects have started to talk about a server:

Simon has been using a common daemon (SimonD), managed over sockets, to provide speech recognition functions.

Rodrigo Parra is implementing a dbus-based server for the TamTam Listens project, a speech recognition framework for the Sugar OLPC project. This is very active work in progress; subscribe to the Tumblr blog to get the latest updates.

Andre Natal is working on a speech recognition server for Firefox OS during his summer project.

Right now these solutions are not yet stable; they are works in progress. It would be great if such efforts could converge in the future; CMUSphinx could be the common denominator here and provide the desktop service for applications looking to implement voice interfaces. A standalone library is certainly still needed, so we shouldn't focus only on the service architecture, but a service would be a good addition. It could provide common interfaces for applications which just need to register the required commands on the service.
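To illustrate that last point, a centralized service could expose a simple command registry. The sketch below is purely hypothetical (in-process, with no actual recognizer or IPC); it only shows the register-and-dispatch idea:

```python
class CommandService:
    """Hypothetical registry: applications register phrases with callbacks,
    and the service dispatches whatever the recognizer heard."""

    def __init__(self):
        self._commands = {}

    def register(self, app, phrase, callback):
        # The service could build a restricted grammar from all
        # registered phrases, keeping decoding cheap.
        self._commands[phrase.lower()] = (app, callback)

    def registered_phrases(self):
        return sorted(self._commands)

    def on_recognized(self, hypothesis):
        entry = self._commands.get(hypothesis.lower())
        if entry is None:
            return None          # not a registered command
        app, callback = entry
        return callback()

service = CommandService()
service.register("player", "play music", lambda: "player: playing")
service.register("mail", "check mail", lambda: "mail: checking")
print(service.on_recognized("PLAY MUSIC"))
```

In a real desktop deployment, the registry would live behind an IPC mechanism such as D-Bus, and models would be loaded once for all clients.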

Of course there is the option to put everything in the cloud, but a cloud solution has its own disadvantages: privacy concerns are still here, and the data connection is still expensive and slow. There are similar issues with other resource-intensive APIs like text-to-speech, desktop natural language processing, translation and so on, so quite a lot of desktop memory will soon be spent on desktop intelligence. So reserve a few more gigabytes of memory in your systems; they will be taken pretty soon.

OpenEars introduces easy grammars for Pocketsphinx

The proper design of a voice interface is still quite a challenging task. It seems easy to generate a string from user speech and act on it; in practice, however, things are way more complicated. On mobile devices there are no resources to decode every possible utterance, so sometimes you need to restrict the interaction to some domain. Even if we restrict ourselves to a domain, it's not clear how to handle non-straightforward interactions with the user, with repetitions, delays and corrections. Suppose you want to recognize just "left" or "right": what will you do if the user says "left, hm, no, right"? What if the word "right" is uttered in a context like "you are right"; do you need to react to it as well?

We provide two major ways to describe the user language: grammars and language models. Many people prefer language models because of the simple way to create them with a web service: you just submit a list of phrases and get the data back. However, this is a slippery road. The issue is that language model generation code usually makes significant assumptions about the distribution of probabilities of unseen ngrams in the target language, and calculates the probabilities of unseen combinations using those assumptions. For most simple cases the assumptions are wrong. For example, our SimpleLM uses constant backoff with a 0.5 discount ratio, which means some unusual word combinations get nonzero probability. Most of the time that is not what you expect. If you are using language modeling toolkits, please be aware that default smoothing methods like Good-Turing or Kneser-Ney assume you submit really huge texts. For small data sizes you most likely need different discounting.
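To see why constant backoff matters, here is a toy bigram model with a constant 0.5 weight, in the spirit of the discounting described above (a simplified sketch, not the actual SimpleLM code): word pairs that never appeared in the training phrases still end up with nonzero probability.

```python
from collections import Counter

training = ["turn left", "turn right", "go forward"]

unigrams = Counter()
bigrams = Counter()
for phrase in training:
    words = phrase.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

total_words = sum(unigrams.values())
BACKOFF = 0.5  # constant backoff weight, mirroring the 0.5 discount ratio

def p_unigram(w):
    return unigrams[w] / total_words

def p_bigram(w1, w2):
    """Seen bigrams get a discounted ML estimate; unseen ones back off
    to the unigram probability with a constant 0.5 weight."""
    if bigrams[(w1, w2)] > 0:
        return (1 - BACKOFF) * bigrams[(w1, w2)] / unigrams[w1]
    return BACKOFF * p_unigram(w2)

print(p_bigram("turn", "left"))   # seen in training
print(p_bigram("go", "left"))     # never seen, yet nonzero
```

So a decoder using this model can hypothesize "go left" even though it was never in the submitted phrase list, which may or may not be what you want.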

On the other hand, grammars are complex to create online, hard to debug, and there are many unseen cases that are hard to cover with a grammar. If you have more than 10 rules in your grammar, I can tell you that you are doing something wrong: you are not accounting for rule probabilities properly, and your grammar is probably suboptimal for efficient recognition. Grammars make sense only for very simple lists of commands. Next comes the issue of the format itself, which should be both human-readable and machine-parseable. We use JSGF grammars, but they require special parsers and are not well supported by automatic tools outside of CMUSphinx. Most of the world uses XML-based grammars like SRGS, but you know how hard it is to edit XML manually. Thankfully, we don't use XML for everything anymore; there are far more readable formats like JSON. Finally, you probably want to create grammars on the fly from a simple list of strings, based on context, without writing any text files to storage.
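For reference, a minimal JSGF grammar for a short command list looks like this (the grammar and rule names are illustrative):

```jsgf
#JSGF V1.0;

grammar commands;

public <command> = (go | move) (left | right | forward);
```

Even for something this small you already need a file, a parser, and a debugging cycle, which is exactly the friction discussed above.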

It's amazing to see that OpenEars, a speech recognition toolkit for iOS based on CMUSphinx, has proposed a solution for this issue. The recently released version 1.7 introduces a nice way to create on-the-fly grammars with the API, directly from in-memory data. A grammar looks like this:

     @{ ThisWillBeSaidOnce : @[
         @{ OneOfTheseCanBeSaidOnce : @[@"HELLO COMPUTER", @"GREETINGS ROBOT"]},
         @{ OneOfTheseWillBeSaidOnce : @[@"DO THE FOLLOWING", @"INSTRUCTION"]},
         @{ OneOfTheseWillBeSaidOnce : @[@"GO", @"MOVE"]},
         @{ ThisWillBeSaidWithOptionalRepetitions : @[
             @{ OneOfTheseWillBeSaidOnce : @[@"10", @"20", @"30"]},
             @{ OneOfTheseWillBeSaidOnce : @[@"LEFT", @"RIGHT", @"FORWARD"]}
         ]},
         @{ OneOfTheseWillBeSaidOnce : @[@"EXECUTE", @"DO IT"]},
         @{ ThisCanBeSaidOnce : @[@"THANK YOU"]}
     ]}

and is defined directly in the code. This method uses native Objective-C primitives for grammar construction, so you don't need to learn any other grammar syntax and you don't need to create any text files. I think this approach will be popular among OpenEars developers, and probably one day a similar approach will be merged into the Pocketsphinx core. The final design is still evolving, but it seems to be a step in the right direction.
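To imagine how a similar language-native API might look in other bindings (purely speculative; no such Python API exists in Pocketsphinx today), the same grammar could be expressed with plain Python primitives:

```python
# Hypothetical Python rendering of the OpenEars-style grammar above;
# the rule names mirror OpenEars, but this API is imaginary.
grammar = {
    "ThisWillBeSaidOnce": [
        {"OneOfTheseCanBeSaidOnce": ["HELLO COMPUTER", "GREETINGS ROBOT"]},
        {"OneOfTheseWillBeSaidOnce": ["DO THE FOLLOWING", "INSTRUCTION"]},
        {"OneOfTheseWillBeSaidOnce": ["GO", "MOVE"]},
        {"ThisWillBeSaidWithOptionalRepetitions": [
            {"OneOfTheseWillBeSaidOnce": ["10", "20", "30"]},
            {"OneOfTheseWillBeSaidOnce": ["LEFT", "RIGHT", "FORWARD"]},
        ]},
        {"OneOfTheseWillBeSaidOnce": ["EXECUTE", "DO IT"]},
        {"ThisCanBeSaidOnce": ["THANK YOU"]},
    ]
}
print(len(grammar["ThisWillBeSaidOnce"]))
```

A structure like this also serializes trivially to JSON, which fits the point above about readable grammar formats.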

Feel free to contact Halle Winkler, the OpenEars author, if you are interested in this new way to define grammars.

Speech projects on GSOC 2014

Google Summer of Code is definitely one of the largest projects in the open source world: 1400 students will enjoy participating in open source projects this summer. Four projects from the pool are dedicated to speech recognition, and it is really amazing that all of them are planning to use CMUSphinx!

Here is the list of hot new projects for you to track and participate in:

Speech to Text Enhancement Engine for Apache Stanbol
Student: Suman Saurabh
Organization: Apache Software Foundation
Assigned mentors: Andreas Kuckartz

The enhancement engine uses the Sphinx library to convert captured audio. The media (audio/video) data file is parsed with the ContentItem and converted to a proper audio format by the Xuggler libraries. Speech is then extracted by Sphinx to 'plain/text', with annotations of the temporal position of the extracted text. Sphinx uses an acoustic model and a language model to map the utterances to text, so the engine will also support uploading an acoustic model and a language model.

Development of both online and offline speech recognition for B2G and Firefox

Student: Andre Natal
Organization: Mozilla
Assigned mentors: Guilherme Gonçalves
Short description: Mozilla needs to fill the gap between B2G and other mobile OSes, and desktop Firefox also lacks this important feature, which is already available in Google Chrome. In addition, we'll have a new Web API empowering developers, and every speech recognition application already developed and running on Chrome will start to work on Firefox without changes. In the future, this can be integrated into other Mozilla products, opening windows to a whole new class of interactive applications.

I know Andre very well; he is a very talented person, so I'm sure this project will be a huge success. By the way, you can track it in the github repository too: https://github.com/andrenatal/speechrtc

Sugar Listens - Speech Recognition within the Sugar Learning Platform

Student: Rodrigo Parra
Organization: Sugar Labs
Assigned mentors: tchx84
Short description: Sugar Listens seeks to provide an easy-to-use speech recognition API to educational content developers, within the Sugar Learning Platform. This will allow developers to integrate speech-enabled interfaces to their Sugar Activities, letting users interact with Sugar through voice commands. This goal will be achieved by exposing the open-source speech recognition engine Pocketsphinx as a D-Bus service.

Integrate Plasma Media Center with Simon to make navigation easier

Student: Ashish Madeti
Organization: KDE
Assigned mentors: Peter Grasch, Shantanu Tushar
Short description: Users can currently navigate Plasma Media Center with the keyboard and mouse. Now I will add voice as a way for users to navigate and use PMC. This will be done by integrating PMC with Simon.

I know Simon has a long and successful history of GSOC participation, so this project is also going to be very interesting.

Also, this summer we are going to run a few student internships unrelated to GSOC; they are going to be very interesting too, stay tuned!