Announcing GSoC Students

We would like to thank all applicants for the time and effort they put into their GSoC applications to work on CMUSphinx. We were ultimately given two slots and received many great applications, which made choosing very difficult. We hope that students who were not accepted will still get involved with CMUSphinx, and we look forward to receiving their applications next year.

We are pleased to announce that the two slots were awarded to Michal Krajňanský and Apurv Tiwari.

Michal

Michal is a student at Masaryk University in Brno, Czech Republic, studying Informatics with a focus on Artificial Intelligence and Natural Language Processing. Michal will be working on training acoustic models on long audio files. He will optimize SphinxTrain to take advantage of massively parallel hardware through the NVIDIA CUDA framework, reducing the memory requirements of the Baum-Welch algorithm and significantly speeding up training, and he will modify SphinxTrain so that it can process long input audio files.
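
For context (this is the standard textbook formulation of the algorithm, not a description of Michal's planned implementation), Baum-Welch training rests on the forward-backward recursions over an utterance of T frames and N HMM states:

    \alpha_t(j) = \Big[ \sum_i \alpha_{t-1}(i)\, a_{ij} \Big]\, b_j(o_t),
    \qquad
    \beta_t(i) = \sum_j a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)

Both lattices have to be held in memory to accumulate the re-estimation statistics, so memory grows roughly as O(T \cdot N); for hour-long recordings T becomes very large, which is why long audio files are hard to train on directly and why a memory-lean, massively parallel reformulation helps.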

Apurv

Apurv is a student at the Indian Institute of Technology Delhi in New Delhi, India, studying Mathematics and Computing. Apurv will be working on adding long audio alignment to CMUSphinx: aligning a given approximate transcription with the corresponding audio file and improving the transcription at points of low confidence.

The mentor team includes Prof. James Baker and Prof. Bhiksha Raj, as well as all the members of our community.

Both Apurv and Michal will blog weekly about their experience. The blogs will appear here at https://cmusphinx.github.io/

We want to thank Google for providing this wonderful opportunity and the mentors for donating their valuable time.  We eagerly anticipate great things from Apurv and Michal.  Stay tuned!

CMUSphinx 0.7 Is Released

We are pleased to announce the availability of the updated CMUSphinx toolkit. You can find the updated sphinxbase, pocketsphinx, sphinxtrain, cmuclmtk and sphinx4 in the downloads section:

https://cmusphinx.github.io/wiki/download/

Major changes include:

  • Sphinxtrain actively uses sphinxbase functions
  • Training is more user-friendly
  • Various advanced training techniques are implemented
  • Pocketsphinx is way faster on big FSG grammars
  • Many bug fixes and user-friendly improvements

See the NEWS file in each package for more details. More changes are coming soon; enjoy!

CMUSphinx at GSOC 2011

We are pleased to announce that the CMUSphinx project has been accepted into the Google Summer of Code 2011 program. This will enable us to help several students start their way in speech recognition, open source development, and CMUSphinx. We are really excited about that.

http://www.google-melange.com/gsoc/program/home/google/gsoc2011

If you are interested in participating as a student, the application period will open soon, but it is better to start preparing your application right now. Feel free to contact us with any questions! For more details, see:

https://cmusphinx.github.io/wiki/summerofcodestudents

If you would like to be a mentor, please sign in to the GSoC web application and add your ideas to the ideas list:

https://cmusphinx.github.io/wiki/summerofcodeideas

We invite you to participate!

CMLLR Adaptation in SphinxTrain

The problem is that, given the complexity of ASR algorithms, it is very hard to implement them all. Some of them are better in certain situations, some are worse. For a specific application you can always choose the most reasonable approach, but it may not be readily available in your system, and it can be quite resource-consuming to implement yourself. That's why frameworks like CMUSphinx are valuable for both researchers and speech application developers, and that's why we are so happy to see your contributions to CMUSphinx.

A good example of this is the set of approaches for training an MLLR transform. Basically, there is MLLR, where the means and variances of the Gaussians are estimated with separate transforms, and CMLLR, where the means and variances of the Gaussian distributions share a single transform estimated together. CMLLR is more complex to estimate, but because it has fewer parameters it makes sense to apply it when your adaptation data is small. For example, if you have just a minute of speech for adaptation, CMLLR can give you better results than MLLR.
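
For reference, the two schemes can be sketched in the usual textbook notation (this summarizes the standard formulation, not the exact SphinxTrain code):

    \text{MLLR:}\quad \hat{\mu} = A\mu + b \qquad \text{(means; variances may get a separate transform)}

    \text{CMLLR:}\quad \hat{\mu} = A\mu + b, \quad \hat{\Sigma} = A \Sigma A^{\top} \qquad \text{(one shared transform)}

Because the same matrix acts on both the mean and the covariance, CMLLR can equivalently be applied in feature space as \hat{x} = A^{-1}(x - b) with a \log\lvert\det A^{-1}\rvert correction to the likelihood, which is why it is also known as fMLLR; the shared transform is also what gives it fewer parameters to estimate from scarce adaptation data.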

Why do we write about this today, you ask? Easy: today the CMLLR estimation code landed in the SphinxTrain trunk. See the file cmllr.py. Many thanks to Stephan Vanni, who contributed this part; it is a really valuable addition. Enjoy!