We are happy to see you in our Telegram group; join us for real-time discussions about CMUSphinx.
With the launch of Android Wear 2.0, it is now possible to run standalone apps on wearables, independent of a phone.
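Since PocketSphinx decodes entirely on-device, a standalone Wear app can do keyword spotting with no phone connection at all. Below is a minimal sketch using the pocketsphinx-android API; the model directory, dictionary name, keyphrase, and threshold are illustrative assumptions borrowed from the standard demo app, not Wear-specific values:

```java
import android.content.Context;

import java.io.File;
import java.io.IOException;

import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.RecognitionListener;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

public class WearKwsDemo {

    private static final String KWS_SEARCH = "wakeup";    // hypothetical search name
    private static final String KEYPHRASE = "hey watch";  // hypothetical keyphrase

    public static SpeechRecognizer startKeywordSpotting(Context context,
                                                        RecognitionListener listener)
            throws IOException {
        // Copy the bundled model files from the APK assets to local storage.
        Assets assets = new Assets(context);
        File assetsDir = assets.syncAssets();

        // Build a recognizer; "en-us-ptm" and the dictionary file are the
        // names shipped with the pocketsphinx-android demo, adjust as needed.
        SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
                .setKeywordThreshold(1e-20f)
                .getRecognizer();

        // Listen continuously for the keyphrase; the listener's
        // onPartialResult callback fires when it is detected.
        recognizer.addListener(listener);
        recognizer.addKeyphraseSearch(KWS_SEARCH, KEYPHRASE);
        recognizer.startListening(KWS_SEARCH);
        return recognizer;
    }
}
```

The keyword threshold trades false alarms against missed detections, which matters on a watch where the microphone is far from the speaker, so expect to tune it for your hardware.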
After a break of several years, we are pleased to announce that the CMUSphinx project has been accepted into the Google Summer of Code 2017 program. This will enable us to help several students start their way in speech recognition, open source development, and CMUSphinx. We are really excited about that. See the organization page for details.
This year we will focus on pronunciation evaluation; two major pronunciation tasks will be our preference. If you are interested in participating as a student, the application period will open soon, but it is better to start preparing your application right now. Feel free to contact us with any questions, but please do your own googling before asking very simple things! For more details see:
https://cmusphinx.github.io/wiki/summerofcodestudents
If you would like to be a mentor, please sign in to the GSoC web application and add your ideas to the ideas list:
https://cmusphinx.github.io/wiki/summerofcodeideas
We invite you to participate!
Guenter Bartsch writes to us:
The latest release of my audio models built from VoxForge submissions now covers 70 hours of audio and 27k dictionary entries, available for download here.
This release includes:
- A CMU Sphinx audio model
- Several Kaldi models (still very experimental)
- A Sequitur g2p model
- Language models created using cmuclmtk and SRILM
For the first time, the audio models include small portions of openpento and german-speechdata-package-v2.tar.gz. Reviewing and transcribing those is quite laborious, so it will take some time until they are fully reviewed and integrated into the models. Also note that this release includes more distant-microphone recordings than older releases, which means the word error rate has increased accordingly.
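To try the CMU Sphinx model from this release on a recording, you can point Sphinx4's high-level API at the downloaded files. A minimal sketch follows; the file paths and names are assumptions about how the release is unpacked, so adjust them to match the actual archive contents:

```java
import java.io.FileInputStream;
import java.io.InputStream;

import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;

public class GermanTranscriber {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Hypothetical paths; point these at the unpacked release files.
        configuration.setAcousticModelPath("model/cmusphinx-voxforge-de");
        configuration.setDictionaryPath("model/voxforge_de.dic");
        configuration.setLanguageModelPath("model/voxforge_de.lm");

        // Decode a 16 kHz mono WAV file and print the hypotheses.
        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
        try (InputStream stream = new FileInputStream("test.wav")) {
            recognizer.startRecognition(stream);
            SpeechResult result;
            while ((result = recognizer.getResult()) != null) {
                System.out.println(result.getHypothesis());
            }
            recognizer.stopRecognition();
        }
    }
}
```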
It is amazing that more and more languages are getting accurate speech recognition support in CMUSphinx. While you might think a project could support a variety of languages on its own, in practice it is very hard to train a good database without a local person, simply because you do not know where to find audio for training. A local person is also needed to evaluate recognition results. For example, Spanish has half a billion speakers around the world, yet we still have no good resources to train Spanish models.
So we encourage you once again to build models for your own language, to collect transcribed speech, and to contribute to VoxForge. Only a joint effort will enable really good coverage of languages in speech recognition.