The official CMU Sphinx wiki is now located at
https://cmusphinx.github.io/wiki
The information will be gradually transferred there from other sources.
Are you doing business with CMU Sphinx? Are you looking for a developer experienced in CMU tools? Are you interested in a professional ASR group, or would you like a recommendation? Join the CMU Sphinx group on LinkedIn and connect with other professionals who use Sphinx in their work.
Huh, thank god I didn't book tickets already as my wife suggested! I hope you didn't either. Anyway, the workshop will now be on March 13, so be careful about that. By the way, the program promises to be extremely interesting.
Preliminary program
1:30-1:45: Coffee
1:45-2:10: A Sphinx Based Speech-Music Segmentation Front-End For Improving The Performance Of An Automatic Speech Recognition System In Turkish
Cemil Demir, TUBITAK-UEKAE; Erdem Ünal, TUBITAK-UEKAE; Mehmet Ugur Dogan, TUBITAK-UEKAE
2:10-2:35: LIUM_SpkDiarization: An Open Source Toolkit For Diarization
Sylvain Meignier, LIUM; Teva Merlin, LIUM
2:35-3:00: Scientific Learning Reading Assistant(TM): CMU Sphinx Technology in a Commercial Educational Software Application
Valerie L. Beattie, Scientific Learning Corporation
3:00-3:15: Coffee break
3:15-3:40: Myovox: A Plug And Play Device Emulating A Mouse And Keyboard Using Speech And Muscle Inputs
Matthew Belgiovine, University of Pennsylvania; Mike DeLiso, University of Pennsylvania; Steve McGill, University of Pennsylvania
3:40-4:05: Some recent research works at LIUM based on the use of CMU Sphinx
Yannick Estève, LIUM; Paul Deléglise, LIUM; Sylvain Meignier, LIUM; Holger Schwenk, LIUM; Loic Barrault, LIUM; Fethi Bougares, LIUM; Richard Dufour, LIUM; Vincent Jousse, LIUM; Antoine Laurent, LIUM; Anthony Rousseau, LIUM
4:05-4:30: Implementing and Improving MMIE training in SphinxTrain
Long Qin, Carnegie Mellon University; Alexander Rudnicky, Carnegie Mellon University
Support for phonetically-tied mixture acoustic models has been added to the Subversion repository for SphinxTrain, Sphinx3, and PocketSphinx. Briefly, phonetically-tied mixture models are somewhere between semi-continuous and fully-continuous models, offering most of the speed of the former combined with the ability of the latter to effectively use large amounts of training data.
Parameter settings for training PTM models are present in the template sphinx_train.cfg file created by SphinxTrain, and can be enabled by setting $CFG_HMM_TYPE to ".ptm.". The development version of PocketSphinx will automatically recognize PTM models, while Sphinx3 requires you to add "-senmgau .ptm." to the command line.
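For reference, here is a minimal sketch of the two settings mentioned above; the decoding arguments shown in brackets are placeholders rather than paths from any actual release.

    # In the sphinx_train.cfg template generated by SphinxTrain, switch the model type to PTM:
    $CFG_HMM_TYPE = '.ptm.';

    # When decoding with Sphinx3, append the flag to your usual sphinx3_decode command line
    # (the development version of PocketSphinx detects PTM models automatically, no flag needed):
    sphinx3_decode -senmgau .ptm. [rest of your usual decoding arguments]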
We have made PTM models for English and Mandarin available for download on the SourceForge downloads page. These have not been extensively optimized, but the English models, at least, already offer better performance than comparable fully-continuous models. Compressed and optimized 8 kHz versions of these models will be released with PocketSphinx 0.6.
n.b. A dictionary and language model (caution: very large) for Mandarin are also available.
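As a rough usage sketch (not taken from the release itself), decoding with one of the downloaded PTM models through PocketSphinx would look roughly like the following; all paths are hypothetical placeholders for wherever you unpack the acoustic model, language model, and dictionary.

    # Hypothetical paths; in this form pocketsphinx_continuous listens on the default audio device.
    pocketsphinx_continuous \
        -hmm /path/to/ptm-acoustic-model \
        -lm /path/to/language-model.lm.DMP \
        -dict /path/to/dictionary.dic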