It is not easy to build intelligent software: cooperation across all levels of the speech processing stack needs to be tight. For example, you should not only recognize the input but also understand it and, more importantly, respond to it. Your software needs the ability to understand the current situation, correct recognition results and respond intelligently. A great piece of software to assist you with that has recently been released.
OpenDial is a Java-based software toolkit that can be used to develop robust and adaptive dialogue systems for various domains. Dialogue understanding, management and generation models are expressed in OpenDial through probabilistic rules encoded in a simple XML format.
You can find more information about the toolkit on the project website:
http://opendial.googlecode.com
The current release contains a completely refactored code base, a large set of unit tests and a full-fledged graphical interface. On the website you can also find practical examples of dialogue domains and step-by-step documentation on how to use the toolkit.
Try it!
The biggest challenge for developers today is the natural user interface. People already use gesture and speech to interact with their PCs and devices; such natural ways of interacting with technology make it easier to learn how to operate it. Major companies like Microsoft and Intel are putting a lot of effort into research on natural interaction.
CMUSphinx is a critical component of the open source infrastructure for creating natural user interfaces. However, it is not the only component required to build an application. One of the most frequently asked questions is: how do I analyze speech recognition output to turn it into actionable information? The answer is not simple; it involves complex NLP technology to analyze user intent, as well as a dataset to support that analysis.
In simple cases you can just parse number strings to turn them into values, or apply regex pattern matching to extract the name of the object to act upon. In Sphinx4 there is a mechanism that can parse grammar output to assign semantic values to a user request. In general, though, this is a more complex task.
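To illustrate the simple cases, here is a minimal sketch of regex-based post-processing of a recognition hypothesis. The command pattern and function names are hypothetical examples, not part of CMUSphinx:

```python
import re

# Toy post-processing of recognizer output (hypothetical command grammar,
# not part of CMUSphinx): pull out an action, the object to act upon,
# and any numeric values with regular expressions.
COMMAND = re.compile(r"(?P<action>turn (?:on|off))\s+the\s+(?P<object>\w+)")

def parse_command(hypothesis):
    """Return (action, object) from a recognized utterance, or None."""
    match = COMMAND.search(hypothesis)
    if match is None:
        return None
    return match.group("action"), match.group("object")

def extract_numbers(hypothesis):
    """Turn digit strings in the hypothesis into integer values."""
    return [int(token) for token in re.findall(r"\d+", hypothesis)]

print(parse_command("please turn on the lamp"))   # ('turn on', 'lamp')
print(extract_numbers("set volume to 75"))        # [75]
```

This works as long as user requests stay within a small, predictable grammar; once phrasing varies freely, you need the kind of NLP technology discussed above.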
Recently, Wit.AI announced the availability of their NLP technology for developers. If you are looking for a simple way to create a natural language interface, Wit.AI seems worth a try. Today, with the combination of the best engines like CMUSphinx and Wit, you can finally bring the power of voice to your app.
You can build an NLP analysis engine with Wit.AI in three simple stages.
Bringing natural language understanding to the masses of developers is a genuinely hard problem, and we are glad that tools are appearing to simplify the solution.
As of today, a large change adding SWIG-generated Python bindings has been merged into the pocketsphinx and sphinxbase trunk.
SWIG is an interface compiler that connects programs written in C and C++ with scripting languages such as Perl, Python, Ruby, and Tcl. It works by taking the declarations found in C/C++ header files and using them to generate the wrapper code that scripting languages need to access the underlying C/C++ code. In addition, SWIG provides a variety of customization features that let you tailor the wrapping process to suit your application.
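As an illustration of how SWIG works, here is a minimal interface file for a hypothetical C function. This is a generic sketch, not the actual pocketsphinx interface definition:

```
/* example.i -- hypothetical SWIG interface file (not from pocketsphinx).
 * SWIG reads the declarations below and generates wrapper code so the
 * function becomes callable from Python, Ruby, Java, etc. */
%module example

%{
/* This block is copied verbatim into the generated wrapper code. */
#include "example.h"
%}

/* Declarations to wrap; SWIG generates a binding for each of them. */
int add(int a, int b);
```

Running `swig -python example.i` and compiling the generated wrapper alongside the C implementation makes the function available from Python as `example.add(2, 3)`.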
With this port we hope to increase the coverage of the pocketsphinx bindings and provide a uniform and documented interface across languages: Python, Ruby, Java.
To test the change, check out sphinxbase and pocketsphinx from trunk and see the examples in pocketsphinx/swig/python/test.
It is an old idea to implement an open source dictation tool everyone could use: no servers, no networking, no need to share your private speech with anyone else. This is certainly not a trivial project, and it has been started many times, but it is something truly world-changing. Now it is alive again, powered by CMUSphinx.
Read about the ongoing efforts of the Simon project to implement open source dictation.