Porting phonetisaurus many-to-many alignment python script to C++

(author: John Salatas)

Foreword
Following our previous article on phonetisaurus [1] and the decision to use this framework as the g2p conversion method for my GSoC project, this article will describe the port of the dictionary alignment script to C++.

1. Installation
The procedure below was tested on an Intel CPU running openSuSE 12.1 x64 with gcc 4.6.2. Further testing is required for other systems (Mac OS X, Windows).

The alignment script requires the openFST library [2] to be installed on your system. Having downloaded, compiled and installed openFST, the first step is to check out the alignment code from the cmusphinx SVN repository:

$ svn co https://cmusphinx.svn.sourceforge.net/svnroot/cmusphinx/branches/g2p/train

and compile it
$ cd train
$ make
g++ -c -g -o src/align.o src/align.cpp
g++ -c -g -o src/phonetisaurus/M2MFstAligner.o src/phonetisaurus/M2MFstAligner.cpp
g++ -c -g -o src/phonetisaurus/FstPathFinder.o src/phonetisaurus/FstPathFinder.cpp
g++ -g -l fst -l dl -o src/align src/align.o src/phonetisaurus/M2MFstAligner.o src/phonetisaurus/FstPathFinder.o
$

2. Usage
Having compiled the script, running it without any command line arguments will print out its usage, which is similar to that of the original phonetisaurus m2m-aligner Python script:
$ cd src
$ ./align
Input file not provided
Usage: ./align [--seq1_del] [--seq2_del] [--seq1_max SEQ1_MAX] [--seq2_max SEQ2_MAX]
[--seq1_sep SEQ1_SEP] [--seq2_sep SEQ2_SEP] [--s1s2_sep S1S2_SEP]
[--eps EPS] [--skip SKIP] [--seq1in_sep SEQ1IN_SEP] [--seq2in_sep SEQ2IN_SEP]
[--s1s2_delim S1S2_DELIM] [--iter ITER] --ifile IFILE --ofile OFILE

--seq1_del, Allow deletions in sequence 1. Defaults to false.
--seq2_del, Allow deletions in sequence 2. Defaults to false.
--seq1_max SEQ1_MAX, Maximum subsequence length for sequence 1. Defaults to 2.
--seq2_max SEQ2_MAX, Maximum subsequence length for sequence 2. Defaults to 2.
--seq1_sep SEQ1_SEP, Separator token for sequence 1. Defaults to '|'.
--seq2_sep SEQ2_SEP, Separator token for sequence 2. Defaults to '|'.
--s1s2_sep S1S2_SEP, Separator token for seq1 and seq2 alignments. Defaults to '}'.
--eps EPS, Epsilon symbol. Defaults to '<eps>'.
--skip SKIP, Skip/null symbol. Defaults to '_'.
--seq1in_sep SEQ1IN_SEP, Separator for seq1 in the input training file. Defaults to ''.
--seq2in_sep SEQ2IN_SEP, Separator for seq2 in the input training file. Defaults to ' '.
--s1s2_delim S1S2_DELIM, Separator for seq1/seq2 in the input training file. Defaults to ' '.
--iter ITER, Maximum number of iterations for EM. Defaults to 10.
--ifile IFILE, File containing sequences to be aligned.
--ofile OFILE, Write the alignments to file.
$

The two required options are the pronunciation dictionary to align (IFILE) and the file in which the aligned corpus will be saved (OFILE). The script provides default values for all other options, so cmudict (v. 0.7a) can be aligned, allowing for deletions in both graphemes and phonemes, simply by
$ ./align --seq1_del --seq2_del --ifile IFILE --ofile OFILE
or, not allowing for deletions, by
$ ./align --ifile IFILE --ofile OFILE
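
To make the output format concrete, here is an aligned entry produced with the default separators, taken from the diff shown in section 3.1 below. The dictionary entry

BOSCHEE B AO1 SH IY0

is written to the aligned corpus as

B}B O}AO1 S|C}SH H}_ E}IY0 E}_

where '}' pairs a grapheme subsequence with a phoneme subsequence, '|' joins the graphemes S and C that align to the single phoneme SH, and the skip symbol '_' marks graphemes aligned to no phoneme.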

3. Performance
To test the new alignment script's performance, in terms of both its output and its CPU and memory requirements, I performed two test runs aligning the full cmudict (v. 0.7a), allowing deletions in both sequences:
$ ./align --seq1_del --seq2_del --ifile ../data/cmudict.dict --ofile ../data/cmudict.corpus.gsoc
and compared the results with those of the original phonetisaurus script:
$ ./m2m-aligner.py --align ../train/data/cmudict.dict -s2 -s1 --write_align ../train/data/cmudict.corpus

3.1. Alignment
Comparing the two outputs with the Linux diff utility did not reveal any major differences. Minor differences were noticed in the alignment of double vowels and double consonants to a single phoneme, as in the two following examples:
--- cmudict.corpus
+++ cmudict.corpus.gsoc
....
-B}B O}AO1 R}_ R}R I}IH0 S}S
+B}B O}AO1 R}R R}_ I}IH0 S}S
....
-B}B O}AO1 S|C}SH H}_ E}_ E}IY0
+B}B O}AO1 S|C}SH H}_ E}IY0 E}_
....

3.2. CPU and memory usage
On the system described above, the average running time (over two runs) of the new align command was 1h 14m, compared to an average of 1h 28m for the original phonetisaurus script. Both scripts consumed the same amount of RAM (~1.7 GB).

Conclusion – Future work
This article presented the new g2p align script, which seems to produce the same results as the original one and is somewhat faster.
Although it should compile and run as expected on any modern Linux system, further testing is required for other systems (such as Mac OS X and Windows). We also need to investigate the alignment differences (compared to the original script) for double vowels and consonants described above. Although these don't seem critical, they may cause problems later.

References
[1] Phonetisaurus: A WFST-driven Phoneticizer – Framework Review

[2] OpenFst Library, http://www.openfst.org/twiki/bin/view/FST/WebHome

[GSoC 2012: Pronunciation Evaluation #ronanki] Work prior to Official Start

Well, it has been a month since I was accepted into this year's Google Summer of Code, and it has been a great time for me during the community bonding period within the CMUSphinx organization.

It has been four days since GSoC 2012 started officially. Prior to that, I familiarized myself with a few different things with the help of my mentor. He created a wiki page for our projects at https://cmusphinx.github.io/wiki/pronunciation_evaluation. Troy and I will blog here and update the wiki there during the summer, so please check back for important updates.

Currently, my goal is to build a web interface which allows users to evaluate their pronunciation. Some of the sub-tasks have already been accomplished, and some of them are still ongoing:

Work accomplished:

  • Created an initial web interface which allows users to record and play back their speech, using the open-source wami-recorder being developed by the Spoken Language Systems group at MIT.
  • When the recording is completed, the wave file is uploaded to the server for processing.
  • Sphinx3 forced alignment is used to align a phoneme string expected from the utterance with the recorded speech, to calculate time endpoints and acoustic scores for each phoneme.
  • I tried many different output arguments to sphinx3_align from https://cmusphinx.github.io/wiki/sphinx4:sphinxthreealigner and successfully produced the phoneme acoustic scores using two recognition passes.
    • In the first pass, I use -phlabdir as an argument to get a .lab file as output, which contains the list of recognized phonemes.
    • In the second pass, I use that list to get acoustic scores for each phoneme using -wdsegdir as an input argument.
  • Later, I integrated sphinx3 forced alignment with the wami-recorder microphone recording applet so that the user sees the acoustic scores after uploading their recording.
  • Please try this link to test it:  http://talknicer.net/~ronanki/test/
  • Wrote a program to convert a list of each phoneme's "neighbors," or most similar other phonemes, provided by the project mentor, from the Worldbet phonetic alphabet to CMUbet.
  • Wrote a program to take a string of phonemes representing an expected utterance as input and produce a sphinx3 recognition grammar consisting of a string of alternatives representing each expected phoneme and all of its neighboring phonemes, for automatic edit distance scoring (a sketch of this idea follows the list below).
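
To illustrate the kind of grammar construction involved, here is a minimal Java sketch. The class name, the neighbor lists, and the parenthesized-alternatives output format are illustrative assumptions only, not the project's actual code or sphinx3's grammar syntax.

import java.util.*;

public class NeighborGrammarSketch {

    // Build one group of alternatives per expected phoneme:
    // the phoneme itself plus each of its neighbors.
    public static String build(String[] expected, Map<String, List<String>> neighbors) {
        StringBuilder sb = new StringBuilder();
        for (String ph : expected) {
            if (sb.length() > 0) sb.append(' ');
            sb.append('(').append(ph);
            for (String n : neighbors.getOrDefault(ph, Collections.<String>emptyList())) {
                sb.append(" | ").append(n);
            }
            sb.append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical CMUbet neighbor lists, for illustration only.
        Map<String, List<String>> neighbors = new HashMap<>();
        neighbors.put("B", Arrays.asList("P"));
        neighbors.put("AO1", Arrays.asList("AA1", "OW1"));
        // Prints: (B | P) (AO1 | AA1 | OW1) (L)
        System.out.println(build(new String[]{"B", "AO1", "L"}, neighbors));
    }
}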
Ongoing work:
  • Reading about Worldbet, OGIbet, ARPAbet, and CMUbet, the different ASCII-based phonetic alphabets and their mappings between each other and the International Phonetic Alphabet.
  • Will be enhancing the first pass of recognition described above using the generated alternative neighboring phoneme grammars to find phonemes which match the recorded speech more closely than the expected phonemes without using complex post-processing acoustic score statistics.
  • Trying more parameters and options to derive acoustic scores for each phoneme from sphinx3 forced alignment.
  • Writing exemplar score aggregation algorithms to find the means, standard deviations, and their expected error for each phoneme in a phrase, from a set of recorded exemplar pronunciations of that phrase.
  • Writing an algorithm which can detect mispronunciations by comparing a recording's acoustic scores to the expected mean and standard deviation for each phoneme, and aggregating those scores over biphones, words, and the entire phrase (a sketch follows below).
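
To make the comparison step concrete, here is a minimal Java sketch, assuming the per-phoneme acoustic scores and the exemplar means and standard deviations are already available as plain arrays; all names here are hypothetical, not the project's actual code.

public class MispronunciationSketch {

    // Standardize each phoneme's acoustic score against the exemplar statistics.
    static double[] zScores(double[] scores, double[] means, double[] sds) {
        double[] z = new double[scores.length];
        for (int i = 0; i < scores.length; i++) {
            z[i] = (scores[i] - means[i]) / sds[i];
        }
        return z;
    }

    // Aggregate per-phoneme standard scores into a phrase-level score
    // (simple mean); the same aggregation could be applied over biphones or words.
    static double phraseScore(double[] z) {
        double sum = 0;
        for (double v : z) {
            sum += v;
        }
        return sum / z.length;
    }
}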

[GSoC 2012: Pronunciation Evaluation #Troy] Before Week 1

Google Summer of Code 2012 officially started this Monday (21 May). Our expected weekly report should begin next Monday, but here is a brief overview of the preparations we have accomplished during the "community bonding period."

We started with a group chat including our mentor James and the other student, Ronanki. The project details have become clearer to me through the chat and subsequent email communications. For my project, the major focuses will be:
1) A web portal for automatic pronunciation evaluation audio collection; and
2) An Android-based mobile automatic pronunciation evaluation app.
The core of these two applications is edit-distance-grammar based automatic pronunciation evaluation using CMU Sphinx3.

Here are the preparations I have accomplished during the bonding period:

  1. Trying out the basic wami-recorder demo on my school's server;
  2. Adapting rtmplite for audio recording. Rtmplite is a Python implementation of an RTMP server with the minimum support needed for real-time streaming and recording using Adobe's AMF0 protocol. On the server side, the RTMP server daemon process listens on TCP port 1935 by default for connections and media data streaming. On the client side, the Flash client uses Adobe ActionScript 3's NetConnection function to set up a session with the server, and the NetStream function for audio and video streaming as well as microphone recording. The demo application has been set up at: http://talknicer.net/~li-bo/testClient/bin-debug/testClient.html
  3. Based on my understanding of the demo application, which does real-time streaming and recording of both audio and video, I started to write my own audio recorder, which is a key user interface component for both the web-based audio data collection and the evaluation app. The basic version of the recorder is hosted at: http://talknicer.net/~li-bo/audioRecorder/audioRecorder.html . The current implementation:
    1. Distinguishes recordings from different users with user IDs;
    2. Loads pre-defined text sentences to display for recording, which will be useful for pronunciation exemplar data collection;
    3. Performs real-time audio recording;
    4. Can play back the recordings from the server; and
    5. Has basic event control logic, such as to prevent users from recording and playing at the same time, etc.
  4. I have also learned from https://cmusphinx.github.io/wiki/sphinx4:sphinxthreealigner how to get phoneme acoustic scores from "forced alignment" using sphinx3. Two steps are needed to generate the phoneme alignment scores; the details of how to perform that alignment can be found in my more tech-oriented posts at http://troylee2008.blogspot.com/2012/05/testing-cmusphinx3-alignment.html and http://troylee2008.blogspot.com/2012/05/cmusphinx3-phoneme-alignment.html on my personal blog.
Currently, these tasks are ongoing:
  1. Setting up the server-side process to manage user recordings, i.e., distinguishing between users and different utterances.
  2. Figuring out how to use ffmpeg, speexdec, and/or sox to automatically convert the recorded server-side FLV files to PCM .wav files after the users upload the recordings.
  3. Verifying the recording parameters against the recording and speech recognition quality, possibly taking the network bandwidth into consideration.
  4. Incorporating delays between network and microphone events in the recorder. The current version does not wait for network events (such as connection setup, data packet transmission, etc.) to finish before processing the next user event, which can often cause the recordings to be clipped.

My GSoC Project Page: http://www.google-melange.com/gsoc/proposal/review/google/gsoc2012/troylee2008/1

Porting openFST to java: Part 2

(author: John Salatas)

Foreword

This article, the second in a series on porting openFST to java, briefly presents some additional base classes and raises some issues regarding the java FST architecture in general and its compatibility with the original openFST binary format for saving models.

1. FST java library base architecture
The first article in the series [1] introduced the Weight class, the semiring operations related classes and interfaces, and finally a solid class implementation for the tropical semiring based on float weights. The latest SVN commit (revision 11363) [2] additionally includes the edu.cmu.sphinx.fst.weight.LogSemiring class, which implements a logarithmic semiring based on Double weight values.

Furthermore, revision 11363 includes the edu.cmu.sphinx.fst.state.State<W>, edu.cmu.sphinx.fst.arc.Arc<W> and edu.cmu.sphinx.fst.fst.Fst<W> classes, which implement the state, arc and fst functionality, respectively.

2. Architecture design issues

2.1. Java generics support

As described in the first part [1], the edu.cmu.sphinx.fst.weight.Weight class acts basically as a wrapper for the weight's value. The current implementations of the State, Arc and Fst classes take as a type parameter any class that extends the Weight base class. Although this approach provides great flexibility for building custom types of FSTs, the implementations can be greatly simplified if we assume only the base Weight class and modify the State, Arc and Fst class definitions to drop the type parameter.

As an example the Arc class definition would be simplified to

public class Arc implements Serializable {

private static final long serialVersionUID = -7996802366816336109L;

// Arc's weight
protected Weight weight;
// Rest of the code.....
}

instead of its current definition

public class Arc<W extends Weight> implements Serializable {

private static final long serialVersionUID = -7996802366816336109L;

// Arc's weight
protected W weight;
// Rest of the code.....
}

The proposed modification can also be applied to the State and Fst classes, providing an easier-to-use API. In that case, the construction of a basic FST in the class edu.cmu.sphinx.fst.demos.basic.FstTest would be simplified as follows:

// ...
Fst fst = new Fst();

// State 0
State s = new State();
s.AddArc(new Arc(new Weight(0.5), 1, 1, 1));
s.AddArc(new Arc(new Weight(1.5), 2, 2, 1));
fst.AddState(s);

// State 1
s = new State();
s.AddArc(new Arc(new Weight(2.5), 3, 3, 2));
fst.AddState(s);

// State 2 (final)
s = new State(new Weight(3.5));
fst.AddState(s);
// ...

The code could be further simplified by completely dropping generics support in the State, Arc and Fst classes, providing solid implementations based on Weight weights only.

2.2. Compatibility with the original openFST binary format

A second issue is the compatibility of the serialized binary format with the original openFST format. A compatible java library that is able to load/save openFST models would give us the ability to share trained models between various applications. As an example, in the case of ASR applications, trained models could easily be shared between sphinx4 and kaldi [3], which is written in C++ and already uses the openFST library.

2.3. Logarithmic Semiring implementation issues

A final issue has to do with a possible inconsistency in the definition of the plus operation between the paper by Allauzen et al. [4] and the actual openFST code (version 1.3.1): the plus operation $ \oplus_{\log} $ is defined in [4] as $ x \oplus_{\log} y = -\log(e^{-x} + e^{-y}) $, however in code it is implemented as follows

template <class T>
inline T LogExp(T x) { return log(1.0F + exp(-x)); }

template <class T>
inline LogWeightTpl<T> Plus(const LogWeightTpl<T> &w1,
                            const LogWeightTpl<T> &w2) {
  T f1 = w1.Value(), f2 = w2.Value();
  if (f1 == FloatLimits<T>::kPosInfinity)
    return w2;
  else if (f2 == FloatLimits<T>::kPosInfinity)
    return w1;
  else if (f1 > f2)
    return LogWeightTpl<T>(f2 - LogExp(f1 - f2));
  else
    return LogWeightTpl<T>(f1 - LogExp(f2 - f1));
}
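
Note, however, that the two forms can be shown to agree: assuming $ x \geq y $ (i.e. f1 > f2 in the code above), factoring $ e^{-y} $ out of the sum gives

$ x \oplus_{\log} y = -\log(e^{-x} + e^{-y}) = -\log\left(e^{-y}(1 + e^{-(x-y)})\right) = y - \log(1 + e^{-(x-y)}) $

which is exactly f2 - LogExp(f1 - f2), so the code appears to be a numerically stable (log-sum-exp) rewriting of the definition in [4] rather than a genuine inconsistency.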

References

[1] “Porting openFST to java: Part 1”, last accessed: 18/05/2012.

[2] CMUSphinx g2p SVN repository

[3] Kaldi Speech recognition research toolkit, last accessed: 18/05/2012.

[4] C. Allauzen, M. Riley, J. Schalkwyk, W. Skut, M. Mohri, “OpenFst: a general and efficient weighted finite-state transducer library”, Proceedings of the 12th International Conference on Implementation and Application of Automata (CIAA 2007), pp. 11–23, Prague, Czech Republic, July 2007.