Web data collection for pronunciation evaluation

(author: Troy)

(status: week 4)

[Project mentor note:  I have been holding these more recent blog posts pending some issues with Adobe Flash security updates which periodically break cross-platform audio upload web browser solutions. We have decided to plan for a fail-over scheme using low-latency HTTP POST multipart/form-data binary Speex uploads to provide backup in case Flash/rtmplite fails again in the future. This might also support most of the mobile devices. Please excuse the delay and rest assured that progress continues and will continue to be announced at such time as we are confident that we won't need to contradict ourselves as browser technology for audio upload continues to develop. --James Salsman]
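The HTTP POST fallback mentioned above can be sketched with only the standard library. The following is a hypothetical illustration, not the actual site code: the field name, filename, and endpoint are assumptions, and it only shows how a multipart/form-data body for a recorded Speex clip could be assembled.

```python
import uuid

def build_multipart(field_name, filename, payload, content_type="audio/x-speex"):
    """Return (body_bytes, content_type_header) for a single-file upload.

    Builds a multipart/form-data body by hand, as a browser (or mobile
    client) would for an <input type="file"> style upload.
    """
    boundary = uuid.uuid4().hex
    head = (
        "--{b}\r\n"
        'Content-Disposition: form-data; name="{f}"; filename="{n}"\r\n'
        "Content-Type: {c}\r\n\r\n"
    ).format(b=boundary, f=field_name, n=filename, c=content_type).encode()
    tail = "\r\n--{b}--\r\n".format(b=boundary).encode()
    return head + payload + tail, "multipart/form-data; boundary=" + boundary

# The body could then be sent with urllib.request, e.g.:
# req = urllib.request.Request(upload_url, data=body,
#                              headers={"Content-Type": ctype})
```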

The data collection website now provides its basic capabilities. Anyone interested, please check out http://talknicer.net/~li-bo/datacollection/login.php and give it a try. If you encounter any problems, please let us know.

Here are my accomplishments from last week:

1) Discussed the database schema design with the project mentor and created the database in MySQL. The current schema is shown at http://talknicer.net/w/Database_schema. During development of the user interface, slight modifications were made to refine the schema, such as the age field in the users table: storing the user's birth date instead is much better, since a stored age goes stale. Other similar changes were made. I learned that good database design comes from practice, not purely imagination.

2) Implemented the two types of user registration pages: one for students and one for exemplar uploaders. To avoid redundant work and impose fewer constraints on user types, the registration process involves two steps: a basic registration and an extra information update. For students, only the basic step is mandatory, but exemplar uploaders have to fill out both forms.
3) Added extra supporting functionality for user management, including password reset and mode selection for users with more than one type.
4) Incorporated the audio recorder with the website for recording and uploading to servers.
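On the birth-date point in (1) above: the reason a birth date ages better than a stored age value is that age can always be derived on demand. A small sketch, independent of the actual site code:

```python
from datetime import date

def age_on(birth_date, on_date):
    """Compute a user's age in whole years on a given date."""
    # Subtract one year if the birthday hasn't occurred yet that year.
    before_birthday = (on_date.month, on_date.day) < (birth_date.month, birth_date.day)
    return on_date.year - birth_date.year - before_birthday

# A stored age would be wrong a year later; a stored birth date never is.
```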

edit-distance grammar decoding using sphinx3: Part 2

(Author: Srikanth Ronanki)
(Status: GSoC 2012 Pronunciation Evaluation Week 4)

The source code for the functions below [1] has been uploaded to http://cmusphinx.svn.sourceforge.net/viewvc/cmusphinx/branches/speecheval/ronanki/scripts/
Here are some brief notes on how to use those programs:

Method 1: (phoneme decode)
Path:
neighborphones_decode/one_phoneme/
Steps To Run:
1. Use split_wav2phoneme.py to split a sample wav file into individual phoneme wav files
$ python split_wav2phoneme.py
2. Create split.ctl file using extracted split_wav directory
$ ls split_wav/* > split.ctl
$ sed -i 's/.wav//g' split.ctl

3. Run feature_extract.sh program to extract features for individual phoneme wav files
$ sh feature_extract.sh
4. Java Speech Grammar Format (JSGF) files are already created in FSG_phoneme
5. Run jsgf2fsg.sh in FSG_phoneme to convert from jsgf to fsg.
$ sh jsgf2fsg.sh
6. Run decode_1phoneme.py to get the required output in output_decoded_phones.txt
$ python decode_1phoneme.py
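The six steps above can be chained in a small driver script. The following is a sketch only: the script and directory names are taken from the steps, the subprocess calls are assumed to run from neighborphones_decode/one_phoneme/, and step 2 is done in Python instead of ls/sed.

```python
import glob
import os
import subprocess

def make_ctl(wav_dir="split_wav", ctl_file="split.ctl"):
    """Step 2: write split.ctl with one entry per wav, extension stripped."""
    entries = sorted(os.path.splitext(p)[0]
                     for p in glob.glob(os.path.join(wav_dir, "*.wav")))
    with open(ctl_file, "w") as f:
        f.write("\n".join(entries) + "\n")
    return entries

def run_pipeline():
    subprocess.check_call(["python", "split_wav2phoneme.py"])  # step 1
    make_ctl()                                                 # step 2
    subprocess.check_call(["sh", "feature_extract.sh"])        # step 3
    # Steps 4-5: JSGF files already exist in FSG_phoneme; convert to FSG.
    subprocess.check_call(["sh", "jsgf2fsg.sh"], cwd="FSG_phoneme")
    subprocess.check_call(["python", "decode_1phoneme.py"])    # step 6
```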

Method 2: (Three phones decode)
Path:

neighborphones_decode/three_phones/
Steps To Run:
1. Use split_wav2threephones.py to split a sample wav file into individual wav files, each consisting of three phones, the outer two serving as contextual information for the middle one.
$ python split_wav2threephones.py
2. Create split.ctl file using extracted split_wav directory
$ ls split_wav/* > split.ctl
$ sed -i 's/.wav//g' split.ctl

3. Run feature_extract.sh program to extract features for individual phoneme wav files
$ sh feature_extract.sh
4. Java Speech Grammar Format (JSGF) files are already created in FSG_phoneme
5. Run jsgf2fsg.sh in FSG_phoneme to convert from jsgf to fsg
$ sh jsgf2fsg.sh
6. Run decode_3phones.py to get the required output in output_decoded_phones.txt
$ python decode_3phones.py

Method 3: (Single/Batch phrase decode)
Path:

neighborphones_decode/phrases/
Steps To Run:
1. Construct the grammar file (JSGF) using my earlier scripts from phonemes2ngbphones [2], then use jsgf2fsg in sphinxbase to convert from JSGF to FSG, which serves as the input language model for sphinx3_decode
2. Provide the input arguments, such as the grammar file, features, acoustic models, etc., for the input test phrase
3. Run decode.sh program to get the required output in sample.out
$ sh decode.sh
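For step 1, a JSGF grammar for a short test phrase might look like the following. This is purely illustrative: the phone labels and neighbor alternatives here are made up, not the actual output of phonemes2ngbphones.

```jsgf
#JSGF V1.0;
grammar phrase;

// One rule per phone position; each allows the target phone
// or its neighboring phones from the edit-distance grammar.
public <phrase> = <p1> <p2> <p3>;
<p1> = HH | TH | F;
<p2> = AH | AA | ER;
<p3> = L | R | W;
```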

References:

[1] edit-distance grammar decoding using sphinx3: Part 1

[2] Input string of phonemes to CMUBet neighboring phones

Porting openFST to java: Part 3

(author: John Salatas)

Foreword

This article, the third in a series regarding porting openFST to java, introduces the latest update to the java code, which resolves the previously raised issues regarding the java fst architecture in general and its compatibility with the original openFST format for saving models. [1]

1. Code Changes

1.1. Simplified java generics usage

As suggested in [1], the latest java fst code revision (11456), available in the cmusphinx SVN repository [2], assumes only the base Weight class and modifies the State, Arc and Fst class definitions to simply use a type parameter.

The above modifications provide an easier-to-use API. As an example, the construction of a basic FST in the class edu.cmu.sphinx.fst.demos.basic.FstTest is simplified as follows:

...
Fst fst = new Fst();

// State 0
State s = new State();
s.AddArc(new Arc(new Weight(0.5), 1, 1, 1));
s.AddArc(new Arc(new Weight(1.5), 2, 2, 1));
fst.AddState(s);

// State 1
s = new State();
s.AddArc(new Arc(new Weight(2.5), 3, 3, 2));
fst.AddState(s);

// State 2 (final)
s = new State(new Weight(3.5));
fst.AddState(s);
...

1.2. openFST models compatibilty

Besides the simplified java generics usage above, the most important change is the code to load an openFST model in text format and convert it to a serialized java fst model. This is also included in the latest java fst code revision (11456) [2].
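For reference, the openFST text format being converted is the AT&T tabular format: one arc per line as "src dst ilabel olabel [weight]", and final states as "state [weight]". A minimal sketch of a parser for that format (in Python, independent of the actual java converter) looks like this:

```python
def parse_openfst_text(lines):
    """Parse openFST/AT&T text format into (arcs, finals).

    Arc lines:         src dst ilabel olabel [weight]
    Final-state lines: state [weight]
    Omitted weights default to 0.0 (TropicalWeight::One).
    """
    arcs, finals = [], {}
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if len(parts) >= 4:                       # arc line
            src, dst, ilab, olab = parts[:4]
            w = float(parts[4]) if len(parts) > 4 else 0.0
            arcs.append((int(src), int(dst), ilab, olab, w))
        else:                                     # final-state line
            finals[int(parts[0])] = float(parts[1]) if len(parts) > 1 else 0.0
    return arcs, finals
```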

2. Converting openFST models to java

2.1. Installation

The procedure below is tested on an Intel CPU running openSuSE 12.1 x64 with gcc 4.6.2, Oracle Java Platform (JDK) 7u5, and ant 1.8.2.

In order to convert an openFST model in text format to java fst model, the first step is to checkout from the cmusphinx SVN repository the latest java fst code revision:

# svn co https://cmusphinx.svn.sourceforge.net/svnroot/cmusphinx/branches/g2p/fst

The next step is to build the java fst code:
# cd fst
# ant jar
Buildfile: /fst/build.xml
jar:
build-subprojects:
init:
[mkdir] Created dir: /fst/bin
build-project:
[echo] fst: /fst/build.xml
[javac] /fst/build.xml:38: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[javac] Compiling 10 source files to /fst/bin
[javac] /fst/build.xml:42: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
build:
[jar] Building jar: /fst/fst.jar
BUILD SUCCESSFUL
Total time: 2 seconds
#

2.2. Usage

Having completed the installation as described above, and having trained an openFST model named binary.fst as described in [3] with the latest model training code revision (11455) [4], the model is also saved in openFST text format in a file named binary.fst.txt. The conversion to a java fst model is performed using the openfst2java.sh script, which can be found in the root directory of the java fst code. It accepts two parameters, the openFST input text model and the java fst output model, as follows:

# ./openfst2java.sh binary.fst.txt binary.fst.ser
Parsing input model...
Saving as binary java fst model...
Import completed.
Total States Imported: 1091667
Total Arcs Imported: 2652251
#

The newly generated binary.fst.ser model can then be loaded in java, as follows:

try {
    Fst fst = (Fst) Fst.loadModel("binary.fst.ser");
} catch (ClassNotFoundException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

3. Performance: Memory Usage

Testing the conversion and loading of the cmudict.fst model generated in [3] reveals that the conversion task requires about 1.0GB of RAM and loading the converted model requires about 900MB.

4. Conclusion – Future Works

Having the ability to convert and load an openFST model in java takes the “Letter to Phoneme Conversion in CMU Sphinx-4” project to its next step: porting the phonetisaurus decoder to java, which will eventually lead to its integration with cmusphinx 4.

A major concern at this point is the high memory utilization while loading large models. Although java applications are expected to consume more memory than comparable C++ applications, this could be a problem especially when running on low-end machines, and it needs further investigation and optimization (if possible).

References

[1] Porting openFST to java: Part 2

[2] Java fst SVN (Revision 11456)

[3] Automating the creation of joint multigram language models as WFST: Part 2

[4] openFST model training SVN (Revision 11455)

Automating the creation of joint multigram language models as WFST: Part 2

(author: John Salatas)

Foreword

This article presents an updated version of the model training application originally discussed in [1], addressing the compatibility issues with the phonetisaurus decoder presented in [2]. The updated code introduces routines to regenerate a new binary fst model compatible with phonetisaurus’ decoder, as suggested in [2]; these are reviewed in the next section.

1. Code review

The basic code for the model regeneration is defined in train.cpp in the procedure

void relabel(StdMutableFst *fst, StdMutableFst *out, string eps, string skip, string s1s2_sep, string seq_sep);

where fst and out are the input and output (the regenerated) models respectively.

The first initialization step is to generate new input, output and states SymbolTables and to add the new start and final states to the output model [2].

Furthermore, in this step the SymbolTables are initialized. Phonetisaurus’ decoder requires the symbols eps, seq_sep, “<phi>”, “<s>” and “</s>” to be at keys 0, 1, 2, 3 and 4 respectively.

void relabel(StdMutableFst *fst, StdMutableFst *out, string eps, string skip, string s1s2_sep, string seq_sep) {
ArcSort(fst, StdILabelCompare());
const SymbolTable *oldsyms = fst->InputSymbols();

// Uncomment the next line in order to save the original model
// as created by ngram
// fst->Write("org.fst");
// generate new input, output and states SymbolTables
SymbolTable *ssyms = new SymbolTable("ssyms");
SymbolTable *isyms = new SymbolTable("isyms");
SymbolTable *osyms = new SymbolTable("osyms");

out->AddState();
ssyms->AddSymbol("s0");
out->SetStart(0);

out->AddState();
ssyms->AddSymbol("f");
out->SetFinal(1, TropicalWeight::One());

isyms->AddSymbol(eps);
osyms->AddSymbol(eps);

//Add separator, phi, start and end symbols
isyms->AddSymbol(seq_sep);
osyms->AddSymbol(seq_sep);
isyms->AddSymbol("");
osyms->AddSymbol("");
int istart = isyms->AddSymbol("");
int iend = isyms->AddSymbol("
");
int ostart = osyms->AddSymbol("");
int oend = osyms->AddSymbol("
");

out->AddState();
ssyms->AddSymbol("s1");
out->AddArc(0, StdArc(istart, ostart, TropicalWeight::One(), 2));
...

In the main step, the code iterates through each State of the input model and adds each one to the output model keeping track of old and new state_id in ssyms SymbolTable.

In order to transform to an output model with a single final state [2], the code checks whether the current state is final and, if it is, adds a new arc connecting the current state to the single final one (state_id 1) with label “</s>:</s>” and weight equal to the current state's final weight. It also sets the final weight of the current state to TropicalWeight::Zero() (i.e. it converts the current state to a non-final one).

...
for (StateIterator<StdMutableFst> siter(*fst); !siter.Done(); siter.Next()) {
StateId state_id = siter.Value();

int64 newstate;
if (state_id == fst->Start()) {
newstate = 2;
} else {
newstate = ssyms->Find(convertInt(state_id));
if(newstate == -1 ) {
out->AddState();
ssyms->AddSymbol(convertInt(state_id));
newstate = ssyms->Find(convertInt(state_id));
}
}

TropicalWeight weight = fst->Final(state_id);

if (weight != TropicalWeight::Zero()) {
// this is a final state
StdArc a = StdArc(iend, oend, weight, 1);
out->AddArc(newstate, a);
out->SetFinal(newstate, TropicalWeight::Zero());
}
addarcs(state_id, newstate, oldsyms, isyms, osyms, ssyms, eps, s1s2_sep, fst, out);
}
out->SetInputSymbols(isyms);
out->SetOutputSymbols(osyms);
ArcSort(out, StdOLabelCompare());
ArcSort(out, StdILabelCompare());
}

Lastly, the addarcs procedure is called in order to relabel the arcs of each state of the input model and add them to the output model. It also creates any missing states (i.e. missing next states of an arc).

void addarcs(StateId state_id, StateId newstate, const SymbolTable* oldsyms, SymbolTable* isyms,
SymbolTable* osyms, SymbolTable* ssyms, string eps, string s1s2_sep, StdMutableFst *fst,
StdMutableFst *out) {
for (ArcIterator<StdMutableFst> aiter(*fst, state_id); !aiter.Done(); aiter.Next()) {
StdArc arc = aiter.Value();
string oldlabel = oldsyms->Find(arc.ilabel);
if(oldlabel == eps) {
oldlabel = oldlabel.append("}");
oldlabel = oldlabel.append(eps);
}
vector<string> tokens;
split_string(&oldlabel, &tokens, &s1s2_sep, true);
int64 ilabel = isyms->AddSymbol(tokens.at(0));
int64 olabel = osyms->AddSymbol(tokens.at(1));

int64 nextstate = ssyms->Find(convertInt(arc.nextstate));
if(nextstate == -1 ) {
out->AddState();
ssyms->AddSymbol(convertInt(arc.nextstate));
nextstate = ssyms->Find(convertInt(arc.nextstate));
}
out->AddArc(newstate, StdArc(ilabel, olabel, (arc.weight != TropicalWeight::Zero())?arc.weight:TropicalWeight::One(), nextstate));
//out->AddArc(newstate, StdArc(ilabel, olabel, arc.weight, nextstate));
}
}

2. Performance – Evaluation

In order to evaluate the performance of the model generated with the new code, a new model was trained with the same dictionaries as in [4]

# train/train --order 9 --smooth "kneser_ney" --seq1_del --seq2_del --ifile cmudict.dict.train --ofile cmudict.fst

and evaluated with phonetisaurus evaluate script

# evaluate.py --modelfile cmudict.fst --testfile ../cmudict.dict.test --prefix cmudict/cmudict
Mapping to null...
Words: 13335 Hyps: 13335 Refs: 13335
######################################################################
EVALUATION RESULTS
----------------------------------------------------------------------
(T)otal tokens in reference: 84993
(M)atches: 77095 (S)ubstitutions: 7050 (I)nsertions: 634 (D)eletions: 848
% Correct (M/T) -- %90.71
% Token ER ((S+I+D)/T) -- %10.04
% Accuracy 1.0-ER -- %89.96
--------------------------------------------------------
(S)equences: 13335 (C)orrect sequences: 7975 (E)rror sequences: 5360
% Sequence ER (E/S) -- %40.19
% Sequence Acc (1.0-E/S) -- %59.81
######################################################################

3. Conclusions – Future work

The evaluation results on the cmudict dictionary are slightly worse than those obtained with the command line procedure in [4]. Although the difference doesn't seem significant, it needs further investigation. For that purpose, one can uncomment the line fst->Write("org.fst"); in the relabel procedure, as shown in the previous section, in order to have the original binary model saved in a file called “org.fst”.

Next steps would probably be to write code to load the binary model in java, and to port the decoding algorithm along with the required fst operations to java, eventually integrating it with CMUSphinx.

References
[1] Automating the creation of joint multigram language models as WFST
[2] Compatibility issues using binary fst models generated by OpenGrm NGram Library with phonetisaurus decoder
[3] fstinfo on models created with opengrm
[4] Using OpenGrm NGram Library for the encoding of joint multigram language models as WFST