Executive Summary: Voice Activity Detection is necessary but not sufficient for endpointing and wake-up word detection, which are different and more complex problems. One size does not fit all. For this reason it is better to do VAD explicitly and externally to the decoder.
One day I will go and live in Theory, because in Theory everything goes well.
– Pierre Desproges
Between the 0.8 and prealpha5 releases, PocketSphinx was modified to
do voice activity detection by default in the feature extraction
front-end, which caused unexpected behaviour, particularly when doing
batch mode recognition. Specifically, it caused the timings output by
the decoder in the logs and `hypseg` file to have no relation to the
input stream, as the audio classified as “non-speech” was removed from
its input. Likewise, `sphinx_fe` would produce feature files which did
not correspond at all to the length of the input (and could even be
empty).
When users noticed this, they were instructed to use the continuous listening API, which (in Theory) reconstructed the original timings. There is a certain logic to this if:
- You are doing speech-to-text and literally nothing else
- You are always running in live mode
Unfortunately, PocketSphinx is not actually very good at converting
speech to text, so people were using it for other things, like
pronunciation evaluation, force-alignment, or just plain old acoustic
feature extraction using `sphinx_fe`, where timings really are quite
important, and where batch-mode recognition is easier and more
accurate. Making silence removal the default behaviour was therefore
a bad idea, and hiding it from the user behind two command-line
options, one of which depended on the other, was a bad API, so I
removed it.
But why did we put voice activity detection in the front-end in the first place? Time For Some (more) Audio Theory!
Although we, as humans, have a really good idea of what is and isn’t speech (unless we speak Danish)1, at a purely acoustic level, it is not nearly as obvious. There is a good, if slightly dated, summary of the problem on this website. In Theory, the ideal way to recognize what is and isn’t speech is just to do speech recognition, since by definition a speech recognizer has a good model of what is speech, which means that we can simply add a model of what isn’t and, in Theory, get the best possible performance in an “end-to-end” system. And this is an active research area, for example.
There are fairly obvious drawbacks to doing this, primarily that
speech recognition is computationally quite expensive, secondarily
that learning all the possible types of “not speech” in various
acoustic environments is not at all easy to do. So in practice what
we do, simply put, is to create a model of “not speech”, which we call
“noise”, and assume that it is added to the speech signal which we are
trying to detect. Then, to detect speech, we can subtract out the
noise, and if there is anything left, call this “speech”. And this is
exactly what PocketSphinx prealpha5 did, at least if you enabled
both the `-remove_noise` and `-remove_silence` options.
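To make that concrete, here is a minimal sketch of the idea (not the actual PocketSphinx code; the frame handling, smoothing factor and threshold are all invented for illustration): keep a running estimate of the noise spectrum, subtract it from each incoming frame, and call the frame “speech” if enough energy is left over.

```python
import numpy as np

def simple_vad(frames, alpha=0.98, threshold_db=6.0):
    """Classify each frame as speech or non-speech by subtracting a running
    noise estimate from its power spectrum and checking whether anything
    substantial is left over.  `frames` is an iterable of 1-D float arrays
    (e.g. 20 ms of audio each); returns a list of booleans."""
    noise_psd = None
    decisions = []
    for frame in frames:
        # Power spectrum of the windowed frame.
        psd = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
        if noise_psd is None:
            # Assume the recording starts with noise only.
            noise_psd = psd.copy()
        # Subtract the noise estimate and clamp at zero.
        residual = np.maximum(psd - noise_psd, 0.0)
        # If what is left is well above the noise floor, call it speech.
        snr_db = 10.0 * np.log10((residual.sum() + 1e-10) /
                                 (noise_psd.sum() + 1e-10))
        is_speech = snr_db > threshold_db
        decisions.append(is_speech)
        # Update the noise estimate only while we think there is no speech.
        if not is_speech:
            noise_psd = alpha * noise_psd + (1.0 - alpha) * psd
    return decisions
```

Real implementations track the noise more carefully and smooth the decision over several frames, but the principle is the same.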
This is a reasonably simple and effective way to do voice activity detection. So why not do it?
First, because of the problem with the implementation mentioned at the top of this post, which is that it breaks the contract of frames of speech in the input corresponding to timestamps in the output. This is not insurmountable but, well, we didn’t surmount it.
Second, because it requires you to use the built-in noise subtraction in order to get voice activity detection, and you might not want to do that, because you have some much more difficult type of noise to deal with.
Third, because the feature extraction code in PocketSphinx is badly written (I can say this because I wrote it) and not easy to integrate VAD into, so… there were bugs.
Fourth, because while PocketSphinx (and other speech recognizers) use overlapping, windowed frames of audio, this is unnecessary and inefficient for doing voice activity detection. For speech segments, the overhead of a heavily-optimized VAD like the WebRTC one is minimal, and in non-speech segments we save a lot of computation by not doing windowing and MFCC computation.
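For instance, with the py-webrtcvad bindings (an assumption for illustration here, not something PocketSphinx itself uses), the per-frame decision takes raw 16-bit PCM bytes directly, with no windowing or MFCC computation on our side:

```python
import webrtcvad

vad = webrtcvad.Vad(2)       # aggressiveness 0-3; 2 is an arbitrary middle ground

SAMPLE_RATE = 16000          # webrtcvad accepts 8000, 16000, 32000 or 48000 Hz
FRAME_MS = 20                # and frames of exactly 10, 20 or 30 ms
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2   # 16-bit mono samples

def vad_decisions(pcm: bytes):
    """Yield (time_in_seconds, is_speech) for each complete frame of raw
    16 kHz, 16-bit, mono PCM."""
    for offset in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
        frame = pcm[offset:offset + FRAME_BYTES]
        yield offset / 2 / SAMPLE_RATE, vad.is_speech(frame, SAMPLE_RATE)
```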
And finally, because voice activity detection, while extremely useful for speech compression, is less useful for speech recognition.
A little like we saw previously with respect to audio hardware and APIs, the reason VAD was invented was not to do speech recognition, but to increase the capacity of telephone networks. Fundamentally, it simply tells you if there is speech (which should be transmitted) or not-speech (which can be omitted) in a short frame of audio. This isn’t ideal, because:
- It breaks the signal into acoustic rather than linguistic segments.
- Speech contains things that don’t actually “sound like speech”, e.g. stop consonants (which are mostly silence), but really also anything unvoiced. Some languages like Georgian, Kwak’wala, and Nuxalk have lots of these things.
What you actually want to do for speech recognition really depends on what speech recognition task you’re doing. For transcription we talk about segmentation (if there is only one speaker) or diarization (if there are multiple speakers) which is a fancy word for “who said what when”. For dialogue systems we usually talk about barge-in and endpointing, i.e. detecting when the user is interrupting the system, and when the user has stopped speaking and is expecting the system to say something. And of course there is the famous “wake-up word” task where we specifically want to only detect speech that starts with a specific word or phrase.
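As a very rough illustration of how endpointing differs from bare VAD, here is a minimal sketch of an endpointer layered on top of any frame-level VAD (the frame counts are arbitrary, and real endpointers are considerably smarter about pauses and barge-in):

```python
def endpoint(frames, is_speech, start_frames=10, hang_frames=30):
    """Turn per-frame VAD decisions into utterance boundaries: declare a
    start after `start_frames` consecutive speech frames, and an end only
    after `hang_frames` consecutive non-speech frames (the "hangover"),
    so that stop closures and short pauses don't end the utterance early.
    Yields (start_frame, end_frame) pairs."""
    in_speech = False
    speech_run = silence_run = 0
    start = None
    i = -1
    for i, frame in enumerate(frames):
        if is_speech(frame):
            speech_run += 1
            silence_run = 0
            if not in_speech and speech_run >= start_frames:
                in_speech = True
                start = i - speech_run + 1
        else:
            silence_run += 1
            speech_run = 0
            if in_speech and silence_run >= hang_frames:
                in_speech = False
                yield start, i - silence_run + 1
    if in_speech:
        # Input ended while still in speech.
        yield start, i + 1
```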
Segmentation, diarization and endpointing are not the same thing, and
none of them is the same thing as voice activity detection, though all
of them are often built on top of a voice activity detector. Notably,
none of them belong inside the decoder, which by its design can only
process discrete “utterances”. The API for
pocketsphinx-python, which provides the wrapper classes `AudioFile` for
segmentation and `LiveSpeech` for endpointing, is basically the right
approach, and something like it will be available in both C and Python
for the 5.0 release, but with the flexibility for the user to implement
their own approach if desired.
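For reference, the existing pocketsphinx-python interface looks roughly like this (drawn from its documentation as I recall it; constructor arguments may vary between versions):

```python
from pocketsphinx import AudioFile, LiveSpeech

# Batch-mode segmentation of a file on disk: each iteration yields the
# hypothesis for one segment of the file.
for phrase in AudioFile(audio_file='goforward.raw'):
    print(phrase)

# Endpointed live recognition from the default microphone: each iteration
# is one endpointed utterance.
for phrase in LiveSpeech():
    print(phrase)
```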