2006 NIST Speaker Recognition Evaluation Test Set Part 2 was developed by LDC
and NIST (National Institute of Standards and Technology). It contains 568 hours
of conversational telephone and microphone speech in English, Arabic, Bengali, Chinese, Farsi,
Hindi, Korean, Russian, Spanish, Thai and Urdu and associated English transcripts used as test
data in the NIST-sponsored
2006 Speaker Recognition Evaluation (SRE).
The ongoing series of yearly SRE evaluations conducted by NIST is intended to be of
interest to researchers working on the general problem of text-independent speaker
recognition. To this end, the evaluations are designed to be simple, to focus on
core technology issues, to be fully supported and to be accessible to those wishing
to participate.
The task of the 2006 SRE evaluation was speaker detection, that is, to determine
whether a specified speaker is speaking during a given segment of conversational
telephone speech. The task comprised 15 distinct tests, each combining
one of five training conditions with one of four test conditions. Further information
about the test conditions and additional documentation is available at the NIST
web site for the 2006 SRE and within the
2006 SRE Evaluation Plan.
LDC previously published
2006 NIST Speaker Recognition Evaluation Training Set and
2006 NIST Speaker Recognition Evaluation Test Set Part 1.
The speech data in this release was collected by LDC as part of the Mixer
project, in particular Mixer Phases 1, 2 and 3. The Mixer project supports the
development of robust speaker recognition technology by providing carefully
collected and audited speech from a large pool of speakers recorded simultaneously
across numerous microphones and in different communicative situations and/or
in multiple languages. The data is mostly English speech, but includes some
speech in Arabic, Bengali, Chinese, Farsi, Hindi, Korean, Russian, Spanish, Thai and Urdu.
The microphone speech segments are multi-channel data collected simultaneously
from a number of auxiliary microphones during telephone calls. The files are organized into four types:
two-channel excerpts of approximately 10 seconds, two-channel conversations
of approximately 5 minutes, summed-channel conversations also of approximately
5 minutes and a two-channel conversation with the usual telephone speech replaced
by auxiliary microphone data in the putative target speaker channel. The auxiliary
microphone conversations are also of approximately five minutes in length.
The speech files are stored as 8-bit u-law speech signals in separate SPHERE
files. In addition to the standard header fields, the SPHERE header for each
file contains some auxiliary information such as the language of the conversation.
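A SPHERE header is plain ASCII text: a `NIST_1A` magic line, the header size in bytes, then `name -type value` triples terminated by `end_head`. As a minimal sketch, the fields below (`sample_rate`, `channel_count`, `sample_coding`) are standard SPHERE fields, but `conversation_language` is an assumed name for the auxiliary language field; the synthetic header is invented for illustration and is not taken from the corpus.

```python
def parse_sphere_header(raw: bytes) -> dict:
    """Return the fields of a NIST SPHERE header as a {name: value} dict."""
    lines = raw.decode("ascii", errors="replace").splitlines()
    if not lines or lines[0].strip() != "NIST_1A":
        raise ValueError("not a SPHERE file")
    fields = {}
    for line in lines[2:]:            # skip the magic and header-size lines
        line = line.strip()
        if line == "end_head":        # marks the end of the header fields
            break
        if not line:
            continue
        name, ftype, value = line.split(None, 2)
        if ftype.startswith("-i"):    # integer field
            fields[name] = int(value)
        else:                         # treat -sN (string) and -r (real) as text here
            fields[name] = value
    return fields

# Synthetic header for illustration only:
demo = (b"NIST_1A\n   1024\n"
        b"sample_rate -i 8000\n"
        b"channel_count -i 2\n"
        b"sample_coding -s4 ulaw\n"
        b"conversation_language -s7 English\n"
        b"end_head\n")
hdr = parse_sphere_header(demo)
print(hdr["sample_rate"], hdr["sample_coding"])  # 8000 ulaw
```

In practice the audio samples that follow the header would be read separately; this sketch only recovers the metadata, such as the conversation language mentioned above.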
English language transcripts in .ctm format were produced using an automatic
speech recognition (ASR) system.
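A CTM file carries one word per line in the form `<file-id> <channel> <start-sec> <duration-sec> <word> [<confidence>]`. The reader below is a minimal sketch of that layout; the sample lines are invented for illustration and do not come from the corpus transcripts.

```python
def parse_ctm(text: str):
    """Yield (file_id, channel, start, duration, word, confidence) per CTM line."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(";;"):   # skip blanks and ;; comments
            continue
        parts = line.split()
        file_id, channel, start, dur, word = parts[:5]
        conf = float(parts[5]) if len(parts) > 5 else None
        yield (file_id, channel, float(start), float(dur), word, conf)

# Hypothetical transcript lines, not taken from the corpus:
sample = """\
;; example CTM content
xaaa 1 0.52 0.31 hello 0.97
xaaa 1 0.90 0.18 there 0.88
"""
for entry in parse_ctm(sample):
    print(entry)
```

Because each line is time-aligned, such transcripts can be matched back to the corresponding channel and time span of the SPHERE audio files.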
For an example of the data contained in this corpus, listen to the audio sample available on the corpus's LDC catalog page.
Updates: None at this time.
Portions © 2004-2006, 2012 Trustees of the University of Pennsylvania