LDC is involved in a number of projects that support language education, research and technology development.
DEFT (Deep Exploration and Filtering of Text) (DARPA)
The DARPA DEFT Program will develop automated systems to process text information and enable the understanding of connections in text that might not be readily apparent to humans. LDC supports the DEFT Program by collecting, creating and annotating a variety of data sources to support Smart Filtering, Relational Analysis and Anomaly Analysis.
HAVIC (Heterogeneous Audio Visual Internet Collection) (NIST)
The HAVIC Corpus comprises thousands of hours of real-world amateur video data, annotated for features including the topics and events depicted in the video (or its corresponding audio). The HAVIC corpus currently supports the NIST TRECVid Multimedia Event Detection (MED) and Multimedia Event Recounting (MER) evaluations.
Language Application Grid (NSF)
The Language Application Grid is an NSF-sponsored collaboration involving Vassar College, Brandeis University, Carnegie Mellon University and LDC. Its goal is to develop a platform for natural language processing tools and resources that any researcher or developer can access and use.
Language Preservation 2.0: Crowdsourcing Oral Language Documentation Using Mobile Devices (NSF)
LDC and the University of Melbourne have joined forces to collect stories and oral histories from speakers of endangered languages in Brazil and Papua New Guinea. In addition to making these recordings via mobile hand-held devices, researchers will collect speaker information and transcribe the speech data. This work is supported by a Documenting Endangered Languages grant from NSF.
LRE (Language Recognition Evaluation) (NIST)
LDC develops linguistic resources to support the NIST Language Recognition Evaluation (LRE) series. The LRE-11 corpus included narrowband broadcast news speech and conversational telephone speech in 24 languages, including several closely related, easily confusable varieties. Collection of the next LRE corpus is underway.
NIEUW (NSF)
NIEUW is an LDC project supported by an NSF CISE Research Infrastructure planning grant. Its goal is to build a framework for developing multilingual language resources using crowdsourcing techniques proven to work in multiple scientific disciplines.
OpenMT (Machine Translation) (NIST)
LDC supports the NIST Open Machine Translation (OpenMT) Evaluation series by developing test sets in multiple languages and genres and by sharing linguistic resources developed in other programs, including DARPA GALE and TIDES. The OpenMT evaluation series supports research in machine translation (MT), the automatic translation of text between human languages, and aims to advance the state of the art in the field. Input may include all forms of text; the goal is output that is an adequate and fluent translation of the original.
For MT12, which took place in spring 2012, LDC provided source data and reference translations for the evaluation of Arabic-to-English, Chinese-to-English, Dari-to-English, Farsi-to-English and Korean-to-English translation of newswire and web text.
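Evaluations like OpenMT score system output against the reference translations LDC provides. As a purely illustrative sketch (not LDC's data pipeline or NIST's official scorer), the snippet below computes modified n-gram precision, the basic building block of BLEU-style MT metrics: the fraction of candidate n-grams that also occur in the reference, with each n-gram's credit clipped at its reference count.

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 2) -> float:
    """Modified n-gram precision of a candidate translation against
    one reference: matched candidate n-grams (clipped by how often
    each appears in the reference) divided by total candidate n-grams."""
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate, n)
    ref = ngrams(reference, n)
    # Clip each candidate n-gram's count at its count in the reference.
    matched = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return matched / total if total else 0.0

# Toy example (hypothetical sentences, single reference):
cand = "the cat sat on the mat"
ref = "the cat is on the mat"
print(ngram_precision(cand, ref, n=1))  # 5 of 6 unigrams match: 0.833...
print(ngram_precision(cand, ref, n=2))  # 3 of 5 bigrams match: 0.6
```

Real BLEU additionally combines precisions over several n-gram orders and applies a brevity penalty, and NIST evaluations typically use multiple references per segment; this sketch shows only the core counting step.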
Prosodic Systems in New Guinea (NSF)
This project is NSF-sponsored research conducted by Steven Bird in collaboration with UC Berkeley, the University of Pennsylvania and the Australian National University. The project is collecting new bodies of recorded and transcribed data from undescribed tone languages in New Guinea. It will use computational and theoretical methods to analyze the geographical distribution of tonal properties and the interaction of tone with other prosodic features.
SRE (Speaker Recognition Evaluation) (NIST)
LDC develops linguistic resources to support the NIST Speaker Recognition Evaluation (SRE) series. For the SRE-12 evaluation, LDC collected multiple telephone calls from each of 414 English speakers who were also present in earlier SRE corpora. All calls were audited for language, speaker identity and other features.
TAC KBP (Text Analysis Conference, Knowledge Base Population) (NIST)
The Text Analysis Conference (TAC) is a series of evaluation workshops organized by NIST to encourage research in natural language processing and related applications. LDC provides linguistic resources, including source data, annotations and system assessment, for the Knowledge Base Population (KBP) track, which promotes research in automated systems that discover information about named entities in a large corpus and incorporate that information into a knowledge base.