TREC 2001 CROSS LANGUAGE DATASET
Ten groups participated in the TREC-2001 cross-language information retrieval track, which focused on retrieving Arabic language documents based on 25 queries that were originally prepared in English. French and Arabic translations of the queries were also available. This was the first year in which a large Arabic test collection was available, so a variety of approaches were tried and a rich set of experiments was performed using resources such as machine translation, parallel corpora, several approaches to stemming and/or morphology, and both pre-translation and post-translation blind relevance feedback. On average, forty percent of the relevant documents discovered by a participating team were found by no other team, a higher rate than is normally observed at TREC. This raises some concern that the relevance judgment pools may be less complete than has historically been the case.
Complete Metadata
| @type | dcat:Dataset |
|---|---|
| accessLevel | public |
| bureauCode | ["006:55"] |
| contactPoint | {"fn": "Ian Soboroff", "hasEmail": "mailto:ian.soboroff@nist.gov"} |
| description | Ten groups participated in the TREC-2001 cross-language information retrieval track, which focused on retrieving Arabic language documents based on 25 queries that were originally prepared in English. French and Arabic translations of the queries were also available. This was the first year in which a large Arabic test collection was available, so a variety of approaches were tried and a rich set of experiments was performed using resources such as machine translation, parallel corpora, several approaches to stemming and/or morphology, and both pre-translation and post-translation blind relevance feedback. On average, forty percent of the relevant documents discovered by a participating team were found by no other team, a higher rate than is normally observed at TREC. This raises some concern that the relevance judgment pools may be less complete than has historically been the case. |
| distribution |
[
  {
    "title": "LDC2001T55 document collection",
    "accessURL": "https://catalog.ldc.upenn.edu/LDC2001T55",
    "description": "These are the documents used in this dataset. You must obtain them from the LDC at this URL."
  },
  {
    "title": "TREC 2001 cross language topics in English",
    "format": "Traditional TREC SGML topic format",
    "mediaType": "text/SGML",
    "description": "English topics for the 2001 CLIR track.",
    "downloadURL": "https://trec.nist.gov/data/topics_noneng/english_topics.txt"
  },
  {
    "title": "TREC 2001 cross language topics in Arabic",
    "format": "Traditional TREC SGML topic format",
    "mediaType": "text/SGML",
    "description": "The Arabic search topics, for monolingual search.",
    "downloadURL": "https://trec.nist.gov/data/topics_noneng/arabic_topics.txt"
  },
  {
    "title": "TREC 2001 cross language topics in French",
    "format": "Traditional TREC SGML topic format",
    "mediaType": "text/SGML",
    "description": "The French topics for the TREC 2001 cross-language track.",
    "downloadURL": "https://ir.nist.gov/trec.nist.gov/data/topics_noneng/french_topics.txt"
  },
  {
    "title": "TREC 2001 CLIR Relevance judgments",
    "format": "Whitespace-separated: topic, \"0\", document ID, relevance level",
    "mediaType": "text/plain",
    "description": "This file indicates the documents judged relevant for each of the topics.",
    "downloadURL": "https://ir.nist.gov/trec.nist.gov/data/qrels_noneng/xlingual_t10qrels.txt"
  }
]
|
| identifier | ark:/88434/mds2-3588 |
| issued | 2024-11-22 |
| keyword | ["TREC text retrieval conference"] |
| landingPage | https://data.nist.gov/od/id/mds2-3588 |
| language | ["en"] |
| license | https://www.nist.gov/open/license |
| modified | 2024-10-02 00:00:00 |
| programCode | ["006:045"] |
| publisher | {"name": "National Institute of Standards and Technology", "@type": "org:Organization"} |
| references | ["http://trec.nist.gov/pubs/trec10/papers/clirtrack.pdf"] |
| theme | ["Information Technology"] |
| title | TREC 2001 CROSS LANGUAGE DATASET |
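The topic files listed in the distribution table above use the traditional TREC SGML topic format, in which the field tags (`<num>`, `<title>`, `<desc>`, `<narr>`) are typically left unclosed, so an XML parser will reject the file. The following is a minimal parsing sketch under that assumption; the label-stripping step assumes the usual "Number:" / "Description:" / "Narrative:" prefixes seen in traditional TREC topic files, and downloading from the listed URL requires network access.

```python
import re
import urllib.request

# English topics URL from the distribution table above.
TOPICS_URL = "https://trec.nist.gov/data/topics_noneng/english_topics.txt"

def parse_trec_topics(text):
    """Split a traditional TREC SGML topic file into per-topic dicts.

    Field tags are typically unclosed, so we slice on the tag markers
    instead of using an XML parser.
    """
    topics = []
    for block in re.findall(r"<top>(.*?)</top>", text, flags=re.S):
        topic = {}
        # Each field runs from its tag to the next tag or the end of the block.
        for tag in ("num", "title", "desc", "narr"):
            m = re.search(rf"<{tag}>(.*?)(?=<(?:num|title|desc|narr)>|\Z)",
                          block, flags=re.S)
            if m:
                # Strip a leading label like "Number:" or "Description:".
                value = re.sub(r"^\s*\w+:", "", m.group(1), count=1).strip()
                topic[tag] = value
        topics.append(topic)
    return topics

if __name__ == "__main__":
    raw = urllib.request.urlopen(TOPICS_URL).read().decode("utf-8", "replace")
    for t in parse_trec_topics(raw):
        print(t.get("num"), "-", t.get("title"))
```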
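The relevance judgments file is described in the distribution table as whitespace-separated lines of topic, a constant "0", document ID, and relevance level. A small reader sketch under that stated format follows; the local filename in the usage comment is hypothetical, standing in for wherever you save the file after downloading it from the listed URL.

```python
from collections import defaultdict

def load_qrels(path):
    """Read a TREC qrels file: topic, "0" (unused), document ID, relevance.

    Returns {topic: {docid: relevance}}; relevance > 0 means relevant.
    """
    qrels = defaultdict(dict)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip blank or malformed lines
            topic, _unused, docid, rel = parts
            qrels[topic][docid] = int(rel)
    return qrels

# Example: count judged-relevant documents per topic.
# qrels = load_qrels("xlingual_t10qrels.txt")  # hypothetical local filename
# for topic, docs in sorted(qrels.items()):
#     print(topic, sum(1 for r in docs.values() if r > 0))
```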