Also published online by CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073): Proceedings of the Workshop on Semantic Search (SemSearch 2009) at the 18th International World Wide Web Conference (WWW 2009).

The construction of standard datasets and benchmarks to evaluate ontology-based search approaches and to compare them against baseline IR models is a major open problem in the semantic technologies community. In this paper we propose a novel evaluation benchmark for ontology-based IR models based on an adaptation of the well-known Cranfield paradigm (Cleverdon, 1967) traditionally used by the IR community. The proposed benchmark comprises: 1) a text document collection, 2) a set of queries and their corresponding document relevance judgments, and 3) a set of ontologies and knowledge bases covering the query topics. The document collection and the set of queries and judgments are taken from one of the most widely used datasets in the IR community, the TREC Web track. As a use case example, we apply the proposed benchmark to compare a real ontology-based search model (Fernandez et al., 2008) against the best IR systems of the TREC 9 and TREC 2001 competitions. A deep analysis of the strengths and weaknesses of this benchmark and a discussion of how it can be used to evaluate other ontology-based search systems are also included at the end of the paper.
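To make the Cranfield-style evaluation concrete, the sketch below shows one plausible way to score a retrieval run against the benchmark's relevance judgments: it parses standard TREC-format qrels ("qid iter docno rel") and run files ("qid Q0 docno rank score tag") and computes mean average precision (MAP), a headline metric of the TREC Web track. This is a minimal illustration under those format assumptions, not the paper's evaluation code, and the file names are hypothetical.

from collections import defaultdict

def load_qrels(path):
    # TREC qrels: "qid iter docno rel"; keep docs judged relevant (rel > 0).
    relevant = defaultdict(set)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue
            qid, _iter, docno, rel = parts
            if int(rel) > 0:
                relevant[qid].add(docno)
    return relevant

def load_run(path):
    # TREC run: "qid Q0 docno rank score tag"; rank docs by descending score.
    scored = defaultdict(list)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 6:
                continue
            qid, _q0, docno, _rank, score, _tag = parts
            scored[qid].append((float(score), docno))
    return {qid: [d for _, d in sorted(docs, reverse=True)]
            for qid, docs in scored.items()}

def average_precision(ranking, relevant):
    # Mean of precision@k taken at each rank k where a relevant doc appears,
    # divided by the total number of relevant documents for the query.
    hits, precision_sum = 0, 0.0
    for k, docno in enumerate(ranking, start=1):
        if docno in relevant:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    # MAP over all queries that have at least one relevant document.
    aps = [average_precision(run.get(qid, []), rel)
           for qid, rel in qrels.items() if rel]
    return sum(aps) / len(aps) if aps else 0.0

if __name__ == "__main__":
    qrels = load_qrels("qrels.trec9.txt")  # hypothetical file names
    run = load_run("myrun.trec9.txt")
    print(f"MAP: {mean_average_precision(run, qrels):.4f}")

Because both the keyword-based baselines and an ontology-based system can be exported as TREC-format runs, a single scoring path like this is what allows the benchmark to compare the two families of systems on equal terms.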