We describe the third edition of the CheckThat! Lab,
which is part of the 2020 Cross-Language Evaluation Forum (CLEF).
CheckThat! proposes four complementary tasks and a related task from
previous lab editions, offered in English, Arabic, and Spanish. Task 1 asks
to predict which tweets in a Twitter stream are worth fact-checking. Task
2 asks to determine whether a claim posted in a tweet can be verified
using a set of previously fact-checked claims. Task 3 asks to retrieve text
snippets from a given set of Web pages that would be useful for verifying
a target tweet’s claim. Task 4 asks to predict the veracity of a target
tweet’s claim using a set of potentially relevant Web pages. Finally, the
lab offers a fifth task that asks to predict the check-worthiness of the
claims made in English political debates and speeches. CheckThat! features a full evaluation framework. The evaluation is carried out using
mean average precision or precision at rank k for the ranking tasks, and F1
for the classification tasks.
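The ranking measures mentioned above can be sketched in a few lines. This is an illustrative implementation of precision at rank k, average precision, and mean average precision over binary relevance labels, not the lab's official scorer; the function names and the example data are assumptions.

```python
# Illustrative sketch of the lab's ranking metrics over binary labels
# (1 = check-worthy/relevant, 0 = not). Not the official CheckThat! scorer.

def precision_at_k(ranked_labels, k):
    """Fraction of relevant items among the top-k of one ranked list."""
    return sum(ranked_labels[:k]) / k

def average_precision(ranked_labels):
    """Mean of P@k over the ranks k at which a relevant item appears."""
    hits, score = 0, 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            hits += 1
            score += hits / rank
    return score / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    """MAP: average of AP over several ranked lists (one per topic/run)."""
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

# Hypothetical example: two ranked lists of relevance judgments.
runs = [[1, 0, 1, 0], [0, 1, 1, 0]]
print(precision_at_k(runs[0], 2))          # → 0.5
print(round(mean_average_precision(runs), 4))
```

Higher scores reward systems that place check-worthy or verifying items near the top of the ranking, which is why these measures suit Tasks 1–3 better than plain accuracy.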