
LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT?

Abstract

This paper presents the models submitted by the LT@Helsinki team for SemEval-2020 Shared Task 12. Our team participated in sub-tasks A and C, titled offensive language identification and offense target identification, respectively. In both cases we used the so-called Bidirectional Encoder Representations from Transformers (BERT), a model pre-trained by Google and fine-tuned by us on the OLID dataset. The results show that offensive tweet classification is one of several language-based tasks where BERT can achieve state-of-the-art results.
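The approach described above (fine-tuning a pre-trained BERT classifier on OLID) can be sketched roughly as follows. This is a hypothetical illustration, not the team's actual training code: the model checkpoint, hyperparameters, and the `fine_tune` helper are assumptions; only the OLID label schemes for sub-task A (OFF/NOT) and sub-task C (IND/GRP/OTH) come from the shared-task definition.

```python
"""Sketch: fine-tuning BERT for OLID sub-tasks A and C.

Illustrative only -- the checkpoint name and hyperparameters below are
assumptions, not the values used by the LT@Helsinki team.
"""

# OLID label schemes (from the shared-task definition).
LABELS_A = {"NOT": 0, "OFF": 1}            # sub-task A: offensive or not
LABELS_C = {"IND": 0, "GRP": 1, "OTH": 2}  # sub-task C: offense target


def encode_labels(raw_labels, label_map):
    """Map string labels (e.g. 'OFF') to integer class ids."""
    return [label_map[label] for label in raw_labels]


def fine_tune(texts, labels, label_map,
              model_name="bert-base-multilingual-cased",
              epochs=2, lr=2e-5):
    """Fine-tune a BERT sequence classifier on labelled tweets (sketch).

    Heavy dependencies are imported lazily so the label utilities above
    can be used without `torch`/`transformers` installed.
    """
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=len(label_map))
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    targets = torch.tensor(encode_labels(labels, label_map))

    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        out = model(**batch, labels=targets)  # built-in cross-entropy loss
        out.loss.backward()
        optimizer.step()
    return model
```

In practice one would swap `model_name` between a multilingual and a language-specific checkpoint to compare the two settings the paper's title asks about; the rest of the pipeline stays unchanged.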

This paper was published in Helsingin yliopiston digitaalinen arkisto.
