
Fine-tuning on Clean Data for End-to-End Speech Translation: FBK @ IWSLT 2018

Abstract

This paper describes FBK’s submission to the end-to-end English-German speech translation task at IWSLT 2018. Our system relies on a state-of-the-art model based on LSTMs and CNNs, where the CNNs are used to reduce the temporal dimension of the audio input, which is in general much higher than that of machine translation input. Our model was trained only on the audio-to-text parallel data released for the task, and fine-tuned on cleaned subsets of the original training corpus. The addition of weight normalization and label smoothing improved the baseline system by 1.0 BLEU point on our validation set. The final submission also featured checkpoint averaging within a training run and ensemble decoding of models trained during multiple runs. On test data, our best single model obtained a BLEU score of 9.7, while the ensemble obtained a BLEU score of 10.24.
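The checkpoint averaging mentioned above is a standard trick: parameters from several checkpoints of the same training run are averaged element-wise to produce a single, usually more robust model. The abstract does not give the authors' implementation; the following is a minimal sketch in which state dicts are represented as plain Python dicts mapping parameter names to lists of floats (stand-ins for weight tensors).

```python
def average_checkpoints(checkpoints):
    """Return a state dict whose parameters are the element-wise mean
    of the corresponding parameters across all given checkpoints.

    `checkpoints` is a non-empty list of dicts with identical keys,
    each mapping a parameter name to a flat list of floats.
    """
    n = len(checkpoints)
    averaged = {}
    for name in checkpoints[0]:
        # Collect this parameter from every checkpoint and average
        # position-by-position.
        values = [ckpt[name] for ckpt in checkpoints]
        averaged[name] = [sum(v) / n for v in zip(*values)]
    return averaged


# Hypothetical two-checkpoint example:
ckpt_a = {"encoder.w": [1.0, 2.0], "decoder.w": [0.0]}
ckpt_b = {"encoder.w": [3.0, 4.0], "decoder.w": [2.0]}
avg = average_checkpoints([ckpt_a, ckpt_b])
# avg["encoder.w"] == [2.0, 3.0] and avg["decoder.w"] == [1.0]
```

In practice the same element-wise mean is applied to framework tensors (e.g. the values of a PyTorch `state_dict`); this differs from the ensemble decoding also used in the submission, where several full models are kept and their predictions are combined at inference time.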

Archivio della ricerca - Fondazione Bruno Kessler
