
Handwriting Styles: Benchmarks and Evaluation Metrics

Abstract

Extracting styles of handwriting is a challenging problem, since the styles themselves are not well defined. Style extraction is a key component in developing systems that offer more personalized experiences to humans. In this paper, we propose baseline benchmarks in order to set anchors for estimating the relative quality of different handwriting style methods. We do this using deep learning techniques, which have shown remarkable results in a range of machine learning tasks, including classification, regression and, most relevant to our work, generating temporal sequences. We discuss the challenges of evaluating our methods, which relate to the evaluation of generative models in general. We then propose evaluation metrics that we find relevant to this problem, and we discuss how we assess these performance metrics. In this study, we use the IRON-OFF dataset [1]. To the best of our knowledge, no benchmarks or evaluation metrics exist yet for this task, and this dataset has not previously been used in the context of handwriting synthesis.
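The abstract raises the difficulty of scoring generated temporal sequences against real handwriting. The paper's own metrics are not given on this page, but one common distance for comparing pen-trajectory sequences of unequal length is dynamic time warping (DTW); the sketch below is purely illustrative and is not claimed to be the metric used in the paper.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two pen trajectories.

    Each sequence is a list of (x, y) pen positions; the sequences may
    have different lengths. Uses Euclidean distance as the local cost.
    """
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # cost[i][j] = DTW cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in seq_b
                                 cost[i][j - 1],      # skip a point in seq_a
                                 cost[i - 1][j - 1])  # match both points
    return cost[n][m]

# Identical trajectories align at zero cost; a shifted copy does not.
reference = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
generated = [(0.0, 0.5), (1.0, 0.5), (2.0, 1.5)]
print(dtw_distance(reference, reference))
print(dtw_distance(reference, generated))
```

A lower DTW score means the generated trajectory follows the reference more closely, which makes it a natural candidate when sequence lengths differ between samples.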


This paper was published in Hal - Université Grenoble Alpes.
