
Automatic segmentation of grammatical facial expressions in sign language: towards an inclusive communication experience

Abstract

Natural language processing techniques now enable applications that support communication between humans, and between humans and machines. Although the technology for automated oral communication is mature and affordable, there are currently no adequate solutions for visual-spatial languages. Among the scarce efforts to automatically process sign languages, studies on non-manual gestures are rare, which makes it difficult to properly interpret utterances produced in those languages. In this paper, we present a solution for the automatic segmentation of grammatical facial expressions in sign language. It is a low-cost computational solution designed to integrate a sign language processing framework that supports the development of simple but high value-added applications for universal communication. We also discuss the difficulties faced by this solution, to guide future research in this area.
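The paper itself details the segmentation approach; as a minimal illustrative sketch (not the authors' method), one common post-processing step in this kind of pipeline is to turn per-frame classifier outputs into time segments. The function below, with a hypothetical `min_length` noise filter, groups contiguous positively-labelled frames:

```python
def frames_to_segments(frame_labels, min_length=2):
    """Group contiguous positively-labelled frames into (start, end) segments.

    frame_labels: sequence of 0/1 per-frame predictions (1 = grammatical
    facial expression active). Segments shorter than min_length frames are
    discarded as likely noise. End indices are exclusive.
    """
    segments = []
    start = None
    for i, label in enumerate(frame_labels):
        if label and start is None:
            start = i                      # segment opens
        elif not label and start is not None:
            if i - start >= min_length:    # keep only long-enough runs
                segments.append((start, i))
            start = None                   # segment closes
    # close a segment that runs to the end of the stream
    if start is not None and len(frame_labels) - start >= min_length:
        segments.append((start, len(frame_labels)))
    return segments

# Example: a noisy per-frame prediction stream
preds = [0, 1, 1, 1, 0, 1, 0, 0, 1, 1]
print(frames_to_segments(preds))  # [(1, 4), (8, 10)]
```

The isolated positive at frame 5 is dropped by the `min_length` filter, which is one simple way such a pipeline can suppress spurious single-frame detections.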


This paper was published in AIS Electronic Library (AISeL).
