
Human-Level Performance on Word Analogy Questions by Latent Relational Analysis

Abstract

This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, machine translation, and information retrieval. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason/stone is analogous to the pair carpenter/wood; the relations between mason and stone are highly similar to the relations between carpenter and wood. Past work on semantic similarity measures has mainly been concerned with attributional similarity. For instance, Latent Semantic Analysis (LSA) can measure the degree of similarity between two words, but not between two relations. Recently, the Vector Space Model (VSM) of information retrieval has been adapted to the task of measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus (they are not predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data (it is also used this way in LSA), and (3) automatically generated synonyms are used to explore reformulations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying noun-modifier relations, LRA achieves similar gains over the VSM, while using a smaller corpus.

NRC publication: Yes
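The abstract describes the mechanics only at a high level: each word pair gets a vector of pattern frequencies, the frequency matrix is smoothed with an SVD, and relational similarity is the similarity between the resulting vectors. The following is a minimal sketch of that idea, not the authors' implementation; the word pairs, joining patterns, frequency counts, and the `cosine` helper are all invented here for illustration, and a real system would derive the counts from a large corpus.

```python
# Toy illustration of relational similarity via pair-pattern vectors and SVD.
# All pairs, patterns, and counts are invented for this sketch; a real system
# would count pattern frequencies in a large corpus.

import numpy as np

# Rows: word pairs; columns: joining patterns observed between the two words.
pairs = ["mason:stone", "carpenter:wood", "doctor:patient", "teacher:student"]
patterns = ["X works with Y", "X cuts Y", "X builds with Y",
            "X treats Y", "X instructs Y"]
freq = np.array([
    [15.0, 4.0, 10.0,  0.0, 0.0],   # mason:stone
    [12.0, 9.0,  8.0,  0.0, 0.0],   # carpenter:wood
    [ 0.0, 0.0,  0.0, 11.0, 2.0],   # doctor:patient
    [ 0.0, 0.0,  0.0,  3.0, 9.0],   # teacher:student
])

# Smooth the pair-pattern frequency matrix with a truncated SVD
# (the same idea LSA applies to word-document matrices).
k = 2                                     # number of singular values to keep
U, s, Vt = np.linalg.svd(freq, full_matrices=False)
relation_vectors = U[:, :k] * s[:k]       # one k-dimensional vector per word pair

def cosine(a, b):
    """Cosine similarity between two relation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Relational similarity between the stem pair and each candidate pair:
stem = relation_vectors[0]                # mason:stone
for name, vec in zip(pairs[1:], relation_vectors[1:]):
    print(f"sim(mason:stone, {name}) = {cosine(vec, stem):.3f}")
# carpenter:wood should come out on top, mirroring the analogy in the abstract.
```

In LRA proper, the patterns are extracted automatically from the corpus rather than hard-coded as above, and the word pairs are also reformulated with automatically generated synonyms before the SVD step; neither refinement is shown in this sketch.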

Last updated on 08/06/2016.

This paper was published in NRC Publications Archive.
