Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings

Piratla, Vihari; Sarawagi, Sunita; Chakrabarti, Soumen (2019). Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings. In: 57th Annual Meeting of the Association for Computational Linguistics.


Official URL: http://doi.org/10.18653/v1/P19-1168


Abstract

Given a small corpus D_T pertaining to a limited set of focused topics, our goal is to train embeddings that accurately capture the sense of words in the topic in spite of the limited size of D_T. These embeddings may be used in various tasks involving D_T. A popular strategy in limited data settings is to adapt pretrained embeddings E trained on a large corpus. To correct for sense drift, fine-tuning, regularization, projection, and pivoting have been proposed recently. Among these, regularization informed by a word's corpus frequency performed well, but we improve upon it using a new regularizer based on the stability of its co-occurrence with other words. However, a thorough comparison across ten topics, spanning three tasks, with standardized settings of hyper-parameters, reveals that even the best embedding adaptation strategies provide small gains beyond well-tuned baselines, which many earlier comparisons ignored. In a bold departure from adapting pretrained embeddings, we propose using D_T to probe, attend to, and borrow fragments from any large, topic-rich source corpus (such as Wikipedia), which need not be the corpus used to pretrain embeddings. This step is made scalable and practical by suitable indexing. We reach the surprising conclusion that even limited corpus augmentation is more useful than adapting embeddings, which suggests that non-dominant sense information may be irrevocably obliterated from pretrained embeddings and cannot be salvaged by adaptation.
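
The abstract describes the augmentation step only in prose. The fragment below is a minimal, hypothetical sketch of that idea: it probes a large generic source corpus with the vocabulary of D_T, borrows the best-matching fragments, and trains embeddings on the augmented text. The overlap score, fragment budget, and gensim Word2Vec call are illustrative assumptions, not the paper's actual indexing, attention mechanism, or hyper-parameters.

```python
# Minimal sketch of retrieval-based corpus augmentation for a small topic
# corpus D_T. Corpora are iterables of tokenized sentences (lists of strings).
# The lexical-overlap score is a stand-in for the paper's indexed, topic-
# sensitive retrieval; the embedding trainer and its settings are assumptions.
from collections import Counter


def topic_vocabulary(topic_corpus, top_k=2000):
    """Most frequent words of D_T, used to probe the source corpus."""
    counts = Counter(tok for sent in topic_corpus for tok in sent)
    return {w for w, _ in counts.most_common(top_k)}


def borrow_fragments(source_corpus, topic_vocab, budget=100_000):
    """Keep source sentences with the highest fraction of topic-vocabulary words."""
    scored = []
    for sent in source_corpus:
        if not sent:
            continue
        overlap = sum(1 for tok in sent if tok in topic_vocab) / len(sent)
        scored.append((overlap, sent))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [sent for _, sent in scored[:budget]]


def train_topic_embeddings(topic_corpus, source_corpus):
    """Train embeddings on D_T augmented with borrowed source fragments."""
    topic_corpus = list(topic_corpus)
    vocab = topic_vocabulary(topic_corpus)
    augmented = topic_corpus + borrow_fragments(source_corpus, vocab)
    # Any word-embedding trainer could be used here; gensim (>= 4.0) is one option.
    from gensim.models import Word2Vec
    return Word2Vec(sentences=augmented, vector_size=300, window=5, min_count=5)
```

In the paper, the probing is made scalable by a suitable index over the source corpus and the borrowed fragments are weighted by a topic-sensitive attention mechanism; the naive scan above only conveys the overall data flow of augmenting D_T before training.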

Item Type: Conference or Workshop Item (Paper)
Source: Copyright of this article belongs to the Association for Computational Linguistics
ID Code: 130896
Deposited On: 01 Dec 2022 06:17
Last Modified: 27 Jan 2023 09:40
