What's in a Name? Are BERT Named Entity Representations just as Good for any other Name?

Balasubramanian, Sriram; Jain, Naman; Jindal, Gaurav; Awasthi, Abhijeet; Sarawagi, Sunita (2020) What's in a Name? Are BERT Named Entity Representations just as Good for any other Name? In: 5th Workshop on Representation Learning for NLP.


Abstract

We evaluate named entity representations of BERT-based NLP models by investigating their robustness to replacements from the same typed class in the input. We highlight that, while such perturbations are natural on several tasks, state-of-the-art trained models are surprisingly brittle. The brittleness persists even with recent entity-aware BERT models. We also try to discern the cause of this non-robustness, considering factors such as tokenization and frequency of occurrence. We then provide a simple method that ensembles predictions from multiple replacements while jointly modeling the uncertainty of type annotations and label predictions. Experiments on three NLP tasks show that our method enhances robustness and increases accuracy on both natural and adversarial datasets.
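For illustration, the sketch below shows the kind of same-type entity substitution test and prediction ensembling the abstract describes, in simplified form. It is not the authors' implementation: the name pool (`NAME_POOL`), the user-supplied `predict_proba` callable, and the toy model are hypothetical placeholders, and the sketch averages label probabilities uniformly rather than jointly modeling type-annotation and label uncertainty as the paper does.

```python
# Minimal sketch (assumptions flagged above): probe a model's robustness to
# same-type entity substitutions, then ensemble predictions over the
# original sentence and its perturbed variants by averaging probabilities.
from collections import Counter
from typing import Callable, Dict, List, Tuple

# Hypothetical pools of surface forms per entity type (assumption).
NAME_POOL: Dict[str, List[str]] = {
    "PER": ["John Smith", "Aisha Khan", "Wei Chen", "Maria Garcia"],
    "ORG": ["Acme Corp", "Globex", "Initech"],
}

def substitute(sentence: str, span: Tuple[int, int], replacement: str) -> str:
    """Replace the entity at character span [start, end) with `replacement`."""
    start, end = span
    return sentence[:start] + replacement + sentence[end:]

def perturbations(sentence: str, span: Tuple[int, int], ent_type: str) -> List[str]:
    """Generate same-type entity replacements for one annotated span."""
    original = sentence[span[0]:span[1]]
    return [substitute(sentence, span, name)
            for name in NAME_POOL[ent_type] if name != original]

def ensemble_predict(sentence: str, span: Tuple[int, int], ent_type: str,
                     predict_proba: Callable[[str], Dict[str, float]]) -> str:
    """Average label probabilities over all variants and return the top label."""
    variants = [sentence] + perturbations(sentence, span, ent_type)
    totals: Counter = Counter()
    for variant in variants:
        for label, prob in predict_proba(variant).items():
            totals[label] += prob / len(variants)
    return max(totals, key=totals.get)

if __name__ == "__main__":
    # Toy stand-in model: a brittle classifier whose output depends on the name.
    def toy_model(text: str) -> Dict[str, float]:
        if "John Smith" in text:
            return {"positive": 0.9, "negative": 0.1}
        return {"positive": 0.4, "negative": 0.6}

    sent = "John Smith delivered a brilliant keynote."
    print(ensemble_predict(sent, (0, 10), "PER", toy_model))
```

Comparing the per-variant predictions against the ensembled one surfaces the brittleness the paper measures: a robust model should give the same label regardless of which same-type name fills the span.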

Item Type: Conference or Workshop Item (Paper)
Source: Copyright of this article belongs to ResearchGate GmbH
ID Code: 128285
Deposited On: 19 Oct 2022 06:10
Last Modified: 15 Nov 2022 08:58
