Shankar, Shiv; Garg, Siddhant; Sarawagi, Sunita (2018). Surprisingly Easy Hard-Attention for Sequence to Sequence Learning. In: 2018 Conference on Empirical Methods in Natural Language Processing.
PDF (277kB)
Official URL: http://doi.org/10.18653/v1/D18-1065
Abstract
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning. The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention. On five translation tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms.
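To make the abstract's central idea concrete, below is a minimal sketch (not the authors' released code) of a beam approximation to the joint distribution over attention and output: rather than feeding the decoder a soft-attention weighted average of encoder states, the attention position is treated as a latent variable, and the output marginal is approximated by summing the joint probability over only the top-k attention positions. All names (`beam_hard_attention_logprob`, `out_proj`, the dot-product scoring) are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: beam approximation of the joint p(y, a | x),
# keeping only the k most probable attention positions a.
import torch
import torch.nn.functional as F

def beam_hard_attention_logprob(dec_state, enc_states, out_proj, k=3):
    """dec_state: (hidden,) decoder state; enc_states: (src_len, hidden);
    out_proj: module mapping a context-conditioned state to vocab logits.
    Returns log p(y) ~= log sum_{a in top-k} p(a) * p(y | a)."""
    # Attention scores over source positions (dot-product scoring here).
    scores = enc_states @ dec_state                 # (src_len,)
    attn_logp = F.log_softmax(scores, dim=0)        # log p(a)
    # Beam approximation: retain only the k most probable positions.
    top_logp, top_idx = attn_logp.topk(k)
    # Hard attention: condition the output on a single encoder state
    # per retained position, giving log p(y | a).
    per_pos_logits = out_proj(enc_states[top_idx] + dec_state)  # (k, vocab)
    per_pos_logp = F.log_softmax(per_pos_logits, dim=-1)
    # Marginalize over the beam: log sum_a exp(log p(a) + log p(y | a)).
    return torch.logsumexp(top_logp.unsqueeze(1) + per_pos_logp, dim=0)
```

Because the result is an exact log-probability over the retained positions, this trains with an ordinary cross-entropy loss, which is what makes the mechanism as easy to implement as soft attention while keeping hard attention's sharp focus.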
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Source: | Copyright of this article belongs to the Association for Computational Linguistics |
| ID Code: | 128327 |
| Deposited On: | 19 Oct 2022 09:07 |
| Last Modified: | 15 Nov 2022 09:04 |