
When FastText Pays Attention: Efficient Estimation of Word Representations using Constrained Positional Weighting

Novotný, Vít ; Štefánik, Michal ; Eniafe, Festus Ayetiran ; Sojka, Petr ; Řehůřek, Radim

arXiv.org, 2022-02

Ithaca: Cornell University Library, arXiv.org

Full text available

  • Subjects: Computer Science - Computation and Language ; Criteria ; Language ; Modelling ; Representations
  • Description: In 2018, Mikolov et al. introduced the positional language model, which has characteristics of attention-based neural machine translation models and which achieved state-of-the-art performance on the intrinsic word analogy task. However, the positional model is not practically fast and it has never been evaluated on qualitative criteria or extrinsic tasks. We propose a constrained positional model, which adapts the sparse attention mechanism from neural machine translation to improve the speed of the positional model. We evaluate the positional and constrained positional models on three novel qualitative criteria and on language modeling. We show that the positional and constrained positional models contain interpretable information about the grammatical properties of words and outperform other shallow models on language modeling. We also show that our constrained model outperforms the positional model on language modeling and trains twice as fast.
  • Language: English
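The positional weighting that the description refers to (Mikolov et al., 2018) reweights each context word's embedding elementwise by a learned vector associated with its position in the window before averaging. A minimal NumPy sketch of that idea follows; the dimensions, random initialization, and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 10-word vocabulary, 8-dimensional embeddings,
# a context window of 2 words on each side (4 context positions).
vocab_size, dim, window = 10, 8, 2
positions = 2 * window

# Input word embeddings and one learned weight vector per context
# position, as in the positional model described in the abstract.
embeddings = rng.normal(size=(vocab_size, dim))
positional = rng.normal(size=(positions, dim))

def context_vector(context_ids):
    """Average the context word embeddings, each reweighted
    elementwise by its position's weight vector."""
    vecs = embeddings[context_ids] * positional  # (positions, dim)
    return vecs.mean(axis=0)

# Hypothetical context of four word ids around the predicted word.
ctx = context_vector([1, 4, 7, 2])
print(ctx.shape)  # (8,)
```

The constrained model the paper proposes additionally restricts these positional vectors to speed up training; that constraint is not sketched here.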
