Analyzing the Surprising Variability in Word Embedding Stability Across Languages


Word embeddings are powerful representations that form the foundation of many natural language processing architectures, both in English and in other languages. To gain further insight into word embeddings, we explore their stability (i.e., the overlap between the nearest neighbors of a word in different embedding spaces) in diverse languages. We discuss linguistic properties that are related to stability, drawing out insights about correlations with affixing, language gender systems, and other features. This has implications for embedding use, particularly in research that uses embeddings to study language trends.
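The stability measure described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact implementation: it assumes stability is the fraction of a word's ten nearest neighbors (by cosine similarity) shared between two embedding spaces trained over the same vocabulary; the toy spaces below are random stand-ins for real trained embeddings.

```python
import numpy as np

def nearest_neighbors(embeddings, word_idx, k=10):
    """Indices of the k nearest neighbors of a word by cosine similarity."""
    vecs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = vecs @ vecs[word_idx]
    sims[word_idx] = -np.inf  # exclude the word itself
    return set(np.argsort(sims)[-k:])

def stability(space_a, space_b, word_idx, k=10):
    """Overlap (0 to 1) between the k nearest neighbors of a word
    in two embedding spaces over the same vocabulary."""
    nn_a = nearest_neighbors(space_a, word_idx, k)
    nn_b = nearest_neighbors(space_b, word_idx, k)
    return len(nn_a & nn_b) / k

# Toy example: two "embedding spaces" over a 100-word vocabulary,
# the second a lightly perturbed copy of the first.
rng = np.random.default_rng(0)
space_a = rng.normal(size=(100, 50))
space_b = space_a + rng.normal(scale=0.01, size=(100, 50))
print(stability(space_a, space_b, word_idx=0))
```

In practice the two spaces would be embeddings trained with different random seeds or on different data samples, and stability would be averaged over many words before correlating it with linguistic properties.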

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Jonathan K. Kummerfeld