Exploring the Value of Personalized Word Embeddings


In this paper, we introduce personalized word embeddings and examine their value for language modeling. We compare the performance of our proposed prediction model when using personalized versus generic word representations, and study how these representations can be leveraged for improved performance. We provide insight into what types of words can be more accurately predicted when building personalized models. Our results show that a subset of words belonging to specific psycholinguistic categories tends to vary more in its representations across users, and that combining generic and personalized word embeddings yields the best performance, with a 4.7% relative reduction in perplexity. Additionally, we show that a language model using personalized word embeddings can be effectively used for authorship attribution.
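The abstract does not specify how the generic and personalized embeddings are combined; a minimal sketch, assuming concatenation of a shared generic table with a per-user table (all names, dimensions, and vocabularies below are hypothetical), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat"]
d_generic, d_personal = 4, 2  # assumed toy dimensions

# One generic embedding table shared by all users.
generic = {w: rng.standard_normal(d_generic) for w in vocab}

# One personalized embedding table per user.
personal = {
    "user_a": {w: rng.standard_normal(d_personal) for w in vocab},
}

def embed(word, user):
    """Concatenate the generic and user-specific vectors for `word`."""
    return np.concatenate([generic[word], personal[user][word]])

v = embed("cat", "user_a")
print(v.shape)  # combined vector of size d_generic + d_personal
```

Other combination schemes (e.g., summing vectors of equal dimension or a learned gate) would fit the same interface; concatenation is shown only because it keeps the two spaces fully separable.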

Proceedings of the 28th International Conference on Computational Linguistics
Jonathan K. Kummerfeld