Error-Driven Analysis of Challenges in Coreference Resolution

Abstract

Coreference resolution metrics quantify errors but do not analyze them. Here, we consider an automated method of categorizing errors in the output of a coreference system into intuitive underlying error types. Using this tool, we first compare the error distributions across a large set of systems, then analyze common errors across the top ten systems, empirically characterizing the major unsolved challenges of the coreference resolution task.

Publication
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing