Faster parsing and supertagging model estimation


Parsers are often the bottleneck for data acquisition, processing text too slowly to be widely applied. One way to improve the efficiency of parsers is to construct more confident statistical models. More training data would enable the use of more sophisticated features and also provide more evidence for current features, but gold standard annotated data is limited and expensive to produce.

We demonstrate faster methods for training a supertagger using hundreds of millions of automatically annotated words, constructing statistical models that further constrain the number of derivations the parser must consider. By introducing new features and using an automatically annotated corpus, we are able to double parsing speed on Wikipedia and the Wall Street Journal, and slightly improve accuracy when parsing Section 00 of the Wall Street Journal.
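The abstract's idea of a supertagger constraining the parser's search can be illustrated with a minimal sketch. The function below is a hypothetical multi-tagger: for each word it keeps only the lexical categories whose probability is within a factor `beta` of the best category, so the parser sees fewer tag combinations and therefore fewer derivations. The function name, data layout, and `beta` cutoff are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of beta-threshold multi-tagging.
# tag_probs: one dict per word, mapping supertag -> probability.
def multitag(tag_probs, beta=0.1):
    result = []
    for probs in tag_probs:
        best = max(probs.values())
        # Keep tags within a factor of beta of the best tag; the fewer
        # surviving tags per word, the smaller the parser's search space.
        result.append({t: p for t, p in probs.items() if p >= beta * best})
    return result
```

With a loose cutoff (small `beta`) more tags survive and accuracy is protected at the cost of speed; tightening `beta` trades coverage for a smaller search space.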

Proceedings of the Australasian Language Technology Association Workshop 2009
Jonathan K. Kummerfeld