
Deep Parser Performance Evaluation


Collaborative Language Engineering: A Case Study in Efficient Grammar-Based Processing.
Stephan Oepen, Dan Flickinger, Jun-ichi Tsujii, and Hans Uszkoreit (eds.).
CSLI Lecture Notes, Center for the Study of Language and Information, Stanford, 2001.



  • Beyond PARSEVAL
  • Workshop at LREC-98
  • Workshop Series at COLING 2000



  • XTAG


The task of parser evaluation is to measure the speed and accuracy of parsers against a manually annotated test corpus. In the evaluation of so-called deep parsers, which are based on comprehensive linguistic theories, assigning the correct semantic analysis plays an important role. Another aspect is the handling and reduction of ambiguity, which matters not only from a linguistic point of view but also because it largely determines the time and memory requirements of deep parsers. Test data can be either a treebank built from naturally occurring text, such as the Wall Street Journal section of the Penn Treebank, or a test suite of constructed examples that systematically cover the linguistic phenomena of interest. A further issue in deep parser evaluation is the comparison of parsers and grammars based on different linguistic theories. The most widely used accuracy measure, labeled bracketing precision and recall in the style of PARSEVAL, is sketched below.
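
As a minimal sketch of how such bracket-based accuracy scoring works, the following Python function computes labeled precision, recall, and F1 over constituent spans in the PARSEVAL style. The function name, the (label, start, end) bracket encoding, and the example sentence are illustrative assumptions, not taken from any particular evaluation toolkit.

    from collections import Counter

    def parseval(gold_brackets, test_brackets):
        """Labeled bracketing precision, recall, and F1 (PARSEVAL style).

        Each argument is a list of (label, start, end) tuples, one per
        constituent; matches are counted over the multiset intersection,
        so a duplicated bracket is not rewarded twice.
        """
        gold, test = Counter(gold_brackets), Counter(test_brackets)
        matched = sum((gold & test).values())
        precision = matched / sum(test.values()) if test else 0.0
        recall = matched / sum(gold.values()) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Hypothetical brackets for the three-token sentence "the dog barks":
    gold = [("S", 0, 3), ("NP", 0, 2), ("VP", 2, 3)]
    test = [("S", 0, 3), ("NP", 0, 2), ("VP", 1, 3)]  # VP span misparsed
    print(parseval(gold, test))  # roughly (0.667, 0.667, 0.667)

For deep parsers, the same precision/recall scheme is often applied to semantic structures, for example to sets of predicate-argument dependencies, rather than to phrase-structure brackets alone.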