Deep Parser Performance Evaluation


Stephan Oepen, Dan Flickinger, Jun-ichi Tsujii, and Hans Uszkoreit. Collaborative Language Engineering: A Case Study in Efficient Grammar-Based Processing. CSLI Lecture Notes, Center for the Study of Language and Information, Stanford, 2001.




The task of parser evaluation is to measure the speed and accuracy of parsers against a manually annotated test corpus. In the evaluation of so-called deep parsers, which are based on comprehensive linguistic theories, assigning the correct semantic analysis plays an important role. Another aspect is the handling and reduction of ambiguity, which matters not only from a linguistic point of view but also because it largely determines the time and memory requirements of deep parsers. Test data can be either a treebank built from naturally occurring text, such as the Wall Street Journal corpus of the Penn Treebank, or a "test suite" of constructed examples that covers the linguistic phenomena of interest. An important issue in deep parser evaluation is the comparison of parsers and grammars based on different linguistic theories.
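
To illustrate the accuracy side of this task, the sketch below computes PARSEVAL-style labeled bracketing precision, recall, and F1 for a single sentence. It is a minimal example that assumes parses are represented as sets of labeled token spans; the names (bracketing_scores, EvalResult) are illustrative and not taken from the cited work or any particular evaluation toolkit.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

# A constituent is a (label, start, end) triple over token positions.
Span = Tuple[str, int, int]

@dataclass
class EvalResult:
    precision: float
    recall: float
    f1: float

def bracketing_scores(gold: FrozenSet[Span],
                      predicted: FrozenSet[Span]) -> EvalResult:
    """PARSEVAL-style labeled bracketing scores for one sentence."""
    matched = len(gold & predicted)            # spans with identical label and extent
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return EvalResult(precision, recall, f1)

# Example: gold and predicted analyses of a three-token sentence.
gold = frozenset({("S", 0, 3), ("NP", 0, 1), ("VP", 1, 3)})
pred = frozenset({("S", 0, 3), ("NP", 0, 1), ("NP", 1, 3)})
print(bracketing_scores(gold, pred))  # precision, recall, f1 all approx. 0.67
```

In a full evaluation run, scores of this kind would be aggregated over the whole treebank or test suite, alongside coverage (the fraction of sentences receiving any analysis), the number of readings per sentence as a measure of ambiguity, and per-sentence parse time and memory use.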