TIPSTER Text Summarization Evaluation Conference (SUMMAC)


SUMMAC has established definitively, in a large-scale evaluation, that automatic text summarization is highly effective in relevance assessment tasks. Summaries at relatively low compression rates (17% for ad hoc, 10% for categorization) supported relevance assessment almost as accurate as with full text (a 5% degradation in F-score for ad hoc and 14% for categorization, neither degradation statistically significant), while reducing decision-making time by 40% (categorization) and 50% (ad hoc). In the question-answering task, automatic methods for measuring the informativeness of topic-related summaries were introduced; the systems' scores under these automatic methods correlated positively with informativeness scores assigned by human judges. The evaluation methods used in SUMMAC are of intrinsic interest both for summarization evaluation and for the evaluation of other "output-related" NLP technologies, where there may be many potentially acceptable outputs and no automatic way to compare them.
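
For orientation, the sketch below shows how a relative F-score degradation figure such as the 5% quoted above can be interpreted: F-score as the harmonic mean of precision and recall, and degradation as the relative drop from the full-text baseline. The precision and recall values in the example are hypothetical placeholders, not SUMMAC results.

    # Minimal sketch: F-score and relative degradation when comparing
    # summary-based relevance judgments against full-text judgments.
    # The precision/recall values below are hypothetical, not SUMMAC data.

    def f_score(precision: float, recall: float) -> float:
        """Harmonic mean of precision and recall (F1)."""
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    def relative_degradation(full_text_f: float, summary_f: float) -> float:
        """Relative drop in F-score when judging from summaries instead of full text."""
        return (full_text_f - summary_f) / full_text_f

    # Hypothetical example: full-text judgments vs. judgments from short summaries.
    full_f = f_score(precision=0.80, recall=0.75)
    summ_f = f_score(precision=0.77, recall=0.71)
    print(f"full text F1: {full_f:.3f}")
    print(f"summary F1:   {summ_f:.3f}")
    print(f"degradation:  {relative_degradation(full_f, summ_f):.1%}")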

For more information, please contact Inderjeet Mani (imani@mitre.org).