TIPSTER Text Summarization Evaluation Conference (SUMMAC)
SUMMAC has established definitively in a large-scale evaluation that automatic text summarization is very effective in relevance assessment tasks. Summaries at relatively low compression rates (17% of full-text length for adhoc, 10% for categorization) allowed relevance assessment almost as accurate as with full text (a 5% degradation in F-score for adhoc and 14% for categorization, neither degradation statistically significant), while reducing decision-making time by 40% (categorization) and 50% (adhoc). In the question-answering task, automatic methods for measuring the informativeness of topic-related summaries were introduced; the systems' scores under the automatic methods were found to correlate positively with informativeness scores rendered by human judges. The evaluation methods used in SUMMAC are of intrinsic interest both to summarization evaluation and to the evaluation of other "output-related" NLP technologies, where there may be many potentially acceptable outputs and no automatic way to compare them.
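The relative-degradation figures above follow directly from the F-score. As an illustrative sketch (not SUMMAC's actual scoring code), the calculation looks like this; the precision and recall values below are hypothetical, chosen only to yield a degradation near the reported adhoc figure:

```python
def f_score(precision: float, recall: float) -> float:
    """Balanced F-measure: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def degradation(full_text_f: float, summary_f: float) -> float:
    """Relative drop in F-score when assessors judge from summaries
    instead of full text."""
    return (full_text_f - summary_f) / full_text_f

# Hypothetical assessor performance in the two conditions.
full_f = f_score(precision=0.80, recall=0.80)
summary_f = f_score(precision=0.78, recall=0.74)
print(f"relative degradation: {degradation(full_f, summary_f):.1%}")
```

A degradation of a few percent, as reported for the adhoc task, means summary-based judgments were nearly as reliable as full-text judgments at a fraction of the reading time.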
For more information, please contact: Inderjeet Mani ([email protected])