<XML><RECORDS><RECORD><REFERENCE_TYPE>3</REFERENCE_TYPE><REFNUM>9038</REFNUM><AUTHORS><AUTHOR>Kamps,J.</AUTHOR><AUTHOR>Lalmas,M.</AUTHOR><AUTHOR>Pehcevski,J.</AUTHOR></AUTHORS><YEAR>2007</YEAR><TITLE>Evaluating Relevant in Context: Document Retrieval with a Twist</TITLE><PLACE_PUBLISHED>ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands</PLACE_PUBLISHED><PUBLISHER>ACM Press</PUBLISHER><PAGES>749-750</PAGES><LABEL>Kamps:2007:9038</LABEL><KEYWORDS><KEYWORD>Evaluation</KEYWORD></KEYWORDS><ABSTRACT>The Relevant in Context retrieval task is document or article retrieval with a twist, where not only the relevant articles should be retrieved but also the relevant information within each article (captured by a set of XML elements) should be correctly identified. Our main research question is: how to evaluate the Relevant in Context task? We propose a generalized average precision measure that meets two main requirements: i) the score reflects the ranked list of articles inherent in the result list, and at the same time ii) the score also reflects how well the retrieved information per article (i.e., the set of elements) corresponds to the relevant information. The resulting measure was used at INEX 2006.</ABSTRACT><NOTES>Poster</NOTES></RECORD></RECORDS></XML>