<XML><RECORDS><RECORD><REFERENCE_TYPE>10</REFERENCE_TYPE><REFNUM>8810</REFNUM><AUTHORS><AUTHOR>Sanderson,M.</AUTHOR><AUTHOR>Braschler,M.</AUTHOR><AUTHOR>Ferro,N.</AUTHOR><AUTHOR>Gonzalo,J.</AUTHOR></AUTHORS><YEAR>2008</YEAR><TITLE>Workshop on Novel Methodologies for Evaluation in Information Retrieval</TITLE><PLACE_PUBLISHED>DCS Technical Report Series</PLACE_PUBLISHED><PUBLISHER>Dept of Computing Science, University of Glasgow</PUBLISHER><ISBN>TR-2008-265</ISBN><LABEL>Sanderson:2008:8810</LABEL><KEYWORDS><KEYWORD>ECIR 2008 Novel Methodologies for Evaluation in IR Workshop</KEYWORD></KEYWORDS><ABSTRACT>Information retrieval is an empirical science; the field cannot move forward unless there are means of evaluating the innovations devised by researchers. However, the methodologies conceived in the early years of IR and used in the campaigns of today are starting to show their age, and new research is emerging to understand how to overcome the twin challenges of scale and diversity. The methodologies used to build test collections in the modern evaluation campaigns were originally conceived to work with collections of tens of thousands of documents. The methodologies were found to scale well, but potential flaws are starting to emerge as test collections grow beyond tens of millions of documents. Support for continued research in this area is crucial if IR research is to continue to evaluate large-scale search. With the rise of the large Web search engines, some believed that all search problems could be solved with a single engine retrieving from one vast data store. However, it is increasingly clear that the evolution of retrieval is not towards a monolithic solution, but towards a wide range of solutions tailored for different classes of information and different groups of users or organizations.
Each tailored system on offer requires a different mixture of component technologies combined in distinct ways, and each solution requires evaluation.</ABSTRACT></RECORD></RECORDS></XML>