<XML><RECORDS><RECORD><REFERENCE_TYPE>3</REFERENCE_TYPE><REFNUM>9283</REFNUM><AUTHORS><AUTHOR>Athanasakos,K.</AUTHOR><AUTHOR>Stathopoulos,V.</AUTHOR><AUTHOR>Jose,J.M.</AUTHOR></AUTHORS><YEAR>2010</YEAR><TITLE>A Framework For Evaluating Automatic Image Annotation Algorithms</TITLE><PLACE_PUBLISHED>32nd European Conference on Information Retrieval</PLACE_PUBLISHED><PUBLISHER>Springer</PUBLISHER><LABEL>Athanasakos:2010:9283</LABEL><KEYWORDS><KEYWORD>multimedia information retrieval</KEYWORD></KEYWORDS><ABSTRACT>Several Automatic Image Annotation (AIA) algorithms have been introduced recently, each reported to outperform previous models. However, each has been evaluated using different descriptors, collections or parts of collections, or "easy" settings, rendering the reported results non-comparable. We show that collection-specific properties, rather than the models themselves, are responsible for the high reported performance measures. In this paper we introduce a framework for the evaluation of image annotation models and use it to evaluate two state-of-the-art AIA algorithms. Our findings reveal that a simple Support Vector Machine (SVM) approach using Global MPEG-7 Features outperforms state-of-the-art AIA models across several collection settings. These models appear to depend heavily on the set of features and the data used, and it is easy to achieve good performance by exploiting collection-specific properties, such as tag popularity, especially in the commonly used Corel 5K dataset.</ABSTRACT></RECORD></RECORDS></XML>