<XML><RECORDS><RECORD><REFERENCE_TYPE>3</REFERENCE_TYPE><REFNUM>8337</REFNUM><AUTHORS><AUTHOR>Urban,J.</AUTHOR><AUTHOR>Hilaire,X.</AUTHOR><AUTHOR>Hopfgartner,F.</AUTHOR><AUTHOR>Villa,R.</AUTHOR><AUTHOR>Jose,J.M.</AUTHOR><AUTHOR>Chantamunee,S.</AUTHOR><AUTHOR>Gotoh,Y.</AUTHOR></AUTHORS><YEAR>2006</YEAR><TITLE>Glasgow University at TRECVid2006</TITLE><PLACE_PUBLISHED>Proceedings of the TRECVid'06 Workshop</PLACE_PUBLISHED><PUBLISHER>National Institute of Standards and Technology</PUBLISHER><LABEL>Urban:2006:8337</LABEL><KEYWORDS><KEYWORD>video retrieval; relevance feedback; interactive evaluation</KEYWORD></KEYWORDS><ABSTRACT>In the first part of this paper we describe our experiments in the automatic and interactive search tasks of TRECVID 2006. We submitted five fully automatic runs, including a text baseline, two runs based on visual features, and two runs that combine textual and visual features in a graph model. For the interactive search task, we implemented a new video search interface with relevance feedback facilities based on both textual and visual features. The second part is concerned with our approach to the high-level feature extraction task, based on textual information extracted from speech recogniser and machine translation outputs. These outputs were aligned with shots and associated with high-level feature references. A list of significant words was created for each feature and was, in turn, utilised to identify that feature during the evaluation.</ABSTRACT><URL>http://www-nlpir.nist.gov/projects/tvpubs/tv6.papers/glasgow.pdf</URL></RECORD></RECORDS></XML>