<XML><RECORDS><RECORD><REFERENCE_TYPE>3</REFERENCE_TYPE><REFNUM>8140</REFNUM><AUTHORS><AUTHOR>Balasuriya,L.S.</AUTHOR><AUTHOR>Siebert,J.P.</AUTHOR></AUTHORS><YEAR>2006</YEAR><TITLE>An Architecture for Object-based Saccade Generation using a Biologically Inspired Self-organised Retina</TITLE><PLACE_PUBLISHED>Proceedings of the International Joint Conference on Neural Networks, Vancouver</PLACE_PUBLISHED><PUBLISHER>N/A</PUBLISHER><LABEL>Balasuriya:2006:8140</LABEL><ABSTRACT>Our paper presents a fully automated computational mechanism for targeting a space-variant retina based on the high-level visual content of a scene. Our retina’s receptive fields are organised at a high density in the central foveal region of the retina and at a sparse resolution in the surrounding periphery, in a non-uniform, locally pseudo-random tessellation similar to that found in biological vision. Multi-resolution, space-variant visual information is extracted on a scale-space continuum, and interest point descriptors representing the visual appearance of local regions are computed. We demonstrate the vision system performing simple visual reasoning tasks with the extracted visual descriptors by combining the sparse information from its periphery (which gives it a wide field of view) and the high-resolution information from the fovea (useful for accurate reasoning). High-level semantic concepts about content in the scene, such as object appearances, are formed using the extracted visual evidence, and the system performs saccadic explorations by serially targeting ‘interesting’ regions in the scene based on the location of high-level visual content and its current task.</ABSTRACT></RECORD></RECORDS></XML>