The workshop will feature the following keynote speakers:
Getting Personal: Personalization of Support for Interaction with Information
Rutgers University, USA
One important aspect of adaptive information retrieval is personalization of the interaction with information to an individual's (or perhaps group's) context, situation, characteristics, and other factors. In this talk, I identify the goals of such personalization, discuss previous and current research in personalization, propose a classification of factors according to which personalization might be accomplished, and speculate on future research in personalization of interaction with information. I also discuss possible methods for large-scale, community-wide evaluation and comparison of personalization techniques.
A Model of IR Testing and Evaluation: From Laboratory towards User-Involved
National Institute of Informatics, Japan
Adaptive information retrieval arguably has two sub-classes: collaborative adaptation by groups of users, and adaptation by single users within interaction or exploration. In either case, the IR testing and evaluation methodologies and metrics that have been widely used in IR research and practice need to be adapted to this new environment. In this talk, I first briefly introduce the activities of NTCIR, and then, as an extension of these, I propose a model, or framework, of IR testing that covers the range from laboratory-type tests to user-involved tests in interactive settings, and discuss feasible strategies for evaluating adaptive information retrieval systems through step-by-step extension toward adaptivity-related features.
Building Test Collections for Adaptive Information Retrieval: What to Abstract for What Cost?
National Institute of Standards and Technology, USA
Traditional Cranfield test collections represent an abstraction of a retrieval task that Sparck Jones calls the "core competency" of retrieval: a task that is necessary, but not sufficient, for user retrieval tasks. The abstraction facilitates research by controlling for (some) sources of variability, thus increasing the power of experiments that compare system effectiveness while reducing their cost. However, even within the highly abstracted case of the Cranfield paradigm, meta-analysis demonstrates that the user/topic effect is greater than the system effect, so experiments must include a relatively large number of topics to distinguish systems' effectiveness. The evidence further suggests that changing the abstraction even slightly, to include just a bit more characterization of the user, will result in a dramatic loss of power or increase in the cost of retrieval experiments. Defining a new, feasible abstraction for supporting adaptive IR research will require winnowing the list of all possible factors that can affect retrieval behavior down to a minimum number of essential factors.