Workshop Programme

  • 09:00-09:15 - Introduction and Welcome by Leif, Kal, Jaap and Mark
  • 09:15-09:45 - Invited Speaker: Donna Harman (NIST) on User Study -> What-if experiment -> User Study
  • 09:45-10:15 - Invited Speaker: Ryen White (Microsoft) on Building models of simulated users from large quantities of searching and browsing logs
  • 10:15-10:45 - Morning Coffee Break
  • 10:45-11:30 - Scene Setting - Simulation of Interaction, by Leif, Kal, Jaap and Mark
  • 11:30-12:00 - Boaster Session: 90 seconds and one slide per poster
  • 12:00-14:00 - Working Lunch: Lunch and Poster session
  • 14:00-15:40 - Breakout Sessions: Small groups will be formed (around 10 participants per group) and led by a chair to discuss one or more of the following topics in detail:
    • Breakout Group A: Making Simulations Work
    • Breakout Group B: Generating and Modeling Queries and Interaction
    • Breakout Group C: Creating Simulations with Search Logs
    • Breakout Group D: Simulated Browsing and User Interfaces
  • 15:40-16:10 - Coffee Break
  • 16:10-16:40 - Group Presentations, chaired by Daniel Tunkelang
  • 16:40-17:40 - Road Map, Future Challenges, Wrap-Up and What Next?
  • 17:40 - BEER! WHISKEY! and Philosophy 101 at a local pub.

Towards Automated Evaluation of Interactive IR

This workshop aims to explore the use of Simulation of Interactions to enable automated evaluation of Interactive Information Retrieval Systems and Applications.

Standard test collections only enable a very limited type of interaction to be evaluated (i.e. query-response). This is largely due to the high costs of going beyond this limited interaction and to the problems associated with the replicability and repeatability of such experiments.

Arguably, Simulation of Interaction provides a cost-effective way to construct and repeat evaluations of interactive systems and applications. This powerful automated evaluation technique offers a high degree of control and ensures that experiments can be replicated, but we need your help in developing "standardized" methodologies for simulation, techniques and models for simulation, measures of performance given simulations, and more.
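
To make the idea a little more concrete, here is a purely illustrative sketch of how such an evaluation loop might look: a simulated user derives queries from a set of topic terms, scans the top-ranked results, and notes the relevant documents it encounters, with a fixed random seed so that the whole run can be repeated exactly. All of the names, the query-generation rule and the scanning depth below are assumptions made for the example, not part of any agreed methodology:

    import random

    def simulated_session(topic_terms, search_fn, relevant_docs,
                          max_queries=3, patience=5, seed=0):
        """Run one simulated search session over a topic and return a simple
        success measure (fraction of known relevant documents 'found').

        search_fn(query) is assumed to return a ranked list of document ids;
        every name and rule here is an illustrative placeholder.
        """
        rng = random.Random(seed)  # fixed seed, so the whole run can be replicated
        found = set()
        for _ in range(max_queries):
            # naive query generation: sample a couple of topic terms at random
            query = " ".join(rng.sample(topic_terms, k=min(2, len(topic_terms))))
            ranking = search_fn(query)
            # the simulated user scans only the top `patience` results and
            # notes those that are relevant to the topic
            for doc_id in ranking[:patience]:
                if doc_id in relevant_docs:
                    found.add(doc_id)
        return len(found) / max(1, len(relevant_docs))

Running the same session, with the same seed, against two different search_fn implementations yields a directly comparable and exactly repeatable measurement, which is precisely the kind of control that is hard to obtain with live users.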

This workshop would also like to gather contributions, opinions, and feedback in an open and collaborative way from researchers working in, but not limited to, the areas below:

  • Users: Do you know what makes users tick? Do you know how to test users and systems in the lab/in the wild?
  • Systems: Have you been thinking about testing your IR models with interactive IR experiments?
  • Theory and Models in IR: Are you developing models that capture interaction?
  • Interaction and behavior: Have you been studying interactions with systems and the behavior of users?
  • Measuring and Validating: Have you been wondering how to measure performance in the wild? Building test collections for novel tasks?
  • Interfaces: Have you been designing interfaces but require objective evaluations?

In a day devoted mainly to discussion and debate, your expertise in these areas will play a guiding role in the development of Automated Evaluation of Interactive Information Retrieval.

Workshop Goals

  1. To provide an open and friendly forum for the discussion, debate and definition of Simulation of Interaction for Automated Evaluation
  2. To discuss, debate and define Automated Evaluation, in particular through Simulation of Interaction
  3. To report on the developments and outcomes of the workshop and provide a useful online resource for the community on automated evaluation for IIR.

Interested in Participating?

If you would like to participate in this event, then get in touch and register your interest.

Participation will be happening online before and after the workshop, so even if you can't attend the workshop itself you can still get involved!

To provide an open and friendly forum, participants will be encouraged to provide constructive comments on each other's submissions and to contribute to the online group resource.

To take part in the workshop, we cordially invite your contributions to the SimInt Forum and your submissions to the workshop: position papers and poster papers (see below).

Submit a paper to ensure attendance!

Submissions

We are soliciting two types of submissions:

  • New and Novel Contributions: For example, but not limited to:
    • Propose a new automated evaluation methodology
    • Propose a new or novel technique for simulation of interaction
    • Define a model of interaction or behavior
    • Experimental Results of Simulations
    • etc.
  • Positions, Reflections and Summaries: Contribute to the discussion and/or the report by providing your opinion on simulation of interaction, or by providing notes on different methods, summaries of your favorite techniques, and references.
    • Propose or state research questions that could be answered using automated evaluation of Interactive Information Retrieval
    • Provide a reflection on simulation/automated evaluation in IR
    • State your stance for or against simulation, or discuss the pros and cons of simulation
    • Given your area of expertise, how can we build better simulators, or what lessons have been learnt about simulation in your area?
    • Or write about pretty much anything else that you think could be relevant.

Submitted papers should be 2 pages in length and prepared in the ACM SIG Format ( http://www.acm.org/sigs/pubs/proceed/template.html ). Submissions need not be anonymized and will be reviewed by at least two members of the PC.

To submit please use the EasyChair website for SimInt 2010
( http://www.easychair.org/conferences/?conf=simint2010 ).

Accepted submissions will be published as part of the workshop proceedings and also help to form a report of the workshop (see below).

Workshop Report

We hope to have a very interactive (excuse the pun) and lively workshop that produces a useful resource on the state of the art in Automated Evaluation for Interactive Information Retrieval. The report will aim to address the themes below. It will summarize the workshop findings and be a product of all the participants (which is why we are looking for active contributors). The report will be edited by the organizers and will hopefully result in a joint publication with participants.

  1. What is Automated Evaluation?
    • Definitions of automated evaluation
    • Methods and methodologies for automated evaluation
    • The ideal simulation (akin to the ideal test collection)
  2. Where can we apply it?
    • Types of experiments that could be performed
    • Control and limitations of the different approaches
    • Why and why not, cases for and against such evaluation
  3. What and How to Model?
    • Simulating queries, judgments, clicks, etc. (see the illustrative sketch after this list)
    • User modeling and estimating models of interaction
    • Requirements for future development
  4. What are the Limits of Simulation?
    • Where is the user or goal in automated evaluation?
    • Where is interaction in automated evaluation?
    • Realistic and unrealistic assumptions, barriers to success
  5. Research agenda with a road map of future challenges
Please Note: Submissions that help answer or address these themes will be preferred. However, other themes that are relevant to the workshop will also be acceptable!
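
As a rough illustration of what "simulating clicks" under theme 3 might involve, the sketch below implements a very simple cascade-style click model. The parameter values (p_click_rel, p_click_nonrel, p_stop_after_click) are illustrative assumptions; in practice such parameters would be estimated from search and browsing logs:

    import random

    def cascade_clicks(ranking, relevance, p_click_rel=0.7, p_click_nonrel=0.1,
                       p_stop_after_click=0.5, seed=0):
        """Simulate clicks on a ranked list with a simple cascade-style model:
        the user scans results top-down, clicks with a probability that depends
        on relevance, and may abandon the list after a click.

        The probabilities are illustrative defaults, not recommended values.
        """
        rng = random.Random(seed)
        clicks = []
        for doc_id in ranking:
            p_click = p_click_rel if relevance.get(doc_id, 0) > 0 else p_click_nonrel
            if rng.random() < p_click:
                clicks.append(doc_id)
                if rng.random() < p_stop_after_click:
                    break
        return clicks

Feeding such simulated clicks into an evaluation measure is one of the modeling questions the workshop report hopes to discuss, alongside how (and whether) such models can be validated against real user behavior.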

Schedule

  • Now: Expressions of Interest and Intentions to Submit
  • June 11, 2010: Deadline for Submissions
  • June 18, 2010: Notification
  • June 23, 2010: Camera-Ready Versions Due
  • June 25, 2010: Group Forum Open for Discussion
  • July 23, 2010: Workshop