Copyright Chris Johnson, 2003.
Xday, XX May 2003.

00:00 am - 00:00 am

### University of Glasgow

#### COMPUTING SCIENCE - 3W: Interactive Systems 3

(Answer one question from Section A and one question from Section B).

Section A (Chris' Questions)

1.

a) Briefly explain how graph theory has been used to provide static metrics for the usability of an interactive web site.

[5 marks]

[Seen problem] A range of graph theoretic metrics has been proposed to assess the usability of web sites. Individual pages are treated as nodes or vertices, and links are represented as the edges in a graph. Thimbleby is arguably the best-known exponent of this approach, although variations have been implemented within the US Government's NIST tool suite. The central idea is that measures of connectivity have some significance for usability. A totally connected graph minimises the number of clicks needed to get from one page to another. However, such simplistic approaches have been refined to acknowledge the particular characteristics of web-based applications. For instance, it is common to place upper bounds on the total number of connections per page as the size of the network increases.
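The pages-as-nodes idea can be sketched in a few lines of Python; the site structure below is invented purely for illustration, and only the standard library is used:

```python
from collections import deque

# Hypothetical site: each page is a node, each link a directed edge.
SITE = {
    "home":       ["products", "accounts", "help"],
    "products":   ["home", "kettles"],
    "accounts":   ["home", "statements"],
    "help":       ["home"],
    "kettles":    ["home"],
    "statements": ["home"],
}

def click_distances(site, start):
    """Breadth-first search: minimum number of clicks from start to each page."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in site.get(page, []):
            if nxt not in dist:
                dist[nxt] = dist[page] + 1
                queue.append(nxt)
    return dist

def diameter(site):
    """Worst-case click distance between any pair of reachable pages."""
    return max(max(click_distances(site, p).values()) for p in site)
```

On this toy graph the diameter is 3 clicks (for example, kettles to statements via the home page); in a totally connected graph it would be 1.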

b) Outline the main features of a graph theoretic algorithm that can be used to assess the navigational structure of a web site. What are the principal limitations of this approach?

[10 marks]

[Seen/unseen problem] A number of solutions are possible here. In the lectures, I have mentioned the following:

1. Travelling salesman tour: the shortest tour that visits every page on a site.
2. Chinese postman tour: a shortest route that traverses every link, so if you can go A to C or B to C then both routes are checked.
3. Spanning tree rooted on the home page: measure minimal depth or breadth.
4. Domination number: a site map provides a dominant set from which the rest of the site can be accessed; the cardinality of the minimal dominating set indicates cognitive complexity.
5. Vertex metrics: the sum of distances from a page to every other page; pages with minimum total distance are median pages, and so on.
I would be happy to accept informal descriptions without the names being mentioned. However, in either case I would like a well-reasoned argument about the potential problems. These can be summarised as follows. Firstly, many web sites increasingly rely upon dynamic page generation. The structure of such a site does not exist until users perform some search action or information request, for example over a product database. This complicates the static analysis of page links. A second set of problems relates not to the identification of web structure but to the interpretation of the results that are obtained. For example, if a travelling salesman tour takes 50 steps on one site and 150 on another, what can we conclude about the usability of each site? So many other factors impact upon usability, including issues such as consistency and the visibility of navigational structures, that such information can be unhelpful.
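The vertex metric (item 5) and the spanning-tree depth (item 3) can be illustrated with a short, self-contained sketch; the link structure and page names are invented for illustration:

```python
from collections import deque

# Invented link structure: pages are nodes, links are directed edges.
SITE = {
    "home": ["a", "b"],
    "a":    ["home", "c"],
    "b":    ["home", "c"],
    "c":    ["home"],
}

def shortest_paths(site, start):
    """Minimum link-distance from start to every reachable page (BFS)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in site.get(page, []):
            if nxt not in dist:
                dist[nxt] = dist[page] + 1
                queue.append(nxt)
    return dist

def total_distance(site, page):
    """Vertex metric: sum of distances from page to every other page."""
    return sum(shortest_paths(site, page).values())

def median_pages(site):
    """Pages that minimise the total distance: candidate 'hub' locations."""
    totals = {p: total_distance(site, p) for p in site}
    best = min(totals.values())
    return sorted(p for p, total in totals.items() if total == best)

def tree_depth(site, root="home"):
    """Depth of a breadth-first spanning tree rooted on the home page."""
    return max(shortest_paths(site, root).values())
```

Note that even here the interpretation problem arises: the numbers are easy to compute, but whether a depth of 2 or a median page other than the home page indicates a usability problem is a separate question.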

c) A major UK Bank has recently discovered that many users are so focussed on finding their account information that they miss the links and other features that designers provide to help them navigate around the Bank's web site. As a result, staff members spend much of their time responding to telephone calls and email helping customers find information even though those customers have visited pages that include links to this information. You have been commissioned to write a brief technical report for the bank's site manager. This document will provide ten guidelines on how to address the problem of users missing important navigational information.

[12 marks]

[Unseen problem] There are many different guidelines that could be proposed; this is only an initial set:

1. Consistency. Always make sure that pages present their navigational information in the same place, with the same look and feel, so that once people are confident in using a widget that confidence can be transferred between different areas of the site.
2. Visibility. Ensure that any navigational help is placed in the same location on each web page, in a position where users are likely to see it without it obscuring other critical information. (Frames can be used, but consider accessibility issues.) Testing can be conducted to determine the likelihood of a design achieving these objectives, but note the previous comments on the existing web site.
3. Home page. A home page can provide a central resource for users to return to should they get lost. Additional navigational information may be provided from this location, such as access to search facilities and a site map, if this information is not accessible from all other pages.
4. Site map and search facilities. See previous comments. These techniques can reduce the plethora of links and guide users in a structured way (site map) or through natural language retrieval (search).
5. 'Tool tips'. Some sites greet users with specific tips on how to navigate the site. This can be annoying and patronising for some users, but studies, especially of older age groups, have shown significant benefits from the use of these techniques.
6. Personalised navigation. Some sites, such as banks, can monitor user behaviour and tailor the presentation of navigation links. For instance, only active accounts may be shown.
7. Locally based navigation. One issue that can prevent good navigation is the plethora of links seen on some pages. Paradoxically, navigation can be supported by reducing the number of links, for instance by showing only those relevant to a group of pages and leaving more global navigation to a home page.
Longitudinal studies involve the evaluation of user behaviour over a prolonged period of time. Telephone and email requests for help could be monitored for some period after the guidelines were introduced, to see whether specific requests for help actually decline. Such results are difficult to interpret, however, if the number of users on the site goes up: the influx of new users may mask any positive effect on site navigation. Alternatively, system logs might be monitored to determine patterns of navigational behaviour for users from a particular IP address or, at a more fine-grained level, using bank log-in data. We could monitor the repertoire of URLs visited within the site over time. However, previous studies have shown that people operate with relatively small working sets of URLs over prolonged periods, so it would be important not to set unrealistic expectations on the range of pages that might be visited more than once.
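A minimal sketch of the working-set idea, assuming (hypothetically) that log-in data lets us attribute page requests to individual users and weeks; the records below are hand-made for illustration:

```python
from collections import defaultdict

# Hypothetical (user, week, url) records distilled from authenticated logs.
VISITS = [
    ("u1", 1, "/balance"), ("u1", 1, "/statements"),
    ("u1", 2, "/balance"), ("u1", 2, "/balance"),
    ("u2", 1, "/balance"), ("u2", 2, "/transfers"),
]

def working_sets(visits):
    """Distinct URLs each user touched in each week: a crude repertoire measure."""
    sets = defaultdict(set)
    for user, week, url in visits:
        sets[(user, week)].add(url)
    return {key: sorted(urls) for key, urls in sets.items()}
```

Tracking how these per-user, per-week sets change after the guidelines are introduced gives one longitudinal signal, subject to the caveats above about small working sets.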

2.

a) Describe three different metrics for subjective satisfaction with interactive web sites.

[4 marks]

This is another starter question: relatively simple, with several alternative answers, one of which can be inferred from the following part of the question. Here are some alternatives:

1. There are standard questionnaires that can be used (e.g., Chin and Dyer questionnaire).
2. Workload metrics, including NASA TLX can be used to make inferences about subjective satisfaction.
3. Functional measures including web server logs for 'hit' information or number of 'revisits'.
4. Sales statistics provide a relatively crude measure but could be used.
5. Physiological measures such as galvanic skin resistance have been used.
6. etc.

b) Briefly explain why web logs cannot be relied upon as an indicator of subjective satisfaction and usability for many web sites.

[6 marks]

[Unseen problem] Web server logs are difficult to interpret as a measure of subjective satisfaction. On the one hand, it can be argued that an increasing number of hits indicates that users are accessing the site more often and so are 'happier' with their experience of using it. On the other hand, an increasing number of hits can be interpreted as users having considerable problems in finding the information that they need; it might suggest navigational problems. There may also be effects due to natural increases in the number of people gaining Internet access. Hence, server logs may be filtered to remove abandoned requests or to focus only on repeat visits. Even so, in some markets there are no natural competitors, and so repeated visits may indicate that the site is the only possible source even though it is frustrating to use and poorly designed. This is an unsatisfactory, even dangerous, position once competitors enter the marketplace.
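The filtering step can be sketched as follows; the hit records and the use of HTTP status 200 as a proxy for a successful (non-abandoned) request are illustrative assumptions, and a real server log would need parsing first:

```python
from collections import Counter

# Hypothetical (visitor_id, http_status) hits.
HITS = [
    ("a", 200), ("a", 200), ("b", 200),
    ("c", 404),  # failed/abandoned request, filtered out below
    ("d", 200), ("b", 200),
]

def repeat_visit_share(hits):
    """Fraction of successful hits made by visitors seen more than once."""
    visitors = [v for v, status in hits if status == 200]
    counts = Counter(visitors)
    repeats = sum(n for n in counts.values() if n > 1)
    return repeats / len(visitors)
```

Even a high repeat-visit share must be read with the caveat above: where there are no competitors, repeat visits may reflect necessity rather than satisfaction.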

c) A recent survey of E-commerce usability has identified a number of different tasks that users have to perform whenever they make a purchase over the Internet:

1. Order one or several products.
2. Compare product prices and features.
3. Search to find a product and its price.
4. Find related or similar products.
5. Login and trace the status of an order.
6. Bookmark a page and return to find it again.
7. Email the location of a product to a friend.
8. Subscribe to email about products or special offers.
9. Post a product review or provide website feedback.
10. Request information about a product or service.
11. Find products with certain features (e.g., red kettles).
12. Find the site using a search engine.
13. Find out about shipping charges and terms of sale.
14. Browse the site to find products of interest.
You have been asked to write a brief technical report for a company that is planning to launch a new E-commerce site. The company will sell electrical products ranging from TVs to kettles. The report must describe a testing strategy that is intended to ensure that the eventual site supports a majority of these tasks. The company is, however, also concerned to ensure that resources are focussed on evaluating those tasks that will have the greatest impact on the usability of the eventual site.

[15 marks]

This is a more open-ended question with more scope for the first-class students. The starting point would be to identify the key tasks that customers of this particular site are likely to perform. This can be done prior to site development by looking at consumer behaviour on the web sites of rival companies in several different countries. Formative evaluation using informal techniques, such as think-alouds, can help in this elicitation phase. Users can describe any problems they have with competitor web sites as they perform sample on-line purchases. This may not reveal particular preferences for features, especially if the existing sites do not support all of the above options. Questionnaires can be used to get individuals to rank elements in the above list. The results of such expressed preferences are also subject to biases. For instance, recency bias can affect expressed preferences if an individual recalls an immediate problem with a particular site but forgets poor features that were later fixed on a competitor's site, or if the lack of a feature has led them to abandon the competitor site entirely. Once customer priorities are identified, meetings can then be held to reconsider these preferences in the light of client priorities. For instance, customers might wish to view competitor information on the web site; however, some controls may have to be placed over this functionality unless it is accounted for in the site development strategy. This process of arbitration between customer and client preferences must be informed by the resources available for site development.

Initial development can be further informed by formative evaluation as users attempt to perform common tasks on prototype implementations. This can be problematic, as it is likely that the initial versions of a site will be tailored to the more obvious tasks. These evaluations are likely to determine whether the site fails to support any less obvious tasks that may have been missed during the initial elicitation. It can be difficult to interpret responses to 'have we missed anything?' questions: these can provoke a lengthy wish list, or a minimal response based on a lack of knowledge about what might be possible.

Once the site goes live, the subjective satisfaction techniques mentioned in the previous parts of this question can be used to assess whether the site meets expectations. Again, however, the summative use of these techniques is not particularly well tailored to identifying missing functionality. More specific facilities can be introduced so that customers can provide feedback on their use of the site; this increases the salience of item 9 in the previous list. Any testing strategy should not stop with summative evaluation but should continue into site support and maintenance.

Section B (Rob's Questions)

3. Question on second half of the course.

[Total 25 marks]

4. Question on second half of the course.

[Total 25 marks]

[end]