a) Briefly explain how graph theory has been used to provide static metrics for the usability of an interactive web site.
[Seen problem] A range of graph theoretic metrics has been proposed to assess the usability of web sites. Individual pages are thought of as nodes or vertices; links are represented as the edges in a graph. Thimbleby is arguably the best-known exponent of this approach, although variations have been implemented within the US Government NIST tool suite. The central idea is that measures of connectivity have some significance for usability. A totally connected graph minimises the number of clicks needed to move from one page to any other. However, such simplistic approaches have been refined to acknowledge the particular characteristics of web-based applications. For instance, it is common to place an upper bound on the total number of links per page as the size of the site increases.
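To make the idea concrete, the following sketch (plain Python, no external libraries) computes one such connectivity metric: the average number of clicks between pages, found by breadth-first search over the link graph, together with a check against an upper bound on links per page. The page names and the bound are illustrative assumptions, not part of any published metric suite.

from collections import deque

site = {  # page -> pages it links to (hypothetical structure)
    "home": ["accounts", "loans", "help"],
    "accounts": ["home", "statements"],
    "loans": ["home"],
    "help": ["home", "accounts"],
    "statements": ["help"],
}

def click_distances(graph, start):
    """Breadth-first search: minimum number of clicks from `start`
    to every page reachable from it."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in graph.get(page, []):
            if nxt not in dist:
                dist[nxt] = dist[page] + 1
                queue.append(nxt)
    return dist

# Average click distance over all ordered pairs of reachable pages.
pairs, total = 0, 0
for page in site:
    for other, d in click_distances(site, page).items():
        if other != page:
            pairs += 1
            total += d
print("average clicks between pages:", total / pairs)

# The refinement noted above: flag pages whose out-degree exceeds an
# assumed upper bound on links per page.
MAX_LINKS = 3
for page, links in site.items():
    if len(links) > MAX_LINKS:
        print(page, "exceeds the per-page link bound")

On a totally connected graph the average would be exactly one click; values much greater than the site's intended depth suggest navigational problems.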
b) Outline the main features of a graph theoretic algorithm that can be used to assess the navigational structure of a web site. What are the principal limitations of this approach?
[Seen/unseen problem] A number of solutions are possible here. In the lectures, I have mentioned the following:
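As one representative example, sketched here for illustration rather than as a reconstruction of the lecture material, such an algorithm can assess navigational structure by computing each page's eccentricity (the worst-case number of clicks needed to reach any other page) and by reporting pages that cannot be reached at all. The site structure used below is invented.

from collections import deque

def reachable_clicks(graph, start):
    # Breadth-first search from `start`; returns {page: click distance}.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in graph.get(page, []):
            if nxt not in dist:
                dist[nxt] = dist[page] + 1
                queue.append(nxt)
    return dist

def assess_navigation(graph):
    pages = set(graph) | {p for links in graph.values() for p in links}
    report = {}
    for page in pages:
        dist = reachable_clicks(graph, page)
        missing = pages - set(dist)
        ecc = max(dist.values())  # worst-case clicks to a reachable page
        report[page] = (ecc, missing)
    return report

# Hypothetical site in which 'orphan' links out but is never linked to.
site = {"home": ["a", "b"], "a": ["home"], "b": [], "orphan": ["home"]}
for page, (ecc, missing) in sorted(assess_navigation(site).items()):
    print(page, "eccentricity:", ecc, "unreachable:", sorted(missing))

The sketch also points towards the principal limitations: repeated breadth-first searches are expensive on large sites, and a purely structural analysis says nothing about whether users actually notice or understand the links that the graph records.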
c) A major UK Bank has recently discovered that many users are so focussed on finding their account information that they miss the links and other features that designers provide to help them navigate around the Bank's web site. As a result, staff members spend much of their time responding to telephone calls and email, helping customers find information even though those customers have visited pages that include links to this information. You have been commissioned to write a brief technical report for the bank's site manager. This document will provide ten guidelines on how to address the problem of users missing important navigational information.
[Unseen problem] There are many different guidelines that could be proposed; this is only an initial set:
a) Describe three different metrics for subjective satisfaction with interactive web sites.
This is another starter question: relatively simple, with several alternative answers, one of which can be inferred from the following part of the question. Here are some alternatives:
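One instrument that would fit here is Brooke's System Usability Scale (SUS), a ten-item questionnaire scored onto a 0-100 scale. The sketch below implements the standard SUS scoring rule; the sample responses are invented.

def sus_score(responses):
    """Standard SUS scoring: `responses` holds ten ratings, each on a
    1-5 scale, in questionnaire order. Odd-numbered items are positively
    worded and contribute (rating - 1); even-numbered items are
    negatively worded and contribute (5 - rating). The sum is scaled
    by 2.5 to give a score out of 100."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # prints 85.0

Other candidates include QUIS-style satisfaction questionnaires and simple Likert ratings gathered after set tasks; all can be scored in a similarly mechanical fashion.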
b) Briefly explain why web logs cannot be relied upon as an indicator of subjective satisfaction and usability for many web sites.
[Unseen problem] Web server logs are difficult to interpret as a measure of subjective satisfaction. On the one hand, it can be argued that an increasing number of hits indicates that users are accessing the site more often and so are 'happier' with their experience of using it. On the other hand, an increasing number of hits can be interpreted as a sign that users are having considerable problems in finding the information that they need; it might suggest navigational problems. There may also be effects due to natural increases in the number of people gaining Internet access. Hence, server logs may be filtered to remove abandoned requests or to focus only on repeat visits. Even so, in some markets there are no natural competitors, and repeated visits may simply indicate that the site is the only possible source of the information, even though it is frustrating to use and poorly designed. This is an unsatisfactory, even dangerous, position once competitors enter the marketplace.
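The filtering step can be made concrete with a short sketch. The code below assumes logs in the Common Log Format and treats any non-2xx status as an abandoned request; that definition, like the sample entries, is an assumption for illustration rather than a standard.

import re
from collections import Counter

# Common Log Format: host ident user [timestamp] "request" status bytes
LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+')

def filter_log(lines):
    """Count completed (2xx) requests per client host, discarding
    everything else."""
    hits = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and 200 <= int(m.group(2)) < 300:
            hits[m.group(1)] += 1
    return hits

sample = [
    '10.0.0.1 - - [01/Jan/2003:10:00:00 +0000] "GET /accounts HTTP/1.0" 200 512',
    '10.0.0.1 - - [01/Jan/2003:10:05:00 +0000] "GET /loans HTTP/1.0" 404 0',
    '10.0.0.1 - - [02/Jan/2003:09:00:00 +0000] "GET /accounts HTTP/1.0" 200 512',
    '10.0.0.2 - - [01/Jan/2003:11:00:00 +0000] "GET /accounts HTTP/1.0" 200 512',
]
hits = filter_log(sample)
repeats = {host: n for host, n in hits.items() if n > 1}  # repeat visits
print("hits per host:", dict(hits), "repeat visitors:", repeats)

Even after such filtering, the counts show only that hosts returned, not why; the interpretation problems described above remain.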
c) A recent survey of E-commerce usability has identified a number of different tasks that users have to perform whenever they make a purchase over the Internet:
This is a more open-ended question with more scope for the first class students. The starting point would be to identify the key tasks that customers of this particular site are likely to perform. This can be done prior to site development by looking at consumer behaviour with the web sites of rival companies in several different countries. Formative evaluation using informal techniques, such as think-alouds, can help in this elicitation phase. Users can describe any problems that they have with competitor web sites as they perform sample on-line purchases. This may not reveal particular preferences for features, especially if the existing sites do not support all of the above options. Questionnaires can be used to get individuals to rank the elements in the above list. The results of such expressed preferences are also subject to biases. For instance, recency bias can affect expressed preferences: an individual may recall a recent problem with a particular site but forget poor features that a competitor fixed some time ago, or the lack of a feature may have led them to abandon the competitor's site entirely. Once customer priorities are identified, meetings can then be held to reconsider those preferences in the light of client priorities. For instance, customers might wish to view competitor information on the web site; however, some controls may have to be placed over this functionality unless it is accounted for in the site development strategy. This process of arbitration between customer and client preferences must be informed by the resources available for site development.
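Where questionnaires ask respondents to rank the elements of such a list, the individual rankings must be aggregated into a single ordering. The sketch below uses a simple Borda count, one possible aggregation rule rather than anything prescribed by the question; the task names and rankings are invented.

from collections import defaultdict

def borda_aggregate(rankings):
    """Each ranking lists tasks with the most important first; a task
    ranked k-th out of n (0-indexed) receives n - 1 - k points."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for k, task in enumerate(ranking):
            scores[task] += n - 1 - k
    return sorted(scores.items(), key=lambda kv: -kv[1])

rankings = [
    ["search", "compare prices", "checkout", "track order"],
    ["compare prices", "search", "track order", "checkout"],
    ["search", "checkout", "compare prices", "track order"],
]
for task, score in borda_aggregate(rankings):
    print(task, score)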
Initial development can be further informed by formative evaluation as users attempt to perform common tasks on prototype implementations. This can be problematic, as the initial versions of a site are likely to be tailored to the more obvious tasks. These evaluations can determine whether the site fails to support any less obvious tasks that may have been missed during the initial elicitation. It can be difficult to interpret responses to 'have we missed anything?' questions: they can provoke either a lengthy wish list or a minimal response based on a lack of knowledge about what might be possible.
Once the site goes live, the subjective satisfaction techniques mentioned in the previous parts of this question can be used to assess whether the site meets expectations. Again, however, the summative use of these techniques is not particularly well suited to identifying missing functionality. More specific facilities can be introduced so that customers can provide feedback on their use of the site; this increases the salience of item 9 in the previous list. Any testing strategy should not stop with summative evaluation but should continue into site support and maintenance.
3. Question on second half of the course.
[Total 25 marks]
4. Question on second half of the course.
[Total 25 marks]