Methodologies for evaluating accessibility
This section summarises the main techniques that can be used to evaluate the accessibility of ICT products and systems.
The general goal of an evaluation is to provide feedback about a system or potential system, either to improve a system still under development or to critique a completed one.
Categories of evaluation
Broadly speaking, there are two main categories of evaluation:
Formative evaluations are those conducted throughout the design process, prior to implementation of the final product, to ensure the product meets the users' needs.
Summative evaluations are those that take place after implementation of the final product, with the aim of testing how the finished product functions. This is normally done to verify compliance with a particular standard or to satisfy a sponsoring agency.
The type of evaluation used will ultimately be determined by the product's stage of development and the available resources. A brand-new product should, ideally, undergo numerous formative evaluations throughout the design cycle, with the emphasis on assessing how well a design meets users' needs and modifying it to address any shortfalls. A product upgrade, on the other hand, is more likely to consist of a summative assessment of how well the product conforms to current guidelines, with the aim of identifying ways of improving the overall product rather than modifying multiple individual elements.
Types of evaluation methodology
An evaluation methodology is a collective term for a procedure used to gather relevant data on the operation of a system or product. There are numerous evaluation methodologies, which can be categorised in various ways. However, for the purposes of this guide, evaluation methodologies will be categorised as follows:
Observational analysis involves an investigator gathering data on what users do when they interact with a product or system. Observational analysis can be direct, where the investigator is present during the task, or indirect, where task execution is viewed by means other than first-person observation, such as through activity logs. Observational analysis is useful for: 1) identifying user needs, leading to the development of a new type of product; and 2) helping to evaluate prototypes.
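As a minimal sketch of what indirect observation via an activity log might involve, the following Python fragment records timestamped user events during a session and lets an evaluator analyse them afterwards. The event names and structure are hypothetical illustrations, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    timestamp: float  # seconds since the task started
    action: str       # hypothetical labels, e.g. "click", "error", "task_complete"
    detail: str = ""

@dataclass
class ActivityLog:
    """Minimal indirect-observation log: events are recorded during a
    session and analysed later, without an observer being present."""
    events: List[Event] = field(default_factory=list)

    def record(self, timestamp: float, action: str, detail: str = "") -> None:
        self.events.append(Event(timestamp, action, detail))

    def count(self, action: str) -> int:
        """How many times a given action occurred (e.g. errors made)."""
        return sum(1 for e in self.events if e.action == action)

    def task_duration(self) -> float:
        """Time from the first recorded event to 'task_complete', if any."""
        done = [e.timestamp for e in self.events if e.action == "task_complete"]
        if not done or not self.events:
            return 0.0
        return done[0] - self.events[0].timestamp
```

An evaluator could then, for instance, compare error counts or completion times across participants without having observed the sessions directly.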
User surveys are a means of identifying the likely users of a product, as well as providing information on how specific users may interact with the system. The survey method can also be used to gather subjective opinions on the usability of a product, offering an insight into which design solutions are likely to succeed and which are not before the product is built. Survey evaluations are normally administered through questionnaires or interviews.
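To illustrate how questionnaire responses might be aggregated, the sketch below averages hypothetical 5-point Likert-scale scores per question, flipping negatively worded items so that a higher score always means a more favourable opinion. The item identifiers and scale are assumptions for the example, not part of any particular instrument.

```python
from statistics import mean

def summarise_likert(responses, reverse_items=frozenset(), scale_max=5):
    """Aggregate Likert-scale questionnaire responses per item.

    responses:     list of dicts mapping item id -> score (1..scale_max)
    reverse_items: ids of negatively worded items whose scores are
                   flipped so that higher always means 'better'
    Returns a dict mapping item id -> mean score across respondents.
    """
    summary = {}
    for item in responses[0]:
        scores = []
        for r in responses:
            score = r[item]
            if item in reverse_items:
                score = scale_max + 1 - score  # flip negatively worded item
            scores.append(score)
        summary[item] = mean(scores)
    return summary
```

For example, with two respondents and one reverse-scored item, `summarise_likert([{"q1": 4, "q2": 2}, {"q1": 5, "q2": 1}], reverse_items={"q2"})` yields a mean of 4.5 for each item.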
Expert evaluation involves the assessment of a product or system by individuals with the professional training or experience to make an informed judgment on the design and to predict the problems users are likely to have with the product. These techniques are typically inexpensive, easy to learn, and effective.
User testing involves representative users carrying out representative tasks with the product to provide insight into the strengths and weaknesses of the current design. User testing is often conducted by comparing two or more designs, or by assessing the design of a prototype against user requirements and/or current guidelines.
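A common quantitative measure in this kind of comparison is the task success rate. The fragment below is a minimal sketch with invented data (the two designs and their outcomes are hypothetical), showing how per-participant pass/fail outcomes might be summarised for two competing designs.

```python
def success_rate(outcomes):
    """Proportion of task attempts completed successfully (0.0 if no data).

    outcomes: one boolean per participant attempt.
    """
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Hypothetical results from user testing of two prototype designs:
design_a = [True, True, False, True, True]    # 4 of 5 participants succeeded
design_b = [True, False, False, True, False]  # 2 of 5 participants succeeded
```

Comparing the two rates (here 0.8 versus 0.4) would suggest which design better supports the task, though in practice such figures would be weighed alongside qualitative findings.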
Last updated: 20.11.2009