
Adequateness of usability evaluation methods regarding adaptivity

Abstract – The aim of this paper is to take a step towards the usability evaluation of adaptive systems. Since there is no universal method for evaluating the usability of a system, we used evaluation methods divided into three groups according to their main activities: inspection, inquiry and testing. In order to provide a usability engineering approach for the evaluation of adaptive systems, we pointed out characteristics of usability evaluation methods regarding adaptivity according to the following criteria: stages, impact, intrusiveness, time, layers, rules and type of data. We give an overview of usability evaluation methods according to these criteria so that further work on finding a proper approach towards usability evaluation can be performed.


In various application domains, user-adaptive software systems have already proved to be more effective and usable than non-adaptive systems. As pointed out in [9], adaptive systems with clear user benefits include user-adaptive (personalized) tutoring systems, which significantly improve the overall learning progress.

Adaptivity of Web-based educational systems assumes collecting information about the student working with the system and creating an appropriate student model, which can then be used to adapt the presentation of the learning material, the navigation through it, and its sequencing and annotation to the student. From the learner's perspective it supports pro-active learning that at the same time adapts itself to the potential and needs of the individual learner.
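The loop described above — collect observations about the student, update a model, then adapt the presentation — can be sketched minimally as follows. All names and the update formula are illustrative assumptions, not taken from any particular system:

```python
# Minimal sketch (hypothetical names and formula): a student model
# driving adaptation of the presented learning material.
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    # estimated knowledge level per topic, 0.0 (novice) to 1.0 (expert)
    knowledge: dict = field(default_factory=dict)

    def update(self, topic: str, score: float) -> None:
        # blend a new observation (e.g. a quiz score) into the estimate
        old = self.knowledge.get(topic, 0.0)
        self.knowledge[topic] = 0.7 * old + 0.3 * score

def adapt_presentation(model: StudentModel, topic: str) -> str:
    # simpler material for novices, deeper material for advanced students
    level = model.knowledge.get(topic, 0.0)
    return "advanced" if level >= 0.5 else "introductory"

model = StudentModel()
for _ in range(3):            # three strong quiz results on one topic
    model.update("recursion", 0.9)
```

A topic the model has never observed stays at the novice default, so the system falls back to introductory material for it.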

Recently, a lot of work has been done, and is still ongoing, on the adaptivity of web-based educational systems to learners' psychological characteristics. It is a joint effort of educational psychologists, knowledge engineers, educational system developers and developers of educational system authoring tools. As pointed out in [10], the main problems in a Web-based learning environment are to determine which attributes should be used (are worth modeling) and how (what can be done differently for students with different styles).

For assuring the usability of a system it is necessary to identify proper requirements and goals which cover the critical aspects of usability. An elementary need for a systematic approach, involving thinking, modeling and analyzing, exists from the beginning of the system's life-cycle. There is also a need to develop and take advantage of usability principles or classifications of usability aspects. These principles can be assured with evaluation methodologies, which give us methodological access to the evaluation. However, evaluation can be difficult to perform, time consuming and also expensive if we do not take the right approach.


Because a universal method for the usability evaluation of systems does not exist, it is reasonable to use evaluation methods together. Table 1 lists some of the evaluation methods for non-adaptive systems [1] [2]. We divided them into three groups according to their main activities: inspection, inquiry and testing.

Inspection methods do not involve users (except for the pluralistic walkthrough, where a user is a member of the group). Evaluators need medium to high expertise. The most popular inspection method is heuristic evaluation, which is very quick to conduct and requires little equipment. The other three inspection methods (cognitive walkthrough, pluralistic walkthrough and action analysis) require more time and higher expertise in comparison to heuristic evaluation. The difference between the cognitive and the pluralistic walkthrough lies in the involvement of users. Action analysis divides tasks into individual actions, which the evaluator performs while measuring the time needed for each action.

Inquiry methods involve focus groups, questionnaires, interviews and field studies. They differ in the manner of interaction. Interviews are conducted individually, while in focus groups several users discuss the topics together. Focus groups also have a moderator, who directs the conversation flow with questions. The main purpose of questionnaires is to obtain statistical results from a broad demographic population. Because there is no direct interaction with participants, the realization time of questionnaires is low. The simplest inquiry method is observing users in the field (for example, at a working place), where we note down the information we see.

User testing is the last group of methods and is a fundamental activity in evaluating usability. It enables direct access to information about the system's usability and the problems that users run into during testing. A frequently used method is the thinking-aloud method, where a user verbally expresses his thoughts and behavior. It is a good approximation of the system in practice, because we see why a user made a particular move. Since the thinking-aloud method is not a natural way of user testing, as actions are described aloud, a better choice in this respect is constructive interaction, where two users test the system simultaneously. Consequently, we are dealing with higher costs.

Table 1. Comparison of usability evaluation methods


There is no doubt that the evaluation of a system should be taken into account during the entire development life cycle of applications. However, can we use abovementioned methods for evaluating adaptive systems under a usability engineering approach?

Because of the various adaptivity factors, the evaluation of adaptive systems is difficult to perform. Therefore, a layered approach has been introduced. At least two layers are defined: content and interface [6]. Totterdell and Boyle describe the layered approach as follows: “Two types of assessment were made of the user model: an assessment of the accuracy of the model’s inferences about user difficulties; and an assessment of the effectiveness of the changes made at the interface” [5]. In other words, adaptive systems should not be evaluated as a whole but as components – layers.

According to [7], the evaluation methodologies can be seen as generative models, since they contribute to the improvement of adaptive systems during the evaluation phases by combining requirement specification and evaluation.

Participants play an important role in human-computer interaction. They seek different kinds of information. Likewise, they are the target population of the applications and represent the main source of information. Because of that, their role is significant. A user-centered approach for evaluating adaptive systems has been proposed by [4].

Adaptive systems differ regarding the activities for achieving changes: initiative, proposal, decision and execution [3]. Therefore, this group of stages is considered in the first criterion. Full adaptivity occurs if all adaptation stages are performed by the system. Another mode of adaptation is user-controlled self-adaptation, where the user decides upon the action to be taken. By the term system we do not refer only to the system itself, but also to the system
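The distinction between full adaptivity and user-controlled self-adaptation can be expressed as an assignment of the four stages from [3] to either the system or the user. The sketch below is a hypothetical illustration; the classification function and its labels are our assumptions:

```python
# Hypothetical sketch: the four adaptation stages of [3], each
# assigned to one of two actors ("system" or "user").
STAGES = ("initiative", "proposal", "decision", "execution")

def adaptation_mode(actors: dict) -> str:
    """Classify an adaptation configuration by who performs each stage."""
    if all(actors[s] == "system" for s in STAGES):
        return "full adaptivity"                 # system performs every stage
    if actors["decision"] == "user":
        return "user-controlled self-adaptation" # user decides upon the action
    return "mixed-initiative adaptation"

full = {s: "system" for s in STAGES}
controlled = dict(full, decision="user")
```

Representing the stage assignment explicitly makes it easy to check, for a given system, which criterion value applies.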

The intrusiveness of adaptivity is the third criterion. By intrusive we refer to how frequently a system offers suggestions. Since there is no direct user interaction with the system, inquiry methods are not intrusive (with the exception of field studies).

The time aspect of adaptation is divided into three phases: before the first use, during system usage, and between two accesses to the system. It shows when adaptivity appears.

Presentation, navigation and content are the main adaptivity criteria. In general, adaptive presentation adapts the difficulty of the content (simpler content for a less advanced user and additional, deeper information for a more advanced user). The goal of adaptive navigation is to assist the student in orientation and navigation, so that goals are achieved more easily [10]. Adaptive content shows the information which is relevant for an individual user.

With reusable adaptation rules, the creation of new adaptation rules from scratch is avoided. Our criterion distinguishes between simple and complex reusable adaptation rules.
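One common way to make adaptation rules reusable is to write a rule template once and parameterize it, instead of authoring each rule from scratch. The sketch below illustrates this idea; the attribute names and the threshold rule shape are our own assumptions:

```python
# Hypothetical sketch: a reusable adaptation rule template.
# One template ("IF user[attribute] < threshold THEN action") is
# instantiated for different attributes instead of writing each
# rule from scratch.
def make_threshold_rule(attribute: str, threshold: float, action: str):
    def rule(user: dict):
        # fire the action when the user's attribute falls below the threshold
        return action if user.get(attribute, 0.0) < threshold else None
    return rule

# the same template reused for two different adaptation rules
show_hints = make_threshold_rule("knowledge", 0.3, "show_hints")
simplify   = make_threshold_rule("reading_speed", 0.5, "simplify_text")

user = {"knowledge": 0.2, "reading_speed": 0.8}
```

A simple reusable rule, in the sense of our criterion, is a single parameterized template like this; a complex one would compose several such templates or conditions.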

The last criterion represents the qualitative and quantitative data which can be gathered during the evaluation of a system. It can help to decide which method will produce the necessary data. Qualitative results can be gathered from an interpretative evaluation [8], focusing on individual users, which gives a rich set of information and deep knowledge. A quantitative approach comes into play when measuring processes and calculating dependencies between variables in order to generalize from the obtained results.


We presented an approach towards the usability evaluation of adaptive systems. Since there is a variety of adaptivity factors, the evaluation of adaptive systems is difficult to perform. Therefore, a layered approach has been introduced. We have considered evaluation during the entire development life-cycle, and we suggest that evaluation take place before implementation starts. We discussed the characteristics of usability evaluation methods regarding adaptivity according to the following criteria: stages, impact, intrusiveness, time, layers, reusability rules and types of data. The next step would be further development of evaluation methods concerning specific adaptivity criteria.


[1] Holzinger, A. Usability Engineering Methods for Software Developers. Communications of the ACM, January 2005, pp. 71-74.

[2] Nielsen, J. Usability Engineering. Morgan Kaufmann, September 1994.

[3] Schneider-Hufschmidt, M., Kühme, T., Malinowski, U. (eds.). Adaptive User Interfaces: Principles and Practice. Amsterdam: North Holland Elsevier, 1993.

[4] Gena, C. A User-Centered Approach for Adaptive Systems.

[5] Totterdell, P., Boyle, E. The Evaluation of Adaptive Systems. In: Adaptive User Interfaces. London: Academic Press, 1990, pp. 161-194.

[6] Brusilovsky, P. The Benefits of Layered Evaluation of Adaptive Applications and Services.

[7] Dix, A., Finlay, J., Abowd, G., Beale, R. Human-Computer Interaction, 2nd edn. Englewood Cliffs, NJ: Prentice-Hall, 1998.

[8] Preece, J., Rogers, Y., Sharp, H. Interaction Design: Beyond Human-Computer Interaction. Wiley & Sons, 2002.

[9] Fink, J., Kobsa, A. A Review and Analysis of Commercial User Modeling Servers for Personalization on the World Wide Web. User Modeling and User-Adapted Interaction (UMUAI), Kluwer Academic Publishers, Vol. 10, No. 2-3, 2000, pp. 209-249.
[10] Brusilovsky, P., Peylo, C. Adaptive and Intelligent Web-based Educational Systems. User Modeling and User-Adapted Interaction 11, Kluwer Academic Publishers, 2001, pp. 87-110.
