authors: Kruger, Rendani; Brosens, Jacques; Hattingh, Marie
title: A Methodology to Compare the Usability of Information Systems
date: 2020-03-10
journal: Responsible Design, Implementation and Use of Information and Communication Technology
DOI: 10.1007/978-3-030-45002-1_39

The usability of customer-facing software interfaces is a source of competitive advantage for organisations. The usability of systems has also been shown to encourage the effective and efficient completion of tasks and, in consequence, operations. Furthermore, competitive analysis of the usability of software products has been shown to be a useful tool in the adoption of the user-centred design philosophy within organisations. However, adoption of usability evaluation remains low due to a lack of methodologies to support organisations in their endeavours to achieve better usability. Therefore, the purpose of this study is to present a methodology to compare the usability of information systems. By using such a methodology, organisations will be able to gauge the standard of the usability of their information systems by comparing it to that of others.

The usability of information systems (IS) has been shown to improve the productivity of employees. Furthermore, when customers need to use information systems, bad usability leads to a lack of loyalty, the desertion of tasks (such as cart abandonment) and a decrease in satisfaction [1, 2]. Market research suggests that poor usability is the biggest reason why mobile applications are rejected by customers [3]. Sykes and Venkatesh [4] explain that the potential of using enterprise software within organisations is limited because many implementations fail due to a lack of usability testing. They suggest that, in addition to testing the usability of customer-facing systems, it is important to examine how people use enterprise software systems to do their work and achieve their goals. It is vital that the usability of systems, such as mobile applications and business intelligence systems, remains optimal for organisations to gain and maintain a competitive advantage [5]. For example, a usable system may lead to a reduction in system transaction time, which could have direct cost benefits [6]. Given the importance of usability, usability evaluations can be used to test how well people use IS and have great potential to improve the usability of systems [7, 8]. Nielsen and Gilutz [9] explain that the main benefits of usability evaluations are:
1. Reduced training costs in relation to the information system.
2. Limited user investment risk: the user makes fewer mistakes and completes tasks more effectively when the usability of an information system is optimal.
3. Enhanced performance of users completing a task.
4. Enhanced user efficiency and satisfaction.
5. Lower personnel costs for organisations because computer-based tasks are completed more effectively and efficiently by employees.
6. Reduced software development cost and a shorter development lifecycle because users are more satisfied with systems.
7. Easier-to-use online interfaces for enhanced customer interactions.
8. Improved competitive advantage.
One method organisations use to achieve a competitive advantage is benchmarking.
Benchmarking is important because it allows organisations to set standards, measure the effectiveness of operations and processes, and analyse competitive advantage [10]. Organisations often conduct comparative studies to benchmark [11]. Comparative studies may be conducted at the inception of an organisation, during its operations and at its closure. Stakeholders in an organisation may have various questions that could be answered through benchmarking. These include why some competing organisations are doing better than others, what the strengths and weaknesses of an organisation are, and why certain organisations may easily mitigate threats and take advantage of opportunities [12-14]. Similar questions could be asked about the organisation's competition, and this may enable the organisation posing the questions to gain competitive advantage [12-14]. A typical organisation may compare its current manner of operations to alternative ways in which these operations could be conducted. If the alternative promises to be an improvement and it is feasible to implement the better method of operations, the organisation may do so. This may have a variety of influences on that organisation, including greater profits or business growth [15]. It is challenging for organisations to determine the value they will derive from making a change to an IS or its environment [13]. Thus, a methodology to compare the usability of ISs with a view to obtaining a competitive advantage may be useful.

A literature search yielded no research regarding methods to compare the usability of information systems within a competitive environment. A/B testing was the closest result. A/B testing, also called split testing or bucket testing, is a method of comparing different versions of an information system to determine the effectiveness of a change or the effectiveness of the operations of a development team [16]. A methodology is a system of methods used in a particular activity. For example, in software development, a general definition of a methodology is: "a recommended series of steps and procedures to be followed during the development of an IS" [17]. The goal of the research presented in this paper was to create a series of methods that practitioners can use to compare the usability of information systems. Therefore, this paper presents a methodology to compare the usability of two or more corresponding IS (within competitive environments). We used a design-based approach [18] to develop a Comparative Inter-Organisational Usability Evaluation (CIUE) methodology. CIUEs can be used to (1) compare the usability of user interfaces (UIs), (2) work towards the benchmarking of the usability of user interfaces, (3) create a means to convince stakeholders of IS to invest in the application of User Centred Design (UCD) in systems development, (4) gauge the value of an investment made in usability enhancements, and (5) encourage the use of usability enhancement mechanisms to gain competitive advantage through the user interfaces of IS. The CIUE is presented by discussing the research context in the next section, followed by a section on the method adopted to create the CIUE, then a section that discusses the results of applying the method and, finally, the CIUE itself. Possible future research is presented in the concluding section.

The main theoretical concepts that form the basis of this research are usability, UCD, usability evaluation and competitive analysis.
We begin our contextual discussion by defining these concepts and showing how they relate to each other. We also delve into the lack of adoption of UCD practices by organisations. Since the outcome of this research is a usability evaluation methodology, we provide a critical overview of existing usability evaluation techniques and explain how these informed our CIUE. We further discuss comparative evaluation and benchmarking to substantiate our preference for comparative usability testing above single-system evaluations.

Usability relates to the effectiveness, efficiency and satisfaction with which users achieve goals using computer systems [19]. Usability evaluations examine the quality of computer interfaces, particularly regarding how easy they are to use [8]. The improved usability of an IS may reduce the anxiety and fear associated with computer use, improve learnability, direct users more concisely and improve the navigation of systems [6]. A user's interaction with an IS is not bound only to that user's interaction with computers, as it may also include interaction with human system actors [20]. The term user interface, however, commonly refers to the technological elements through which users interact with computer systems [21].

UCD is a broad term used to describe the design process in which the end user has an influence on the design [22]. In UCD, the usability, user environment, user opinions, user characteristics, tasks and workflow of products are key considerations in the design process. What may be an effective design for one group of users may be an ineffective design for a different group [23]. UCD addresses these differing user requirements. The involvement of users in influencing the design of IS increases the number of breakthrough ideas in research and development projects [24]. The formation of concepts and scenarios regarding the adoption of IS, which may otherwise not have been thought of, was also improved when UCD was adopted in a number of projects [24]. Despite its clear benefits for the improvement of the user experience of IS, the adoption of UCD practices in organisations has been limited [25, 26]. This has been attributed to the fact that the business potential of an IS and its technological requirements are considered before users are attended to [25, 26]. Venturi et al. [27] showed that when competitive analysis is done on the usability of software products, it may increase the adoption of UCD. There is a strong emphasis on the usability of information systems when UCD is adopted. Some of the basic UCD principles that relate to usability are [28]:
1. Make it easy to determine which actions are possible for a user at any time.
2. The user should be able to easily determine the current state of task completion.
3. It should be intuitive to follow the mappings between intentions and the required actions, between actions and the resulting effects, and between the information that is visible and the system state.
These principles put the user at the centre of the design process [28]. UCD is often regarded as a philosophy of IS design as it influences the entire design process.
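The usability attributes defined at the start of this section, effectiveness, efficiency and satisfaction, are commonly operationalised with simple objective measures. The sketch below illustrates one such operationalisation for the first two attributes, assuming the raw observations are per-participant completion flags and task times; the function names, data values and the specific formulas (completion rate and time-based efficiency) are illustrative assumptions and are not prescribed by this paper.

```python
from statistics import mean

def effectiveness(completed_flags):
    """Effectiveness as the task completion rate (0..1)."""
    return sum(completed_flags) / len(completed_flags)

def time_based_efficiency(completed_flags, task_times_s):
    """Efficiency as goals achieved per unit time, averaged over participants."""
    rates = [c / t for c, t in zip(completed_flags, task_times_s)]
    return mean(rates)  # successful tasks per second

# Example observations for one task on one system (illustrative values):
completed = [1, 1, 0, 1, 1]                  # 1 = task completed, 0 = abandoned
times_s = [42.0, 55.5, 60.0, 38.2, 47.9]     # time on task in seconds

print(f"Effectiveness: {effectiveness(completed):.2f}")
print(f"Time-based efficiency: {time_based_efficiency(completed, times_s):.3f} tasks/s")
```

Satisfaction, the third attribute, is typically captured separately, for example through questionnaires, and is therefore not shown here.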
This section presents a summary of the most widely used usability evaluation methods. The discussions to follow informed the development of the CIUE methodology. We look at the advantages and disadvantages of the different methods and consider whether they are appropriate for use in comparative analysis. The main advantages and disadvantages of the various methods to evaluate usability are summarised in Table 1. A goal of the methodology is to use objective data; therefore, focus groups and heuristic evaluations are not suitable for use in CIUEs. Furthermore, the variable nature of the environment of remote usability evaluation renders it inappropriate for use during CIUEs, because it is impossible to account for all the variables in such unknown and vastly different environments. Controlled environment evaluations are more suited to CIUEs because the variability in data and environment can be controlled. As such, the focus can be placed on the differences between the usability of the competing computer systems. This can also be optimised by using specialised equipment, which, in turn, increases the value and quality of the comparison. The extensive range of indicators such equipment provides reduces the need for possibly subjective data such as expert recommendations or user opinions.

Table 1. Advantages and disadvantages of usability evaluation methods (excerpt). Controlled environment evaluation [39-41]: advantage, objective quantitative data; disadvantage, the environment is not always realistic, i.e. as it would be in the real world. Other entries note that context is often lost and that there is typically no interaction with users.

Benchmarking and comparative evaluations have often been used to identify an entity's strengths and weaknesses [42]. Single-system evaluations tend to lack data to show quantitatively how important a system change will be; this can be addressed by CIUEs.

The study followed the Design Science Research (DSR) approach proposed by Kuechler and Vaishnavi [43]. The study followed the five steps prescribed by this approach, as outlined in Fig. 1. The first step was the "awareness of the problem" step, which was informed by literature analysis and industry requests for CIUEs. The second step was the suggestion step, which was done abductively with the use of peer-reviewed literature regarding usability and benchmarking. The first version of the CIUE methodology was applied in step three by evaluating and comparing three mobile communications websites. The CIUE methodology was then refined and evaluated in step four through further application in the aviation and medical insurance sectors, and through expert consultations, both with the organisations involved in the study and with academia. This paper, however, only reports on the final step of the DSR approach, which is the presentation of the artefact: a methodology to conduct CIUEs. The earlier phases were reported on in [44].

The contribution of the research study is a methodology to conduct Comparative Inter-Organisational Usability Evaluations (CIUEs). This section presents the final CIUE methodology in terms of its underlying philosophy, strategies, principles and procedures. The components of the CIUE are depicted in Fig. 2. User Centred Design (UCD) is a design process in which the end users influence how an IS is designed [22]. To follow UCD, the user is placed at the centre of the design process [45]. UCD can be regarded as the philosophy that underlies a CIUE. UCD is about optimising the experience that a user will have when using a product by involving the user in the design process, rather than expecting the user to change their behaviour or attitude to accommodate the product [45]. The goal of a CIUE is ultimately to improve the usability of an IS by showing that it is not of the same standard as that of a competing IS, from the user's point of view.
Fig. 1. The DSR process followed. Step 1: Awareness of the problem (no methodology for CIUEs). Step 2: Suggestion (a methodology for CIUEs, informed by the literature review and introduction). Step 3: Development (pilot case study in which the tentative artefact was applied in the mobile communications industry). Step 4: Evaluation (first evaluation case study; application in the medical insurance industry). Step 5: Conclusion (presentation of the artefact).

A usability evaluation is essentially a research project that has to be designed in a way that will ensure valid results. As in research, the investigator has to make choices with regard to data collection and analysis methods and the type of data that would be suitable for a specific evaluation case. The investigative approach that we recommend for CIUEs is controlled user observation. Controlling the observation requires some form of experimental design. The observation is controlled to minimise the influence of the environment on data collection. Variability in the data should be focused on the differences between the interfaces being compared rather than on contextual factors. Quantitative or qualitative methods could be used to collect and analyse data that are relevant to the usability of a system interface [46]. A goal of our methodology is objectivity. Data collected through qualitative methods such as expert recommendations can, for example, be doubted due to their perceived subjective nature [47]. CIUEs therefore follow a quantitative approach in the collection and analysis of usability data. The epistemological assumption is that an objective reality exists in which the usability of IS can be measured and analysed deductively [46]. The data collected during observation should therefore be suitable for quantitative analysis.

In addition to the principles of UCD, we have identified the following principles specific to CIUEs:
1. IS evaluated in a CIUE should not contain any system breaks. A system break will influence the user's interaction with the system, and the value of many metrics (e.g. time taken to complete the task) will be meaningless for comparison purposes.
2. The environment where the CIUE is conducted should be controlled to minimise contextual or environmental influences on the outcome of the evaluation (e.g. the lighting in the evaluation space should be consistent for the duration of the tests and across all tests).
3. Gather data for as many different metrics as possible. This way one set of data can confirm other sets of data, making the conclusions stronger, or one set can contradict another set, showing that further evidence is required (a minimal data-capture sketch illustrating this follows the list).
4. The tasks that the users will perform on the different systems should be similar with regard to the outcome for the user.
5. The number of participants should be the same for each system being evaluated.
6. The less homogeneous the group of participants is, the more participants are required. This is an attempt to minimise the influence that the characteristics of the participants have on the comparison.
7. The evaluations should be done in the same way for all the participants. The procedure followed, the evaluation environment and the tasks that participants are requested to complete should be the same for all participants and all systems.
8. The results of the CIUE should be presented in a manner that is easily understandable and unambiguous.
9. A CIUE should be done at a point in time. The versions of the systems being evaluated should not change.
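To make principles 3 and 5 concrete, the following is a minimal sketch of how multiple objective metrics could be recorded per participant per system, and how the equal-group-size requirement could be checked before analysis. The record fields, metric names and system names are illustrative assumptions rather than part of the published methodology.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One participant's measurements for one task on one system."""
    system: str            # which organisation's IS was evaluated
    participant_id: str
    task: str
    completed: bool        # task completed without assistance
    time_on_task_s: float  # seconds from task start to completion or abandonment
    error_count: int       # number of errors observed
    fixation_pct: float    # % of eye-tracking fixations captured (data-quality metric)

def check_equal_groups(observations):
    """Principle 5: every system must have the same number of participants."""
    participants_per_system = {}
    for obs in observations:
        participants_per_system.setdefault(obs.system, set()).add(obs.participant_id)
    sizes = {system: len(p) for system, p in participants_per_system.items()}
    if len(set(sizes.values())) > 1:
        raise ValueError(f"Unequal group sizes per system: {sizes}")
    return sizes

# Example usage with two hypothetical competing systems:
data = [
    Observation("SystemA", "P01", "buy_ticket", True, 42.0, 1, 92.0),
    Observation("SystemB", "P02", "buy_ticket", False, 60.0, 3, 85.0),
]
print(check_equal_groups(data))  # {'SystemA': 1, 'SystemB': 1}
```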
This section discusses the steps and procedures to be followed when conducting a CIUE. The broader steps are planning, setting up the environment, conducting the usability evaluations, exporting the data, performing the analysis and presenting the results. We elaborate on each step below. Figure 3 provides an overview of the CIUE process.

Step 1: Plan the CIUE. The following aspects need to be considered when planning the CIUE:

Timing. If the lifetime of the product is limited, the CIUE may not be useful. Furthermore, if the organisation for which the evaluation is being done cannot give priority to the CIUE, it may not be worthwhile.

Data to be Collected. Depending on the nature of the systems included in the CIUE, the evaluator should identify metrics that will best serve the comparative evaluation. The quality of the data is also important; therefore, criteria that will result in the elimination of data should be formulated beforehand (e.g. if eye tracking is used, the required percentage of fixations picked up during the task can be set to 80%, eliminating the data of all users with less than 80% of fixations recorded). The cost of data collection and data analysis should also be considered.

The Candidate Organisations. The amount of evaluation resources required for a CIUE increases with the number of organisations, but the CIUE results could be more reliable when more organisations are included. This is because the performance of a system becomes clearer as the number of systems it is compared to increases. If the data is going to be used to determine benchmarks for the future, the number of organisations should be maximised to increase the reliability of the benchmarks [48].

Recruit Participants for the Study. The number of participants depends on the diversity of the user population of the systems involved. If a group of similar participants is selected, then fewer participants are required. If the user population is diverse, a larger number of participants is required. The advantage of a more diverse group of participants is that the results of the study are valid for a larger part of the total population of possible users. Generally, a sample size of 16 ± 4 is accepted as adequate for usability studies [49]. To make sure all participants have had the same level of exposure to the systems being tested, select people who have not used the systems before. Divide the participants into as many groups as there are organisations included in the study. These groups have to be equal in size (a sketch of this screening and group division follows the planning step).

Location. The location should be easily accessible for the participants and easy to control in terms of lighting, sound, smell, available hardware, setup of tools, cleanliness, et cetera.

The Tasks. The tasks should have the same outcome for the user if performed on the different systems. For example, if a participant is going to buy an airline ticket on one of the systems, then the same should be done on the rest of the systems involved in the CIUE. The organisation that requested the CIUE should provide input into the task selection.

Data Collection Methods. Make sure the required recording devices are available and working. If specialised methods such as eye tracking will be used, an expert in that method should collect the data. This could influence the cost of a CIUE.

Data Analysis and Reporting. A complete written report should be provided, but we recommend that the results also be presented in a face-to-face meeting so that the evaluators can answer questions that arise.
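The data-quality criterion and the group division described under Step 1 are straightforward to mechanise. Below is a minimal sketch, assuming a simple participant list and a configurable fixation threshold; the 80% value is taken from the example above, while the function and variable names are illustrative assumptions.

```python
import random

FIXATION_THRESHOLD_PCT = 80.0  # data-quality criterion decided during planning

def screen_recordings(recordings):
    """Keep only recordings whose captured fixation percentage meets the threshold."""
    return [r for r in recordings if r["fixation_pct"] >= FIXATION_THRESHOLD_PCT]

def assign_groups(participants, organisations, seed=42):
    """Randomly divide participants into equal-size groups, one per organisation."""
    if len(participants) % len(organisations) != 0:
        raise ValueError("Participant count must divide evenly across organisations")
    shuffled = participants[:]
    random.Random(seed).shuffle(shuffled)
    group_size = len(shuffled) // len(organisations)
    return {
        org: shuffled[i * group_size:(i + 1) * group_size]
        for i, org in enumerate(organisations)
    }

# Example: 16 participants (within the 16 +/- 4 guideline) divided across two organisations.
participants = [f"P{i:02d}" for i in range(1, 17)]
groups = assign_groups(participants, ["OrgA", "OrgB"])
print({org: len(members) for org, members in groups.items()})  # {'OrgA': 8, 'OrgB': 8}
```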
Step 2: Conduct the usability evaluations for each IS. As stated in the section regarding usability evaluation techniques, the controlled lab technique is the most suitable for CIUEs because the focus can be placed on the comparison. In the case studies we used to develop the methodology for conducting CIUEs, we used an eye tracker to maximise the amount of data collected, but the methodology is not prescriptive in this regard, as a CIUE can be done without an eye tracker. Ensure that the data is organised to avoid confusing the results of the CIUE.

Step 3: Extract the data from the recording tools and import the data into the analysis tools. Ensure that the data is strictly organised to avoid mistakes in the results of the CIUE. For example, first create separate spreadsheets for each organisation, then use a separate spreadsheet to compare all the data. Data should preferably be stored in the cloud and backups should be made to avoid data loss.

Step 4: Perform the analysis of the data by comparing similar metrics. Stick to objective, quantitative analysis. Create graphs that juxtapose the results of the different organisations for easy comparison (a sketch of such an analysis follows these steps).

Step 5: Present the outcomes to the parties that requested the CIUE. This can be done using reports or presentations.
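Steps 3 and 4 amount to a small, repeatable analysis pipeline. The sketch below shows one way it could look, assuming the per-organisation data has already been exported to CSV files with columns such as completed and time_on_task_s; the file names, column names and libraries (pandas, matplotlib) are assumptions for illustration rather than part of the methodology.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Step 3: one exported file per organisation, kept strictly separate.
files = {"OrgA": "orga_results.csv", "OrgB": "orgb_results.csv"}
frames = []
for org, path in files.items():
    df = pd.read_csv(path)  # expected columns: participant_id, task, completed, time_on_task_s
    df["organisation"] = org
    frames.append(df)
combined = pd.concat(frames, ignore_index=True)

# Step 4: compare similar, objective metrics across organisations.
summary = combined.groupby("organisation").agg(
    completion_rate=("completed", "mean"),
    mean_time_s=("time_on_task_s", "mean"),
)
print(summary)

# Juxtapose the results in simple bar charts for easy comparison.
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
summary["completion_rate"].plot.bar(ax=axes[0], title="Task completion rate")
summary["mean_time_s"].plot.bar(ax=axes[1], title="Mean time on task (s)")
plt.tight_layout()
plt.savefig("ciue_comparison.png")
```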
We developed a methodology to conduct CIUEs and presented the resulting methodology in this article. We used DSR with a multiple-case study and interviews. The distinguishing characteristic of the CIUE methodology is that it should be applied in a manner that emphasises the comparison being conducted, and it was therefore suggested that a CIUE be conducted in a controlled environment. Furthermore, it was found that CIUEs can be useful for selecting IS products based on usability, for working towards the benchmarking of the usability of user interfaces, for gauging the value of an investment made in usability enhancements, and for encouraging the use of usability enhancement mechanisms to gain competitive advantage. Since UCD is the underlying philosophy of the CIUE methodology, its application may be useful in encouraging the adoption of UCD in organisations. CIUEs can also aid the development of benchmarks. During our research, various ways to use CIUEs were identified that may be elaborated on in future. Firstly, research can be done into how to conduct longitudinal CIUEs, where changes in the comparison are measured over time. This would entail comparing the usability of the systems of competing organisations over time to measure, for example, the effectiveness of changes made to improve usability. The application of CIUEs in the selection of software products could also be explored. The development of usability benchmarks using an approach similar to CIUEs may also be useful.

References
Gen Y customer loyalty in online shopping: an integrated model of trust, user experience and branding
The role of satisfaction and website usability in developing customer loyalty and positive word-of-mouth in the e-banking services
Mobile application usability: conceptualization and instrument development
Explaining post-implementation employee system use and job performance: impacts of the content and source of social network ties
Business intelligence systems use in performance measurement capabilities: implications for enhanced competitive advantage
Designing the User Interface: Strategies for Effective Human-Computer Interaction
Lessons learned in usability consulting
Usability 101: introduction to usability
Return on investment (ROI) for usability
Modelling Techniques for Business Process Re-engineering and Benchmarking
Entrepreneurship: starting and operating a small business
Benchmarking information technology practices in small firms
Strategic management of information systems
Entrepreneurship as the basic element for the successful employment of benchmarking and business innovations
From infrastructure to culture: A/B testing challenges in large scale social networks
Methodologies for developing information systems: a historical perspective
Design Science Research Methods and Patterns: Innovating Information and Communication Technology
Extending quality in use to provide a framework for usability measurement
Principles of Information Systems
Management information systems 13e
User-centered design
Designing for digital transformation: lessons for information systems research from the study of ICT and societal challenges
Living lab research landscape: from user centred design and user experience towards user cocreation. Presented at the First European Summer School "Living Labs"
The user experience landscape of South Africa
Applying user centred and participatory design approaches to commercial product development
People, organizations, and processes: an inquiry into the adoption of user-centered design in industry
Design principles for cognitive artifacts
Focus Groups: A Practical Guide for Applied Research
Integrating agile and user-centered design: a systematic mapping and review of evaluation and validation studies of Agile-UX
The focus groups in social research: advantages and disadvantages
Evaluating the usability of transactional web sites. Presented at the Conference Companion on Human Factors in Computing Systems
Interaction Design: Beyond Human-Computer Interaction
Heuristic evaluation of a telehealth system from the Danish TeleCare North Trial
Heuristic evaluation of user interfaces
A comparative study of synchronous and asynchronous remote usability testing methods
Usability testing: too early? Too much talking? Too many problems?
Usability testing: a review of some methodological and technical aspects of the method
Research Methods in Human-Computer Interaction
Usability Testing Essentials: Ready, Set… Test!
Benchmarking deep reinforcement learning for continuous control
On theory development in design science research: anatomy of a research project
The value of comparative usability and UX evaluation for e-commerce organisations
User centered system design
Business Research Methods
To measure or not to measure UX: an interview study. Presented at the I-UxSED
Quantitative Models for Performance Evaluation and Benchmarking: Data Envelopment Analysis with Spreadsheets
How many participants are really enough for usability studies?