Quality enhancement on e-learning
E.S.I. Ossiannilsson
Department of Industrial Engineering and Management, Oulu University, Oulu, Finland

Abstract
Purpose – Benchmarking, a method for quality assurance, has not been very commonly used in higher education with regard to e-learning. Today, e-learning is an integral part of higher education, and so it should also be an integral part of quality assurance systems. However, quality indicators, benchmarks and critical success factors for e-learning have not been taken seriously into consideration, nor incorporated into ordinary national or international quality assurance systems. The purpose of this paper is to describe how the European Association of Distance Teaching Universities (EADTU) initiated and developed E-xcellence+, a quality benchmarking assessment method and tool.
Design/methodology/approach – This paper, which is part of a larger research project on European benchmarking, focuses on experiences from universities taking part in the E-xcellence+ valorisation process.
Findings – The results showed that benchmarking is a powerful tool to support improved governance and management in higher education, in alignment with national and international quality agencies. The tool can be used for quality improvements in teaching and learning. Additionally, the results showed critical success issues for e-learning.
Originality/value – This original paper reports on a Europe-wide study examining benchmarking of e-learning and presents suggestions for tackling quality issues.
Keywords Europe, Higher education, Universities, Distance learning, Benchmarking, E-learning, Quality assurance, Critical success factors, Quality enhancement
Paper type Research paper

1. Introduction
Benchmarking as a method for quality enhancement has until now not been commonly used in higher education (Moriarty and Smallman, 2009), and especially not with regard to e-learning (Ossiannilsson, 2010a). Quality assurance, quality indicators, benchmarks and critical success factors for e-learning have not been taken seriously into account in regular quality assurance within higher education (The Swedish National Agency for Higher Education (NAHE), 2008; Ossiannilsson, 2011; Ubachs, 2009), and the underlying quality concepts have not been clearly conceptualised. The quality of e-learning has been discussed within quality assurance methods, but according to an international study by NAHE (2008) e-learning has been considered and managed in a rather disconnected way. Few methods have so far focused on parameters of quality assurance governing e-learning, and criteria based on ease of access, new forms of interaction, flexibility, accessibility and personalisation, and other pedagogical aspects relevant for e-learning are missing. Additionally, there is a lack of experience and of theoretical frameworks concerning the values and impacts of benchmarking e-learning in higher education (Ossiannilsson, 2010a, 2011; Bacsich, 2005, 2009; Schreurs, 2009). There is thus a clear need for an enhanced understanding of how benchmarking can be used in new contexts, focusing particularly on values and impacts for higher education institutions and their stakeholders participating in benchmarking exercises (Ossiannilsson, 2010a; Ossiannilsson and Landgren, 2011).
Recently, one benchmarking initiative at the European level was conducted by the European Association of Distance Teaching Universities (EADTU). Under the e-learning programme 2004, the E-xcellence benchmarking project was carried out by a consortium of European countries active in lifelong, open and flexible learning, drawing in addition on expertise in quality assurance and accreditation processes from members of the European Association for Quality Assurance in Higher Education (ENQA), in cooperation with the European University Association (EUA) and the United Nations Educational, Scientific and Cultural Organisation (UNESCO). The intention with E-xcellence was to supplement existing quality assurance systems on e-learning-specific issues, not to interfere with ordinary quality assurance systems in higher education (Ubachs, 2009).

This paper focuses on the experiences of European universities that participated in local seminars and took part in the QuickScan process within the framework of E-xcellence+ by EADTU. In ongoing research by Ossiannilsson (2010b, 2011) and Ossiannilsson and Landgren (2011), two recently completed European benchmarking initiatives on e-learning in 2008/2009 are the centre of attention. One, which is the one elaborated on in this paper, was carried out by EADTU, E-xcellence+ (Ubachs, 2009); the other was conducted by the European Centre for Strategic Management of Universities (ESMU), in cooperation with EADTU, as the ESMU e-learning benchmarking exercise 2009 (Ossiannilsson, 2010b, 2011; Ossiannilsson and Landgren, 2011). The paper does not focus on single benchmarks, indicators, critical success factors, or the benchmarking methodology as such, but on values and impacts for the stakeholders that participated in the benchmarking exercises. The research concerns aspects of value and impact and aims to be innovative with regard to new concepts of benchmarking e-learning in higher education.

2. Benchmarking e-learning
Today and in the years ahead, universities in the twenty-first century face new challenges: they need to take action to be competitive not just in educational, social, managerial and technological respects, but also to work from global perspectives, to be drivers of innovation and to contribute to sustainable development (Ehlers and Pawlowski, 2006; Ehlers and Schneckenberg, 2010; Ossiannilsson, 2010a, 2011; Ossiannilsson and Landgren, 2011). Issues such as demonstrating respect for individual students and their learning processes, accountability for the use of funding, both public and private, the quality of education and research, and contributing to economic growth and sustainability have thus become more important (Ehlers and Pawlowski, 2006; Ehlers and Schneckenberg, 2010; Ubachs, 2009).
Higher education institutions have to face increased demands for enhanced learning through new technology: digital skills in education, learning for the future in a global context with sustainable dimensions, and the integration of technology into all aspects of their strategic planning to ensure their survival in the years to come. The survey by NAHE (2008) emphasised that e-learning must be assessed from a holistic point of view and argued that:

Existing methods of quality assessment need to be adapted. There is a need that quality aspects for e-learning are integrated into existing quality assurance systems. Internal competence and the provision of information in the e-learning area need to be guaranteed. Internal working methods need to be adapted to the special conditions which apply for the assessment of borderless education (NAHE, 2008, p. 10).

Research and experience show that the knowledge gaps concerning how e-learning can be embedded and integrated in ordinary quality assurance are both explicit and demanding (Ossiannilsson and Landgren, 2011).

Benchmarking is a rather new phenomenon in higher education (Ubachs, 2009; Ossiannilsson, 2010b; Moriarty, 2008; ESMU, 2008a, b, 2010). The definition of benchmarking is, on the other hand, not very explicit or clear (ReVica, 2011). ENQA defined benchmarking as "[…] a learning process, which requires trust, understanding, selecting and adapting good practices in order to improve" (ESMU, 2008a, p. 7). The locus of benchmarking lies between the current and desirable states of affairs, and it contributes to the transformation process that realises these improvements (Moriarty, 2008; Moriarty and Smallman, 2009). Benchmarking may identify the changes necessary to achieve the aims. The concept of change seems to be implicit in benchmarking: a change consistent with benchmarking-directed improvement processes. Benchmarking is not only about change, but also about improvement, or as Harrington summarised already in 1995: "all improvement is change, but not all change is improvement" (Moriarty, 2008, p. 29). Moriarty elaborated this further and stated that benchmarking is not just about change; it is more about identification and successful implementation. ESMU (2008a, b, 2010) emphasises that benchmarking is an ongoing process to improve the performance of higher education institutions. An extended literature review on benchmarking was carried out by ESMU (2008b), aiming to clarify the understanding of the concept. One of the underlying purposes of the study was also to improve the practice of benchmarking in higher education, as a powerful tool to support improved governance and management. According to ESMU (2008b) there are at least ten good reasons to use benchmarking as a management tool in higher education: to self-assess institutions; to gain a better understanding of processes; to measure, compare and discover new ideas; to obtain data to support decision making; to identify targets for improvement; to strengthen institutional identity; to support strategy formulation and implementation; to enhance reputation; to respond to national performance indicators and benchmarks; and to set new standards for the sector in the context of higher education reforms.
ESMU (2008b, p. 16) defined benchmarking as an "[…] internal organisational process aiming to improve the organization's performance by learning about possible improvements of its primary and/or support processes by looking at these processes in other, better-performing organizations".

E-learning is not very easy to define either. Most often the concept of e-learning covers both technical and digital means, but it also covers e-learning as learning, and learning through e-learning (Ossiannilsson, 2010b). The concept is used to cover a wide set of applications, pedagogical processes and forms of learning supported by information and communication technology, such as web-based learning, computer-based learning, virtual classrooms and digital collaboration, with the added value of increased accessibility, flexibility and interactivity. McLoughlin and Lee (2008) stress the "three P's of pedagogy" for the networked society: personalisation, participation and productivity. Bonk (2009) shows how technology has transformed educational opportunities for learners, and how innovators from the worlds of technology and education reveal the power of opening up the world of learning. New conceptualisations of e-learning in the twenty-first century will change the scene (Ehlers and Pawlowski, 2006; Ehlers and Schneckenberg, 2010; Ossiannilsson and Landgren, 2011) and may have an impact on how benchmarking of e-learning in higher education will be conducted in the future, and on what kind of quality issues will matter. In a comprehensive literature review by Ossiannilsson (2010a), the context of benchmarking e-learning in higher education was explored. As the literature showed, the trend today is that e-learning is more and more embedded in universities' strategies for learning and teaching (Ehlers and Pawlowski, 2006; Ehlers and Schneckenberg, 2010; NAHE, 2008; Ossiannilsson and Landgren, 2011; Ubachs, 2009). Enhancing learning, teaching and assessment by the use of technology is one of a number of ways in which institutions can address their own strategic missions.

3. Material and methods
E-xcellence+
The EADTU's E-xcellence instrument was developed to complement existing quality assurance systems in higher education, not to interfere with current systems (Ubachs, 2009). The quality benchmarking assessment instrument covered pedagogical, organisational and technical frameworks, with special attention to accessibility, flexibility, interactivity and personalisation. The instrument was based on three elements:

- first, a manual on quality assurance covering 33 benchmarks on e-learning, with indicators related to the benchmarks, guidance for improvement and references to E-xcellence-level performance. The benchmarks were grouped into three areas covering six fields in total, namely strategic management; products (curriculum design, course design, course delivery); and services (staff and student support), as illustrated in Figure 1;
- second, assessors' notes, providing a more detailed description of the issues and approaches; and
- finally, the tools, i.e. the online instrument.

The QuickScan tool, which is based on E-xcellence-level benchmarks and is independent of particular institutional or national systems, is supplemented by a full online manual; all of this has been fully available on a web portal launched in 2007 (Ubachs, 2009).

Figure 1. The three main areas for the benchmarks and indicators according to E-xcellence+: management, products and services (Source: Ossiannilsson and Landgren, 2011)
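To make the structure of the instrument easier to picture, the following is a minimal sketch, in Python, of how the three areas, the six benchmark fields and a QuickScan-style self-rating with its feedback rule (the four answer levels and the restriction of feedback to the two lowest levels, as described in the next section) could be represented. All names here (Rating, Benchmark, AREAS, quickscan_feedback) are hypothetical illustrations of the description in this paper, not part of the actual E-xcellence tooling.

```python
from dataclasses import dataclass
from enum import Enum


class Rating(Enum):
    """The four QuickScan answer levels described in the paper (illustrative names)."""
    NOT_ADEQUATE = 1
    PARTIALLY_ADEQUATE = 2
    LARGELY_ADEQUATE = 3
    FULLY_ADEQUATE = 4


@dataclass(frozen=True)
class Benchmark:
    """One of the 33 benchmarks; the attributes are illustrative only."""
    area: str       # "management", "products" or "services"
    field: str      # e.g. "course design"
    statement: str  # the benchmark text
    guidance: str   # improvement guidance drawn from the manual/assessors' notes


# The three areas and six fields named in the paper.
AREAS = {
    "management": ["strategic management"],
    "products": ["curriculum design", "course design", "course delivery"],
    "services": ["staff support", "student support"],
}


def quickscan_feedback(answers: dict[Benchmark, Rating]) -> list[str]:
    """Return improvement guidance only for the two lowest answer levels,
    mirroring the feedback behaviour the paper reports for the online QuickScan."""
    flagged = {Rating.NOT_ADEQUATE, Rating.PARTIALLY_ADEQUATE}
    return [
        f"{b.area} / {b.field}: {b.guidance}"
        for b, r in answers.items()
        if r in flagged
    ]
```

In such a sketch, a completed QuickScan would simply map each benchmark to one of the four ratings, and the generated feedback would echo the manual's guidance for the flagged items only.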
During the development process of E-xcellence+, stakeholders and policymakers were involved besides the partnership within the project. The benchmarking can be accomplished as a so-called online QuickScan, as a Full Assessment with evidence, or both. The QuickScan is a simplified version of the Full Assessment tool, which in turn is a comprehensive tool. The online QuickScan offers the opportunity to comment on each specific issue by indicating: not adequate, partially adequate, largely adequate or fully adequate. After a completed online QuickScan, feedback is immediately generated based on the manual and the assessors' notes and e-mailed back to the responsible respondent. Feedback is, however, only given for the answers "not adequate" and "partially adequate". The QuickScan approach was greatly valued and led to commitment during the work. With the Full Assessment, the instrument also offers the opportunity to comment on each specific issue and to refer to documents, references or links that can serve as evidence for that specific aspect of e-learning. In 2007, EUA highlighted the initiative as follows:

By modelling the E-xcellence tool on the needs and interests of institution and giving them a choice of modes with different degrees of intensity, the tool incorporates what has been endorsed on the European level as good practice in external quality assurance processes. Moreover, by developing a set of benchmarks for the European level to build its tool on, the E-xcellence project has contributed toward building a European dimension for the specific field of e-learning (Ubachs, 2009, p. 8).

E-xcellence+ became the phase for valorisation of the instrument at local, national and European levels within higher and adult education. Within E-xcellence+, EADTU wanted to broaden the implementation and to receive feedback for enhancing the instrument. The E-xcellence+ consortium consisted of expert representatives from open universities, traditional universities, and assessment and accreditation bodies for higher and adult education. The consortium encompassed 13 countries, with an outreach to the rest of Europe. E-xcellence+ was piloted during 2008/2009 at local seminars, and three universities carried out the Full Assessment, together with site visits and road maps. Several universities carried out the QuickScan. Universities that conducted the Full Assessment, site visits and road maps, and committed themselves to continue benchmarking e-learning every second year, obtained the E-xcellence associated label. EADTU, with its E-xcellence+ initiative, emphasised that any e-learning benchmarking initiative needs to be integrated with, and not interfere with, ordinary quality assessment in higher education institutions (Ubachs, 2009). E-learning courses have, for a long time, been seen as special tracks at many universities. In the 1990s this was probably needed, as the phenomenon and the development of the internet were fairly new.
At the present time, in the twenty-first century, when e-learning is embedded in universities and personalised, interactive and mobile learning, the use of social media and open educational resources (OER) are emphasised, e-learning quality criteria must be integrated into all quality assurance systems, methods and movements, and critical success factors have to be identified within new environments, e.g. social media and OER. This is almost certainly one of the crucial aspects, and one of the benefits, of benchmarking e-learning in higher education.

The QuickScan tool was valorised through the E-xcellence+ project during 2008 and 2009. Introduction and dissemination of the tool were organised through local seminars in 13 European countries. EADTU supported the improvement processes of e-learning through self-assessment, on-site assessment and accreditation, and by embedding the instrument in national and institutional policy frameworks. Five cases out of the 13 universities were, at the time of writing, included in this research.

The cases
In order to explore the complex and multifaceted phenomena in depth, this study used an exploratory multiple case study strategy (Yin, 2003). A mixed-method approach was applied, utilising a combination of quantitative but mainly qualitative data sources and integrated methods for analysing the data (Creswell and Clarke, 2007; Yin, 2003). A case study protocol was worked out for the data procedure (Yin, 2003). The cases for the current study were selected from the local seminars conducted by EADTU at European universities (five out of 13) (see Table I). Data for the cases were collected by the author, assisted by EADTU, in 2009/2010. In this paper, the analyses from the conducted seminars are discussed.

Table I. Universities involved in local seminars, E-xcellence+, by EADTU
University      Number of individuals   Local seminar date
(I) Alpha       15                      13-14 November 2008
(II) Beta       20                      11-12 March 2009
(III) Gamma     10                      20-21 January 2009
(IV) Delta      50                      19-20 February 2009
(V) Epsilon     80                      9-10 March 2009

Data collection, procedure and analysis
Altogether some 175 participants (vice-rectors, management, professors and students) attended the five local seminars at the involved institutions in Europe (explored in this paper) in the dissemination and valorisation phase of E-xcellence+. One of the five had, at the time, also conducted the Full Assessment and site visits and worked out roadmaps. The data were collected mainly through reports from the seminars, but also through questionnaires and interviews following the case study protocol. The data were analysed within a holistic, but also within an embedded, multiple case design (Yin, 2003). Following Yin (2003), the cases were also analysed as cross-cases in order to identify similarities and differences and to provide further insight into the processes and to support generalisation of the case study results.

4. Findings
The questions for the seminars covered areas such as: application, added value, shortcomings, integration, institutional integration, next steps and other issues. In the following, the answers from the five participating institutions, based on cross-case analyses according to the areas mentioned above, are summarised.

Application
The QuickScan was conducted with staff at different levels (vice-rectors, professors, management and students). It was carried out through meetings, seminars, dialogues and questionnaires, both at institutional and at programme level (e.g. Master programme level).

Added value
The institutions indicated that new views and recommendations came out of the assessment for further improvements. They stressed that it was a valuable exercise and process to go through, and they obtained an overview of the performance at programme, faculty or institutional level.
E-xcellence+ allowed the institutions to show their expertise in e-learning more than conventional assessments did. Within the E-xcellence+ dialogues an agenda was initiated for processes of quality enhancement and improvement. Additionally, the need for policy beyond a virtual learning environment was highlighted. As a team approach was necessary for conducting the QuickScan, this also enabled teambuilding at all levels and allowed different stakeholders, from students to management, to take part. A comprehensive assessment approach was made possible at the same time as the tool served as a checklist. The documentation and the internal discussions were described as benefits of high value. All institutions emphasised the power of benchmarking and the internal dialogues initiated through E-xcellence+. Through a guided dialogue the teams obtained a clearer understanding of the opportunity the tool offered for a critical study of the institution's position in relation to other institutions, and they also discovered clearly defined paths of enhancement. It was explicitly expressed that the tool has to be used as a total entity. The benchmarks were relevant for the institutions. Student evaluations were, however, still missing as benchmarks and have to be added to the tool. The tool offered opportunities for different levels of ambition. The fundamental principles were easy to understand for formulating decisions; namely, what is the position now and what are the aims for the future? In addition, what are the central issues in the organisation and what will be the policy outlines? It was highlighted that the tool as such is flexible enough to allow choices but needs fine-tuning. Moreover, it is important to bear in mind that benchmarks can even be pre-selected based on relevance. The tools are improvement tools and not accreditation tools, which is important to bear in mind.

In summary, the respondents saw value in conducting benchmarking of e-learning because it creates transparency, starts and maintains internal dialogues, strengthens teambuilding and develops trust and a culture of scholarship of teaching and learning. A further value was that the benchmarking process also prompted discussions on the meaning and understanding of concepts: e-learning meant different things to different persons and within the teams, and this was accepted among the institutions. Thus, the benchmarks could be understood differently in different contexts.

Shortcomings
Mentioned shortcomings were that the benchmarks were overly dedicated to distance learning institutions. Some institutions expressed that normative definitions should be used. Benchmarks should be in a position to balance the context of the institution. The institutions emphasised that students are not involved explicitly, and should be added in the system, create their own benchmark exercise or be involved with the team. An additional shortcoming was that the QuickScan only provides feedback for answers rated not adequate or partially adequate; users might want feedback on all given answers.
Other shortcomings were that the benchmark formulations were sometimes too general but often also too complex. Interpretations of the benchmarks were sometimes difficult, and sometimes far too many aspects were covered per benchmark. In addition, as the tool is in English, there were both language and linguistic barriers.

Institutional integration
Some institutions said that they operate in accordance with the ENQA standards and therefore have a strong wish to have E-xcellence integrated into or recognised by ENQA. They also stated that it was immediately applicable as a self-assessment tool. In addition, institutions mentioned that it fitted in with the aims of the organisation. On the other hand, the tool needs fine-tuning. It was emphasised that the ambition must be congruent with the ambition of the institution and pursued within a step-by-step approach. Contextualisation is necessary, and the benchmarks should reflect a blended-mode approach to teaching and learning.

Next steps
The next step would be to investigate the integration of the benchmarks into the internal quality assurance processes and systems. All institutions expressed their willingness and their need to work out road maps based on E-xcellence. One of the institutions stated that their national agency for higher education would like to integrate the system; the agency had taken initiatives to develop e-learning criteria itself, but is now inspired by E-xcellence. On the other hand, another institution stated that their national agency for higher education was doubtful about an E-xcellence associated label. Other issues for the next steps were the need to include social media, Web 2.0 and OER in the benchmarks and indicators.

Other issues
As stated above, students' input was missing from the benchmarks and indicators. The tool, as it was at the time, is probably best suited for open universities, and the issues of a blended-mode context are underestimated. Institutions stressed the challenges of incorporating e-learning in ordinary quality assurance processes. The function of the QuickScan was not immediately clear, and there were requests for a guide, e.g. on using the tool on an individual basis, within a team approach, from certain roles within the institution, or by selecting relevant themes. There were even requests for guidelines for different scenarios on how to use the QuickScan, e.g. who is rating and which benchmarks are answered by whom? Feedback options and cultural differences were also emphasised. Better links between the benchmarks and the manual were suggested as well. Recommendations were also made to provide a "light" version vs an advanced version. Issues were raised about language and the interpretation of benchmarks. Some benchmarks were too compact and too complex, and there should be possibilities to give neutral answers. The QuickScan was presented as an assessment, whereas some institutions understood it more as a signal tool for internal use, and thus with no need for any label. Nevertheless, a label is only issued for institutions going through the whole process with Full Assessment, site visits and worked-out roadmaps. The institutions emphasised the discussions about the costs of recognition and, accordingly, the use of the label and its usefulness and sustainability.

In summary, at least five key findings became explicit through the research, at three levels. The values and impact of going through EADTU's benchmarking were expressed within the institutions at all levels.
On the foundational level, dialogue within the institution or the department, teambuilding and transparency were highlighted. On the second level, policy making and decisions, e.g. a policy statement, were emphasised; and finally, on the third level, quality improvement and quality assurance were highlighted as values and impacts of taking part in benchmarking processes (see Figure 2).

Figure 2. Key findings at different levels on the use of the EADTU benchmarking QuickScan tool: dialogue, teambuilding and transparency; policy statement; quality enhancement

5. Discussion
The ten good reasons described by ESMU (2008a) for conducting benchmarking were largely confirmed and verified by the institutions participating in the local seminars. They also emphasised that the challenges for universities in the twenty-first century are to bring together all aspects of e-learning in a holistic framework and to perceive it in a more contextualised manner. That e-learning is nowadays more and more embedded in universities' strategies for learning and teaching is largely a benefit, but what will the consequences be, and how should institutions pay attention to critical success factors, if there are any?

Experiences from EADTU's E-xcellence+ can be expressed as both internal and external outcomes. Internal outcomes were that, within the universities, the individuals conducting the QuickScan remained within the same conceptual framework, which led to trust, transparency, and internal and extended dialogues. External outcomes were described as visibility for stakeholders, students, agencies and the public.

The findings from this study emphasise that benchmarking must always involve the identification of strengths and weaknesses and the gaining of better insight into the institution, with a vision to set targets and benchmarks for improvement and enhancement. Benchmarking requires an explicit focus on continuous improvement and enhancement and on the search for best practices, and it has to be more than just a comparison of statistical data. A benchmarking exercise must always be envisaged as a dynamic exercise with relevant benchmarks, as the aims are to identify good practice that will lead to improvement and to the implementation of changes. Further, benchmarking requires institutional willingness to increase organisational performance, to act as a learning organisation and to review processes on an ongoing basis. In addition, the process as such requires the motivation to search for new practices and the readiness to implement new models of operation. There is a strong need for commitment from the very beginning, at both the individual and the management level, especially if the result of the process demands any overriding changes, and for the implementation process. Moreover, one success factor is the commitment to change. Benchmarking requires institutional strategic development and is based on a continuous, long-term and professional approach.

6. Conclusions
The impression is that issues of constructive alignment, and of benchmarking e-learning in universities according to the mandates of national governments and quality agencies, will change the scenario and be of importance for quality enhancement in the twenty-first century.
This will be owing to changed learning and teaching paradigms, with issues such as blended-mode approaches, personalisation, participation, collaborative, ubiquitous and open learning, OER and social media, and to the changed and new demands of the new millennium learners entering higher education. Quality has to a higher extent to be valued from the learners' dimensions and perspectives, in addition to the currently most common ones, i.e. learning outcomes and management. Furthermore, the discourse on the scholarship of teaching and learning, including digital scholarship in a global, knowledge-based, sustainable society, will be of utmost importance.

Although the key benefits of benchmarking are well known, significant gaps still appear in the use of benchmarking practices in European higher education institutions. Benchmarking is a powerful strategic tool to assist decision makers in improving the quality and effectiveness of organisational processes and, ultimately, aims to build a European platform. Through benchmarking, there can be large improvements in higher education institutions to meet international standards and guidelines, and to reach the position of the best international player in the higher education arena. Other aspects concern fast-changing professional practice and globalisation, and how to keep staff in line with newly required competencies in a lifelong learning perspective. Technology and digital scholarship are useful tools for creating a new kind of university, but much more important are structural and cultural changes in which technology will play a supporting role. Without these cultural and structural changes, technology cannot change the university on its own. Will benchmarking of e-learning in higher education, in alignment with national and international quality boards and agencies, be an answer, as a powerful tool for improvements in teaching and learning in a blended mode in the twenty-first century, and to support improved governance and management in higher education? More research has to be done from a holistic perspective to answer questions on the value and impact of benchmarking e-learning in higher education, such as the following W-questions: why and how shall benchmarking be conducted, what shall be scrutinised, when shall it be done and for how long, and where shall it be done and by and for whom?

Acknowledgements
The author would like to express her thanks to EADTU and to the colleagues who participated in the E-xcellence+ project; also to Professor Pekka Kess, the author's supervisor, Oulu University, Finland, and to Professor and Senior Consultant Paul Bacsich, Matic Media Ltd, UK.

References
Bacsich, P. (2005), "Evaluating impact of e-learning: benchmarking", Towards a Learning Society: Proceedings of the eLearning Conference, Brussels, May, pp. 162-76.
Bacsich, P. (2009), "Benchmarking e-learning in UK universities: the methodologies", in Mayes, T. and Higher Education Academy (Eds), Higher Education Academy and Related National e-Learning Initiatives, Higher Education Academy, Bristol, pp. 90-106.
Bonk, C.J. (2009), The World is Open: How Web Technology is Revolutionizing Education, Jossey-Bass, San Francisco, CA.
Creswell, J.W. and Clarke, P. (2007), Designing and Conducting Mixed Methods Research, Sage Publications, Thousand Oaks, CA.
Ehlers, U.-D. and Pawlowski, J. (Eds) (2006), "Quality in European e-learning: an introduction", Handbook on Quality and Standardization in e-Learning, Springer, Berlin, Hamburg and New York, NY, pp. 1-14.
Ehlers, U.-D. and Schneckenberg, D. (Eds) (2010), "Introduction: changing cultures in higher education", Changing Cultures in Higher Education, Springer, Berlin, Heidelberg, pp. 1-14.
The European Centre for Strategic Management of Universities (ESMU) (Eds) (2008a), Benchmarking in European Higher Education. Findings of a Two-Year EU Funded Project, ESMU, Brussels.
The European Centre for Strategic Management of Universities (ESMU) (Eds) (2008b), A Practical Guide. Benchmarking in European Higher Education, ESMU, Brussels.
The European Centre for Strategic Management of Universities (ESMU) (Eds) (2010), A University Benchmarking Handbook. Benchmarking in Higher Education, ESMU, Brussels.
McLoughlin, C. and Lee, M.J.W. (2008), "The three P's of pedagogy for the networked society: personalisation, participation and productivity", International Journal of Teaching and Learning in Higher Education, Vol. 20 No. 1, pp. 10-27.
Moriarty, J.P. (2008), "A theory of benchmarking", unpublished PhD thesis, Lincoln University, Lincoln.
Moriarty, J.P. and Smallman, C. (2009), "En route to a theory on benchmarking", Benchmarking: An International Journal, Vol. 16 No. 4, pp. 484-503.
Ossiannilsson, E. (2010a), "Benchmarking on e-learning in universities: impact and value, European perspectives", International Journal of Management in Education, Special Issue on Virtual University, accepted.
Ossiannilsson, E. (2010b), "Benchmarking e-learning in higher education: findings from EADTU's E-xcellence+ project and ESMU's e-learning benchmarking exercise", in Soinila, M. and Stalter, M. (Eds), Quality Assurance of e-Learning, The European Association for Quality Assurance in Higher Education (ENQA), Helsinki, pp. 32-44.
Ossiannilsson, E. (2011), "Findings from European benchmarking exercises on e-learning: value and impact", Journal of Creative Education, Vol. 2 No. 3, pp. 208-19.
Ossiannilsson, E. and Landgren, L. (2011), "Quality in e-learning – a conceptual framework based on experiences from three international benchmarking projects at Lund University, Sweden", Journal of Computer Assisted Learning, Special Issue on Quality in e-Learning, Vol. 28 No. 1, pp. 42-51.
ReVica (2011), "Bibliography of benchmarking, reviewing traces of European virtual campuses", available at: www.virtualcampuses.eu/index.php/Bibliography_of_benchmarking (accessed 1 June 2011).
Schreurs, B. (2009), Reviewing the Virtual Campus Phenomenon. The Rise of Large-Scale e-Learning Initiatives Worldwide, EuroPACE ivzw, Leuven.
The Swedish National Agency for Higher Education (NAHE) (2008), E-Learning Quality: Aspects and Criteria, Högskoleverket, NAHE, Stockholm.
Ubachs, G. (2009), Quality Assessment for e-Learning: a Benchmarking Approach, European Association of Distance Teaching Universities (EADTU), Heerlen.
Yin, R.K. (2003), Case Study Research. Design and Methods, Sage Publications, Thousand Oaks, CA.

About the author
E.S.I. Ossiannilsson is a PhD Candidate at Oulu University, Department of Industrial Engineering and Management, Finland. She is also a Senior Administrative Officer/Project Manager/Flexible Learning Adviser at Lund University, Human Resources, Staff and Educational Development, Centre for Educational Development, Sweden.
She has, for the last ten years, worked with regional, national and international projects on e-learning, open educational resources and Web 2.0 in higher education, as well as with quality issues and benchmarking. She is affiliated with several international organisations such as EDEN-NAP, EFQUEL, EUCEN, ICDE and SVERD. She serves as an expert in the quality grid EPPROBATE and is a Service Development Partner at OER Services. She serves as a referee for national and international journals and for eLearning europa.eu, as well as being on the boards of international conferences. Her research focuses on quality and benchmarking of e-learning in higher education, principally regarding processes, values and impacts, with particular interest in the benchmarking exercises carried out through EADTU, the E-xcellence and E-xcellence+ projects, and the ESMU eLearning 2009 benchmarking exercise. E.S.I. Ossiannilsson can be contacted at: Ebba.Ossiannilsson@oulu.fi