About the Author(s)


Vaneshveri Naidoo
Department of Physiotherapy, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa

Aimée V. Stewart
Department of Physiotherapy, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa

Morake E.D. Maleka
Department of Physiotherapy, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa

Citation


Naidoo, V., Stewart, A.V. & Maleka, M.E.D., 2022, ‘A tool to evaluate physiotherapy clinical education in South Africa’, South African Journal of Physiotherapy 78(1), a1759. https://doi.org/10.4102/sajp.v78i1.1759

Original Research

A tool to evaluate physiotherapy clinical education in South Africa

Vaneshveri Naidoo, Aimée V. Stewart, Morake E.D. Maleka

Received: 10 Dec. 2021; Accepted: 10 June 2022; Published: 31 Aug. 2022

Copyright: © 2022. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Physiotherapy clinical education is complex. The dynamic learning milieu is fluid and multidimensional, which contributes to the complexity of the clinical learning experience. Consequently, numerous factors that influence the clinical learning experience cannot be measured objectively, a gap which led to the development of our study.

Objectives: To develop, validate, and test the reliability of an assessment tool that evaluates the effectiveness and quality of physiotherapy clinical education programmes.

Method: A mixed methods approach in three phases included physiotherapy academics, clinical educators, and clinicians throughout South Africa. Phase One was a qualitative study: focus group discussions determined the items and domains of the tool. Phase Two established the content and construct validity of the tool, a scoring system and a name for the tool, using the Delphi method. In Phase Three, factor analysis reduced the number of items, and the feasibility and utility of the tool were determined cross-sectionally.

Results: The Vaneshveri Naidoo Clinical Programme Evaluation Tool (VN-CPET), comprising 58 items and six domains, was developed and found to be valid, reliable (α = 0.75) and useful. The six domains of the VN-CPET are governance; academic processes; learning exposure; clinical orientation; clinical supervision; and monitoring and evaluation and quality assurance.

Conclusion: The Vaneshveri Naidoo Clinical Programme Evaluation Tool is a valid, reliable and standardised tool that evaluates the quality and effectiveness of physiotherapy clinical education programmes.

Clinical implications: This tool can objectively evaluate the quality and effectiveness of physiotherapy clinical education programmes in South Africa and, with minor modification, other health science education programmes both locally and globally.

Keywords: physiotherapy clinical education; programme evaluation; monitoring and evaluation; quality assurance; context; input; process; product (CIPP).

Introduction

Currently, there are no tools to evaluate the quality and effectiveness of physiotherapy clinical education programmes. Several scholars have attempted to do this, but were unsuccessful because clinical education is complex, diverse, and multidimensional (Higgs 1993; Jette et al. 2014; McCallum et al. 2013; Stachura, Garven & Reed 2000; Strohschein, Hagler & May 2002). The curriculum review process by academic departments largely focusses on the theoretical component (learning objectives, activities and outcomes), while the structure and processes of clinical education are largely overlooked; yet clinical education is a core component of a physiotherapy undergraduate programme (Baldry Currens & Bithell 2000; Chetty et al. 2018; Delany & Bragge 2009; Higgs 1993; McCallum et al. 2013; Moghadam, Kashfe & Abdi 2017).

A standardised, valid, and reliable monitoring and evaluation tool will facilitate the summative and formative evaluation of physiotherapy clinical education programmes (Frye & Hemmer 2012; Persky, Joyner & Cox 2012; Stachura et al. 2000). The programme’s structure and processes will be analysed, not only its outcomes (Frye & Hemmer 2012; Owston 2008). This kind of inquiry will enable strategic quality assurance mechanisms to be incorporated, making high-quality clinical learning experiences for students more likely. Therefore, it is imperative that the clinical education component of the curriculum is independently and objectively evaluated.

Evaluation tools provide the data required to track the implementation of processes, to determine a programme’s intended and unintended effects, and to establish its effectiveness. Moreover, valid and reliable tools are needed to facilitate the complex evaluation process of physiotherapy clinical education, with its unique, multifaceted constructs. Thus, the purpose of our study was to develop, validate and test the reliability of an assessment tool that evaluates the effectiveness and quality of a physiotherapy undergraduate clinical education programme: a programme evaluation tool.

Methods and procedure

A three-phase exploratory, sequential design that included mixed methods was used to develop the tool (Ivankova, Creswell & Stick 2006). Data collection commenced country-wide following ethical clearance to conduct this study from the Human Research Ethics Committee of the University of the Witwatersrand (Wits) (ethical clearance number: M210160), as well as ethical clearance and/or permission, and informed consent, from seven of the eight academic departments and from clinical departments: University of Cape Town (UCT); University of Stellenbosch (US); University of the Free State (UFS); University of KwaZulu-Natal (UKZN); University of the Western Cape (UWC); Sefako Makgatho Health Sciences University (SMU); Chris Hani Baragwanath Academic Hospital (CHBAH); Steve Biko Academic Hospital (SBAH) and Helen Joseph Academic Hospital (HJ).

After the pilot study, 81 key stakeholders involved in student training (academics, clinical managers and clinicians, including new graduates) were purposively sampled and participated in the focus group discussions (FGDs). Focus group discussions allowed the first author to collect national data efficiently and cost-effectively (Jayasekara 2012). Fourteen FGDs, each with approximately eight participants, took place. Data saturation occurred after the eighth FGD; however, because appointments for the FGDs had been made a year in advance, all 14 FGDs were conducted. The FGDs consisted of mixed or distinct groups of participants, depending on participants’ availability. A broadly structured script with prompts was used. The recorded FGDs were transcribed verbatim, then coded, categorised and themed inductively by the first author, using Tesch’s (1992) method of data analysis (Vuso & James 2017) and MAXQDA version 2018.2 (a qualitative data analysis tool). Thematic content analysis was also conducted by the co-authors and an independent qualitative expert. There was a high level of agreement on coding and themes, and disagreements were discussed. Prior to the thematic content analysis, the transcripts were checked by the first author for errors, and member checks confirmed that true accounts of the FGDs had been captured. Data and investigator triangulation ensured trustworthiness (Halcomb & Andrew 2005; Leech & Onwuegbuzie 2007), leading to Phase Two of our study. The first author’s potential bias as clinical co-ordinator (CC) was curbed by using a broad opening statement to elicit the data in the FGDs, as well as by inductive coding, member checks and co-coders.

For the Delphi process, 79 FGD participants were invited to participate in Phase Two of our study to determine the face and content validity of the preliminary tool (two FGD participants from Phase One were excluded because of invalid email addresses). The preliminary tool of 131 questions was emailed to the participants, who were asked to decide which items should remain in the tool. Two Delphi rounds were undertaken, and 80% agreement on an item in a round was required to retain that item in the preliminary tool. There is no standard level of consensus, although 70% – 80% is usually adopted (Diamond et al. 2014; Maleka, Stewart & Hale 2017; Trevelyan & Robinson 2015). The third Delphi round confirmed a scoring system and a name for our tool. Following each Delphi round, the first author and the co-authors reviewed the comments and edited questions as recommended by the participants, where appropriate. Descriptive statistics (frequencies and percentages) were used to analyse the data.
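As an illustration of the consensus rule only, the minimal Python sketch below applies the 80% retention threshold item by item; the item labels and votes are hypothetical and do not represent the study’s Delphi data.

```python
# Minimal sketch: retain a Delphi item only if >= 80% of panellists voted to keep it.
# Item labels and votes below are hypothetical, not the study's data.

CONSENSUS_THRESHOLD = 0.80  # consensus level adopted in this study


def items_reaching_consensus(votes):
    """votes: dict mapping an item label to a list of True/False 'retain' votes."""
    retained = []
    for item, responses in votes.items():
        agreement = sum(responses) / len(responses)  # proportion voting to retain
        if agreement >= CONSENSUS_THRESHOLD:
            retained.append(item)
    return retained


example_votes = {
    "Item 1": [True, True, True, True, False],    # 80% agreement -> retained
    "Item 2": [True, False, True, False, False],  # 40% agreement -> dropped
    "Item 3": [True, True, True, True, True],     # 100% agreement -> retained
}
print(items_reaching_consensus(example_votes))  # ['Item 1', 'Item 3']
```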

Following the Delphi process, a Research Electronic Data Capture (REDCap) link (a secure web platform for building and managing online databases and surveys) (projectredcap.org) of the preliminary tool was emailed to 13 participants (heads of departments [HOD] and/or CC and/or undergraduate co-ordinators [UG]) of the eight academic physiotherapy departments in South Africa, to enable principal component factor analysis to reduce the number of items in the tool (Abdi & Williams 2010). The participants were requested to answer all the questions. The internal consistency of the items was determined using Cronbach’s alpha.
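Cronbach’s alpha is calculated from the number of items, the individual item variances and the variance of the summed total scores. The minimal Python sketch below shows this standard calculation; the response matrix is hypothetical and is not the study’s data.

```python
import numpy as np


def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of numeric scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical responses: 5 respondents x 4 items, scored 1-5
data = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 4],
]
print(round(cronbach_alpha(data), 2))
```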

Phase Three of our study, a cross-sectional survey, was used to determine the construct validity of the tool. A REDCap link to the provisional tool, together with questions testing its feasibility and utility (Appendix 1), was purposively emailed to 35 participants nationally and internationally (HODs and/or CCs in universities in the countries listed in Appendix 2). The participants were requested to complete the 58 questions in the tool and to answer the following open-ended questions:

  1. Does this tool evaluate what you consider to be important regarding clinical education?

  2. What are the strengths of this tool?

  3. Indicate the weaknesses of this tool.

  4. Is this tool useful for your institution?

The data were tabulated, and descriptive statistics (frequencies and percentages) were used to analyse the data. Principal component analysis was conducted using Stata version 16.0.
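The analysis itself was run in Stata; purely as an illustration, the Python (scikit-learn) sketch below shows one common way to perform an equivalent principal component item-reduction step, using randomly generated responses and an assumed 0.4 loading cut-off, neither of which reflects the study’s actual data or settings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical respondents-by-items matrix (13 respondents, 20 items, scores 1-5)
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(13, 20)).astype(float)

# Standardise items before extracting components
standardised = (responses - responses.mean(axis=0)) / responses.std(axis=0, ddof=1)

pca = PCA()
pca.fit(standardised)

# Kaiser criterion: keep components whose eigenvalue exceeds 1
eigenvalues = pca.explained_variance_
n_components = int((eigenvalues > 1).sum())

# Loadings of each item on the retained components; items whose highest absolute
# loading falls below an (assumed) 0.4 cut-off are candidates for removal
loadings = pca.components_[:n_components].T * np.sqrt(eigenvalues[:n_components])
low_loading_items = np.where(np.abs(loadings).max(axis=1) < 0.4)[0]

print("Components retained:", n_components)
print("Candidate items to drop:", low_loading_items.tolist())
```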

Ethical considerations

Our study was conducted under a strict code of ethics. The anonymity of all participants was maintained where possible, there were no identifying data on any of the data collection sheets, and the data were handled with the utmost confidentiality. The raw data were stored in a locked cupboard. A second ethics clearance certificate (M210160, 20210204) was applied for, as the first one (M140706 – 22/08/2014) had expired.

Results

Phase one

The preliminary tool of 131 items, which emerged from the FGDs, contained three key areas: governance, structure and experience (the macro-, meso- and micro-components, respectively), as seen in Figure 1.

FIGURE 1: Themes: Governance, structure and experience.

Table 1 provides an overview of the categories and subcategories under each theme.

TABLE 1: Summary of themes, categories and subcategories that emerged after the focus group discussions.
Phase two

Figure 2 illustrates the outcomes of the Delphi rounds, the exploratory factor analysis and the internal reliability of the tool. The 131 items were reduced to 85 items after the Delphi rounds. Principal component factor analysis reduced these to 73 items in five sections, which were edited and reorganised by the first author to produce the final tool of 58 questions and six sections.

FIGURE 2: Combined results from phase two of the study.

The Vaneshveri Naidoo Clinical Programme Evaluation Tool (VN-CPET) was the name chosen for our tool, after numerous suggestions. The VN-CPET is a self-administered programme evaluation tool. Scoring options reflect self-assessment, as illustrated in Table 2.

TABLE 2: Scoring system.

Five of the 13 questionnaires (38.5%) were completed and used to run the principal component factor analysis and determine the internal reliability of the questions. Exploratory factor analysis split the tool into five sections with a total of 73 questions, with an acceptable internal consistency of 0.75 (Bolarinwa 2015; Hulin, Netemeyer & Cudeck 2001).

The authors reviewed the provisional tool following factor analysis and further reduced items because of redundancy, which resulted in the final tool containing 58 questions. We also reorganised the sections of the tool based on the first author’s experience as a CC (praxis), which resulted in the final tool containing six domains:

  1. Section 1 – Governance (5 questions)

  2. Section 2 – Academic processes (5 questions)

  3. Section 3 – Learning exposure (6 questions)

  4. Section 4 – Clinical orientation (7 questions)

  5. Section 5 – Clinical supervision (18 questions)

  6. Section 6 – Monitoring and evaluation and quality assurance (19 questions)

See Appendix 1 for the complete tool.

Phase three

In this phase, 25 of the 35 invited participants (71%) responded, but only 17 (68% of the respondents) completed the entire questionnaire; eight (32%) questionnaires were incomplete.

Figure 3 illustrates that 88% of the respondents found that the VN-CPET evaluates useful constructs of clinical education, while 59% indicated that VN-CPET would be useful for their institution. Of the remainder, 29% thought it was likely to be useful for their institution, thus elevating the institutional usefulness of the tool to 88%.

FIGURE 3: Feasibility and usefulness of clinical programme evaluation tool.

In Figure 4, 53% of the respondents identified the comprehensiveness of the tool as a notable strength, followed by the wide range of influences that it captured (18%), while 12% regarded it as a bench-marker. The length of the tool was its major drawback, which was pointed out by 35% of the respondents (Figure 5).

FIGURE 4: Strengths of clinical programme evaluation tool.

Discussion

A physiotherapy clinical education programme requires independent and objective evaluation to determine its merits and shortcomings. Such an evaluation enables educators/academics to strengthen aspects, add quality assurance mechanisms where required, or remove unintended effects (Frye & Hemmer 2012; Stufflebeam 2003). Mixed methods (Strohschein et al. 2002) enabled us to explore the length, breadth and depth of physiotherapy clinical education by using FGDs, the Delphi method, exploratory factor analysis and a cross-sectional survey. The FGDs, a data-intensive process (Doody, Slevin & Taggart 2013; Greenwood et al. 2017; Portney & Watkins 2009; Winke 2017), immersed us in the complexities of physiotherapy clinical education. Three themes unfolded from the FGDs: governance (macro), structure (meso) and experience (micro), which emphasised the complex interaction of these themes. The qualitative leg thus allowed us to gain insight into the complexities of clinical education (Moretti et al. 2011), as affirmed by numerous scholars (Higgs 1993; Jette et al. 2014; McCallum et al. 2013; Patton, Higgs & Smith 2018; Stachura et al. 2000; Strohschein et al. 2002). Additionally, the impact on students’ clinical learning, as a result of the structure and processes of a clinical education programme, was discussed.

FIGURE 5: Weaknesses of clinical programme evaluation tool.

The preliminary tool of 131 items was refined into the provisional tool (73 questions; five sections) using two key processes: the Delphi process and exploratory factor analysis. The Delphi process, through a posteriori consensus (knowledge based on experience or personal observation), set at 80% (Maleka et al. 2017), was used to determine the items and domains of the tool. This was appropriate in our study, as we were able to cost-effectively include several participants in a field of study where there is a paucity of information (Diamond et al. 2014; Okoli & Pawlowski 2004; Powell 2003; Trevelyan & Robinson 2015). Exploratory factor analysis further delineated the items and domains of the tool by grouping similar variables into smaller groups, while eliminating variables that had a low factor loading and/or lowered the internal consistency of the items (Portney & Watkins 2009). The final tool of 58 questions and six sections was then created through praxis.

An acceptable internal reliability of 0.75 (n = 73) indicated that the inter-relatedness of the items is satisfactory, and thus that the tool will consistently measure the diverse, complex and multidimensional construct of physiotherapy clinical education. A high internal consistency (> 0.90) does not always mean the tool is more reliable; it could indicate a high degree of redundancy in the items (McCrae et al. 2011; Taber 2018; Tavakol & Dennick 2011). An acceptable internal consistency appears to be more suitable for a newly developed tool (Bolarinwa 2015; Taber 2018; Tavakol & Dennick 2011).

Weiner et al. (2017) confirmed that the acceptability, appropriateness and feasibility of an instrument must be determined to ensure its implementation. In other words, is this innovation satisfactory, fit for purpose and useable in its context? The reply was a decisive ‘yes’ from 88% (15 of 17) of the participants. The VN-CPET was deemed comprehensive and a bench-marker that captured a wide range of influences, although it is long. This tool, therefore, considers everything that is important in evaluating the effectiveness and quality of a physiotherapy clinical education programme, under these sections: governance; academic processes; learning exposure; clinical orientation; clinical supervision; and monitoring and evaluation and quality assurance.

Governance refers to a multitude of factors: people, roles, structures, and policies. It is a framework under which stakeholders perform activities within regulated boundaries (Bigdeli et al. 2020; Pyone et al. 2017). In this section of the tool, governance refers to policies and agreements that guide the educational programme at a macro, meso- and micro level. Programme governance establishes processes and provides a structure for communication, implementation, and monitoring. It also ensures that policies and best practices are followed. Additionally, it ensures that the programme’s goals and objectives are aligned with the larger institutional and regulatory bodies.

Academic processes refers to the educational strategies (curriculum; teaching; learning; assessment; resources – human and others) that have been instituted to ensure adequate pre-clinical preparation, while learning exposure refers to the hands-on learning opportunities that students experience to ensure competency in all areas of physiotherapy, meeting the needs of their country inclusively, efficiently and cost-effectively (Hirsh et al. 2007). Clinical orientation programmes are aimed at enhancing students’ transition (vertically or horizontally) and student success, as they adapt to a new environment (Nguyen et al. 2018; Perrine & Spain 2008).

Clinical supervision is central to the effective training of health science students (Delany & Bragge 2009; Ernstzen & Bitzer 2012; Ernstzen, Bitzer & Grimmer-Somers 2009, 2010; Kilminster et al. 2007; Laitinen-Väänänen, Talvitie & Luukka 2007; Meyer, Louw & Ernstzen 2019; Patton et al. 2018; Pront, Gillham & Schuwirth 2016). It is integral to teaching and learning, and to achieving competency in health science education (Laitinen-Väänänen et al. 2007; McAllister, Higgs & Smith 2008).

Monitoring and evaluation and quality assurance are different processes that occur simultaneously to ensure that objectives and goals are met; to identify and mitigate unintended effects; to determine the effectiveness and impact of an activity or programme, and to ensure that delivery of activities and their outcomes match the gold standard (Annecke 2008; Jette et al. 2014; Myezwa, M’Kumbuzi & Mhuri 2001; Stachura et al. 2000; Tsinidou, Gerogiannis & Fitsilis 2010).

Monitoring is the continual assessment of a project or programme to determine its intended and unintended effects (formative evaluation), whereas evaluation is the periodic retrospective assessment of a project or programme to determine its worth: relevance, impact, effectiveness, efficiency and sustainability (summative evaluation) (Annecke 2008; Porter & Goldman 2013; Stem et al. 2005; Stone-Jovicich et al. 2019). Quality assurance, on the other hand, is the evaluation of activities against a gold standard or guideline (Stachura et al. 2000). Summative and formative evaluation of physiotherapy clinical education informs the evaluator of the length, breadth, and depth of physiotherapy clinical education. The VN-CPET enables the aforementioned and allows quality assurance measures to be inserted where necessary. Most importantly, the VN-CPET provides a standardised, valid and reliable way of evaluating a physiotherapy clinical education programme.

Conclusion

The VN-CPET reflects the complexity and diversity of clinical education, due to its ability to be ‘comprehensive’ and to capture a ‘wide range of influences’. Although long, it was found to be acceptable, appropriate, and feasible. Furthermore, the VN-CPET is a valid and reliable tool and can be used to objectively evaluate the effectiveness and quality of a physiotherapy clinical education programme. Even though the scoring system is subjective, an evaluative response is obtained. A link to the online tool can be requested from the corresponding author.

The strength of the VN-CPET lies in its rigorous development using mixed methods (Strohschein et al. 2002), and the South African context is by no means a barrier to its global application in clinical physiotherapy education: the educational framework of physiotherapy clinical education is the same, despite different contexts. This tool will be shortened, and the scoring system refined in future studies.

The limitations include the subjectivity in the existing scoring system; the length of the tool, which is a potential barrier to its use; and the purposive sampling that was used to determine the feasibility and usefulness of this tool. Therefore, this tool should be used bearing these limitations in mind.

Acknowledgements

I would like to thank the study participants and the funders of this study: the National Research Foundation (NRF) Thuthuka, the Wits Faculty Research Funds and the South African Society of Physiotherapy.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

V.N. was responsible for conceptualisation, study design and execution, data collection, analysis and interpretation of data, drafting and critical revision of manuscript and final approval of the version to be published. A.V.S. and M.E.D.M. contributed to the conceptualisation and study design, review of data analysis, validation, as well as review and editing of manuscript.

Funding information

Funders of this study:

  • NRF Thuthuka (TTK150709124590)
  • Wits Faculty Research Funds (001.283.8491105.5121105.4922), and the
  • South African Society of Physiotherapy.

Data availability

The data that support the findings of this study are available on request from the corresponding author.

Disclaimer

The views and opinions expressed in this article are those of the authors and do not reflect the official policy or position of any affiliated institution.

References

Abdi, H. & Williams, L.J., 2010, ‘Principal component analysis’, Wiley Interdisciplinary Reviews: Computational Statistics 2(4), 433–459. https://doi.org/10.1002/wics.101

Annecke, W., 2008, ‘Monitoring and evaluation of energy for development: The good, the bad and the questionable in M&E practice’, Energy Policy 36(8), 2839–2845. https://doi.org/10.1016/j.enpol.2008.02.043


Baldry Currens, J.A. & Bithell, C.P., 2000, ‘Clinical education: Listening to different perspectives’, Physiotherapy 86(12), 645–653. https://doi.org/10.1016/s0031-9406(05)61302-8

Bigdeli, M., Rouffy, B., Lane, B.D., Schmets, G., Soucat, A. & The Bellagio Group, 2020, ‘Health systems governance: The missing links’, BMJ Global Health 5(8), e002533. https://doi.org/10.1136/bmjgh-2020-002533

Bolarinwa, O., 2015, ‘Principles and methods of validity and reliability testing of questionnaires used in social and health science researches’, Nigerian Postgraduate Medical Journal 22(4), 195. https://doi.org/10.4103/1117-1936.173959


Chetty, V., Maddocks, S., Cobbing, S., Pefile, N., Govender, T., Shah, S. et al., 2018, ‘Physiotherapy clinical education at a South African university’, African Journal of Health Professions Education 10(1), 13. https://doi.org/10.7196/ajhpe.2018.v10i1.987

Delany, C. & Bragge, P., 2009, ‘A study of physiotherapy students’ and clinical educators’ perceptions of learning and teaching’, Medical Teacher 31(9), e402–e411. https://doi.org/10.1080/01421590902832970

Diamond, I.R., Grant, R.C., Feldman, B.M., Pencharz, P.B., Ling, S.C., Moore, A.M. et al., 2014, ‘Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies’, Journal of Clinical Epidemiology 67(4), 401–409. https://doi.org/10.1016/j.jclinepi.2013.12.002

Doody, O., Slevin, E. & Taggart, L., 2013, ‘Focus group interviews in nursing research: Part 1’, British Journal of Nursing 22(1), 16–19. https://doi.org/10.12968/bjon.2013.22.1.16

Ernstzen, D.V., Bitzer, E. & Grimmer-Somers, K., 2010, ‘Physiotherapy students’ and clinical teachers’ perspectives on best clinical teaching and learning practices: A qualitative study’, South African Journal of Physiotherapy 66(3), 25–31. https://doi.org/10.4102/sajp.v66i3.70

Ernstzen, D.V. & Bitzer, E., 2012, ‘The roles and attributes of the clinical teacher that contribute to favourable learning environments: A case study from physiotherapy’, South African Journal of Physiotherapy 68(1), 9–14. https://doi.org/10.4102/sajp.v68i1.3

Ernstzen, D.V., Bitzer, E. & Grimmer-Somers, K., 2009, ‘Physiotherapy students’ and clinical teachers’ perceptions of clinical learning opportunities: A case study’, Medical Teacher 31(3), e102–e115. https://doi.org/10.1080/01421590802512870

Frye, A.W. & Hemmer, P.A., 2012, ‘Program evaluation models and related theories: AMEE Guide No. 67’, Medical Teacher 34(5), e288–e299. https://doi.org/10.3109/0142159X.2012.668637

Greenwood, M., Kendrick, T., Davies, H. & Gill, F.J., 2017, ‘Hearing voices: Comparing two methods for analysis of focus group data’, Applied Nursing Research 35, 90–93.

Halcomb, E.J. & Andrew, S., 2005, ‘Triangulation as a method for contemporary nursing research’, Nurse Researcher 13(2), 71–82. https://doi.org/10.7748/nr2005.10.13.2.71.c5969

Higgs, J., 1993, ‘Managing clinical education: The programme’, Physiotherapy 79(4), 239–246. https://doi.org/10.1016/S0031-9406(10)60705-5

Hirsh, D.A., Ogur, B., Thibault, G.E. & Cox, M., 2007, ‘Sounding board: “Continuity” as an organizing principle for clinical education reform’, The New England Journal of Medicine 356(8), 858–866.

Hulin, C., Netemeyer, R.G. & Cudeck, R., 2001, ‘Can a reliability coefficient be too high?’, Journal of Consumer Psychology 10(1), 55–58.

Ivankova, N.V., Creswell, J.W. & Stick, S.L., 2006, ‘Using mixed-methods sequential explanatory design: From theory to practice’, Field Methods 18(1), 3–20. https://doi.org/10.1177/1525822X05282260

Jayasekara, R.S., 2012, ‘Focus groups in nursing research: Methodological perspectives’, Nursing Outlook 60(6), 411–416. https://doi.org/10.1016/j.outlook.2012.02.001

Jette, D.U., Nelson, L., Palaima, M. & Wetherbee, E., 2014, ‘How do we improve quality in clinical education? Examination of structures, processes, and outcomes’, Journal of Physical Therapy Education 28, 6–12. https://doi.org/10.1097/00001416-201400001-00004

Kilminster, S., Cottrell, D., Grant, J. & Jolly, B., 2007, ‘AMEE guide no. 27: Effective educational and clinical supervision’, Medical Teacher 29(1), 2–19. https://doi.org/10.1080/01421590701210907

Laitinen-Väänänen, S., Talvitie, U. & Luukka, M.R., 2007, ‘Clinical supervision as an interaction between the clinical educator and the student’, Physiotherapy Theory and Practice 23(2), 95–103. https://doi.org/10.1080/09593980701212018

Leech, N.L. & Onwuegbuzie, A.J., 2007, ‘An array of qualitative data analysis tools: A call for data analysis triangulation’, School Psychology Quarterly 22(4), 557–584. https://doi.org/10.1037/1045-3830.22.4.557

Maleka, D.M., Stewart, A.V. & Hale, L.A., 2017, ‘The development of a community reintegration outcome measure to assess people with stroke living in low socioeconomic areas’, Journal of Disability Rehabilitation 3, 11–24. https://doi.org/10.5348/D05-2017-26-OA-2

McAllister, L., Higgs, J. & Smith, D., 2008, ‘Facing and managing dilemmas as a clinical educator’, Higher Education Research and Development 27(1), 1–13. https://doi.org/10.1080/07294360701658690

McCallum, C., Mosher, P.D., Jacobsen, P.J., Gallivan, S.P. & Giuffre, S.M., 2013, ‘Quality in physical therapist clinical education: A systematic review’, Physical Therapy 93(10), 1298–1311. https://doi.org/10.2522/ptj.20120410

McCrae, R.R., Kurtz, J.E., Yamagata, S. & Terracciano, A., 2011, ‘Internal consistency, retest reliability, and their implications for personality scale validity’, Personality and Social Psychology Review 15(1), 28–50. https://doi.org/10.1177/1088868310366253

Meyer, I.S., Louw, A. & Ernstzen, D., 2019, ‘Perceptions of physiotherapy clinical educators’ dual roles as mentors and assessors: Influence on teaching–learning relationships’, South African Journal of Physiotherapy 75(1), 1–7. https://doi.org/10.4102/sajp.v75i1.468

Moghadam, A., Kashfe, P. & Abdi, K., 2017, ‘Exploring the challenges of physiotherapy clinical education: A qualitative study’, Iranian Rehabilitation Journal 15(3), 207–214. https://doi.org/10.29252/nrip.irj.15.3.207

Moretti, F., Vliet, L.V., Bensing, J., Deledda, G., Mazzi, M., Rimondini, M. et al., 2011, ‘A standardized approach to qualitative content analysis of focus group discussions from different countries’, Patient Education and Counseling 82(3), 420–428. https://doi.org/10.1016/j.pec.2011.01.005

Myezwa, H., M’Kumbuzi, V. & Mhuri, F., 2001, ‘Quality assurance in a rehabilitation service’, South African Journal of Physiotherapy 57(1), 7–12. https://doi.org/10.4102/sajp.v57i1.488

Nguyen, N., Muilu, T., Dirin, A. & Alamaki, A., 2018, ‘An interactive and augmented learning concept for orientation week in higher education’, International Journal of Educational Technology in Higher Education 15(1), 35. https://doi.org/10.1186/s41239-018-0118-x

Okoli, C. & Pawlowski, S.D., 2004, ‘The Delphi method as a research tool: An example, design considerations and applications’, Information and Management 42(1), 15–29. https://doi.org/10.1016/j.im.2003.11.002

Owston, R., 2008, ‘Models and methods for evaluation’, in D. Jonassen, M.J. Spector, M. Driscoll, M.D. Merrill, J. van Merrienboer & M.P. Driscoll (eds.), Handbook of research on educational communications and technology, pp. 605–617, Taylor and Francis, New York.

Patton, N., Higgs, J. & Smith, M., 2018, ‘Clinical learning spaces: Crucibles for practice development in physiotherapy clinical education’, Physiotherapy Theory and Practice 34(8), 589–599. https://doi.org/10.1080/09593985.2017.1423144

Perrine, R.M. & Spain, J.W., 2008, ‘Impact of a pre-semester college orientation program: Hidden benefits?’, Journal of College Student Retention: Research, Theory and Practice 10(2), 155–169. https://doi.org/10.2190/CS.10.2.c

Persky, A.M., Joyner, P.U. & Cox, W.C., 2012, ‘Development of a course review process’, American Journal of Pharmaceutical Education 76(7), 1–8. https://doi.org/10.5688/ajpe767130

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), 1–9. https://doi.org/10.4102/aej.v1i1.25

Portney, L.G. & Watkins, M.P., 2009, Foundations of clinical research: Applications to practice, 3rd edn., Pearson, Hoboken, NJ.

Pront, L., Gillham, D. & Schuwirth, L.W.T., 2016, ‘Competencies to enable learning-focused clinical supervision: A thematic analysis of the literature’, Medical Education 50(4), 485–495. https://doi.org/10.1111/medu.12854

Powell, C., 2003, ‘The Delphi technique: Myths and realities’, Journal of Advanced Nursing 41(4), 376–382. https://doi.org/10.1046/j.1365-2648.2003.02537

Pyone, T., Smith, H. & Van Den Brock, N., 2017, ‘Frameworks to assess health systems governance: A systematic review’, Health Policy and Planning 32(5), 710–722. https://doi.org/10.1093/heapol/czx007

Stachura, K., Garven, F. & Reed, M., 2000, ‘Quality assurance: Measuring the quality of clinical education provision’, Physiotherapy 86(3), 117–126. https://doi.org/10.1016/S0031-9406(05)61154-6

Stem, C., Margoluis, R., Salafsky, N. & Brown, M., 2005, ‘Monitoring and evaluation in conservation: A review of trends and approaches’, Conservation Biology 19(2), 295–309. https://doi.org/10.1111/j.1523-1739.2005.00594.x

Stone-Jovicich, S., Percy, H., McMillan, L., Turner, J.A., Chen, L. & White, T., 2019, ‘Evaluating monitoring, evaluation and learning initiatives in the New Zealand and Australian agricultural research and innovation systems: The MEL 2 framework’, Evaluation Journal of Australasia 19(1), 8–21. https://doi.org/10.1177/1035719X18823567

Strohschein, J., Hagler, P. & May, L., 2002, ‘Assessing the need for change in clinical education practices’, Physical Therapy 82(2), 160–172. https://doi.org/10.1093/ptj/82.2.160

Stufflebeam, D.L., 2003, ‘The CIPP model for evaluation’, in T. Kellaghan (ed.), International handbook of educational evaluation, pp. 31–62, Springer, Dordrecht.

Taber, K.S., 2018, ‘The use of Cronbach’s alpha when developing and reporting research instruments in science education’, Research in Science Education 48(6), 1273–1296. https://doi.org/10.1007/s11165-016-9602-2

Tavakol, M. & Dennick, R., 2011, ‘Making sense of Cronbach’s alpha’, International Journal of Medical Education 2, 53–55. https://doi.org/10.5116/ijme.4dfb.8dfd

Tesch, R., 1992, ‘The mechanics of interpretational qualitative analysis’, in Qualitative research analysis types and software tools, The Falmer Press, Basingstoke.

Trevelyan, E.G. & Robinson, N., 2015, ‘Delphi methodology in health research: How to do it?’, European Journal of Integrative Medicine 7(4), 423–428. https://doi.org/10.1016/j.eujim.2015.07.002

Tsinidou, M., Gerogiannis, V. & Fitsilis, P., 2010, ‘Quality assurance in education article information’, Quality Assurance in Education 18(3), 227–244. https://doi.org/10.1108/09684881011058669

Vuso, Z. & James, S., 2017, ‘Effects of limited midwifery clinical education and practice standardisation of student preparedness’, Nurse Education Today 55, 134–139. https://doi.org/10.1016/j.nedt.2017.05.014

Weiner, B.J., Lewis, C.C., Stanick, C., Powell, B.J., Dorsey, C.N., Clary, A.S. et al., 2017, ‘Psychometric assessment of three newly developed implementation outcome measures’, Implementation Science 12(1), 1–12. https://doi.org/10.1186/s13012-017-0635-3

Winke, P., 2017, ‘Using focus groups to investigate study abroad theories and practice’, System 71, 73–83. https://doi.org/10.1016/j.system.2017.09.018

Appendix 1: The Vaneshveri Naidoo Clinical Programme Evaluation Tool (VN-CPET)

The final VN-CPET tool of 58 items and six subsections was developed: ‘Governance; Academic Processes; Learning Exposure; Clinical Orientation; Clinical Supervision and Monitoring & Evaluation and Quality Assurance’.

How to use this tool

The assessor using this tool answers the question in column two first. After answering the question, the assessor then self-evaluates their answer by choosing an option in column three, under scoring. For example, in question 1, if your answer was only University policies, then the scoring option chosen would be ‘some’; alternatively, if all three options were chosen (macro, meso and micro), then the scoring option chosen would be ‘All’. The principle is to answer the question in column two first, and then to score your answer. The scoring option provided appraises the answer of the assessor (it is a self-evaluation system).

TABLE 1-A1: The Vaneshveri Naidoo Clinical Programme Evaluation Tool (VN-CPET).

Appendix 2: Phase 3 participants

The VN-CPET was emailed to the heads of departments and/or clinical co-ordinators of each university (i.e. the tool was emailed to the HOD only where a department did not have a clinical co-ordinator, and to both the HOD and the clinical co-ordinator where the department had one; it was therefore emailed to 35 participants in total).

TABLE 1-A2: Phase 3 participants.