Article Information

Author:
Gabedi N. Molefe¹

Affiliation:
¹Faculty of Management Sciences, Tshwane University of Technology, South Africa

Correspondence to:
Gabedi Molefe

email:
molefgn@vodamail.co.za

Postal address:
PO Box 12665, The Tramshed 0126, South Africa

Keywords
university lecturers; performance measurement dimensions; performance management; quantitative research; cross-sectional survey

Dates:
Received: 12 Aug. 2009
Accepted: 09 Sept. 2010
Published: 29 Nov. 2010

How to cite this article:
Molefe, G.N. (2010). Performance measurement dimensions for lecturers at selected universities: An international perspective. SA Journal of Human Resource Management/ SA Tydskrif vir Menslikehulpbronbestuur, 8(1), Art. #243, 13 pages. DOI: 10.4102/sajhrm.v8i1.243

Copyright Notice:
© 2010. The Authors. Licensee: OpenJournals Publishing. This work is licensed under the Creative Commons Attribution License.

ISSN: 1683-7584 (print)
ISSN: 2071-078X (online)

In This Original Research...
Abstract
Introduction
   • Performance management and performance measurement
   • Competencies as performance dimensions
   • Performance dimensions for a university lecturer’s job
Research Design
   • Research Approach
      • Research methodology
      • Research context
      • Participants
   • Measuring instruments
   • Research procedure
   • Statistical analysis
   • Scale reliability testing and calculation of dimension scores
Results
Discussion
   • General perceptions of the performance measurement process
   • Performance measurement dimensions designed as subsets of questionnaire items to describe specific performance measurement issues
   • Analysis of variance
Summary of Findings
   • Reliability of the dimensions tested
   • Practical or managerial implications
   • Recommendations
   • Limitations of the study
Conclusion
References
Abstract

Orientation: The study was necessitated by the need to develop a generally accepted performance measurement dimension framework for lecturers at universities.

Research purpose: The aim of the inquiry was to investigate the performance measurement dimensions for lecturers at selected universities in South Africa, the USA, the UK, Australia and Nigeria. Universities were selected on the basis of their academic reputation, being the best in their respective countries or continents.

Motivation for the study: Whilst some studies mention certain attributes as important performance dimensions for the lecturer’s job, there was no scientific evidence to support this claim, hence the need for this study.

Research design: A quantitative research approach was adopted with the objective of casting the researcher’s net widely in order to obtain as much data as possible with the view to arriving at scientifically tested findings. A questionnaire was sent out to 500 academics and yielded a response rate of 36%.

Main findings: The study confirmed that a lecturer’s performance can be measured on the basis of seven performance dimensions and that these dimensions, when tested, yielded Cronbach Alpha reliability coefficients above 0.70.

Practical and managerial implications: This study has the potential to equip the leadership at universities in South Africa with an empirically tested guideline for formulating policy on performance evaluation frameworks for the lecturing staff.

Contribution/value-add: The major contribution of this study has been its argument for performance measurement for lecturers in the higher education environment and also its confirmation of the seven postulated performance measurement dimensions for lecturers.

Introduction

The key focus of this study is performance measurement at universities. As Simmons (2002) indicated, past approaches to performance management in higher education in South Africa received limited emphasis from the Government, and their contribution to enhancing institutional performance and quality was neglected. Consequently, universities adopted a laissez-faire approach to performance management and thus operated on a ‘high trust’ basis within an ethos that emphasised independence of thought and scholarship, academic freedom and collegiality. The high trust mode of operation therefore meant that academic staff were not closely monitored or assessed. However, higher education institutions are now expected to face economic and social realities and become accountable and more market and consumer responsive in order to provide ‘value for money’ to their clients. Furthermore, for almost a decade, South African higher education institutions have been undergoing radical transformation due to the release of a plethora of national policies with which these institutions were expected to comply. These policy demands led not only to a change in the scope, nature and intensity of academic work, but also subjected academic work to performance management and quality assessment (Mapesela & Strydom, 2004). The aforesaid restorative national policies and legislative initiatives included the following:

• National Plan for Higher Education (2001)

• South African Qualification Authority Act (1995)

• Skills Development Act (1998)

• Skills Development Levies Act (1999)

• National Training Strategy Initiative (1994)

• White Paper on Transformation of Higher Education (1997) (Mapesela & Strydom, 2004; Tait, Van Eeden & Tait, 2002; Taylor & Harris, 2002; Wilkinson, Fourie, Strydom, Van der Westhuysen & Van Tonder, 2004).

Furthermore, the said policy directives exerted pressure on higher education institutions to review their human resources strategies and practices with the aim of developing and fostering a competent, motivated and capable workforce that could assist in achieving the levels of excellence envisioned by stakeholders. To achieve this, these institutions now have to develop management models which will bolster desired behaviour, engender core values and promote performance excellence, whilst at the same time reinforcing an ethos of scholarship that upholds the intrinsic nature of these institutions as centres of innovative learning. In addition, the White Paper on Human Resource Management in the South African Public Service also singles out performance management as an integral part of an effective Human Resources Management and Development Strategy and thus every organisation’s pillar of success (Wilkinson, et al., 2004). Therefore, this study calls for an enquiry into efficacious performance measurement dimensions that can add value to the effectiveness of the models used for evaluating the performance of academic staff and raise institutional growth measures that encompass increased graduation rates, research output and quality teaching. As McGregor (2002) averred, the aforesaid growth measures are currently not up to standard. As Tait, Van Eeden and Tait (2002) also indicated, they should be enhanced, and the improvement initiative should be seen to be driven at the ‘coal face’ by an effective and efficient lecturing cadre acting as primary agents of transformation and as a teaching ‘corps’ of quality and substance at universities that effectively utilise research, teaching and learning and community engagement to identify and solve problems (Mapesela & Strydom, 2004).

Furthermore, research has shown that higher education institutions are facing major challenges regarding the management of the performance of academic staff (Mapesela & Strydom, 2004). On the basis of this background, this study aims to address a research problem whose resolution may assist the leadership of higher education institutions in facing these challenges: the need for empirical evidence to confirm the relevance of the seven postulated performance measurement dimensions for lecturers posited by Robbins, Odendaal and Roodt (2007).

The core research objectives of this study are therefore to:

• investigate the relevance of Robbins, et al.’s (2007, p. 373) seven performance measurement dimensions for lecturers and explore the influence of demographic variables on these dimensions

• explore and empirically test the seven performance dimensions for lecturers at universities as suggested by Robbins, et al. (2007) and thus contribute towards the creation of generally acceptable measures for assessing performance of lecturers at universities.

Although Robbins et al. (2007) intimated that research has shown that there are seven performance dimensions for the lecturer’s job, there was no evidence to support this claim and a need therefore arose to close this gap through this inquiry.

Therefore, the essential value-add that this study seeks to contribute is to provide a tested framework which universities can use as a guideline in policy formulation regarding performance measurement for lecturers.

The questions that require answers regarding this study are:

• Which of the seven performance dimensions suggested by Robbins et al. (2007) – knowledge or subject knowledge, testing procedure, student-lecturer relations, organisational skills, communication skills, subject relevance and utility of assignments – could be regarded as acceptable performance dimensions for lecturers?

• To what extent would the demographic variables of the respondents influence their perceptions of these dimensions?

This report presents the literature synopsis on performance measurement dimensions for lecturers followed by research design and the results and recommendations flowing from this study.

Performance management and performance measurement

Performance management is a goal-oriented process (Mondy, 2008, p. 224) and the term is often used interchangeably with performance evaluation, performance appraisal or performance measurement (Mello, 2006, p. 444). The conventional wisdom is that to manage performance one has to first be able to measure it (Thorpe & Holloway, 2008).

It may be necessary at the outset to discuss the underlying definitions and assumptions of the terms ‘evaluation’, ‘assessment’, ‘measurement’ and ‘performance management’, since it is essential to understand the true relationship of these terms in the performance measurement process for lecturers in the higher education environment and also, because of the closeness in the meaning of these terms (Arreola, 2000).

To give credence to these views, Airasian (2001) avers that performance evaluation judges the worth of information collected for a specific purpose such as determining effectiveness, whilst assessment is concerned with collecting, synthesising and interpreting the information that will be used in making the evaluation decision. In addition, performance measurement can be defined as a

process of assessing the performance against pre-determined measures of performance, based on key success factors (KSF) which may include measures of deviation from the norm, tracking past achievements and measures of output and input.
(Millmore, Lewis, Saunders, Thornhill & Morrow, 2007, p. 530)

Therefore, based on these assertions, it is clear that performance measurement monitors and reports how well someone or something is doing. In theory, it is a broad term applicable to people, things, situations, activities and organisations whilst performance management is a process that helps organisations to formulate, implement and change their strategy in order to satisfy their shareholders’ needs (Verweire & Van den Berghe, 2004).

In the ultimate analysis, the performance measurement concept rests on the foundation of performance management. It is not something that should be reserved for the selected few. Hence, in a high-performing organisation, measurement is very critical. If it is done correctly, both the organisation and the people within it will be impacted positively (Spitzer, 2007, p. 182).

Furthermore, if performance measurement simply means the introspection and collection of historical results, it is very likely that little useful purpose will be served from the point of view of performance management (Williams, 2002). The measurement process should therefore assist in the diagnosis of goal achievement and give some warnings in advance as input to the search for reasons for performance gaps (Williams, 2002, p. 66).

In the context of this discourse it is quite clear that measurement could be seen as an antidote to ambiguity; it forces one to impose clarity on vague concepts and to take action. What we measure communicates our priorities and thus has a powerful link to strategy (Hammer, 2007). Therefore, in the whole performance measurement process there have to be some measures that form the basis of performance measurement. The aforesaid statement begs the question: What exactly is a performance measure in the context of performance measurement? In light of the foregoing, Lichiello and Turnock (2007, p. 11) indicate that there is no ‘exactly’ when it comes to the extensive use of the term ‘performance measure’. Different people have different views regarding what constitutes a ‘measure’. The good news, however, is that although there are many different ideas about what comprises a measure, they have one commonality, which is that ‘a performance measure measures something … usually progress towards an objective or goal’. Therefore it does not matter if it is called a performance measure, a performance indicator or in some cases a performance standard. What matters is the fundamental idea that a performance measure measures something! Thus, a measure can be defined as a specific quantitative or qualitative representation of a capacity, process or outcome deemed relevant to the assessment of performance. Hence, performance measures should be designed to drive people towards the overall vision of the organisation and to focus on the future and not simply on the past (Millmore, et al., 2007). This then translates into pressure on managers in the higher education environment to ensure that their staff (particularly academic staff) are working more productively and that their institutions are responsive to the changing demands placed upon them by stakeholders.

Furthermore, implicit in the pressure experienced by managers in the higher education environment is the demand for greater productivity in the wake of budget constraints, increased enrolments and more explicit social demands placed upon higher education institutions. It is therefore inevitable in the aforesaid scenario that the work that academic staff are required to perform will continually be under scrutiny, challenging institutional managers to manage the performance of their staff more effectively with the view to achieving higher levels of productivity and meeting the ever increasing social demands and the growing number and range of institutional objectives and goals (Parsons & Slabbert, 2001).

It may also be important to note that to attract much needed funding, the government and other private donors need to be sufficiently convinced about the institutional success in securing acceptable student numbers and a satisfactory pass rate. These expectations can reasonably be met by empowering staff to deliver through use of an effective performance management system that measures not only performance output in teaching research and service rendering, but also in the required competencies of the lecturing staff.

Competencies as performance dimensions

Competencies could be regarded as overt and manifest behaviour that allows a person to perform competently, which means that dimensions in the context of the aforesaid statement refer to a cluster of behaviours that are specific, observable and verifiable and that can be reliably and logically classified together. Therefore, competencies could be seen as synonymous with performance dimensions largely because ‘the behavioural interpretation of the term competency is simply a replacement (or synonym) for performance dimensions’ (Williams, 2002, p. 101). This study suggests that the required performance output (results) be combined with the behavioural dimensions that brought about that level of performance (i.e. competencies) so as to determine the lecturer’s level of performance.

Performance dimensions for a university lecturer’s job

An approach adopted internationally in line with competency-based thinking suggests that the following are some of the competencies that may be associated with lecturers’ positions (Arreola, 2000; Franzen, 2003; Hill, Lomas & MacGregor, 2003; Sinclair & Johnson, 2000; Spitzer, 2007; White, 2008):

• communication

• interpersonal skills

• leadership

• self-development

• development of others

• change management

• commitment to quality

• student and stakeholder orientation

• innovation and creativity

• decision-making

• judgement

• research

• subject mastery

• professional relations

• learner assessment

• organisational skills

• listening skills

• project management

• originality

• critical analytical skills

• the ability to challenge conventional views.

For this study the focus will fall on investigating the seven performance dimensions of the lecturer’s job as posited by Robbins, et al. (2007). The researcher’s inquiry is poised to confirm or refute these dimensions as descriptors of the behaviour that lecturers are expected to exhibit when they successfully perform their duties. They are also the perspectives which could assist supervisors in the development of performance plans and ‘are generally categorised into three types, namely universal dimensions (included in all performance plans); job content dimensions (which vary from job to job); and other performance dimensions’ (Anon., 2006, p. 1). For the purpose of this study, attention will be drawn to the universal dimensions with particular focus on the seven performance dimensions for lecturers, as suggested by Robbins, et al. (2007):

• knowledge (subject knowledge)

• testing (assessment) procedures

• student-teacher relations

• organisational skills

• communication skills

• subject relevance

• utility of assignments.

It may be important to note that these dimensions overlap with some of those suggested by the scholars mentioned earlier.

Knowledge (subject knowledge): As learning is a relatively permanent positive or negative change in the learner’s behaviour that occurs as a result of practice or experience, the lecturer’s knowledge-base in the subject is fundamental to the creation and enhancement of the student’s opportunity to learn well (Analoui, 2007). The knowledge-base referred to may include the declarative knowledge of facts and concepts, procedural knowledge of what to do and the motivation which could include effort and persistence to excel (Aguinis, 2009, p. 79). Sinclair and Johnson (2000) posit thorough knowledge of the subject material as essential to accurate instruction and clear communication of content to students. The performance measurement for lecturers should therefore include some mechanisms to measure the faculty member’s expertise in the content area. Competencies in this regard include not only content knowledge, but also the ability to organise, integrate, adjust and adapt this content in ways that make it accessible and thought provoking to the learner (Arreola, 2000). This dimension also includes the ability to advance scholarship and generate research. Implicit in the latter is the advancement of knowledge through discovery, integration, dissemination and application of knowledge (Gill & Johnson, 1997; White, 2008).

Testing (Assessment) procedure: This dimension entails designing, developing and implementing tools and procedures for assessing students’ learning outcomes and is part of instructional design. The required skills in this dimension are, amongst others (Arreola, 2000):

• designing tests

• preparing learning objectives

• developing syllabi

• preparing handouts and other supporting materials

• properly using media and other forms of instructional technology

• organising lectures and presentations for maximal instructional impact.

Feedback to students during the sessions and assignments is of paramount importance under this dimension (Hill, Lomas & MacGregor, 2003).

Student-teacher relations: This dimension relates to the creation and maintenance of a student-centred environment that maintains and sustains learning and development. It is a dimension that is integral to high learner-performance. A teacher who can develop relationships that foster and encourage student engagement will enhance learning (Arreola, 2000). Encouragement of active participation in the classroom creates a supportive environment where questions and class discussions are promoted, which imbues the lecturer with enthusiasm for the subject and facilitates opportunities for generating regular informal feedback on students, as well as deeper understanding of the subject matter (Sinclair & Johnson, 2000).

Organisational skills: Organising is a dimension that influences overall student experiences, as well as the quality of teaching (Sinclair & Johnson, 2000). It also relates to those bureaucratic skills utilised for operating and managing a course including, but not limited to, timely grading of examinations, maintaining published office hours, arranging for and coordinating guest lecturers and generally making arrangements for facilities and resources required in the teaching of the course. Organisation of the course materials and other academic activities has a profound effect on the learners’ ability to succeed in their area of learning. The lecturer therefore needs to provide an ongoing framework that orients learners to the course ideas, materials and activities. Excellent teachers do their work in a well-prepared and well-organised manner. They organise their work in such a way that they allow themselves space to engage in activities relating to corporate citizenship and community outreach. Therefore, a performance measurement instrument should search for evidence of careful planning, in view of the fact that the quality of planning would be an indication of successful learning (Arreola, 2000).

Communication skills: Communication is an important aspect in structural delivery skills. Structural delivery skills can be defined as those human interaction skills and characteristics which:

• facilitate clear communication of information, concepts and attitudes

• promote learning by creating an appropriate and effective learning environment.

Characteristics such as clarity in exposition, demonstrated enthusiasm, ability to motivate, ability to capture and hold the interest and attention of learners and create an overall learning environment appropriate to the content being taught, are all included in the communication skills dimension (Arreola, 2000). It is therefore essential that the lecturer communicates ideas clearly and interestingly to the learners (Arreola, 2000).

Subject relevance: This dimension relates to the appropriateness of the content provided during the lesson and the way in which it is presented to the learners. Subject relevance should also entail accuracy of the facts encapsulated in the course content. It is also important that relevant assessment instruments used in the course add to the relevance of the design architecture of the course and are within the frame of reference of the course materials and the real world associated with the subject. Questions should be set at an appropriate level and graded according to the learning outcomes of the module. The textbooks and reference materials recommended by the lecturers, as well as the appropriate use of instructional methods and techniques in the subject, are also of vital importance. Lastly, the course being offered should be valued by the workplace (Hill, et al., 2003; White, 2008).

Utility of assignments: It is important that assignments given to students are meaningful and enhance their learning and developmental needs. They should be meaningful in the sense of being within the frame of reference of the course material and the ‘real world’ associated with the subject and socioeconomic life within which a student lives (White, 2008, p. 24). To further enhance utility, the assignment should reflect its learning objectives and make it interesting and challenging to the student (Layman, Williams & Slaten, 2007, p. 1).

In order to achieve the declared research objectives, the following research design was adopted for the study.

Research Design

Research approach

The quantitative research approach, guided by the positivist paradigm, was chosen for this study. In terms of this paradigm, everything is observable and can be measured and therefore explained (Van Heerden & Roodt, 2007).

The reason for choosing the quantitative research tradition is largely because the researcher’s inquiry falls within an international domain and across the divide of countries and cultures and where a number of globally understood and accepted variables can be observed. Also, the standardised and objective data related to this study’s research problem had to be collected from individual participants at universities across a wide divide of countries and cultures (Parker, 2007). A quantitative research approach would furthermore ensure uniformity and consistency of data gathering and analysis in an attempt to cast the researcher’s net quite widely in order to obtain as much data as possible with the intention of arriving at findings that could possibly be broadly generalised.

Research methodology
A quantitative survey questionnaire method was employed to explore the performance dimensions for lecturers at universities.

Research context
Participants involved in the study were drawn from a population of lecturers at South African universities, as well as from ‘top universities’ in the USA, the UK, Nigeria and Australia, who had access to e-mail facilities. These universities were chosen on the basis of their reputation of being the best on their respective continents. Hence, those chosen in the USA were from the higher education institutions commonly referred to as Ivy League universities, whilst those chosen in the other countries were ranked amongst the top hundred on their respective continents. For ethical reasons the names of the universities are not disclosed in this study. The prospective respondents were also assured that their identities would not be revealed to anyone, including their universities.

Participants
Participants in this study were permanently employed by their respective universities. In view of the time constraints and the potential administrative and logistical difficulty of drawing the desired sample from the total population, a convenience sampling method was adopted. A questionnaire was electronically sent to 500 academics at the selected universities who had access to e-mail facilities. In total, 178 questionnaire responses were received, representing a 36% response rate. However, eight respondents described their positions within their institutions as administrative and had to be omitted, reducing the number of respondents included in the analyses from 178 to 170.

In general and especially when using an online questionnaire (as in the case of this study), a 30% response rate is regarded as satisfactory; the response rate of 36% therefore exceeded this threshold (Saunders, Lewis & Thornhill, 2003; Tustin, Ligthelm, Martins & Van Wyk, 2005).
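As a sanity check, the response-rate arithmetic reported above can be reproduced in a few lines. This is an illustrative Python sketch; the variable names are our own:

```python
# Figures as reported in the study.
sent = 500          # questionnaires e-mailed out
received = 178      # responses returned

response_rate = received / sent
print(f"Response rate: {response_rate:.0%}")   # 0.356, reported as 36%

# Eight respondents held administrative posts and were excluded.
usable = received - 8
print(f"Usable responses: {usable}")           # 170
```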

An overview of the biographical details of the respondents is presented in Table 1.

The biographical data in Table 1 indicates that 56.47% of the respondents were male, whilst 43.53% were female and the majority of responses received in the age group category were from respondents between 25 and 50 years old. The respondents thus constituted a reasonable spread between the younger and the older, as well as between male and female. Regarding qualification, 51% of the respondents had doctorate degrees, 41% honours and master’s degrees and only 8% had bachelor’s degrees. All the respondents were therefore reasonably qualified.

Tables 1, 2, 3 & 4

Measuring instruments

The measuring instrument specifically designed for this study was titled the Performance Measurement Dimension Questionnaire. It was designed to measure specific performance dimensions for lecturers at selected universities through a 5-point Likert rating scale. A rating of ‘1’ indicated strong dislike and was labelled ‘not at all’; ‘2’ indicated dislike and was labelled ‘to a very little extent’; ‘3’ indicated agreement ‘to a moderate extent’; ‘4’ indicated agreement ‘to a reasonable extent’; and ‘5’ represented agreement ‘to a great extent’. The instrument consisted of biographical information, general perception of performance management and performance dimensions for academic staff.
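For illustration, the 5-point scale described above could be coded as follows. This is a hypothetical sketch, not part of the study’s instrument; the names are our own:

```python
# Hypothetical numeric coding of the questionnaire's 5-point Likert scale.
LIKERT_SCALE = {
    1: "not at all",
    2: "to a very little extent",
    3: "to a moderate extent",
    4: "to a reasonable extent",
    5: "to a great extent",
}

def code_response(label: str) -> int:
    """Map a verbal anchor back to its numeric rating."""
    inverse = {text: rating for rating, text in LIKERT_SCALE.items()}
    return inverse[label]
```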

Biographical information (Section 1): This section contains the respondents’ personal particulars in respect of age, gender, length of service, position within the organisation, qualification and level of seniority. The information received could be used to make comparisons about respondents’ tendencies with respect to how they responded to questions and also, when employing factor analysis, to determine whether the postulated factors differ according to demographic variables.

General perception of performance management (Section 2): The items contained in this section attempted to solicit information in respect of the respondents’ general perception of performance management.

Performance dimensions for academic staff (Section 3): This section highlights the seven performance dimensions for the job of a lecturer. An attempt was also made to assign at least five sub-dimensions to each question on the basis of a 5-point intensity scale, where 1 signified low preference, whilst 5 signified high preference.

Research procedure

In broad terms, the research procedure involved the identification of international and South African universities that were considered by academic standards to be amongst the best.

In order to establish contacts and facilitate the distribution of the questionnaire at the selected universities, these universities were visited before the questionnaires were sent out. Thereafter, the questionnaire was developed and a pilot test administered to remove the possible flaws contained in it.

A hard copy of the questionnaire was then converted into a web-based electronic format in order to ensure a quicker and more accurate response and to cover as many respondents as possible. The respondents were requested to log in to the link provided. The URL button opened up a pop-up window reflecting the questionnaire they were expected to complete. Each question contained a tamper-proof encrypted serial number set to expire after a certain period. Thus, respondents could not change the contents of the questionnaire, nor could they write on the instrument itself.

Once the questionnaire had been completed, the respondents were requested to click the ‘submit’ window at the bottom of the questionnaire. The results would then be automatically encrypted and stored in the database hosted at the offices of the appointed statistical consultant responsible for managing the data. Each respondent’s name would then be crossed off the list, thus guaranteeing the privacy and confidentiality of the information submitted and also preventing the respondents from completing the questionnaire more than once. The questionnaire was resent several times to those who did not respond.

Statistical analysis

After a reasonable number of responses had been received, the collated data was sent for analysis to a private statistical consultant, who used the SAS statistical analysis package (version 9.1) to do the analyses.

Scale reliability testing and calculation of dimension scores

The main focus of this subsection was to verify the internal consistency reliability of the performance measurement dimensions probed within the performance management questionnaire.

Scale reliability: In the foregoing discussion, factor analysis confirmed that all aspects probed within the performance measurement questionnaire validly addressed the academic performance management criteria. The next question considered was whether the subsets of questionnaire items designed to define the seven performance measurement dimensions (within the general performance measurement arena) all truly contributed towards explaining the particular performance measurement aspects.

Item analysis (also referred to as scale reliability testing) was conducted on each subset of questionnaire items to establish internal consistency reliability as shown in Table 2. Internal consistency is indicated by the Cronbach Alpha coefficient calculated as part of the scale reliability testing. An Alpha value greater than or equal to 0.7 is generally seen as a good indication of reliability (Hatcher, 1994, p. 137).
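The Alpha coefficients reported in this study were produced in SAS; purely as an illustration of the underlying computation, the coefficient for a subset of k questionnaire items can be sketched as follows (the ratings below are hypothetical and do not come from the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's Alpha for an (n_respondents x k_items) matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the subset
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings from six respondents on a three-item subset
ratings = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 3, 3],
])
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))  # 0.94 for this sample; values >= 0.70 count as reliable
```

An item whose removal raises the Alpha of the remaining subset is a candidate for exclusion, which mirrors the item-exclusion decisions reflected in Table 2.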

Calculation of dimension score: As can be seen from Table 2, each row of the table represents the results of an analysis conducted on the subset of questionnaire items designed to represent a particular performance measurement dimension. The columns reflect the subset of items designed to represent the construct, the items that the analysis indicated did not contribute towards explaining the construct and were therefore excluded, the Cronbach Alpha coefficient, the dimension mean scores and the standard deviations.

The construct mean scores presented in the second-last column of the summary table (Table 2) represent a general measure of respondents’ perceptions of the PM aspects. For example, the construct mean score for the ‘knowledge’ dimension, with a value of 4.10 (high on the perception rating scale), indicates that respondents perceived an academic’s knowledge of the subject matter as an important element of PM. The dimension mean scores of 3.16 for ‘organisational skills’ and 3.17 for ‘assessment procedures’ indicate that respondents regarded these PM aspects as less important than subject knowledge.
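The article does not spell out the aggregation formula behind these construct mean scores; one common convention, assumed here purely for illustration, is to average each respondent's retained-item ratings and then average across respondents (all values below are hypothetical, not the study's data):

```python
import statistics

# Hypothetical 5-point responses (rows = respondents, columns = retained
# items) for one dimension subset; values are illustrative only.
knowledge_items = [
    [4, 4, 5],   # respondent 1
    [5, 4, 4],   # respondent 2
    [3, 4, 4],   # respondent 3
]

# Assumed convention: each respondent's dimension score is the mean of his
# or her item responses, and the construct mean score is the mean of those
# per-respondent scores.
respondent_scores = [statistics.mean(row) for row in knowledge_items]
construct_mean = statistics.mean(respondent_scores)
print(round(construct_mean, 2))  # 4.11 for this sample
```

On a 5-point scale, a construct mean near 4 would sit high on the perception rating scale, in the same way the reported 4.10 for ‘knowledge’ does.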

Discussion

The general perception of the PM process and the summary of deductions are discussed in detail.

General perceptions of the performance measurement process

General perceptions of the PM process also enquired into the respondents’ perceptions of competencies regarded as crucial to the PM process. To this end, the combined agreement ratings indicated that some competencies were regarded as significantly more important than others. For example, ‘subject mastery’ and ‘research’ were perceived as significantly more important than ‘change management’ and ‘project management’.

Performance measurement dimensions designed as subsets of questionnaire items to describe specific performance measurement issues

The general impression as to how respondents perceived the subsets of questions designed to jointly describe PM issues was obtained via frequency tables and descriptive statistics. Analysis of the subsets of items in this regard indicated that ‘subject knowledge’ (dimension 1), ‘learner-lecturer relations’ (dimension 2), ‘communication skills’ (dimension 3), ‘organisational skills’ (dimension 4) and the ‘utility of assignments’ (dimension 7) revealed a positive perception tendency amongst respondents. Although mostly positive, the perception pattern for ‘assessment methods’ seemed to present more than one aspect of assessment (addressed in the scale reliability analyses).

Analysis of variance

The statistical significance of the apparent influential effects of university, position, age and qualifications was validated by analyses of variance.
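The analyses of variance in this study were run in SAS; as a minimal sketch of the test itself, the one-way ANOVA F-statistic can be computed by hand as follows (the group labels and scores are hypothetical, not the study's data):

```python
def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA across several groups of scores."""
    all_scores = [x for g in groups for x in g]
    n_total, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n_total

    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    ms_between = ss_between / (k - 1)       # between-group mean square
    ms_within = ss_within / (n_total - k)   # within-group mean square
    return ms_between / ms_within

# Hypothetical perception scores for one PM dimension, grouped by university
univ_a = [4.3, 4.5, 4.1, 4.6, 4.4]
univ_b = [3.8, 3.6, 3.9, 3.7, 3.5]
univ_c = [3.9, 4.0, 3.7, 4.1, 3.8]
f_stat = one_way_anova_f([univ_a, univ_b, univ_c])
# A large F (compared against the F-distribution's critical value at the
# chosen significance level, e.g. 5%) indicates that at least one group's
# mean perception differs significantly from the others.
print(round(f_stat, 2))
```

The same test applies unchanged when the grouping variable is position, age or qualifications rather than university.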

The significant influence of ‘university’ on ‘subject knowledge’ and ‘assessment procedure’ perceptions indicates that respondents from the USA rated ‘subject knowledge’ significantly higher, and ‘assessment procedures’ significantly lower, than respondents from the other institutions.

Qualifications significantly influenced perceptions of ‘organisational skills’ and ‘assessment procedures’ in the sense that doctorate respondents perceived both these aspects significantly less positively than graduate respondents (‘organisational skills’) and master’s or honours respondents (‘assessment procedures’).

Perceptions of ‘assessment procedures’, ‘subject relevance’ and ‘utility of assignments’ indicated that the greatest difference in perceptions for all of these PM dimensions existed between senior lecturers in the age category 56–60 years and those in the age category 41–45 years. For ‘utility of assignments’, professors in the age group 41–45 were also included, the latter group being significantly more convinced of the impact of these PM issues than the senior group.

As stated initially, the core objectives of this study were to:

• investigate the relevance of Robbins, Odendaal and Roodt’s (2007, p. 373) seven performance measurement dimensions for lecturers and explore the influence of demographic variables on these dimensions

• explore and empirically test the seven performance dimensions for lecturers at universities as suggested by Robbins, et al. (2007) and thus contribute towards the creation of generally acceptable measures for assessing performance of lecturers at universities.

According to the results of this research, the essential value-add of this study has been achieved, that is, to contribute to a tested framework which universities can use as a guideline in policy formulation regarding performance measurement for lecturers.

Tables 5, 6, 7 & 8

Summary of findings

The following are the findings in line with the core research objectives of the study.

Research objective 1: Investigate the relevance of Robbins, Odendaal and Roodt’s (2007, p. 373) seven performance measurement dimensions for lecturers and explore the influence of demographic variables on these dimensions.

It became evident from the results that respondents perceived aspects such as ‘subject knowledge’, including:

• the currency of the subject material and in-depth tuition of the subject matter

• ‘learner-lecturer relations’

• ‘communication skills’

• ‘subject relevance’

• ‘utility of assignments’

in a more positive light than ‘organisational skills’ and ‘assessment procedures’, although the latter may still be included in the PM dimension framework for lecturers. Competencies such as ‘project management’ and ‘change management’, however, were perceived as unimportant, thus suggesting that they should not form part of the dimensions for performance measurement of lecturers. This is contrary to the view that competencies such as change management and project management are associated with performance excellence for lecturers (Franzen, 2003; Sinclair & Johnson, 2000).

Research literature also revealed that lecturers’ knowledge base in the subject area is fundamental to the creation and enhancement of the students’ opportunity to learn (Aguinis, 2009; Arreola, 2000; Sinclair & Johnson, 2000).

Insofar as ‘organisational skills’ and ‘assessment procedures’ are concerned, theory specifies that skills in organisation (such as designing tests, preparing learning objectives, and developing syllabi, handouts and other supporting materials), including organising lectures and presentations for maximum instructional impact, should form part of a lecturer’s organising ability and capability to execute assessment procedures (Arreola, 2000; Hill et al., 2003).

The findings of the present study revealed that at the 5% level of significance based on ANOVA:

• Respondents from the USA rated ‘subject knowledge’ significantly higher than respondents from the other universities, whilst at the same time rating ‘assessment procedures’ significantly lower. The USA respondents therefore perceived these aspects significantly differently from those in the other countries.

• ‘University’, academic ‘position’, ‘age’ and, to some extent, ‘experience’ and ‘qualifications’ affected the respondents’ perceptions. However, ‘gender’ appeared to have no influential effect: the opinions of men and women were the same in respect of these matters.

• ‘Qualifications’ significantly influenced perceptions of ‘organisational skills’ and ‘assessment procedures’ in the sense that doctorate respondents perceived both these aspects significantly less positively than graduate respondents. In other words, the doctorate respondents felt ‘organisational skills’ and ‘assessment procedures’ should not be part of the required competencies for lecturers at universities.

In terms of theory, the findings of the present study revealed that the assessment criteria in PM should take into account: teaching workload, or the distribution of workload between members of the department; the results of student evaluations based on an acceptable format used by the faculty; student numbers per course; research output, with emphasis on accredited output; and corporate citizenship, which encompasses service to the community with the focus on service without compensation. Members’ participation in and availability for the faculty’s activities, such as graduation ceremonies, meetings and committees, as well as their participation in and availability for the institution in general (e.g. portfolio committees, meetings and task teams), would also constitute a critical element of corporate citizenship (Wilkinson et al., 2004, p. 105).

Research objective 2: Explore and empirically test the seven performance dimensions for lecturers at universities as suggested by Robbins, et al. (2007) and thus contribute towards the creation of generally acceptable measures for assessing performance of lecturers at universities.

The study confirmed that the seven postulated performance measurement dimensions of:

• ‘subject knowledge’

• ‘assessment skills’

• ‘student-lecturer relations’

• ‘organisational skills’

• ‘communication skills’

• ‘subject relevance’ (course design skills)

• ‘utility (meaningfulness) of assignments’

could be regarded as aspects of a uni-structured model of performance measurement.

Tables 9a, 9b & 9c

Reliability of the dimensions tested

The seven postulated performance dimensions were tested for reliability in line with the aforementioned objective and the generally accepted Cronbach Alpha limit of 0.70. The test yielded Cronbach Alpha coefficients of between 0.70 and 0.89, indicating acceptable reliability and internal consistency of the postulated dimensions.

Practical or managerial implications

The study is poised to inform policy on performance measurement for lecturers. It can thus assist in introducing a performance culture and a broadly researched measuring tool that will help universities to manage the performance of lecturing staff effectively, and help academic leaders in this sector to identify the developmental needs of lecturing staff.

In addition, the literature review revealed that organisations lacking a performance culture and a reliable system of managing performance often find it extremely difficult to reward good performers fairly (Kaplan & Norton, 1996; Kushun, 2002; Viedge & Conidaris, 2000).

The absence of such a performance measuring tool often causes demotivation amongst staff, especially amongst excellent performers and those in need of development, who may come to realise that good performance and a positive attitude towards work do not translate into either intrinsic or extrinsic rewards (Mapesela & Strydom, 2004).

Tables 10a, 10b, 11a & 11b

Recommendations

Based on the value of the study, the following recommendations are made:

• The suggested dimensions be used as a guiding framework for development of policies and as an instrument for measuring performance of academic staff at universities. Universities in South Africa that do not have a performance management system can use the PM framework to develop their systems.

• The seven performance dimensions tested in this study be integrated with mutually agreed goals and the workload consideration when performance agreements are being entered into with lecturers.

Limitations of the study

The limitations of the study were identified as follows:

• In general terms, academics seemed reluctant to respond to a survey questionnaire. Consequently, the ultimate response to the questionnaire was not very good, especially for factor analysis purposes, and this renders the results difficult to generalise.

• Sample size appeared to be a limitation in this study as well. Therefore, the suggested performance measurement framework can at best be seen as a preliminary design framework with the view to verifying these results in larger samples. The said results can, however, be used as guidelines for universities in their endeavour to develop performance measurement for their lecturing staff.

• Not all the lecturers approached had e-mail addresses through which they could be contacted. A postal questionnaire could also have been used to supplement the electronic one and thus improve the response rate.

• The performance dimensions or the critical success factors and sub-factors were not weighted or ranked in order of significance.

• The inclusion of only five countries in the survey seemed to be insufficient to give a reasonable international or global picture of the results.

Suggestions for future research:

It is suggested that a future study:

• Utilises an instrument that narrows its attention to the performance measurement dimensions only and excludes the other sections of the questionnaire relating to general perceptions of performance management. This might improve the response rate and would be less intimidating, as respondents would be expected to answer relatively fewer questions.

• Be limited to either distance education institutions or contact institutions and not both, as the modes of tuition delivery at distance and contact institutions are not necessarily the same.

• Be repeated on a wider sample, broadened and strengthened to include not only the seven dimensions, but the other related performance dimensions including research skills incorporating analytical and synthetic ability as well as scholarship, analytical skills, communication and writing skills and ability to challenge conventional views.

• Be repeated and the performance dimensions or critical success factors and sub-factors be weighted or ranked according to their order of significance.

Tables 12a, 12b, 13a & 13b

Conclusion

This study achieved its objectives of testing lecturers’ perceptions and developing the foundation for a social science framework consisting of seven performance measurement dimensions, which could be used to assist universities in managing the performance of academic staff. This was in line with the core research objectives of the study, which were to:

• investigate the relevance of Robbins, et al.’s (2007, p. 373) seven performance measurement dimensions for lecturers and explore the influence of demographic variables on these dimensions

• explore and empirically test the seven performance dimensions for lecturers at universities as suggested by Robbins, et al. (2007) and thus contribute towards the creation of generally acceptable measures for assessing performance of lecturers at universities.

References

Aguinis, H. (2009). Performance Management. London: Pearson Prentice Hall.

Airasian, P.W. (2001). Classroom Assessment: Concepts and Applications. Blacklick: McGraw-Hill.

Analoui, F. (2007). Strategic Human Resource Management. London: Thomson Learning.

Anon. (2006). Virginia Tech. Performance Management Programme: Performance Dimension Guideline. June 23, 2006.

Arreola, R.A. (2000). Developing a Comprehensive Faculty Evaluation System. Bolton, MA: Anker Publishing Company, Inc.

Franzen, K. (2003). A Critical Overview of Trends and Practices in Performance Management in South African Higher Education Environment. SAJHE, 17(3), 131−138.

Gill, J., & Johnson, P. (1997). Research Methods for Managers. London: Paul Chapman Publishing.

Hammer, M. (2007). Deadly Sins of Performance Measurement and How to Avoid Them: Special Report, Measuring to Manage. MIT Sloan Management Review, Spring 2007.

Hatcher, L. (1994). A Step-by-Step Approach to Using the SAS System for Factor Analysis and Structural Equation Modeling. Cary, NC: SAS Institute Inc.

Hill, L., Lomas, L., & MacGregor, J. (2003). Students’ Perceptions of Quality in Higher Education. Quality Assurance in Education, 11(1), 15−20.

Kaplan, R.S., & Norton, D.P. (1996). The Balanced Scorecard: Translating Strategy into Action. Boston: Harvard Business School Press.

Kushun, R. (2002). Guide to South African Universities and Technikons. Developing a Global Perspective. Durban: Artwork Publishing.

Layman, L., Williams, L., & Slaten, K. (2007). Note to Self: Make Assignments Meaningful. Computer Science Education. Raleigh, NC: North Carolina State University.

Lichielle, P., & Turnock, B.J. (2007). Turning Point: Guidebook for Performance Measurement. Washington: Turning Point National Programme Office.

McGregor, K. (2002). Study South Africa. Higher Education in Transition. Retrieved May 31, 2002, from www.und.ac.za/und/ieasa

Mapesela, M.L.E., & Strydom, F. (2004, November). Performance Management of Academic Staff in the South African Higher Education System: A Developmental Project. Paper presented at the OECD Conference on Trends in the Management of Human Resources in Higher Education. Bloemfontein: University of the Free State.

Mello, J.A. (2006). Strategic Human Resource Management. Mason, OH: Thomson South-Western.

Millmore, M., Lewis, P., Saunders, M., Thornhill, A., & Morrow, T. (2007). Strategic Human Resources Management Contemporary Issues. London: Prentice Hall.

Mondy, R.W. (2008). Human Resource Management. Upper Saddle River, NJ: Pearson Prentice-Hall.

Parker, A.J. (2007). Validity of World-class Business Criteria Across Developed and Developing Countries. PhD Thesis. Johannesburg: University of Johannesburg.

Parsons, P.G., & Slabbert, A.D. (2001). Performance Management and Academic Workload in Higher Education. South African Journal for Higher Education, 15(3), 74–81.

Robbins, S.P., Odendaal, A., & Roodt, G. (2007). Organisational Behaviour – Global and South African Perspective. South Africa: Pearson Education.

Saunders, M., Lewis, P., & Thornhill, A. (2003). Research Methods for Business Students. London: Prentice Hall – Financial Times.

Simmons, J. (2002). An Expert Witness Perspective on Performance Appraisal in Universities and Colleges. Employee Relations, 24(1), 86–100.

Sinclair, H., & Johnson, W. (2000). Students and Staff Perceptions of ‘Good’ Teaching Feedback. Educational Studies, 25(3), 1−5.

Spitzer, D. R. (2007). Transforming Performance Measurement: Rethinking the Way We Measure and Drive Organisational Success. New York: AMACOM.

Tait, M., Van Eeden, S., & Tait, M. (2002). An Exploratory Study on Perceptions of Previously Educationally Disadvantaged First Year Learners of Law regarding University Education. South African Journal for Higher Education, 16(2), 177−182.

Taylor, B., & Harris, G. (2002). The Efficiency of South African Universities: A Study based on Analytical Review Technique. South African Journal for Higher Education, 16(2), 183–192.

Thorpe, R., & Holloway, J. (2008). Performance Management – Multi-disciplinary Perspective. New York: Palgrave Macmillan.

Tustin, D.G., Lighthelm, A.A., Martins, J.H., & Van Wyk, P.J. (2005). Marketing Research in Practice. Pretoria: UNISA Press.

Van Heerden, W., & Roodt, G. (2007). Development of a Measuring Instrument for Assessing a High Performance Culture. South African Journal of Human Resources Management/SA Tydskrif vir menslikehulpbronbestuur, 33(1), 18−28.

Verweire, K., & Van den Berghe, L. (2004). Integrated Performance Management – A Guide to Strategy Implementation. Upper Saddle River, N.J: Wiley Pearson, Prentice Hall.

Viedge, C., & Conidaris, C. (2000). The Magic of the Balance Score Card. People Dynamics, 18(7), 38−43.

White, A. (2008, April 15). Managing Academic Performance: Understanding Development in the Academic Environment. Guardian News and Media Limited, pp. 1–29.

Wilkinson, A., Fourie, M., Strydom, A.H., Van der Westhuyzen, L.J., & Van Tonder, S.P. (2004). Performance Management for Academic Staff in South African Higher Education – A Developmental Research Project. Bloemfontein: Handisa Printers.

Williams, R.S. (2002). Managing Employee Performance: Design and Implementation in Organizations. Australia: Thomson Learning.