Article Information

Authors:
Pharny D. Chrysler-Fox1
Gert Roodt1

Affiliations:
1Department of Industrial Psychology and People Management, University of Johannesburg, South Africa

Correspondence to:
Pharny Chrysler-Fox

Postal address:
PO Box 524, Auckland Park 2126, South Africa

Dates:
Received: 28 Aug. 2013
Accepted: 28 May 2014
Published: 01 Sept. 2014

How to cite this article:
Chrysler-Fox, P.D., & Roodt, G. (2014). Principles in selecting human capital measurements and metrics. SA Journal of Human Resource Management/SA Tydskrif vir Menslikehulpbronbestuur, 12(1), Art. #586, 13 pages. http://dx.doi.org/10.4102/sajhrm.v12i1.586

Copyright Notice:
© 2014. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Principles in selecting human capital measurements and metrics
Abstract

Orientation: Human capital has surpassed physical and natural resources as the principal resource of wealth creation. As a result, senior management relies increasingly on appropriate people information to drive strategic change. Yet measurement within the human resource function predominantly informs decisions in support of efficiency and effectiveness. Consequently, a dissimilar understanding of measurement expectations between senior management and the human resource function largely persists.

Research purpose: The study explored principles in selecting human capital measurements, drawing on the views and recommendations of human resource management professionals, all experts in human capital measurement.

Motivation for the study: The motivation was to advance the understanding of selecting appropriate and strategic valid measurements, in order for human resource practitioners to contribute to creating value and driving strategic change.

Research design, approach and method: A qualitative approach, with purposively selected cases from a selected panel of human capital measurement experts, generated a dataset through unstructured interviews, which were analysed thematically.

Main findings: Nineteen themes were found. They represent a process that considers the centrality of the business strategy and a systemic integration across multiple value chains in the organisation through business partnering, in order to select measurements and generate management level-appropriate information.

Practical/managerial implications: Measurement practitioners, in partnership with management from other functions, should integrate the business strategy across multiple value chains in order to select measurements. Analytics becomes critical in discovering relationships and formulating hypotheses to understand value creation. Higher education institutions should produce graduates able to deal with systems thinking and to operate within complexity.

Contribution: This study identified principles to select measurements and metrics. Noticeable is the move away from the interrelated scorecard perspectives to a systemic view of the organisation in order to understand value creation. In addition, the findings may help to position the human resource management function as a strategic asset.

Introduction

Senior and line managers and human resource practitioners (HRPs) are experiencing a gap between the demand for and the supply of human capital information, owing to inappropriate measurements. Human capital has surpassed physical and natural resources as a resource of wealth creation (Bassi & McMurrer, 2006). As a result, management information is essential to support decisions. On the demand side, information supports decisions to drive strategic success (Cascio & Boudreau, 2011), the allocation of resources (Becker, Huselid & Ulrich, 2001) and investment in people and practices (Cascio, 2006; Cascio & Boudreau, 2011). Information thus not only drives strategic change (Cascio & Boudreau, 2011) and financial success (Becker et al., 2001), but also focuses attention on value creation, the human resource (HR) function’s credibility and its value as a strategic asset (Becker et al., 2001).

Yet, the supply side presents challenges. Becker, Huselid and Beatty (2009) articulate issues of inappropriate measurements that ‘generate more questions than answers’ (p. 145). Measurements rarely drive strategic change (Cascio & Boudreau, 2011). Lawler and Boudreau (2009) highlight that measurements are also used to justify investments in HR processes or programmes and to support the utilisation and deployment of resources. These contribute to a limited view of organisation-wide information. In addition, measurements lack sophistication and cannot provide objective evidence (Boudreau & Ramstad, 2007; Burkholder, Golas & Shapiro, 2007). This leads to HRPs facing challenges in selecting suitable measurements.

The purpose of this research was to answer the question ‘What are the principles in selecting human capital measurements?’ The concomitant research objective was to explore and describe themes amongst a selected panel of expert practitioners at executive level (or engaging at executive level) who belong to a community specialising in human capital measurement and who were directly involved, exerted leadership and consulted in this field. For clarity, human capital refers to the collective attributes of employees (the workforce), as opposed to human resource management, which refers to the management practices applied to human capital. This study employs the term human capital to refer both to employees and to the HR function.

Literature has progressively addressed HR measurement. Rooted in Kaplan and Norton’s (1996) Balanced Scorecard, measurement was expanded to focus on the HR function in Becker et al.’s (2001) HR Scorecard. Measurement was subsequently extended to the workforce itself in Huselid, Becker and Beatty’s (2005) Workforce Scorecard and, more recently, to an integration of the three scorecards (Becker et al., 2009). These scorecards, to some extent, point to principles to consider in developing strategic workforce measures. However, little empirical work exists on the principles with which to select measurements. The main contribution of the study was to identify descriptive themes around principles to consider in selecting human capital measurements for a measurement system.

Literature review
Literature on measurement principles can be considered as the intersection of four elements, namely the value proposition of an organisation’s competitive strategy, the value creation logic, the value proposition of the HR function and workforce and, lastly, the integrated measurement system, as illustrated in Figure 1.

FIGURE 1: Clusters of themes based on the measuring elements.

Value proposition of an organisation’s competitive strategy
This element underscores the centrality of an organisation-specific strategy (Becker et al., 2001, 2009; Huselid et al., 2005; Kaplan & Norton, 1996; Lawler, Levenson & Boudreau, 2004). Measurements should be strategically valid to answer important strategic questions (Becker et al., 2001, 2009; Huselid et al., 2005). Starting with the measures should be avoided, as there is no inventory of useful best-practice (strategic) measures (Becker et al., 2009; Boudreau & Ramstad, 2007; Burkholder et al., 2007; Fitz-enz, 2009; Huselid & Barnes, 2003; Huselid et al., 2005). Failing to adhere to the above may result in benchmarking. Benchmarking assumes a common strategy and implementation process across organisations (Becker et al., 2001, 2009; Becker & Huselid, 2003; Singh & Latib, 2004), sends the wrong messages about what is important (Becker et al., 2009) and drives undesired behaviour (Becker et al., 2001). To counteract a reliance on benchmarking, HRPs need to replace efficiency measures with strategically valid measures that reveal the value of the current strategy (Becker et al., 2009). This will allow for identifying logical relationships (Boudreau & Ramstad, 2007).

Value creation logic
This element, firstly, considers the logic of how value is created within a competitive strategy and, secondly, identifies how the HR function and the workforce can contribute to the creation of value. Michael Porter’s value chain is an approach to understanding value (Becker et al., 2001). However, Becker et al. (2001) recommend the strategy map as a representation of a value chain. The strategy map, which is ‘a logical and comprehensive architecture for describing strategy’ (Kaplan & Norton, 1996, p. 10), aids the uncovering of relationships between functions and the financial goal of an organisation and, thus, of how value is created (Becker et al., 2001). Kaplan and Norton (1996), in the context of the Balanced Scorecard, note that ‘the four perspectives should be considered a template, not a strait jacket. No mathematical theorem exists that four perspectives are both necessary and sufficient’ (p. 43), as is evident in the later development of the HR Scorecard and the Workforce Scorecard. Common to the three scorecards are the sequential relationships, in which leading indicators impact a lagging indicator.

The strategy map should take into account alignment, intersection, responsibility, systems, relationships and value creation perspectives. Alignment between the strategy and management functions provides an understanding of an organisation’s value chain and promotes a shared understanding of what value is created and how (Becker et al., 2001). Value is created where the HR function and desired workforce behaviours intersect (Becker et al., 2001). HRPs therefore have the responsibility to indicate the drivers and enablers of HR (Becker et al., 2001) and the required workforce behaviours (Huselid et al., 2005) in the strategy map. Thereafter, the HR system should be aligned to provide those drivers and enablers (Becker et al., 2001) in direct support of strategic workforce behaviours (Becker et al., 2009; Huselid et al., 2005). In addition, attributes (both financial and non-financial, tangible and intangible) should be related, and their impact estimated, to measure the HR function’s impact on organisational performance (Becker et al., 2001). These relationships aid the creation of a line of sight (Kaplan & Norton, 2001).

Value proposition of the human resource function and workforce
The HR Scorecard (Becker et al., 2001) and the Workforce Scorecard (Huselid et al., 2005), integrated into a strategic workforce architecture (Becker et al., 2009), propose leading and lagging perspectives with which to address the previously diminished role of HR and the workforce in the Balanced Scorecard. In the HR Scorecard, a sequence of leading indicators (a high-performance work system followed by HR system alignment) impacts two lagging indicators, namely HR deliverables and HR efficiency measures (Becker et al., 2001). Thus, the HR function’s departments should be integrated in order to deliver services (Fitz-enz, 2007).

Becker et al. (2001) distinguish between core and strategic metrics to measure HR doables: core metrics refer to significant HR expenditures that make no direct contribution to the organisation’s strategy, as opposed to strategic metrics, which assess the efficiency of HR activities and processes designed to produce outcomes that serve to execute the organisation’s strategy. HR deliverables may include impact measurements, as relationships between deliverables and performance drivers may be revealed in the strategy map (Becker et al., 2001). The Workforce Scorecard consists of perspectives that are leading indicators (i.e. mindset and culture, competencies, and leadership and workforce behaviours) that impact a lagging indicator (workforce success). The leading perspectives should be integrated to deliver workforce success, which in turn becomes a leading indicator within the Balanced Scorecard’s perspectives (see Becker et al., 2009, for a discussion of how these three scorecards integrate). These scorecards’ elements influence the understanding of value on a strategy map and direct the selection of appropriate measurements (Becker et al., 2001, 2009; Huselid et al., 2005).

Measurement system
Literature (Becker et al., 2001, 2009; Huselid et al., 2005; Kearns, Walters, Mayo, Matthewman & Syrett, 2006) presents the importance of measuring relationships, which becomes evident within the value creation logic element. Becker et al. (2001) and Kearns (2003) recommend measuring the impact of HR decisions linked to the bottom line. Huselid et al. (2005) argue that relationships should be considered before the levels (Becker et al., 2001) or quantities (Huselid et al., 2005). Attributes should be related to a strategic outcome, that is, drivers of organisational and financial performance (Becker et al., 2001, 2009). Similarly, Lawler et al. (2004) call attention to impact metrics that demonstrate the link between capabilities and core competencies and the impact on an organisation’s competitive advantage. As such, measuring relationships will have ‘managerial value’ when organisational performance (Becker et al., 2001) is included and prevent silo thinking (Huselid et al., 2005).

As relationships are context specific, benchmarks are misleading and counterproductive, as they prevent insight into value creation processes and limit inferences drawn about important relationships (Becker et al., 2001). Efficiency measurements that focus on activities encourage benchmarking (Boudreau & Ramstad, 2007) and thus cannot be used to measure the impact of the HR function on organisational performance (Boudreau & Ramstad, 2003). Effectiveness measurements, in turn, measure outputs (Cantrell, Ballow & Gerkin, 2004).

An audience perspective should represent levels of decision-making and the flow of information. D. Davis (2005) propounds three levels of decision-making. At a strategic level, predictive information (long term) is needed to stimulate what-if scenarios during planning. At a tactical level, descriptive historical information, current performance information and predictive information (short term) are required to plan and control. This contrasts with a technical level, which needs descriptive historical information for the sole purpose of control. Burkholder et al. (2007) categorise information based on its flow as ‘managing up’, that is, aimed at HR and executive teams; ‘managing out’, that is, aimed at the line managers whom the HR function serves; and, lastly, ‘managing down’, which refers to intra-HR measurements to optimise the HR function.

The emphasis on the strategic context presents challenges to measurement sophistication, in particular trade-offs between strategic validity, quality and pragmatism. One approach subordinates measurement quality to the context of decision support and strategy, as the strategic context is of greater consequence; Boudreau and Ramstad (2007) refer to this when they note that ‘precision alone is not a panacea’ (p. 195). A second approach argues that ‘it is better to be approximately right than precisely wrong’ (Huselid et al., 2005, pp. 133–134): once consideration has been given to the business strategy, the process does not start with the metric itself in isolation. A third approach focuses exclusively on measurement quality, where measurements are debated in detail, including their limitations (Burkholder et al., 2007; Cascio & Boudreau, 2011).

Additional considerations are: considering a few vital measurements (Becker et al., 2001) and ensuring that HR is not taking sole responsibility for measurements, as this can inhibit successful implementation of the strategy (Huselid et al., 2005).

In conclusion, a dual focus is evident between the business’s competitive strategy and the transactional side of the HR function. At a strategic level, HR takes on a transformational role, not only to co-develop a competitive strategy based on relationships, but also to select strategically valid measures to manage the implementation of the strategy. In contrast, at a transactional level, implementation of the competitive strategy considers effectiveness and efficiency measures.

Research design

This section firstly explains the approach in line with the researcher’s scientific beliefs, followed by the strategy and methodology employed.

Research approach
This study can be described as exploratory-descriptive, producing important categories of meaning through rich descriptions of underexplored circumstances (Marshall & Rossman, 1999). It was embedded in a realist ontology, in which an external reality is independent of people’s beliefs (Ritchie & Lewis, 2004). Realism seeks to understand a common reality in which people operate interdependently (Sobh & Perry, 2006). As ontology affects epistemology (Guba & Lincoln, 1994), this study was rooted in objectivism, in which external facts lie outside of the researcher’s influence (Bryman & Bell, 2003); hence, the researcher adopts a distant, non-interactive position, devoid of bias and values, so as not to influence the outcomes (Guba, 1990).

Case selection strategy
A multiple case study was carried out, which allows the exploration and description of a contemporary event (Mouton, 2001; Yin, 2009) without influencing the outcomes (Yin, 2009). This strategy allowed for descriptive ‘what?’ questions and a holistic approach to real-life events, which resulted in rich descriptions from multiple data sources (Yin, 2009). Yin’s (2012) holistic multiple case design, which considers multiple cases with a single unit of analysis, in this case expert individuals, was followed.

Research method

Research setting
The field setting can be described as members of a group (with expert HRPs as the unit of analysis) who specialise in measurement. In particular, the participants were directly involved in measurement, exerted leadership and consulted at executive level in the field of human capital measurement.

Entrée and establishing researcher roles
To achieve entrée, a key informant was first approached; participants were then sampled purposively and through snowball sampling. In each case, telephonic contact was first established to present the study and enquire about the possibility of participation. Snowball-sampled participants were informed that an anonymous participant had referred the researcher to them. Next, an email describing the purpose of the study was sent to each participant, after which interviews were scheduled.

Sampling
Consideration was given to a homogeneous sample of experts on the topic of human capital measurement. A homogeneous sample allows for the understanding of a focused issue by collecting data from participants of similar backgrounds and experiences (Patton, 2002). We employed selection criteria in conjunction with specific sampling strategies to achieve homogeneity. The criteria focused on members of a community who shared a particular interest, namely expert practitioners at executive level who specialised in human capital measurement, were directly involved with measurement, exerted leadership and consulted in the field of HR measurement. A homogeneous sample may reduce the number of participants needed (compared to a heterogeneous sample) (Guest, Bunce & Johnson, 2006; Jette, Grover & Keck, 2003) and may still yield meaningful findings and interpretations (Guest et al., 2006).

To gain access to this community, we first approached and interviewed a key informant. This non-probability sampling strategy relies on the identification of experts within a community (Strydom & Delport, 2011). We then employed both snowball and purposive sampling in order to avoid referral bias (Davis, Johnson, Randolph, Liberty & Eterno, 2005). With snowball sampling, interviewed participants (Babbie & Mouton, 2001; Brink, Van der Walt & Van Rensburg, 2006) and collected data (Henning, Van Rensburg & Smit, 2004) pointed to possible participants. Purposive sampling relies on the researchers’ knowledge about the topic (Henning et al., 2004) and on practitioners knowledgeable about the field (Brink et al., 2006).

Seven participants were approached; one was subsequently excluded from the study. All but one of the participants held a doctorate. Half of the participants were registered industrial psychologists; the remainder came from other academic disciplines. Three participants were employed in an HR function and the remainder in other functional areas. Participants worked in the financial services, information technology and food and beverages industries. Biographically, all were white men, with an average age of 50 (ranging from 41 to 57).

Data collection methods
Primary data was generated through unstructured and open-ended interviews to obtain richness and depth. The purpose of this method is to explore and understand people’s experiences and points of view (Greeff, 2011). At the beginning of each interview, participation and consent were agreed upon in the context of confidentiality. The taking of field notes was also agreed to. All interviews commenced with the introductory question: ‘What are the principles in selecting human capital measurements?’ The conversations were guided in the direction of interest (Eriksson & Kovalainen, 2008), specifically by asking probing questions about core (Eriksson & Kovalainen, 2008) and sensitising (Patton, 2002) concepts as noted in the field notes (Patton, 2002). The interviews lasted between 26 min and 1 h 23 min. We also included unsolicited secondary data (e.g. participant-authored opinion articles and organisational documentation) during data analysis. Secondary data aided the interpretation of participants’ responses through discussions, comments and debate (Mouton, 2001). Consent was sourced for the use of unsolicited secondary data (Mason, 2002).

Recording of data
Primary data was recorded and transcribed verbatim. Secondary data was reproduced in print and digital format, together with the field notes, and was then digitised and stored. All data was contextually managed, analysed and stored in an ATLAS.ti hermeneutic unit.

Data analysis
Schurink, Fouché and De Vos (2011) claim that data analysis strategies range from informal to formal, for example analytical induction and grounded theory. We employed an informal analysis strategy, namely Braun and Clarke’s (2006) thematic analysis. They argue that thematic analysis reports participants’ meaning and reality when reporting patterns of experiences; as such, it is not tied to a specific theory and does not claim to develop one. Braun and Clarke’s approach consists of six phases, which were applied in the present study as follows. Phase 1 focused on data familiarisation by repeatedly reading the data to explore meaning and patterns and by referring to the field notes (Patton, 2002). In Phase 2, initial codes were generated according to aspects of interest across the data set; we adopted a hybrid coding approach, focusing on both latent coding and semantic coding (for in vivo codes). In Phase 3, codes were sorted into potential themes, and relationships were identified between codes, themes and different levels of themes. In Phase 4, we reviewed the themes: problematic themes were collapsed into other themes, broken down into separate themes or removed owing to a lack of supporting data or overly diverse data. Considerable time was spent on data analysis, owing to the questioning techniques and resulting responses associated with unstructured interviews (Patton, 2002). In Phase 5, the resultant themes were defined and named, considering the essence of each theme and the data it captured. Phase 6 consisted of writing the report.

Strategies employed to ensure quality data
We employed four strategies to ensure data integrity. Before the study, reflexivity was practised to safeguard objectivity and avoid researcher bias (Taylor, Gibbs & Lewins, 2005) and to remain sensitive to how data is collected, analysed and represented (Mays & Pope, 2000). To ensure credible data, we employed specific sampling strategies to avoid referral bias. Next, the data was transcribed verbatim. Lastly, we collected data up to the point of saturation, ensuring that variation in the data was understood and accounted for (Morse, 1994). Saturation, in light of our concern for meaning rather than frequencies (Mason, 2010) and for attaining breadth and depth (Bowen, 2008), was achieved in two parts. Firstly, we focused on breadth during coding in Phase 2; no new codes emerged after the first five participants. Guest, Bunce and Johnson (2006) argue that saturation is reached at the point where no new codes emerge. Secondly, we considered depth within themes during Phases 3 and 4 of the data analysis: codes were sorted into themes, and some themes were collapsed into other themes, to account for more detail and variation (Charmaz, 2006).
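By way of illustration only, the breadth criterion (no new codes emerging) can be expressed as a simple tally of previously unseen codes per successive interview. The sketch below is hypothetical: the code labels and counts do not represent the study’s actual codebook.

# Minimal sketch of the breadth criterion for saturation: count how many
# previously unseen codes each successive interview contributes.
# Code labels below are hypothetical, not the study's actual codebook.
interview_codes = [
    {"strategy_cascade", "value_chain", "line_of_sight"},   # Interview 1
    {"strategy_cascade", "measuring_relationships"},        # Interview 2
    {"value_chain", "shared_responsibility"},               # Interview 3
    {"measuring_relationships", "benchmarking"},            # Interview 4
    {"benchmarking", "line_of_sight"},                      # Interview 5
    {"strategy_cascade", "shared_responsibility"},          # Interview 6
]

seen = set()
for i, codes in enumerate(interview_codes, start=1):
    new_codes = codes - seen        # codes not encountered in any earlier interview
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s)")
# In Guest, Bunce and Johnson's (2006) sense, saturation of breadth is suggested
# once the count of new codes stays at zero for subsequent interviews
# (here, Interviews 5 and 6 contribute no new codes).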

To ensure credibility and trustworthy findings, we opted for a multiple case study design (Yin, 2012), as this facilitates triangulation between cases and permits convergence of data (Babbie & Mouton, 2001; Yin, 2009). Furthermore, we employed member checks (Patton, 2002) through follow-up interviews and email correspondence, as well as peer debriefing during data analysis. Negative case analysis was performed to explain an outlier (Onwuegbuzie & Leech, 2007) by applying data, investigator and theory triangulation, which led to Participant 1 being excluded from the data analysis. Triangulation with theory and with investigators (not part of this study) ensured confirmability. To ensure transferability, we provided a rich description of the research setting (Guba & Lincoln, 1994) and applied purposive sampling (Morse, 1994).

Reporting
Findings are presented following Sparkes’s (2002) realist tale, which is an analytical narrative interspersed with empirical data to report findings. We edited selected quotations to avoid repetition and disjointed interjections, since English was not the first language of most participants.

Findings

The following section presents the themes found, with each theme’s groundedness in the data set shown in brackets.

Theme 1: Cascade the business strategy into the organisational subsystems (93)
All participants advocated the centrality of the business strategy in measurement. Organisational subsystems should pursue the business strategy as follows:

‘So you need to be clear about your strategy and then you need to make a link somewhere between what people are doing and how that … how that pursues the strategy.’ (Participant 2, male, 51)

‘There’s a strategic organisational intent; out of that you need your strategic people framework relative to that.’ (Participant 4, male, 57)

Theme 2: Understand how the business strategy horizontally integrates across the business’s value chain (27)
Participants agreed that the business strategy should be absorbed into the business value chain; in particular they advocated the combination of separate and diverse elements at business level. This necessitates an understanding of the business’s value chain (with roles and activities) and the design of the organisation:

‘You would want to understand the context. So where within the greater organisation does this particular enterprise or part of the enterprise fit? … What is its primary contribution to the life of the organisation or the effectiveness or efficiency of the organisation? So it’s almost to understand its … its role in the structure.’ (Participant 2, male, 51)

Participant 3 recommended that measurements should be aligned across and between different functions, and that the application should be consistent, to ensure absorption:

‘You have to have alignment with metrics that go across and by definition in and betwixt all the various different functions.’ (Participant 3, male, 47)

Neglecting such alignment will encourage silo thinking:

‘You can’t just view the world through that one lens; it gives you a jaundiced view and it encourages silo thinking.’ (Participant 3, male, 47)

Theme 3: Integrate and support the business value chain with the human resource value chain (58)
The integration between the business value chain and the business strategy should be supported by the HR value chain. Separate and diverse elements at functional level should be combined to support the business value chain.

Participants explained:

‘To understand what it is that these guys do on a daily basis. So what is important at a reasonably high level? So you … and this almost ties back into organisational design but it’s um … it’s about understanding where it fits into the value chain.’ (Participant 2, male, 51)

‘I don’t believe that you can classify HR as something out over there that the HR function does. That can almost by definition then only be tactical. To me a strategic HR is a fundamental, integral part of any business strategy and as such can never be positioned away or siloed away within an HR function.’ (Participant 3, male, 47)

Theme 4: Human resource practitioners need to understand how the human resource function’s activities contribute to the implementation of the business strategy (7)
This theme relates to the importance of activities in implementing the business strategy. To do so, Participant 2 argued, HRPs need to understand the HR function as a subsystem within the organisation:

‘[It should be] cascaded vertically down into different parts of the value chain. Once that part of the value chain, let’s call it a functional area, like HR, has got it, it can then be decided how to disaggregate it along its own internal value chain. ... So, HR may then have multiple value chains running concurrently, depending on the structure of the business. But each one of those pieces, recruitment or payroll or talent management will have their own value chain, which will have inputs and outputs.’ (Participant 2, male, 51)

To achieve this, Participant 2 argued that HRPs’ and the HR function’s roles, responsibilities and how people-related activities integrate within the HR value chain must be understood:

‘Fundamentally I’d want to know that an HR department is responsible [and] accountable for attraction, retention, and under attraction you’ve got the recruitment and the pay scales and all the other stuff. … So what is the role that is being performed?’ (Participant 2, male, 51)

Theme 5: Create a line of sight (12)
This theme underscores the importance of a line of sight, both vertical and horizontal, to the business strategy in order to ensure the operationalisation of the HR strategy and activities and to prevent the challenge stressed in Theme 3.

Participant 3 highlighted two strategies to create a line of sight, namely ensuring a direct link between transactional HR activities and the business strategy and providing a visual representation to allow employees to observe the integration. Examples in the dataset are:

‘Even if you’re the payroll clerk who’s responsible for making sure all the data’s accurate. ... You’ve got to know that when you’re doing that, you’ve got a direct impact on the share price or on the profitability.’ (Participant 3, male, 47)

‘They just do it ... because nobody has bothered to draw the chart or to show them visually how what they do can impact the overall goal of the company. So the metric itself just becomes a box that they live in unless they can see through [it].’ (Participant 3, male, 47)

Theme 6: Influence and develop the business strategy (21)
This finding relates to the need for HR to influence and co-develop the business strategy through people information, in order to bring about a common desired result. Participant 5 argued that people information should be provided to help the business compete more effectively:

‘I’m absolutely, convinced that HR people have got a critical role to play in the development of the strategy of the business. Not just influencing the strategy, but helping to develop that.’ (Participant 5, male, 48)

‘I want to see metrics that drive business ... and if we can get that link sorted, then HR will become strategically relevant.’ (Participant 5, male, 48)

Three hurdles surfaced that prevent HRPs from influencing the strategy with people information. Firstly, the HR function’s current position of isolation is a stumbling block to influencing the business strategy. Secondly, the appropriateness of the information (see Theme 10) can hinder the HR function from influencing the business strategy, as the information is transactional and used to monitor people-related issues. Thirdly, people information is not always expressed in numerical terms.

Theme 7: Focus on measuring relationships (46)
This theme considers a way of determining relationships. Participant 6 described this theme as follows:

‘It really becomes an analytical metric if combined with some business measure. It’s either divided, plussed, one leads to the other or statistically related and the two are linked.’ (Participant 6, male, 41)

Measurement experts indicated a systems approach to identifying relationships and to revealing new ones, in order to enhance an understanding of performance in relation to the business strategy. Exemplary quotes from the dataset are:

‘To actually start putting measures in to look at what you are doing in terms of strategy means you have to look at other measures, new ones. You have say, ”If I want to grow the business, am I growing a new market segment at X percentage so that it will actually replace another market segment.”’ (Participant 6, male, 41)

‘And that is really your ultimate measurement. … this is very useful. ... you’ve actually put numerical values onto that chain. ... That is ... the ultimate, because now you can proactively start managing.’ (Participant 4, male, 57)
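To illustrate what such an ‘analytical metric’ might look like in practice, the following minimal sketch pairs a hypothetical people measure with a hypothetical business measure, first as a simple ratio and then as a statistical relationship. The data, variable names and figures are illustrative assumptions only and are not drawn from the participants’ organisations.

# Minimal sketch: relate a people measure to a business measure, as suggested
# in Theme 7. All data and names are hypothetical illustrations only.
from statistics import correlation, mean  # correlation requires Python 3.10+

engagement_score = [3.1, 3.8, 2.9, 4.2, 3.5, 4.0]        # hypothetical people measure per business unit
revenue_per_fte = [1.10, 1.35, 0.95, 1.50, 1.20, 1.42]   # hypothetical business measure (R million)

# "Divided": a simple ratio metric, average revenue generated per engagement point
ratio = mean(rev / eng for rev, eng in zip(revenue_per_fte, engagement_score))

# "Statistically related": Pearson correlation between the two measures
corr = correlation(engagement_score, revenue_per_fte)

print(f"Average revenue per engagement point: {ratio:.2f}")
print(f"Correlation(engagement, revenue per FTE): {corr:.2f}")
# A consistently strong relationship would become a candidate hypothesis about
# value creation, to be tested within the organisation's broader value chain.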

Theme 8: Support decisions with the aim of generating profits (12)
This theme stresses strategic relevance (i.e. the contribution, not the role, of the HR function). As a point of departure, HRPs should focus on metrics that will make the HR function strategically relevant in driving the business. Two approaches make HR metrics strategically relevant. The first is a focus on how the HR function contributes to competitiveness and shareholder value:

‘Think about how you as HR help this business to compete better ... all the metrics that you use help this business to grow market share, to make more money, to sell more merchandise.’ (Participant 5, male, 48)

As a second approach, consideration should be given to the information needs (related to profitability) of senior management (see Theme 11), rather than to the customary provision of transactional HR information:

‘The context of the metrics can’t be HR or IR [industrial relations] strike days minimised, or number of court cases settled. It can’t be. ... Why don’t you as HR come and tell me what are the specific metrics linked to talent management that will help me to make decisions about a talent management strategy for the business? ... That’s what I miss with HR practitioners.’ (Participant 5, male, 48)

Therefore HRPs need to understand the dependent variable of the business when selecting metrics as explained by Participant 5:

‘I miss the context. And the context, if you’re in a world of business, ... the dependent variable, has to be the profits that we generate.’ (Participant 5, male, 48)

Theme 9: People intelligence should be based on an integrated view of the organisation (38)
This theme pertains to human resource information systems and business intelligence systems that should be integrated (as opposed to being departmental), to provide a cross-functional, embedded intelligence:

‘You have to have a business intelligence system that is not departmental, but is truly enterprise[-wide].’ (Participant 3, male, 47)

‘The company has identified all key HR processes, translated them into accessible and meaningful measures and recorded them on SAP HR. … This allows for the ongoing trend analysis of HR data, not only by HR professionals, but also by line managers. In so doing, HR’s role of partnering line management is greatly enhanced.’ (Participant 4, male, 57)

Participant 7 argued that automated intelligence should be set up for different levels of analysis. In addition, it should also be considered for multinational organisations in order to obtain a global perspective:

‘We are globalising it. … We’re putting a system across the world. ... Now you want to find out about a guy in Panama who needs to move to Switzerland. Well, is he good? The answer isn’t ”yes”. You say, ”Well, what is his score on this test? How is his performance rating?”’ (Participant 7, male, 54)

Critical to generating people intelligence is understanding the HR function’s contribution to the multiple value chains and the design of the organisation (see Theme 3), avoiding the obstructive silo approach (see Theme 19), exploring relationships (see Theme 7), integrating data across functions and ensuring consensus (face validity) amongst HRPs, other functional managers and users of people intelligence (see Theme 15).

Theme 10: Provide specific information per levels of decision-making (43)
This theme relates to the time aspect of data and its use in decision-making. Level 1 (grounded 11 times) considers historical performance data and monitors people and HR delivery through periodic review for the purpose of regulation or control; this should be inclusive of the entire HR value chain:

‘To gather, monitor, organise and understand people information [with the] objective ... to manage human capital using systematic, objective and proven methods.’ (Participant 4, male, 57)

Level 2 (grounded 16 times) considers past performance (historical data) to solve (or troubleshoot) performance problems. Two approaches, namely informal and formal troubleshooting, as differentiated by sophistication, were found. Troubleshooting informed by historical data can be done using lagging indicators (informal approach):

‘The lag indicators are very easy to attribute to … activities within a business. So, if productivity went down I can follow that back into the business, and it was because of trouble, an automated thing, truck or whatever broke down, therefore we had a reduction.’ (Participant 2, male, 51)

In addition, troubleshooting can involve a predictive relationship based on historical data (formal approach) of variables, which can be used to trace deviations down the chain:

‘It’s one of those multifaceted predictive indicators, which, if [you] go back and look at the possible causes, ... you can then predict [sic] [ascertain] what realistically has been happening within the business.’ (Participant 2, male, 51)

Level 3 (grounded 16 times) considers the future, and its aim is to predict. At this level of decision-making, predictive statements are made regarding future events or the consequences of actions, by forming an inductive or deductive conclusion based on existing data. This level, in contrast to Levels 1 and 2, directs cognitive effort from the known to the unknown. A need exists for senior management to consider various approaches before taking a decision. Prediction also answers difficult questions, for example about the depreciation of talent as an asset over time:

‘Does losing somebody after three months have the same weighting as losing someone who’s been with you 10 years?’ (Participant 3, male, 47)

Some participants indicated that prediction can be used to support various questions, for example to predict the HR function’s impact on the business (i.e. its contribution), to test assumptions and to understand investment opportunities. Such analytics are directly tied to the business strategy, as they inform its development and identify its weaknesses. Useful information should therefore have predictive value and support strategy development.
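As a minimal illustration of Level 3 (predictive) information, the sketch below projects staff turnover from hypothetical historical figures; the data and the simple linear trend are illustrative assumptions, not an analysis performed in this study.

# Minimal sketch of a Level 3 (predictive) people metric: project next year's
# staff turnover from historical data. Figures are hypothetical illustrations only.
from statistics import linear_regression  # requires Python 3.10+

years = [2009, 2010, 2011, 2012, 2013]
turnover = [11.2, 12.1, 12.8, 13.9, 14.5]   # hypothetical annual staff turnover (%)

# Fit a simple linear trend to the historical (Level 1/2) data ...
slope, intercept = linear_regression(years, turnover)

# ... and use it to make a predictive statement about the next period
forecast_2014 = slope * 2014 + intercept
print(f"Projected 2014 turnover: {forecast_2014:.1f}%")
# The projection is an inductive conclusion from existing data; in practice it
# would feed 'what-if' scenarios before informing a strategic decision.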

Theme 11: Senior management does not value transactional information (33)
This finding considers the lack of value of transactional information provided to senior managers, who are interested in profitability (see Theme 8). Deductions based on historical data cannot indicate whether a company will be profitable in the future:

‘This historical data where you make deductions from, is transactional data … [and] is purely a view on the relative health [of the organisation]. … That still doesn’t give you an idea whether the company will survive in the next five years.’ (Participant 6, male, 41)

In addition, senior managers do not need information about the HR function’s deliverables to make difficult decisions:

‘On the IR side, ah, we have brought down the number of strike days by 20% or by 15% or 10%. I don’t care about that because that’s your job [and] … HR people care about that.’ (Participant 5, male, 48)

Reporting on compliance (legislative) issues was highlighted as irrelevant to senior management. Compliance in itself cannot be a strategic issue, as legislation cannot be changed once enacted; strategically, legislation should be influenced before it is enacted, which emphasises the need to scan the external environment. In contrast, compliance reporting can be used strategically when related to talent management, which impacts transactional HR. Participant 4 observed:

‘The strategic part is where you influence the legislation beforehand so that it will allow you to be able to perform under those conditions.’ (Participant 4, male, 57)

Participant 7 explained another strategic approach linking talent management to compliance:

‘Equity is a no-brainer, but we divide it according to all the different categories. … That’s then talent composition, at different levels, … we have our performance potential grid … we look at the processes that we use and then we measure things accordingly.’ (Participant 7, male, 54)

Theme 12: Use a select number of measurements (10)
Participant 4 indicated a preference for a small number of measurements for senior management in order to achieve maximum impact. This participant also recommended a trade-off between the complete picture and the optimum number of measurements and that measures should be presented in a visual format that is easily understood:

‘What are the 3% measures that you should have that give you the 80% picture? Because the problem is if you’ve got too many measures, your data collection does become very tedious. … Where we’ve done this type of thing … is, we literally have a one–pager, and when we do a green, a yellow, a red robots.’ (Participant 4, male, 57)

Participant 4 explained:

‘[With] too many measures, your data collection does become very tedious. You produce a report this thick. Nobody pays attention to it and that’s why I deliberately use … always use this pilot example.’ (Participant 4, male, 57)

Theme 13: Qualitative information is of limited value to senior management (22)
Qualitative information is of limited value to senior managers because it is difficult to express quantitatively. Most participants explained that it is a challenge to express the soft (people) side in numbers; indices remain subjective when qualitative information is converted into quantitative information; lastly, the HR function struggles to compete with other functions (e.g. marketing and finance), especially those that have established quantitative measures and decision sciences. Examples in the dataset are:

‘But in the people environment you get back into the more subjective stuff: team performance, um morale, culture, … fluffy stuff which is very challenging [to measure].’ (Participant 2, male, 51)

‘As soon as they become that [qualitative], we’ll lose the credibility because, again, you are fighting against two or three other sub-disciplines, finance, marketing and so on, which already have clear numerical metrics in a numerical format.’ (Participant 7, male, 54)

Theme 14: Conduct external qualitative benchmarking (7)
Only Participant 4 referred to the need for qualitative benchmarking, even though he acknowledged quantitative indicators used by organisations. Benchmarking consists of, inter alia, the following:

‘[To] find out whether … the HR function was really a business partner and which portions of these roles, as change agent, the people custodian role, the expert role, they were playing, in what combination of roles. And it was quite insightful for them to realise then where they were positioned.’ (Participant 4, male, 57)

The absence of qualitative benchmarking limits the understanding of leading practices in world-class organisations, as there will not be an answer to the questions:

‘What are the leading practises, how do you compare against that? … Do you have world-class people management in your company?’ (Participant 4, male, 57)

Theme 15: Ensure validity and reliability (31)
Participants recommended guidelines to ensure face validity (grounded 19 times); in particular, HRPs have to understand the concept to be measured and the exact parameters of a measurement have to be agreed upon and applied in order to ensure a common business vocabulary. Meaningful dialogue between HRPs and business partners will contribute to face validity:

‘Make damn certain that you have defined the exact parameters that define the metric. ... You say, “Yeah, we want to measure staff turnover.” Okay, cool. Done. But then you’ve got three or four or five different definitions of what you mean by staff turnover.’ (Participant 3, male, 47)

‘My primary consideration would be to engage meaningfully in a debate with my colleagues in other parts of the business. So, when you’re talking to people in sales, for example, your market share or your sales volumes, etcetera, become the subject of discussion. You don’t say to them, “How did sales go this month?” [and] they say, “Quite fine.” They say, “We had 3% above budget.” ... We talk in numbers.’ (Participant 7, male, 54)

A lack of face validity not only creates confusion, but also leads to different ways of measuring the same concept (also evident in indices: see Theme 18). This results in a lack of rigour, which, in turn, negatively impacts the information gained:

‘But it’s amazing how many different versions there are of what constitutes customer satisfaction or sales performance. You’d think it’s quite simple, but the more you think … the more you uncover dangerous assumptions.’ (Participant 3, male, 47)

Reliability refers to obtaining the same observations when the assessment is repeated across various functions. Some participants highlighted the importance of reliability (grounded 12 times) and referred to consistent application across functions as the main determinant of reliability (which prevents silo thinking):

‘You’ve got to have at least some metrics that are the same for each of those functions, otherwise there’s no incentive for them actually to cooperate.’ (Participant 5, male, 48)

Theme 16: Undesired behaviours are driven by a lack of clarity in measurement (20)
‘Metrics and rewards drive performance and behaviour.’ (Participant 4, male, 57)

A lack of sophistication and consequent undesired behaviours will negatively impact the culture of an organisation. The participants felt that behaviours should be in support of the business strategy:

‘What do I want and how am I going to drive behaviour that is consistent with where I want to get the business strategically?’ (Participant 2, male, 51)

Participants commented on drivers of undesired behaviour. An understanding of how a measurement is constructed enables its manipulation, resulting in unreliable information. In particular, a lack of specificity results in the pursuit of targets through non-aligned actions, thus not supporting the overall business strategy:

‘What will it drive? What will it do? So, if I’m driving the talent pool and the executive has said, “In terms of our succession planning, we want 200 people in our talent pool by the end of the year” … we assume that there’s some sort of entry level to that. They have to be reasonably qualified, reasonably experienced, etcetera, but therein lies the devil. What does “reasonable” mean? So, if I’m the talent manager and I’m trying to hit my target, I’ll squeeze people into that pool to make up my 200. And, sometimes, these people won’t be exactly the sort of people that we want, I’ve got an over representation of a certain set of skills … but I’ll push them in there, because I can and I will hit my target.’ (Participant 2, male, 51)

Similarly, inappropriate units of measurement (e.g. monetary value) can corrupt behaviour:

‘I’m not in agreement [that you should express everything in rand value]. ... I don’t think it’s necessarily useful and I think … it can corrupt behaviour. ... Say I’m in a caring environment. What do I do with somebody who is a repeat problem? … Because in the terms of the perceived profitability of my job expressed in rand terms, I’m going to go for the easier ones, rather than the hard ones. So I think it potentially corrupts my own behaviour seeing it in rand value.’ (Participant 2, male, 51)

Lastly, measuring activities rather than outputs may cause undesired behaviours:

‘But in the absence of an effective output you probably have to go for the input indicators. But I always try quite hard with the clients that I’m working with to get the outputs.’ (Participant 2, male, 51)

Three strategies to influence desired behaviours emanated from the interviews: ensuring the face validity of measurements (see Theme 15) by consulting with co-responsible line managers (see Theme 19); focusing on quality and not always quantity; and measuring outputs (see Theme 3) where there is uncertainty or a lack of clarity.

Theme 17: Setting targets presents challenges (11)
This theme refers to the challenges in setting targets for the specific activities needed to implement the business strategy. Targets need to be specific, as employees will otherwise resort to undesired behaviours to meet them. Without specificity in targets, the organisation will ultimately suffer:

‘There was a wonderful ad just outside the airport, which said ... “We only lose 2% of data” or something along those lines. Which 2% would you like to lose?’ (Participant 2, male, 51)

Participant 2 explained that targets can be manipulated by employees who understand their logic (e.g. through professional training). Soft measures are also problematic, as they are not only difficult to set, but are also not taken seriously by senior management:

‘Target setting in a finance or a process environment tends to be fairly straightforward but it’s almost always fluffed because the guys understand the metrics. … They understand [metrics] and they understand their composition and then they can tweak them. So I think in the hard metrics, target setting is much easier, but it’s quite open to abuse. So it needs to be fairly carefully monitored. In the soft environment they tend to have a much more vague approach.’ (Participant 2, male, 51)

Consequently, people do not bother with soft measures’ targets, as these are difficult to set:

‘If a measure is hard, like a process measure or a rand value measure or a financial metric or something, people think hard about it and they can wrap their heads around it quite easily. The softer stuff they don’t bother with.’ (Participant 2, male, 51)

Theme 18: Indices are problematic (8)
An index can be described as a value indicating variation (an increase or decrease) in a specified variable. Participant 2 warned against two major problems, relating to design and to data. The first problem relates to the consequences of subjective perception measures (ordinal and nominal data), which make indices complex:

‘The risk is that they [HRPs] tend to go for qualitative measures … and then they end up putting together some sort of complex index which just doesn’t bloody work.’ (Participant 2, male, 51)

The second problem refers to indices that are applied out of context (for example, because practitioners cannot deal with the complexity of organisations), resulting in a worthless index, and to weighting based on stakeholder input; both make an index difficult to manage:

‘Where I think guys, including very experienced consults, get stuck is when they lose that logic and they start to see things out of context and they try to create indicators for what are really fundamentally fairly whimsical things, and then they create complex indices and those are notoriously a ****.’ (Participant 2, male, 51)
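As a minimal illustration of the weighting problem, the sketch below computes a hypothetical composite ‘people index’ under two different stakeholder-supplied weightings; the components, scores and weights are illustrative assumptions only.

# Minimal sketch of a composite 'people index': a weighted sum of normalised
# component scores. Components, scores and weights are hypothetical.
components = {"morale": 0.62, "culture": 0.71, "team_performance": 0.55}  # scores scaled 0-1

def composite_index(scores, weights):
    # Weighted average of the component scores; weights are assumed to sum to 1.
    return sum(scores[name] * weights[name] for name in scores)

weights_a = {"morale": 0.5, "culture": 0.3, "team_performance": 0.2}  # one stakeholder's weighting
weights_b = {"morale": 0.2, "culture": 0.3, "team_performance": 0.5}  # another stakeholder's weighting

print(f"Index under weighting A: {composite_index(components, weights_a):.2f}")
print(f"Index under weighting B: {composite_index(components, weights_b):.2f}")
# The same underlying scores yield different index values purely because the
# subjective weights differ, illustrating why such indices are difficult to
# interpret and to manage out of context.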

Theme 19: Share responsibility for measurements (23)
This theme refers to the state of being mutually responsible for cross-functional measurement and, as such, having a duty towards another party. Consensus (reiterating sophistication) is critical to cross-functional, shared responsibility and prevents silo thinking:

‘We’ve got to get one perspective and one set of perceptions that work for the businessman at the front end, and the support people at the back end.’ (Participant 5, male, 48)

‘You don’t want silos. … [They] are all supposed to cooperate towards some form of … common goal.’ (Participant 3, male, 47)

Participants highlighted five strategies to ensure shared responsibility. The first strategy is a shared desired outcome that should drive mutual responsibility; this is brought about by the second strategy, namely a debate regarding the measurements. Consequently, as a third strategy, the managers in this cross-functional partnership should both accept responsibility for the co-measurement. To achieve the above three strategies, HRPs need, as a fourth strategy, to understand the HR function’s architecture in order to contribute to a shared responsibility and, as the last strategy, to take the initiative in facilitating these relationships.

Discussion

The purpose of this study was to explore and describe principles in selecting measurements amongst expert practitioners at executive level (or engaging at executive level). This research contributes to human capital measurement literature by identifying 19 principles to consider when selecting measurements, in order to drive and implement strategic change efforts and, consequently, to enhance the position of the HR function to one of a strategic asset.

Next is a discussion of the themes, clustered in four different measuring elements as in Figure 1.

Considering the element of the value proposition of an organisation’s competitive strategy, the strategy remains core to driving the selection of measurements during its implementation, but measurements should also be considered to drive its development. A large group of themes (i.e. Themes 1–4, 6–9 and 19) emphasise a systemic approach by cascading the business strategy, as the main driver of value creation, into various organisational subsystems. The centrality of the strategy is consistent with the literature (Becker et al., 2001, 2009; Huselid et al., 2005; Kaplan & Norton, 1996; Lawler et al., 2004), as is the need for a particular value creation logic (Becker et al., 2001; Huselid et al., 2005; Kaplan & Norton, 1996).

The importance of co-developing the business strategy is brought to the fore in Themes 6–9. The measurement of relationships should support decisions with the aim of generating profits. The literature addresses the notion of relationships (e.g. Becker et al., 2001, 2009; Huselid et al., 2005). However, recent literature (e.g. Cascio & Boudreau, 2011) has been explicit that HRPs should inform the development of the business strategy.

The findings of the present study indicate the importance of strategically valid measurements (refer to Themes 1–3) (Becker et al., 2001, 2009) and of not starting with the measurements (Becker et al., 2009; Boudreau & Ramstad, 2007; Burkholder et al., 2007; Fitz-enz, 2009; Huselid et al., 2005).

Benchmarking is addressed in Theme 14, but the focus is on external and qualitative benchmarking, not the quantitative benchmarking of activities found in the literature (Becker et al., 2001, 2009; Huselid & Barnes, 2003; Singh & Latib, 2004). The participants’ level of seniority may explain why they did not emphasise benchmarking, as supported by Theme 11, which relates to the meaninglessness of transactional information at senior management level.

The measurement element of value creation logic presents a different view of traditional value creation. Themes 2–5, 7, 9 and 19 point to a value-adding logic that governs the selection of measurements. This logic focuses on the integration of multiple value chains of organisational systems (within and across functions). This contrasts with the literature, where cause-and-effect relationships between scorecard perspectives, for example in the Balanced Scorecard (Kaplan & Norton, 1996), the HR Scorecard (Becker et al., 2001) and the Workforce Scorecard (Huselid et al., 2005), are used to select measurements. Despite various authors (Becker et al., 2001, 2009; Huselid et al., 2005; Kaplan & Norton, 2001) recommending the use of a strategy map to understand how value is created, it is important to note that none of the participants referred to this term. A possible explanation is the systems-versus-functional approach to the HR function (see Jamrog & Overholt, 2006, for a discussion).

The findings point to a shared responsibility in measurement (Theme 19), but do not directly address the responsibility for indicating drivers and enablers of the HR function and the workforce (Becker et al., 2001; Huselid et al., 2005). This can be attributed to participants’ understanding of vertical and horizontal logic in implementing the business strategy, which is facilitated by partnering between line managers.

Though Theme 7 underscores the measurement of relationships, the literature is more prescriptive regarding measuring relationships between capabilities and core competencies (Lawler et al., 2004) and between financial and non-financial attributes (Becker et al., 2001). Theme 9 emphasises integration of the strategy across the organisational subsystems as an additional means of measuring relationships, in a context where a systemic approach is preferred to the leading and lagging perspectives of the scorecards.

Considering the element of value proposition of the HR function and workforce, only Participant 2 directly referred to a consideration of leading and lagging indicators within the elements of the HR function and the workforce. The relationships between leading and lagging indicators constitute the underpinning logic of the scorecards (Becker et al., 2001; Huselid et al., 2005; Kaplan & Norton, 1996). In a similar vein to relationships (Theme 7), the integration of the business strategy, given the preference amongst participants for a systemic approach, may explain the move away from a sequence of leading and lagging indicators. This is supported by the notion of influencing the business strategy based on a systemic view, not one of perspectives.

The participants acknowledged the importance of understanding HR activities and, specifically, how they support the implementation of the strategy after the strategy has been cascaded. However, they warned against measuring activities rather than outputs, as this can lead to undesired behaviours. This is in line with the concept of strategic HR activity measures (Becker et al., 2001).

The measurement element of a measurement system emphasises sophistication and level-appropriate information. Themes 15–18 support sophistication, in particular how a lack of validity and reliability results in undesired behaviour. Sharing responsibility for measurements is seen as a strategy to avoid such behaviour. This stance aligns with the exclusive focus on measurement quality evident in the literature (Burkholder et al., 2007; Cascio & Boudreau, 2011). This view of sophistication can be explained by the participants’ awareness of the negative consequences of a lack of sophistication, especially at an operational level.

A stakeholder approach to level-appropriate management information to support decisions was found (Themes 10–14). The findings highlighted different levels of complexity and integration across functions. The flow of information across functions (Burkholder et al., 2007) and across different levels of decision-making (Davis, 2005) was not specifically addressed in the scorecard literature (e.g. Becker et al., 2001, 2009; Huselid et al., 2005; Kaplan & Norton, 1996, 2001).

The findings do not directly address the issue of relevant versus available data, as found in the literature (Becker et al., 2001). This could be explained by the participants’ focus on sophistication and the consequences of the lack thereof.

Selecting measurements is a process embedded in an organisation’s unique context of competitive strategy and functioning, and it underscores the basic management functions (e.g. plan, lead, organise and control). Furthermore, it is systemic, as the multiple value chains of the business, the HR function and people, and their interrelationships, are considered. The latter necessitates a consultative approach with relevant stakeholders, as required by an integrated approach, and consequently contributes to the sophistication of measurement. Hence, selecting measurements is not a single act of choosing from the plethora of available measurements. HRPs need to understand their organisations (i.e. how the organisational subsystems and architecture create value) in order to form a systemic and integrated view, facilitated by (establishing) relationships through business partnering to achieve mutual understanding.

Despite the majority of the principles focusing on implementation (the operational and tactical levels of decision-making), the evidence points to the need for human capital analytics. It is clear that people intelligence stems from analytics, which should inform the formulation of the business strategy and not only its implementation.

Practical implications
The study advances an understanding of the selection of appropriate and strategically relevant measurements, which will contribute to the credibility of HRPs. HRPs should formalise a process for selecting measurements. The systems approach has bearing on higher education institutions, which have to deliver HRPs who can think systemically and function at various levels of complexity. Similarly, senior management should pay attention to leading indicators and to how the HR function and workforce behaviours systemically create value and impact the bottom line of the organisation. Analytics has become critical to understanding relationships and to formulating hypotheses that create additional value and advance the field of human capital analytics.

Limitations of the study
This study’s limitations, specifically its applicability to a particular context, stem from the research design, despite efforts to ensure dependability and transferability. Claims about trends, regularities or distributions in a wider population cannot be made, given the small samples used in qualitative research (Willig, 2008). However, this study provides detailed descriptions of the small sample, allowing for high construct validity and in-depth insights (Mouton, 2001).

Suggestions for future research
The move from leading and lagging indicators to a systemic view of an organisation as a way to understand value creation deserves further research. Management information, which is also used to inform strategy development, should not be limited to information generated by the implementation of a business strategy. The impact of opportunity costs, in the context of the different approaches to quality in the literature, should be investigated when predicting future value to inform strategy development.

Conclusion
The purpose of this study was to identify principles in selecting human capital measurements. Through an exploratory-descriptive approach, thematic analysis of data obtained from six expert human capital practitioners at executive level identified 19 principles to consider. The study has therefore met its objective.

Acknowledgements

We thank the two anonymous reviewers for their helpful comments. Professor Willem Schurink is acknowledged for his contributions to the methodology of this study.

Competing interests
The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

Authors’ contributions
P.D.C.-F. (University of Johannesburg) and G.R. (University of Johannesburg) were responsible for the overall conceptualisation of this study, whilst the first author executed the study and wrote the article.

References

Babbie, E., & Mouton, J. (2001). The practice of social research. Cape Town, South Africa: Oxford University Press Southern Africa (Pty) Ltd.

Bassi, L.J., & McMurrer, D.P. (2006). Beyond employee satisfaction, ROI, and the Balanced Scorecard: Improving business results through improved human capital measurement. In R.C. Preziosi (Ed.), The 2006 Pfeiffer Annual: Human Resource Management, (pp. 3–15). San Francisco, CA: John Wiley & Sons, Inc.

Becker, B., & Huselid, M.A. (2003). Measuring HR? Benchmarking is not the answer. HR Magazine, 48(12), 56–61.

Becker, B., Huselid, M.A., & Beatty, R.W. (2009). The differentiated workforce: Transforming talent into strategic impact. Boston, MA: Harvard Business Press.

Becker, B., Huselid, M.A., & Ulrich, D. (2001). The HR Scorecard: Linking people, strategy, and performance. Boston, MA: Harvard Business School Press.

Boudreau, J.W., & Ramstad, P.M. (2003). Strategic I/O psychology and the role of utility analysis models. In W. Borman, D. Ilgen & R. Klimoski (Eds.), Handbook of Psychology: Industrial and Organizational Psychology, Vol. 12 (pp. 193–203). New York, NY: John Wiley & Sons. http://dx.doi.org/10.1002/0471264385.wei1209

Boudreau, J.W., & Ramstad, P.M. (2007). Beyond HR: The new science of human capital. Boston, MA: Harvard Business School Press.

Bowen, G. (2008). Naturalistic inquiry and the saturation concept: A research note. Qualitative Research, 8(1), 137–142. http://dx.doi.org/10.1177/1468794107085301

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. http://dx.doi.org/10.1191/1478088706qp063oa

Brink, H., Van der Walt, C., & Van Rensburg, G. (2006). Fundamentals of research methodology for health care professionals. Cape Town, South Africa: JUTA & Co.

Bryman, A., & Bell, E. (2003). Business research methods. Oxford, UK: Oxford University Press.

Burkholder, N.C., Golas, S., & Shapiro, J. (2007). Ultimate performance: Measuring human resources at work. Hoboken, NJ: John Wiley & Sons, Inc.

Cantrell, S., Ballow, J.J., & Gerkin, J. (2004). How smart HR departments win with business intelligence. In A. Wharton & P. Cheese (Eds.), People, performance, profit: Maximizing return on human capital investments, (pp. 84–87). San Francisco, CA: Montgomery Research, Inc.

Cascio, W.F. (2006). The economic impact of employee behaviors on organizational performance. California Management Review, 48(4), 41–60. http://dx.doi.org/10.2307/41166360

Cascio, W.F., & Boudreau, J.W. (2011). Investing in people: Financial impact of human resource initiatives. (2nd edn.). Upper Saddle River, NJ: Pearson Education, Inc.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: SAGE Publications Ltd.

Davis, D. (2005). Business research for decision making. Belmont, CA: Thomson Brooks/Cole.

Davis, W.R., Johnson, B.D., Randolph, D., Liberty, H.J., & Eterno, J. (2005). Comparing police drug allegations with enumerations of drug users/sellers. Policing: An International Journal of Police Strategies and Management, 28(4), 594–608. http://dx.doi.org/10.1108/13639510510628686

Eriksson, P., & Kovalainen, A. (2008). Qualitative methods in business research. Los Angeles, CA: SAGE Publications, Inc. http://dx.doi.org/10.4135/9780857028044

Fitz-enz, J. (2007). Beyond benchmarking: Value-adding metrics. College and University Professional Association for Human Resource Journal, 58(2), 12–16.

Fitz-enz, J. (2009). The ROI of human capital: Measuring the economic value of employee performance. (2nd edn.). New York, NY: American Management Association.

Greeff, M. (2011). Information collection: Interviewing. In A.S. de Vos, H. Strydom, C.B. Fouché & C.S.L. Delport (Eds.), Research at grass roots for the social sciences and human service professions, (4th edn.) (pp. 341–375). Pretoria, South Africa: Van Schaik Publishers.

Guba, E.G. (1990). The alternative paradigm dialog. In E.G. Guba (Ed.), The paradigm dialog, (pp. 17–27). Newbury Park, CA: SAGE Publications, Inc.

Guba, E.G., & Lincoln, Y.S. (1994). Competing paradigms in qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research, (pp. 105–117). Thousand Oaks, CA: SAGE Publications, Inc.

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82. http://dx.doi.org/10.1177/1525822X05279903

Henning, E., Van Rensburg, W., & Smit, B. (2004). Finding your way in qualitative research. Pretoria, South Africa: Van Schaik Publishers.

Huselid, M.A., & Barnes, B.E. (2003). Human capital measurement systems as a source of competitive advantage. Retrieved September 13, 2010, from http://www.markhuselid.com/pdfs/articles/2003_Huselid_Barnes_HRMR.pdf

Huselid, M.A., Becker, B., & Beatty, R.W. (2005). The Workforce Scorecard. Boston, MA: Harvard Business School Press.

Jamrog, J.J., & Overholt, M.H. (2006). Building a strategic HR function: Past, present, future. Human Resource Planning, 27(1), 51–62.

Jette, D.J., Grover, L., & Keck, C.P. (2003). A qualitative study of clinical decision making in recommending discharge placement from the acute care setting. Physical Therapy, 83(3), 224–236.

Kaplan, R.S., & Norton, D.P. (1996). The Balanced Scorecard: Translating strategy into action. Boston, MA: Harvard Business School Press.

Kaplan, R.S., & Norton, D.P. (2001). The strategy-focused organization: How Balanced Scorecard companies thrive in the new business environment. Boston, MA: Harvard Business School Press.

Kearns, P. (2003). HR strategy: Business focused, individually centred. Oxford, UK: Butterworth-Heinemann.

Kearns, P., Walters, M., Mayo, A., Matthewman, J., & Syrett, M. (2006). What’s the future for human capital? London, UK: Chartered Institute of Personnel and Development.

Lawler, E.E., & Boudreau, J.W. (2009). What makes HR a strategic partner? People & Strategy, 32(1), 14–22.

Lawler, E.E., Levenson, A.R., & Boudreau, J.W. (2004). HR metrics and analytics: Use and impact. Human Resource Planning, 27(4), 27–35.

Marshall, C., & Rossman, G.B. (1999). Designing qualitative research. (3rd edn.). Thousand Oaks, CA: SAGE Publications, Inc.

Mason, J. (2002). Qualitative researching. (2nd edn.). Thousand Oaks, CA: SAGE Publications, Inc.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 11(3). Retrieved April 11, 2014, from http://www.qualitative-research.net/index.php/fqs/article/view/1428/3027

Mays, N., & Pope, C. (2000). Qualitative research in health care: Assessing quality in qualitative research. British Medical Journal, 320(7226), 50–52. http://dx.doi.org/10.1136/bmj.320.7226.50

Morse, J.M. (1994). Designing funded qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research, (pp. 220–235). Thousand Oaks, CA: SAGE Publications, Inc.

Mouton, J. (2001). How to succeed in your master’s and doctoral studies: A South African guide and resource book. Pretoria, South Africa: Van Schaik Publishers.

Onwuegbuzie, A.J., & Leech, N.L. (2007). Validity and qualitative research: An oxymoron? Quality & Quantity, 41, 233–249. http://dx.doi.org/10.1007/s11135-006-9000-3

Patton, M.Q. (2002). Qualitative research and evaluation methods. (4th edn.). Thousand Oaks, CA: SAGE Publications, Inc.

Ritchie, J., & Lewis, J. (2004). Qualitative research practice: A guide for social science students and researchers. London, UK: SAGE Publications, Inc.

Schurink, W.J., Fouché, C.B., & De Vos, A.S. (2011). Qualitative data analysis and interpretation. In A.S. de Vos, H. Strydom, C.B. Fouché & C.S.L. Delport (Eds.), Research at grass roots for the social sciences and human service professions, (4th edn.) (pp. 397–423). Pretoria, South Africa: Van Schaik Publishers.

Singh, P., & Latib, M. (2004). Achieving breakthrough returns on human capital. In A. Wharton & P. Cheese (Eds.), People, performance, profit: Maximizing return on human capital investments, (pp. 160–163). San Francisco, CA: Montgomery Research, Inc.

Sobh, R., & Perry, C. (2006). Research design and data analysis in realism research. European Journal of Marketing, 40(11/12), 1194–1209. http://dx.doi.org/10.1108/03090560610702777

Sparkes, A.C. (2002). Telling tales in sport and physical activity: A qualitative journey. Champaign, IL: Human Kinetics.

Strydom, H., & Delport, C.S.L. (2011). Sampling and pilot study in qualitative research. In A.S. de Vos, H. Strydom, C.B. Fouché & C.S.L. Delport (Eds.), Research at grass roots for the social sciences and human service professions, (4th edn.) (pp. 390–396). Pretoria, South Africa: Van Schaik Publishers.

Taylor, C., Gibbs, G.R., & Lewins, A. (2005). Quality of qualitative analysis. Retrieved September 13, 2010, from http://onlineqda.hud.ac.uk/Intro_QDA/qualitative_analysis.php

Willig, C. (2008). Introducing qualitative research in psychology. New York, NY: Open University Press.

Yin, R.K. (2009). Case study research: Design and methods. (4th edn.). Thousand Oaks, CA: SAGE Publications, Inc.

Yin, R.K. (2012). Applications of case study research. (3rd edn.). Thousand Oaks, CA: SAGE Publications, Inc.