title: Tacit knowledge transfer in training and the inherent limitations of using only quantitative measures
authors: Rosellini, Amy; Hawamdeh, Suliman
date: 2020-10-22
journal: Proc Assoc Inf Sci Technol
DOI: 10.1002/pra2.272

The greatest challenge for many organizations today is not the acquisition, organization, and storage of information, but rather the ability to transform such information into useful knowledge, together with the application and measurement of that knowledge. This paper discusses effective knowledge transfer in a training environment and the inherent limitations of using only quantitative measures as tools of knowledge transfer measurement. By examining the measurement tool of a major U.S.-based airline, this study identifies disparity between peer and observer behavior assessments in order to understand other important factors that impact the measurement of knowledge transfer.

A learning company is a company that is adaptable because it prioritizes the learning agility of its workers (Pedler, Burgoyne, & Boydell, 1997; Wilson & Beard, 2014). As such, the learning company has a competitive advantage in a world that demands sustainable economic progress for ever-changing, constantly evolving businesses (Davenport & Prusak, 1998; Nonaka & Takeuchi, 1995). With Coronavirus Disease 2019, companies face an additional challenge: sustaining and measuring learning practices in an environment with more remote workers and changing business practices (Centers for Disease Control and Prevention, 2020; Molla, 2020). Organizations should endeavor to create a knowledge management strategy with a culture and environment in which employees can learn, unlearn, and apply new knowledge (Minonne & Turner, 2009; Nold, 2011; Senge, 1990; Stata, 1989). This competitive advantage is why $87.6 billion was budgeted for training programs, averaging 46.7 training hours per employee, across the United States in the private, public, government, and nonprofit sectors (Freifeld, 2018). Beyond the tens of billions of dollars spent on training and development programs annually, companies are more effective when they make a strategic effort toward knowledge management practices around their training programs and organizational culture (Cardosa, Meireles, & Peralta, 2012). With strategic direction and resources committed to knowledge management systems, the accuracy of knowledge management tools is critical to understanding the success of those systems.

This empirical study examines behavior analysis tools used to measure how knowledge transfer impacts behavior change at a major U.S.-based airline. The study addresses the following questions: To what extent do peer behavior assessment tools accurately measure tacit knowledge transfer and behavior change? What factors other than performance of the tasks affect a peer's measurement of tacit knowledge transfer and behavior change?

The airline industry is unique in that it requires a high level of training for employees, mandated by company standards and regulated by federal agencies such as the Federal Aviation Administration (FAA). The high level of oversight in aviation makes it one of the more sophisticated industries for knowledge management systems. In addition, workers employed on the aircraft, such as flight attendants, are critical to the success of the company because their jobs are rooted in the safety of customers and crew.
The airline industry was chosen both for the sophistication of its knowledge systems and for the critical nature of its safety requirements, which demand knowledge transfer and behavior change in the training environment. Given the complex nature of tacit knowledge transfer, this study examines the existing quantitative training assessments in use and the extent to which a qualitative approach can uncover underlying issues in the current measurement tools.

Knowledge management describes the practice of specifying tacit and explicit knowledge in an organization and providing systems and infrastructure that support the transfer and application of that knowledge (Davenport, De Long, & Beers, 1998; Nonaka & Takeuchi, 1995). Within the practice of knowledge management is knowledge transfer: an organization's ability to move knowledge from one employee to another and from one department to another (Wathne, Roos, & von Krogh, 1996). Knowledge transfer relies on tools, tasks, and people to move knowledge through the organization, but the competitive advantage lies in focusing on the people. Because knowledge is more easily transferred through interactions among similar groups of people, which exist within an organization, utilizing communities and collaborations between people is an effective and impactful way to grow knowledge transfer (Argote & Ingram, 2000). Knowledge application is performing job tasks or routines as directed (Becerra-Fernandez & Sabherwal, 2010). For a knowledge management system to succeed, it should allow employees to apply knowledge to their job tasks.

Learning, unlearning, and relearning resemble the concept of "ba," which is continually changing and evolving. "Ba" is a term that considers the context of knowledge. Knowledge is not static: it is born of human interaction, creation, transfer, application, and creation again. External knowledge may be provided in an information system or a training class, and knowledge is internalized when an employee performs tasks differently based on that knowledge. Performing tasks and interacting with others then creates new knowledge, which is externalized. Knowledge exists on a spiral, constantly changing, as illustrated in the SECI model (Nonaka & Konno, 1998; Nonaka & Toyama, 2003). Unlearning is a critical piece of learning, as unlearning creates new knowledge. As employees face new stimuli, new experiences, and new circumstances, they can learn to respond differently. New responses can teach them new ways of acting within their environment, encouraging them to unlearn old habits and knowledge while embracing new learning and thereby creating knowledge (Hedberg, 1981).

This study tests a measurement tool of the knowledge transfer measurement model (Rosellini, 2017) to understand the success of a formal training program in creating learning and knowledge transfer that should result in behavior change. A training environment that uses andragogy and experiential learning promotes the transfer of explicit knowledge, that which is documented or stored, as opposed to tacit knowledge, which is unspoken or refers to a way of performing work that is not easily documented (Knowles, Holton, & Swanson, 2005).

Quantitative measures are commonly used as measurement tools in knowledge management systems. Financial measures are one way to gauge a company's success in its knowledge management performance (Lee, Lee, & Kang, 2005).
Stock price and earnings, coupled with other metrics such as access to knowledge management systems and the amount of knowledge created, continue to be used with varying degrees of success at companies (Andone, 2009; Chen & Chen, 2006; Wong, Tan, Lee, & Wong, 2015). Companies also utilize training budgets and investment in Information Systems (IS), Learning Management Systems (LMS), or Knowledge Management Systems (KMS) as ways to measure knowledge management practices (Kuah, Wong, & Wong, 2012). In the Human Resource Management (HRM) field, there are other standards for measuring knowledge management programs, such as those used in traditional training and development programs. HRM-related metrics include the number of knowledge workers at the company, ideas generated by employees, task efficiency of staff, and the number of training hours completed by employees (Kuah et al., 2012; Lee et al., 2005; Shannak, 2009).

When measuring knowledge management processes through a formal training program, the training industry's standard Kirkpatrick Model contains four levels: (a) reaction to training, (b) knowledge immediately transferred post-training, (c) behavior change, and (d) business results, which rely heavily on the use of surveys as a measurement tool (Kirkpatrick & Kirkpatrick, 2016). The knowledge transfer measurement model introduces behavior analysis as a tool for a firm to measure whether training has been internalized and the knowledge applied (Rosellini, 2017). The knowledge management performance index (KMPI) demonstrated a relationship between knowledge management and financial indicators using survey data. The measurement tool utilized for KMPI is a quantitative survey given to employees to measure knowledge internalization, sharing, creation, and utilization. KMPI is an effective tool for measuring the knowledge processes of a company but does little to measure factors that may affect knowledge systems (Lee et al., 2005). Measurements utilized for knowledge practices are often related to IS measuring criteria, such as how often information is retrieved or downloaded, the number of times an employee visits a folder or page, and when certain search terms are used in the system (Andone, 2009; Lee et al., 2005; Shannak, 2009).

To measure the contribution of knowledge to reaching a desired result, Ahn and Chang (2004) developed a test laboratory in which they used regression analysis to measure how teams utilized knowledge to achieve specific business outcomes in a simulated new-product marketing project. The link between knowledge processes and business outcomes generated positive results through a measurement tool of behavior analysis. Because this experiment was performed in a laboratory, behavior analysis must be tested in a real work environment to understand the implications of utilizing it as a tool to measure knowledge systems.

The limitation of the tools that measure knowledge management processes is that they fail to illustrate all factors contributing to knowledge management practices. Cultural implications, employee relationships, and other factors may affect how knowledge measurement tools currently perform. Using only quantitative measurements, companies are unlikely to understand all the factors contributing to the success of knowledge management systems. In 2010, researchers began to study the factors affecting knowledge management; however, the measurement tools have largely remained the same (Wong et al., 2015). This study examines behavior analysis as an effective tool to measure the success of a training program.
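To make the evaluation framework concrete, the sketch below encodes the four Kirkpatrick levels as a simple lookup from level to a candidate measurement instrument. This is a minimal illustration only; the instrument pairings are assumptions made for the sketch and do not describe the airline's actual tooling.

```python
# Minimal sketch: the four Kirkpatrick evaluation levels paired with
# example instruments. The instrument names are illustrative
# assumptions, not the airline's actual measurement tools.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "post-training satisfaction survey"),
    2: ("Learning", "knowledge test immediately after training"),
    3: ("Behavior", "on-the-job behavior checklist"),
    4: ("Results", "business outcomes such as operational metrics"),
}

for level, (name, instrument) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {instrument}")
```

As described below, the airline in this study adopted only the first three of these levels.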
The study further examines the extent to which qualitative measures should be combined with existing quantitative methods to fully understand the capabilities and limitations of knowledge measurement processes and their impact on ongoing training.

Two quantitative measures were utilized to determine whether quantitative measures alone are sufficient to measure tacit knowledge transfer: behavior analysis and a worker survey. A major U.S.-based airline adopted the first three levels of the Kirkpatrick model in its training program to measure knowledge retention and behavior change of flight attendants-in-training. During a multi-week formal training program, the airline tested flight attendants-in-training to ensure they learned the objectives of the training course. The initial tests indicated that knowledge transfer occurred during the training. Next, the airline used a peer-reviewed behavior checklist to measure whether the flight attendant-in-training demonstrated behavior change as a result of the knowledge transfer. This study examines the ability of the peer-reviewed behavior checklist to measure knowledge transfer and identifies what other factors may contribute to measuring tacit knowledge transfer.

The first portion of the study examined the peer-reviewed behavior checklist to determine whether it is an accurate quantitative measurement of behavior change in new flight attendants. The behavior checklist is the quantitative measurement tool identified to determine whether an employee can move from knowledge transfer to behavior change in the knowledge transfer measurement model (Rosellini, 2017). To test behavior analysis as a tool, observers audited the peer-reviewed behavior checklist on 30 flights to determine whether the analysis accurately measured the transition from knowledge transfer to behavior change. The test population comprised 10 new flight attendants (FaNe) who had recently completed training, 10 senior flight attendants (FaSe) who observed peer behaviors, and 10 observers (Ob). The flight attendants met the observers and knew they were being observed during the behavior analysis process on actual passenger flights. While on the flights, the FaNe performed the job duties of three different flight attendant positions: Forward-position, Aft-position, and Mid-position. The three positions differ in their job responsibilities on commercial flights. Flights may also include a fourth position, but this study did not observe a FaNe in the fourth position. All FaSe and FaNe data was anonymized by the airline and provided to the research team. The research team provided input on how the study was completed and how best to randomize the sample of flights and observers to collect data within the company's time and resource limitations.

The peer-reviewed behavior checklist includes 30 to 48 job tasks. The measurement tool utilized by observers included only seven to eleven job tasks; the number varies by flight position. The job tasks assessed were selected based on how easily they can be observed by someone seated near the flight attendant position on the plane. The behavior checklist allows a score of 1, 2, or 3 when measuring the FaNe's ability to perform a job task: 1 = not competent, 2 = needs coaching, and 3 = competent. The FaSe and Ob used this scoring system to score the job behaviors with no additional instruction from the research team or airline management.
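As a concrete illustration of the 1-to-3 scoring scheme just described, the sketch below models a hypothetical checklist and averages only the tasks that were actually observed. The task labels and scores are invented for illustration and do not reproduce the airline's confidential form.

```python
# Illustrative sketch of the behavior checklist scoring:
# 1 = not competent, 2 = needs coaching, 3 = competent,
# None = task not observed (treated as Not Applicable).
# Task labels and scores below are hypothetical.
def average_score(checklist):
    """Average only the tasks that received a score."""
    scored = [s for s in checklist.values() if s is not None]
    return sum(scored) / len(scored) if scored else None

fane_checklist = {
    "task at the gate": 3,
    "task during boarding": 2,
    "task prior to takeoff": 3,
    "task during service": None,  # not observed on this flight
    "task at landing": 3,
}

print(f"Average score: {average_score(fane_checklist):.2f}")  # 2.75
```

In the study itself, Not Applicable entries arose when a task was not observed or not recalled, so how unscored tasks are handled materially affects the averages compared below.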
A second phase of the study, also quantitative, utilized a survey of all FaSe in three U.S. cities. The survey's aim was to understand whether factors other than job performance affect how a peer measures behavior change. The survey considers two categories of factors that may affect how FaSe complete the behavior analysis. The questions in the first category concern factors outside of job performance that could produce subjective scores, such as company culture, friendship with peers, training, personal experience, and time. The second category of questions concerns the FaSe's ability to correctly analyze behavior using the measurement tool.

In the first part of the study, 10 FaNe were observed by 10 observers on the FaNe's first three flight experiences in the Forward-, Aft-, and Mid-positions. Observers are members of the training team within the airline. About half of the team has work experience as former flight attendants, while the remaining observers have experience in corporate training. Prior to the flights, a seasoned leader in the training department held a meeting to educate the observers on which behaviors to judge while on the flights. During the flights, the observer sat in a designated seat on the plane from which the behaviors were best observed. During the Forward-position flight, 11 of 46 total tasks were tracked by the observer. During the Aft-position flight, 10 of 38 total tasks were tracked. For the Mid-position flight, seven of 30 tasks were tracked. At the same time and on the same flight, the FaSe tracked all 46, 38, and 30 job tasks for the Forward-, Aft-, and Mid-positions, respectively.

Figure 1 shows the average scores of the FaNe as observed by the FaSe and the Ob when the FaNe was flying in the Forward-position. The Forward-position is the most complex of the three positions. It is considered the lead position, and at most airlines, flight attendants are compensated at a higher rate of pay in the Forward-position because of its more complex job duties and higher responsibility. The absent score for FaNe 2 by the Ob reflects a flight that an observer missed due to flight schedule changes. The average scores for the FaNe were higher on all flights when observed by the FaSe than by the Ob. The data supports that FaSe are more likely than an Ob to score the FaNe higher. The comparison of FaSe and Ob results demonstrates that a peer completing a behavior checklist does not reach the same results as a third-party observer on the same flight measuring the same FaNe. One reason for this disparity may be that observers were given clear direction on the job behaviors in a one-hour meeting with a seasoned leader prior to the flight. The FaSe did not attend training prior to the flight and therefore did not receive instruction on how to judge the job tasks demonstrated by the FaNe. FaSe attend annual training, where they receive a copy of the measurement tool and are given a brief overview.

As shown in Figures 2 and 3 below, the average scores of the FaNe are closer when job tasks are observed in the Aft- and Mid-positions. While Figures 2 and 3 show less disparity in the FaNe's average overall scores, the data still suggests that a checklist completed by a peer is more likely to score a FaNe higher than one completed by a third-party observer who is not a peer. Figure 2 shows the average scores of FaNe flying in the Aft-position, with their behaviors observed by a FaSe and an Ob.
Seven of the 10 average scores were higher when judged by the FaSe than by the Ob. In only one flight, that of FaNe8, did the FaSe score the FaNe lower than the Ob. In Figure 3, five of the 10 average scores were higher as judged by the FaSe compared to the Ob. Again, only in the flight of FaNe8 did the FaSe score the FaNe lower than the Ob. Overall, the Aft- and Mid-positions showed similar results, though with less disparity than was demonstrated in the Forward-position flights of the FaNe. On average, the peer-reviewed checklist received a higher score than the non-peer checklist.

As seen in Table 1, three metrics showed a disparity between the Ob and FaSe scores of the same FaNe on the same flight in the same position. The largest disparity exists in the Forward-position because of the complexity of this role. The first metric is the average score for the Forward-position when scored by the Ob versus the FaSe. This metric compares the overall average score across the 11 tasks observed by the Ob to the overall average across the 46 tasks tracked by the FaSe. Regarding tasks marked incomplete, the Forward-position shows the greatest percentage of disagreement between the Ob scores and the FaSe scores (20.2%). Perhaps the most notable disparity is the number of tasks scored differently by the Ob and the FaSe: over half (58.6%) of the task scores across the nine scored Forward-position flights differed between the Ob and the FaSe. When each score is taken individually for the nine flights where the FaNe were in the Forward-position, the FaSe marked a score of 2 on only three of 99 total job tasks. All other FaSe scores were either 3 or Not Applicable, because the task was not observed or the FaSe could not remember the task being completed. The observers, by contrast, marked a score of 2 on 38 of the 99 tasks observed. Many reasons may exist for the wide discrepancy in how the FaSe and Ob scored tasks; these are revisited in more detail in the Conclusion section.

Figure 4 illustrates the Forward-position tasks that showed the greatest disparity between the FaSe and Ob scores. The graph shows the average Ob score and the average FaSe score broken down by the time during the flight at which the task took place. To protect the confidentiality and privacy of the airline, the exact tasks are not named here; instead, each task is identified by the phase of flight with which it is associated. The average difference between the Ob and FaSe scores is .58. The first two tasks listed, performed at the gate and during boarding, were both scored by the Ob more than one point below the FaSe's scores. Figure 4 also reveals that some tasks are scored more consistently by Ob and FaSe: for example, the third task (Prior to Takeoff I) showed a variation of only .17 on the three-point scale, and the final measured task showed no variation between Ob and FaSe.

Although the flight attendant role exists on the airplane for safety purposes, most airlines hire flight attendants who are friendly, given how much of the role is spent providing customer service or being customer-facing. The behavior profiles and competencies of the flight attendant staff may also be factors in why the peer scores differed from those of the third-party observers.
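The disparity metrics reported above, the average gap between FaSe and Ob scores and the share of tasks scored differently, reduce to simple paired comparisons. Below is a hedged sketch with invented scores, assuming the two checklists can be aligned task by task:

```python
# Sketch of the paired-score comparison behind the disparity metrics.
# ob and fase hold scores for the same tasks on the same flight;
# the values are invented for illustration.
ob   = [2, 2, 3, 2, 3, 3, 2]
fase = [3, 3, 3, 3, 3, 3, 3]

pairs = list(zip(ob, fase))
avg_gap = sum(f - o for o, f in pairs) / len(pairs)
pct_different = sum(o != f for o, f in pairs) / len(pairs)

print(f"Average FaSe minus Ob score: {avg_gap:.2f}")     # 0.57
print(f"Tasks scored differently: {pct_different:.1%}")  # 57.1%
```

Applied flight by flight, these two statistics correspond to the .58 average difference and the 58.6% of differently scored Forward-position tasks reported above.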
Quantitative measures alone are insufficient to understand how the behaviors and competencies of flight attendants may impact the accuracy of the knowledge transfer measurement tool.

The next step in the study surveyed 3,134 flight attendants and received 986 responses, yielding a confidence level of 95% with a margin of error of 2.58%. This phase examines two categories: factors that may result in subjective scores on the behavior assessment, and misinterpretation of the measurement tool itself. First, the results for the first category show that factors other than on-the-job performance affect how a FaSe scores the FaNe on behavior change. Table 2 shows which factors flight attendants identify as having an impact on how they score the behavior assessment form.

The second category the survey examines is the interpretation or understanding of the measurement tool used in a FaNe's first few flights. The scoring system utilized by the airline in the peer-reviewed behavior assessment is ambiguous: it requires different scores for "not competent" and "needs coaching" with no definition of the difference between them. Another potential problem is that the form is marked confidential and provided with an envelope, yet some FaSe review the scorecard with the FaNe before sealing it in the envelope. The survey examines the FaSe's understanding of the measurement tool by presenting specific scenarios of an observable FaNe task from the behavior assessment and asking how the FaSe would score them. This objective approach reveals the FaSe's ability to judge behavior without their relationship to the FaNe influencing the answer. Table 3 shows the ability of the FaSe to select the score that the airline's training personnel identify as correct. In the first scenario, the training personnel prefer that the FaSe mark the behavior assessment tool as Not Competent.

In terms of the behavior assessment experience itself, the survey results show that the FaSe see their role as teacher or coach more than as evaluator of their peer's performance. Table 4 shows how the FaSe identified their role in the behavior assessment on their most recent flight with a FaNe. The training personnel view the role of the FaSe as a silent observer, meant to quietly watch the FaNe perform the role and coach only if and when it is necessary or critical to the safety of the passengers and crew. However, almost as many FaSe see their role as one of prompting the FaNe as see it as one of silently observing.

As we continue to study measurement tools used in training, it becomes clear from this study that enhanced qualitative measurements would provide a more complete understanding of the factors impacting tacit knowledge transfer. The initial testing of the FaSe scores against the Ob scores showed that a peer's behavior assessment of a FaNe is not sufficient to measure behavior change in training. More data, in the form of qualitative analysis, is required to understand the disparity between FaSe and Ob scores. Observers tended to score job behaviors against job expectations more negatively than the peer flight attendants who completed the behavior assessment. To understand why this difference exists, focus groups or interviews with flight attendants may provide a better understanding of how factors like company culture and relationships affect how behavior change is measured.
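The survey's reported 2.58% margin of error is consistent with a standard finite-population calculation at 95% confidence (z ≈ 1.96, worst-case p = 0.5) for 986 respondents drawn from 3,134 flight attendants. A short sketch reproducing that figure:

```python
import math

# Margin of error at 95% confidence with finite population correction.
# N = flight attendant population, n = survey respondents,
# p = 0.5 (worst case), z = 1.96.
N, n, p, z = 3134, 986, 0.5, 1.96

moe = z * math.sqrt(p * (1 - p) / n)        # simple random sample
fpc = math.sqrt((N - n) / (N - 1))          # finite population correction
print(f"Margin of error: {moe * fpc:.2%}")  # ~2.58%
```

The finite population correction matters here because the 986 respondents are a sizable fraction of the 3,134-person population.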
The second part of the study provided further evidence that quantitative measures alone cannot accurately demonstrate the level of behavior change post-training. Table 2, for example, showed over 20% of survey respondents listing "Other" factors as a reason for how they complete the behavior assessments of their peers. Table 4 shows that training personnel and senior workers do not share the same understanding of the roles in a peer behavior assessment. A qualitative approach, through interviews and follow-up questions, will build a more complete understanding of how to accurately measure tacit knowledge transfer that can lead to behavioral change.

One of the revelations of this study is the misinterpretation of the measurement tool itself and of the senior worker's role in utilizing it. Another is that senior workers view their role as that of an active participant in the knowledge sharing process, as teacher or coach, which provides an opportunity for future studies to determine how senior workers may be utilized as knowledge workers in the knowledge transfer process. Possible follow-up studies could include modifying the measurement tool to minimize misinterpretation. Based on the results of this study, it is not clear that workers understand the purpose of the measurement tool. Future qualitative studies, such as focus groups or interviews, will provide information to better understand senior workers' current interpretation of the tool and offer insight on where to improve it. A qualitative approach, partnered with the findings of the current quantitative data, can uncover underlying issues in the current measurement tools to be examined and improved upon in future studies.

References

Ahn & Chang (2004). Assessing the contribution of knowledge to business performance: The KP3 methodology.
Andone (2009). Measuring the performance of corporate knowledge management systems.
Argote & Ingram (2000). Knowledge transfer: A basis for competitive advantage in firms.
Becerra-Fernandez & Sabherwal (2010). Knowledge management: Systems and processes.
Cardosa, Meireles, & Peralta (2012). Knowledge management and its critical factors in social economy organizations.
Centers for Disease Control and Prevention (2020). Coronavirus Disease 2019.
Chen & Chen (2006). Knowledge management performance evaluation: A decade review from 1995 to 2004.
Davenport, De Long, & Beers (1998). Successful knowledge management projects.
Davenport & Prusak (1998). Working knowledge: How organizations manage what they know.
Freifeld (2018). 2018 training industry report.
Hedberg (1981). How organizations learn and unlearn.
Kirkpatrick & Kirkpatrick (2016). Four levels of training evaluation.
Knowles, Holton, & Swanson (2005). The adult learner: The definitive classic in adult education and human resource development.
Kuah, Wong, & Wong (2012). Monte Carlo data envelopment analysis with genetic algorithm for knowledge management performance measurement.
Lee, Lee, & Kang (2005). KMPI: Measuring knowledge management performance.
Minonne & Turner (2009). Evaluating knowledge management performance.
Molla (2020). This is the end of the office as we know it.
Nold (2011). Making knowledge management work: Tactical to practical.
Nonaka & Konno (1998). The concept of "ba": Building a foundation for knowledge creation.
Nonaka & Takeuchi (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation.
Nonaka & Toyama (2003). Knowledge-creating theory revisited: Knowledge creation as a synthesizing process.
Pedler, Burgoyne, & Boydell (1997). The learning company: A strategy for sustainable development.
Rosellini (2017). Knowledge transfer model to measure the impact of formal training on sales performance.
Senge (1990). The fifth discipline: The art and practice of the learning organization.
Shannak (2009). Measuring knowledge management performance.
Stata (1989). Organizational learning: The key to management innovation.
Wathne, Roos, & von Krogh (1996). Towards a theory of knowledge transfer in a cooperative context. In Managing knowledge: Perspectives on cooperation and competition.
Wilson & Beard (2014). Constructing a sustainable learning organization: Marks and Spencer's first Plan A learning store. The Learning Organization.
Wong, Tan, Lee, & Wong (2015). Knowledge management performance measurement: Measures, approaches, trends and future directions.