Still a Deadly Disease? Performance Appraisal Systems in Academic Libraries in the United States

Glenn Ellen Starr Stilling, Allison S. Byrd, Emily Rose Mazza, and Shawn M. Bergman*

Performance appraisal of professional librarians in academic libraries is important because of the critical role these employees play. Professional librarians ensure that the library's resources and services are effective, relevant, and integrated within the parent institution. Performance appraisal and job feedback have been understudied in the library literature while, by comparison, these topics have generated great attention in other fields and in the corporate world. There have also been innovations in performance appraisal. Some large corporations have abandoned annual evaluations, substituting, for instance, quarterly performance snapshots and weekly check-ins with the supervisor. To investigate the current status of performance appraisal in academic libraries, we deployed a web-based survey in November 2013 to library directors in the United States. A national survey on this topic and with this population had not been conducted for 25 years. The results we report in this article relate to the following research objectives: The Snapshot research objective sought to identify the components of the performance appraisal systems currently being used. The Feedback research objective sought to identify who can give feedback during each performance appraisal event, the extent of peer-to-peer feedback, and whether there is sufficient feedback in library performance appraisal systems. We also report on programmatic effectiveness for libraries' annual evaluations as well as their overall performance appraisal system.

* Glenn Ellen Starr Stilling is Information Literacy Librarian and Professor in Belk Library and Information Commons at Appalachian State University; e-mail: stillngges@appstate.edu. Allison S. Byrd is Leadership Development Analyst at Bank of America; e-mail: allison.byrd@bankofamerica.com. Emily Rose Mazza is Sr. Human Resources Assistant at Amazon.com; e-mail: mazemily@amazon.com. Shawn M. Bergman is Associate Professor in the Psychology Department; Director, Office of Research Consultation; Co-Director, HR Science Research Team; and Associate Director, Center for Analytics Research and Education, all at Appalachian State University; e-mail: bergmans@appstate.edu. ©2018 Glenn Ellen Starr Stilling, Allison S. Byrd, Emily Rose Mazza, and Shawn M. Bergman, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC. doi:10.5860/crl.79.3.366

Introduction

Performance appraisal is a basic tool for employer-employee communication and understanding in workplaces of all types. Employers rely on it to communicate whether organizational goals are being fulfilled and customers are satisfied. Employees rely on it for reassurance that their work is satisfactory and their continued employment is secure. Both rely on it for insight into employee motivation, satisfaction, and engagement, all of which are tied to the organization's success.
In academic libraries, because professional librarians play a central role in furthering the goals of the libraries, one would expect performance appraisal to be closely examined in the professional literature. Surprisingly, performance appraisal (henceforth, PA) has been understudied in the library literature.1 For example, the Association of College & Research Libraries' (ACRL's) The Expert Library: Staffing, Sustaining, and Advancing the Academic Library in the 21st Century is a wide-ranging book, with thirteen chapters dedicated to personnel selection, management, training, and new roles. Although editors Scott Walter and Karen Williams write, in their introduction, that "…there is nothing so important to the future of the library and its continued place at the heart of the academic enterprise [as] its people and the expertise they bring…,"2 the chapters address PA only indirectly. Chloe Mills writes that ACRL provides "several broad guidelines for academic librarians of various types," but they "lack specificity" in terms of PA of individual librarians.3

Perhaps this dearth of attention in the library literature stems from the generally difficult nature of PA in workplaces. Both employees and managers dread it. One source of this dread is the complexity of giving and receiving feedback. Robbie M. Sutton, Matthew J. Hornsey, and Karen M. Douglas assert that "feedback is frequently ineffective and even counterproductive"; it is "a high-stakes game" and "is one of our most feared, avoided, and awkwardly handled social responsibilities."4 At the same time, employees deeply desire meaningful feedback. Brené Brown learned this from her interviews with human resources professionals. She told the Washington Post that the most common criticism these professionals hear in exit interviews is that employees did not receive enough feedback and that the feedback they did receive was "corrective… fast and not meaningful… and was blaming."5

The difficulties inherent in PA have led some corporations to revamp it radically or eliminate it entirely. For example, Deloitte, a large accounting firm, was dissatisfied with its system, which included annual goal-setting, an end-of-the-year rating, and a comparison of each employee's performance with that of peers (a system similar to that of many academic libraries). They replaced the end-of-year PA with performance snapshots, done by the team leader at the end of each project or quarter. They also added the requirement that team leaders have a check-in meeting once a week with each team member to ensure that work and priorities were still on track and to provide coaching or new information if needed.6 Peter Cappelli and Anna Tavis report that more than a third of U.S. companies have replaced the annual PA with more frequent manager-employee conversations about performance.7 Samuel A. Culbert and Larry Rout assert in their book on PA that "performance review is, at its core, a desperately flawed concept." They highlight two central problems. First, the power differential created by performance reviews can get in the way of candid conversations about what employees need to get their work done. Second, they believe that rankings of any kind put people in competition with each other and inhibit teamwork.
They advocate replacing PAs with "performance previews," an approach similar in spirit to Deloitte's check-ins: regular conversations in which both the manager and the employee ask each other what they can do to help accomplish the goals and results for which they are both being held accountable.8

The sparse attention to PA in the library literature, taken together with recent nonlibrary research on job feedback and the PA process, affirmed our perception of the need for a comprehensive, in-depth examination of PA in academic libraries. Therefore, we administered the first national, empirical, cross-sectional survey conducted in the United States on the topic in 25 years.9 The time- and labor-intensive nature of designing, pilot testing, and administering such a large survey probably explains why one had not been undertaken sooner. Given the large time gap since the previous survey, we sought to capture, in detail, the current state of PA in academic libraries.

Review of Selected Literature

For decades there has been a shortage, both in scope and quantity, of literature addressing PA in academic libraries. In their 1994 literature review, Rao Aluri and Mary Reichel remarked on the lack of diversity among writers on PA (most were administrators, library educators, or personnel librarians) and the shortage of skepticism about the value of PA.10 Ronald G. Edwards and Calvin J. Williams observed in 1998 that the literature "does not reveal a multitude of examples from which library administrators can obtain advice."11 Similar observations came from Julie A. Gedeon and Richard E. Rubin in 1999 ("Surprisingly, there is a dearth of data on the prevalence of performance evaluation in academic libraries"12) and from Mills in 2015 ("The specific discussion of the evaluation of academic librarians as employees, whether as staff or faculty, is a less prominent exercise in the profession"13).

Our review of the library literature focuses on sources published since 1994. We found that most publications on PA in academic libraries fall into two categories: 1) PA events (such as the annual PA or reviews for pretenure, promotion, tenure, or salary increase) or 2) appraisal of one subset of librarians or library work. We discovered two gaps in the literature: the absence of a comprehensive, up-to-date snapshot of library PA systems and the need for a better understanding of job feedback in library PA systems. Addressing these gaps led to two of our survey research objectives.

Discussions of PA events are almost all case studies describing the existing process, or a revision of the process, in one library. Most describe just one PA event, with little or no mention of the library's full PA system. The large number of case studies might stem from the fact that librarians often have a different status from other academics, and this status varies from campus to campus. Jen Stevens et al. remark on the "wide disparity of librarian status" in their article on revising academic library governance handbooks.14 Mary K. Bolin found so many variations at research universities that she was able to create a typology of librarian status: Faculty: Professional ranks; Faculty: Other ranks with tenure; Faculty: Other ranks without tenure; and Nonfaculty: Professional and academic staff.15 Another explanation for the numerous case studies describing one PA event might be that libraries are frequently asked by their institutions to adapt the campus process.
The adaptation process might thus seem to be the feature of PA most useful for librarians to report on in the literature.

Case studies of PA events address a range of topics. Merit salary increase was the PA event reported on by Lou Anderson and Donnice Cochenour. In response to a mandate from their campus administration, librarians at Colorado State University developed new criteria for determining annual merit salary increases and integrated them into the overall PA system. Their librarians struggled (as did many others, according to the literature we reviewed) with "how … the work of librarianship [would] be translated into the university categories of teaching, research, and service" and with how to make their criteria fit both librarians who taught regularly and those who did not. Another complexity was how to make the criteria fit the variety of duties of academic librarians (for instance, public services, technical services, administration).16

Merit salary increase was also the PA event reported on by Frada L. Mozenter and Lois Stickell. They explain that, at UNC-Charlotte, the issue of variations in librarian status, as described by Bolin,17 came into play. As with Colorado State, the impetus for change came from the campus administration. UNC-Charlotte's librarians had both a tenure-track and a non–tenure-track (with multiyear contract) career path. They struggled not just with drafting the documents and applying the performance categories but also with the relationship of the merit pay criteria to the annual PA. The process of revising merit pay salary criteria brought up dissatisfaction with the library's lack of standardization of several aspects of the annual PA process.18

Peer review for advancement and continuing employment is the PA event discussed by Joan M. Leysen and William K. Black. They describe the processes in use at Carnegie Research I and II institutions. Like other writers, they express concern about aligning the library's peer review process with that of the institution as a whole. They recommend that the library's criteria be clear and understandable to both librarians and university administrators and that the library's process and expectations be comparable to those of other campus units.19

Edward F. Lener, Bruce Pencek, and Susan Ariew report on changing the promotion, tenure, and reappointment processes at Virginia Tech's library. As did UNC-Charlotte's librarians, Lener and colleagues rewrote their standards document in response to a campus-wide directive resulting from the university's aspiration to reach a higher ranking as a research university. They articulate ways in which the process, and the resulting document, went beyond the original mandate, resulting in increased clarity and specificity of expectations, formal requirements, faculty ranks, and indicators of scholarly and professional achievement.20

A different approach to documentation of library work for promotion and permanent-status decisions is described by Mills. Once again, the impetus for change was a campus mandate. In her case study article, Mills describes the process used by library faculty at Robert Morris University (a private university with collective bargaining) to develop a librarianship portfolio modeled on the university's required teaching portfolio. The result was not a full professional portfolio; it omits scholarship and service.
Rather, the portfolio focuses solely on the work of librarianship. Its five categories of librarianship competencies, from which librarians being reviewed can choose the best fit(s), allow a "flexible and responsive portrait of individual work experiences," addressing the issue raised by Anderson and Cochenour of how to evaluate the variety of work done by professional librarians.21

Two case study articles describe a library's annual performance appraisal process. First, Threasa L. Wesley and Nancy Campbell explain that, at Northern Kentucky University, a mandate that salary increases be awarded on the basis of merit prompted their library faculty to undertake a "ground-up revision of the faculty reviewing process," aimed at consistency and developed with extensive collaboration and input. Like librarians at Virginia Tech, they found that their process and the resulting document made improvements beyond the original mandate. Like librarians at Colorado State, they struggled with the issue of the applicability of the performance guidelines to all areas of library work. Peer review was considered in developing the instrument for rating primary job performance but rejected because "no one in the library was convinced that a true peer evaluation system, in which an archivist would review the performance of a systems librarian, for example, would improve the situation."22

This concern about qualification to review a colleague, often heard among librarians, resembles objections that have been advanced about large, diverse academic departments (for instance, whether an American history professor should or can evaluate a European history professor), about levels of teaching (whether an instructor who teaches large, lower-level lecture classes can evaluate one who teaches small, upper-level seminar classes), and about whether college students can evaluate their professors' teaching effectiveness.

William E. Cashin reports that, in fact, numerous studies have found that student ratings are statistically valid, reliable, bias-free, and helpful as an information source for teaching improvement. He adds that, for personnel decisions on college teachers, however, they should be used "in combination with other kinds of data (e.g., peer ratings, administrator ratings, self report, and teaching portfolio materials)."23

Cashin's advice could be extended to librarians with differing specializations who are evaluating each other. The library science degree, training and work experience in a professional position, observation of library colleagues, and service on library and campus governance committees all give librarians sufficient discernment to evaluate each other's work, especially when following a set of guidelines. Thus, arguments against one type of librarian evaluating another type have limited, as well as limiting, usefulness. Just as multiple sources of data are recommended in evaluating college teaching, so should multiple sources of data (from library peers, library administrators, faculty outside the library, students, and campus administrators) be used in evaluating librarians. Multiple sources of data from a variety of customers of our work can serve as equivalents to student ratings of faculty. They also broaden and deepen the picture of the individual librarian's work, compared with the picture drawn from standard sources of input (the librarian's self-evaluation, the immediate supervisor, and/or the library director).
A second case study of a library's annual performance appraisal process is Junlin Pan and Guoqing Li's detailed critique and analysis of an anonymous faculty-status library's dual-track system. Library administrators vote to assign each librarian a numerical rating (50% of the final number), and a peer committee of librarians has a parallel vote (also 50%). Pan and Li discuss problems with the application of the system once its weighting of the evaluated categories (librarianship 60%, scholarship 20%, and service 20%) is put into practice. They argue that allocating 60 percent to librarianship, which "happens to be the least measurable," makes the rating "nothing more than a matter based on impression, favorability, subjectivity, and biases." They conclude that the design and continuous improvement of PA systems "is an urgent issue that needs to be more closely examined and adequately addressed in the library and management literature."24

Most discussions of subsets of librarians or library work deal with public services. Evaluation of instruction, by peer review or by small-group analysis, is discussed in three case-study articles. Two of the three libraries are seeking to conform to campus-wide promotion and tenure requirements; the library using the small-group-analysis approach is doing so for formative purposes only.25 Evaluation of patron services is discussed in the following three articles. Maureen A. Beck writes about a competency-based assessment program focused on information technology skills of public services librarians at Johns Hopkins. She discusses participatory, nonpunitive ways to use self-checking and direct observation to see if staff are learning and meeting the competencies, to hold staff accountable for satisfactory performance on the competencies, and to incorporate their outcomes on the competencies in the performance appraisal process.26 Mary Heinzman and David Weaver write about reciprocal peer observations of reference desk service at Augustana College and St. Ambrose University. The two librarians describe only the first year of their collaboration. Protocols on whether the peer observations would be included in the faculty review portfolio at Augustana had not been finalized.27 Anne Pemberton, Jerome Hoskins, and Caitlin Boninti describe the use of the Human Performance Technology model to evaluate desk service at UNC-Wilmington. As part of the model, the authors make specific recommendations regarding performance appraisal interventions that would close any performance gaps that were discovered.28

Non–public services subsets of library work are also being evaluated. For example, Jonathan Miller discusses evaluation of liaison work at Rollins College. He reports on an initiative involving anonymous surveys of campus faculty to gather their feedback on their liaison librarian. These formative assessment data are shared with each librarian and used to create a liaison plan for the next two years. Miller, who is library director at Rollins, notes that "individual librarians are free, but not required to use their results in their own faculty reviews."29 Hilary M. Davis and William M. Cross also focus on liaison work. They describe a data management plan review committee at N.C. State University that helps liaison librarians strengthen their competencies in research data management.
Their program compares core competencies for advocacy, support, and management of data collections to the experience that librarians gain from their training model. In the next-steps section of the article, they do not address use of this training in performance appraisals for their librarians.30

Competencies and standards are another way in which the library literature addresses the evaluation of subsets of library work. Jennifer Lyn Soutter compared competency statements in published, peer-reviewed, U.S. and non-U.S. library and information science journal articles. One stated purpose was to see how the concept of competency is used. She found that "management is the most common domain" for articles on competencies and that the majority are related to training. In her discussion of the articles, she does not mention use of competencies for performance appraisal. She asserts, however, that the study's findings should be used "when grappling with various issues involving definitions, such as recruitment, evaluation, and the education of new librarians."31

When librarians compile standards and guidelines documents, such as "ACRL Proficiencies for Assessment Librarians and Coordinators," which outlines 52 specific proficiencies within 11 broad categories, we should be proactive in recommending that they be put into practice in one of the places where they can have the most impact: performance appraisal of professional librarians. The above-mentioned document does so by stating that the proficiencies can be used to "assess performance and guide evaluation."32

Similarly, whether writing about competencies and proficiencies or some other method for evaluating a subset of library work, authors should discuss how the evaluation method is, or could be, included in library PA systems. With case study articles such as Davis and Cross's, it might be that data from the evaluation they describe are provided by librarians in their self-evaluation or their goals discussion, but this detail was omitted. When authors omit this kind of information, they (and their readers) might overlook an important potential application of the project they describe. Beck33 includes PA-related information in her article on competency-based assessment, as does Miller34 in his article on gathering anonymous survey data from library customers.

A few studies focus on topics outside the two main categories. Total Quality Management as it relates to PA is addressed by four works.35 Psychology-related topics are addressed in the following four studies. Laurel Crawford et al. write about a survey on librarians' fear of negative evaluation in relation to PA; Gedeon and Rubin write about attribution theory ("the human propensity to explain why people behave as they do") as a potential source of bias that could lead to "serious and unrecognized inequities"; Richard McKay writes about how library supervisors can manage both their own and their employees' anxiety during the PA process; and Melanie Clark, Kimberly Vardeman, and Shelley Barba write about a survey of the imposter phenomenon ("feelings an individual experiences when he or she rightfully achieves a level of success but does not feel deserving of said success") among academic librarians, showing that it can negatively affect their psychological well-being, job satisfaction, and job performance.36

The following works are also outside the two main categories. Legal issues are the focus of Ben Johnson's article.
He outlines the characteristics PA systems need in order to satisfy EEOC (Equal Employment Opportunity Commission) requirements that personnel decisions be based on employee performance and not be affected by illegal discrimination. He wonders whether both the legal and the quality management functions of PA can be combined in one system "without losing the essence of each."37 Job satisfaction is the focus of a survey-based study by Noor Harun Abdul Karim. His survey used items from various affective commitment, organizational commitment, and job involvement scales. He found a relationship between job performance feedback and job satisfaction.38

We observed two gaps in the literature that we wanted to address. First, a comprehensive, up-to-date snapshot of the PA systems in use in United States libraries is needed (our survey's Snapshot research objective). The most recent snapshot, focused on libraries at colleges and small universities, is Barbara Jenkins's 1990 Performance Appraisal Systems in Academic Libraries (reporting on her 1988 survey).39 Since 1988, one source, Leysen and Black's 1996 survey of libraries at Carnegie Research I and II institutions, provides a partial snapshot. Their survey focused on peer review in promotion and tenure decisions, rather than looking at the full PA system. They asked process- and format-related questions about what is assessed and who can give input.40 We found that the literature provides only partial or dated snapshots of PA systems, or (as described above) detailed case studies of segments of the PA system at individual libraries. Edwards and Williams stated in 1998, "The body of literature that does exist on performance appraisal does not reveal a single article which provides an overview of evaluation practices in academic libraries."41 We found that this is still the case.

Second, a look at job feedback and the variety of feedback-givers in library PA systems is needed (our survey's Feedback research objective). The library and nonlibrary literature coalesced to reveal this gap. The few studies of job feedback in the library literature (such as Gedeon and Rubin's literature review on bias through attribution theory,42 Crawford's study of fear of negative evaluation in librarians,43 and McKay's discussion of how supervisors can deal with anxiety in the performance appraisal process44) echo some of the problems discussed in the job feedback and human resources literature. These library sources also echo Sutton, Hornsey, and Douglas's comments about the complexity of giving feedback45 and Brown's findings about employees' dissatisfaction with the feedback they receive.46 But they do not address who gives feedback in library PA systems, whether the feedback is sufficient, and whether feedback might be a source of problems or missed opportunities.

To address these two gaps in the library literature, we developed a survey of PA systems used by academic libraries at institutions in the United States offering four-year or higher degrees. We chose library directors47 as our survey population because they are probably the individuals most knowledgeable about their library's PA system. We used a web-based, rather than paper, survey, which enhanced the convenience of the process.

Methods

The authors were the core members of a research team that developed a database of contact information for the survey population, wrote and deployed the survey, and analyzed the survey data.
The project received approval from Appalachian State University's Institutional Review Board. To develop the database for deploying the survey, we downloaded the 2010 academic library public-use data file from the NCES (National Center for Education Statistics) Library Statistics Program site (https://nces.ed.gov/surveys/libraries/academic.asp). We started with 2,663 libraries at institutions that offer four-year or higher degrees. If we were unable to identify a library director, and/or if the NCES data file indicated that the library employed three or fewer librarians, we eliminated the library. This left a survey population of 1,830. After eliminating undeliverable advance e-mails, we had a final population of 1,824.

The survey opened on November 19, 2013, and closed on December 14, 2013 (see Appendix A). Our response rate for complete responses was 26.2 percent (478 complete responses). Our response rate for complete plus partial responses was 34.1 percent (622 complete plus partial responses). Our survey was lengthy because of its large scope. For libraries with PA events beyond the annual evaluation, we asked several questions about each PA event. Libraries with no additional PA events were routed to a shorter version of the survey. We thought the 25-year time gap since the previous survey justified the time commitment our survey required, but we recognize that the survey's length might have decreased the response rate. Our response rate is, however, congruent with rates for published academic studies that report on web-based surveys, given Kim Bartel Sheehan's finding that response rates declined from 1986 to 2000.48 Although web-based survey response rates are generally around 11 percent lower than those of other survey modes, Katja Lozar Manfreda maintains that the real concern should be with the quality of the data. Because the precision of estimated parameters will be lower on web-based surveys, she believes that "the initial number of subjects needs to be higher to achieve the same precision."49 Our survey meets this criterion, since our initial population consisted of all libraries at institutions in the United States that offer four-year or higher degrees.

The main research objective for the survey, and thus the primary focus of our data analysis, was getting a detailed snapshot of the PA systems in use in academic libraries. We were also curious to see whether there were significant differences between libraries with only an annual evaluation and those that had additional PA events (for instance, in connection with promotion, tenure, or salary increases). Henceforth, we will refer to this distinction as "the two groups." For the data analysis comparing the two groups, we ran one-way analysis of variance (ANOVA) to determine whether there were statistically significant differences on survey questions (the outcome variables) between libraries with only an annual evaluation and those with additional PA events (the predictor variable). An alpha of .05 was used in the ANOVA to determine statistical significance.

Our survey explored the following research objectives:
• Snapshot: What are the details of the PA systems currently being used in academic libraries?
• Feedback: Who can give feedback in performance appraisals? To what extent is there peer-to-peer feedback? Is there sufficient feedback?
• Balance: Is the PA system balanced in looking at librarians' library responsibilities versus academic work?
• Effectiveness: How do library directors view the effectiveness of their library's annual evaluation and their overall PA system?

Results

Following the section on library and respondent demographics, our results (both one-way ANOVA data analysis and descriptive univariate analysis) are grouped according to the survey's research objectives.

Demographics of Participating Libraries and Respondents

Since our target population was large and diverse (libraries at all institutions in the United States offering four-year or higher degrees), we gathered basic information about each institution, its focus, the number of librarians, and the promotion, tenure, or continuing employment options available to professional librarians. We found that, of 495 respondents, the majority (92%) said their libraries employed 1–34 librarians, while 5 percent (n = 26) said their libraries employed 34–68 librarians. Twelve respondents said their libraries employed 68–170 librarians.

Next, we asked about the level of degree programs offered at the institutions. We found that most library directors (67%) worked at institutions where undergraduates were the majority, while 16 percent were at exclusively undergraduate institutions, 10 percent at majority graduate and/or professional institutions, and 6 percent at exclusively graduate and/or professional institutions. Additionally, we found that most (59%) were library directors at private nonprofit institutions, while 37 percent were at public institutions and 4 percent were at private for-profit institutions.

When asked, "How would you describe the overall personnel structure of your library?" library directors chose among three options: 1) departments/areas; 2) teams (entire library is team-based); and 3) some areas are team-based, others are not. A department/area structure was most common, at 70.0 percent overall. Results are presented in figure 1.

FIGURE 1 What is the Overall Personnel Structure of Your Library? (n = 554)

Finally, we asked whether the library offered promotion, tenure, and continuing employment according to the ACRLMetrics50 2011 definitions of these terms. Promotion was defined as follows: "Librarians are promoted in rank (equivalent to those of the faculty) on the basis of their academic proficiency and professional effectiveness (job performance, service, and scholarship) using a peer review system as the primary basis of judgment in the promotion process and the standards used by the library are consistent with the campus standards for faculty." Tenure was defined as follows: "Librarians are covered by tenure policies equivalent to those of other faculties and during the probationary period, librarians have annual written contracts or agreements the same as those of other faculty." Continuing employment was defined as follows: "After a period of no longer than seven years and through a process which includes peer review, librarians are granted continuing employment if they have met the appropriate conditions and standards." We found that the majority (54.6%) of libraries offer promotion (45.4% do not), but most (68%) do not offer tenure (32% offer tenure). We also found that 31.8 percent offer continuing employment, while 68.2 percent do not.
Of the 496 respondents who provided individual demographic information, we found that 35 percent had been director for 11 or more years; 28 percent for 6–10 years; 20 percent for 3–5 years; and 17 percent for less than 2 years. We also learned that 63 percent were female and 36 percent were male. The vast majority of our respondents (87%) had worked 11 or more years, with 7 percent having worked for 6–10 years, 2 percent having worked for 3–5 years, and 2 percent saying they were not a librarian. Finally, we asked about the library directors' educational level. The majority (77%) had a master's in librarianship, 36 percent had an additional master's, 5 percent had a PhD in librarianship, 11 percent had a PhD in another discipline, 3 percent had an EdD, and 6 percent had some other degree. Respondents could check all that apply for education level, so the total percentage exceeds 100 percent.

Snapshot of Library PA Systems

Our primary research objective was to obtain a detailed snapshot of the structure and format of PA systems in use in academic libraries. Most of the results we report satisfy this objective. We first present results from five survey questions regarding the annual performance appraisal. We follow with results on feedback, evaluation of groups, training on evaluation skills, and the frequency of library use of evaluations in addition to the annual performance appraisal.

Evaluation formats used in the annual evaluation were explored with one survey question. The formats used most frequently were the librarian's list of accomplishments, the librarian's self-appraisal, rating scales, and ratingless narrative (see figure 2). We found a statistically significant difference between the two groups, indicating that libraries having PA events in addition to the annual evaluation reported higher use of ratingless narratives, librarians' lists of accomplishments, feedback from the librarian's department or team, and peer evaluation. These data are presented in figure 2.

FIGURE 2 Annual/Periodic Evaluation: Formats Used for Evaluation (n = 532)

Goals as part of the annual evaluation were explored with three survey questions. First, we asked whether librarians have a written list of their job goals. Eighty-seven percent of library directors said yes, while 12 percent said no. Second, we asked whether librarians are evaluated on their progress toward the goals they listed. Again, 87 percent said yes, while 12 percent said no. Our third question was who could give feedback on the extent to which librarians met their goals. We offered several feedback givers as options and said that library directors could check all that applied. The most common choices, in order of frequency, were as follows: library director (87%); department head/team leader (43.6%); the librarian being evaluated (37%); assistant/associate director (31.6%); other librarians in their department or team (16%); any other librarians (10.5%); and paraprofessional staff in the department or team (9%).

Work areas evaluated in the annual evaluation were explored with one question. Library directors checked whether they evaluate nine work areas, which are listed in figure 3. When we looked at totals, we saw that the vast majority (>70%) of library directors indicated they evaluated six of the nine areas.
Two areas ("service to the community" and "scholarship and research") were evaluated by only a majority (>50%) of directors, and one area ("grant seeking") was evaluated by only 22.2 percent of directors. For libraries with PA events in addition to the annual evaluation, all work areas but two ("working with others" and "contributions to department or team") were evaluated by more directors compared with libraries with only an annual PA. The only work area in which the percentage of libraries differed between the two groups by more than 40 percent was scholarship and research. These data are presented in figure 3.

FIGURE 3 Annual/Periodic Evaluation: What is Evaluated? (n = 538)

We asked who conducts the annual evaluation. Library directors chose from the following options: library director, associate director, supervisor/team leader, HR professional, and committee. No statistically significant differences between the two groups were found. Library directors usually conduct performance appraisals in both groups, with a total percentage of 80.9 percent. These data are presented in figure 4.

FIGURE 4 Annual/Periodic Evaluation: Who Conducts the Performance Appraisal? (n = 538)

We also asked who can give input into the annual evaluation. Library directors chose from thirteen options. The input provider receiving the highest total percentage (81.65%) across the two groups was the library director. Statistically significant differences between the two groups were found, with libraries with PA events in addition to the annual evaluation having higher percentages of usage for nine of the thirteen feedback-giver options listed. These data are presented in figure 5.

FIGURE 5 Annual/Periodic Evaluation: Who Can Give Input? (n = 538)

Sufficiency of feedback in library PA systems was explored with one question. We asked, "Overall, do you think the performance appraisal system provides individual librarians with enough job feedback?" which helped us explore the intersection between our Feedback and Effectiveness research objectives. Fifty percent said yes, 25 percent said no, 22 percent said not sure, and 3 percent said don't know or prefer not to respond.

Evaluation of groups was explored with one question. We asked, "Does your library have performance appraisals for groups?" Our question had a separate option for evaluating departments or teams vs. evaluating other groups of library employees. Two hundred seventy-three (273) library directors responded to this question. Almost all said no (89%). Of those who said yes, 5 percent said they evaluate departments or teams (n = 13), and 5 percent said they evaluate committees, task forces, work groups, or project teams (n = 15).

Training on evaluation skills was explored with a three-part question. We asked where professional librarians receive training in the following evaluation skills: how to avoid errors and biases when evaluating someone else; how to give and receive job feedback; and interpersonal skills, such as communication, negotiation, and conflict resolution. For each skill, library directors could choose any of these training venues: conferences, workshops, in-house training by a professional from inside the library, and in-house training by a professional from outside the library. Two hundred sixty-eight library directors responded to the training question.
For training on how to avoid errors and biases, in-house training by a professional from outside the library (24.41%) and workshops (22%) received the highest number of responses. Twenty-one percent of library directors said no training was offered. For training on interpersonal skills, workshops (28.7%) and in-house training by a professional from outside the library (26.6%) received the highest number of responses. Nine percent of library directors said no training was offered. For training on how to give and receive job feedback, in-house training by a professional from outside the library (26.8%) and workshops (23.8%) received the highest number of responses. Sixteen percent of library directors said no training was offered.

We wondered how common it is for librarians to receive evaluations beyond the annual PA. We asked library directors whether they have performance appraisals for seven additional purposes. We did not do significance testing on this question, since it was not applicable for libraries that have only an annual evaluation. Three hundred nine library directors responded to this question. Promotion was checked most often, with 35.1 percent of total respondents saying yes, followed by tenure with 20.1 percent. However, the majority of library directors answered no to each option, indicating that formal performance appraisals for these seven purposes are not common in academic libraries. These data are presented in figure 6.

FIGURE 6 Outside the Annual/Periodic Evaluation, What Has a Separate Evaluation? (n = 281)
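As a methodological aside before the remaining results: the two-group comparisons reported throughout this Results section (for example, in figures 2, 3, and 5) were tested with one-way ANOVA at an alpha of .05, as described in the Methods section. A minimal sketch of such a test, written in Python with SciPy, might look like the following; the ratings and group sizes are fabricated for illustration and are not our survey data.

    # One-way ANOVA comparing hypothetical ratings from libraries with only
    # an annual evaluation to ratings from libraries with additional PA events.
    # All values below are fabricated for illustration.
    from scipy.stats import f_oneway

    annual_only = [4, 5, 3, 4, 4, 5, 3, 4]    # hypothetical 1-7 ratings
    additional_pa = [5, 6, 5, 6, 4, 5, 6, 5]  # hypothetical 1-7 ratings

    f_stat, p_value = f_oneway(annual_only, additional_pa)

    # With an alpha of .05, p < .05 indicates a statistically significant
    # difference between the two groups on this survey item.
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")

With only two groups, a one-way ANOVA is equivalent to an independent-samples t-test (the F statistic is the square of the t statistic), so the choice of ANOVA here is a matter of convention rather than necessity.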
For example, “Helping individual librarians meet job goals” had a significantly higher effectiveness rating (4.5) in libraries with additional PA events versus those with only the annual evaluation (4.1). These data are presented in figure 8. FIGURE 7 Annual/Periodic Evaluation: How Important is this Performance Appraisal in Meeting These Management Objectives? (n = 516) 380 College & Research Libraries April 2018 Discussion Four key takeaway messages emerged from our findings. We discuss our findings in the context of related library and nonlibrary literature. 1. Libraries continue to use several standard components of performance apprais- als that are recommended in the job feedback literature. This did not surprise us, given that the parent institution’s administration and human resources areas will influence library practice. Our survey results showed that the librarian’s self-appraisal is a common compo- nent of the annual evaluation, with 66.3 percent of libraries indicating they use this format (see figure 2). This finding shows that many libraries have made a choice that is consistent with the feedback literature. In The Power of Feedback (2015), Manuel London recommends that employees complete a self-assessment, even though people tend to assess themselves higher than others assess them. A self-assessment as part of the PA process can help employees see how their own views of their work match up with their supervisor’s views or with views of others who are asked to give feedback on them. London recommends that self-assessment procedures be checked for accuracy by being sure they are “based on objective, easily measured performance dimensions… [rather] than on subjective and ambiguous dimensions.” He cautions that, if there is a need to increase the alignment between the person’s self-assessment and the feedback from others, “annual or semi-annual performance appraisals are not a substitute for frequent, specific, and behaviorally oriented feedback throughout the year.”51 A minority (46.8%) of library directors said that they are using a ratingless narra- tive (either alone or with other formats) in the annual evaluation (see figure 2). This finding, also, is well supported in the job feedback literature. London recommends ratingless narratives because they allow managers to “avoid the defensiveness that often accompanies grading and focus the recipient’s attention on behaviors and direc- tions for development and performance improvement.”52 Additionally, 49.1 percent of library directors said that they are using a rating scale (either alone or with other formats; see figure 2). Lori Goler, Janelle Gale, and Adam Grant disagree with London on employees’ reac- tions to ratings. They cite evidence that some employees are actually helped by ratings, because they see clearly how their work for the year measured up overall. They also note that “even when companies get rid of performance evaluations, ratings still exist. Employees just can’t see them. Ratings are done subjectively, behind the scenes, and FIGURE 8 How Successful Do You Perceive Your Overall Performance Appraisal System to Be in Accomplishing the Following? 
without input from the people being evaluated."53 CEB, a leadership and employee management advisory firm, found that, although human resources professionals and managers thought that eliminating ratings would improve the PA process (for the reasons London articulated), managers actually found it more difficult to explain to employees why their past performance might need improvement. Although managers saved time by not having to rate each employee, they subsequently spent less time in informal conversation with employees about their performances.54

Our survey found that, in 87 percent of libraries, a written list of goals is part of the annual evaluation process, and librarians are evaluated on their progress toward their goals. Similarly, Jenkins's 1988 survey found that, in 77 percent of institutions, the librarian's goals statement was part of the performance appraisal.55 Since our survey found that library directors gave their overall PA system a relatively low rating on "Helping individual librarians to reach job goals" (4.3, with 6 being very successful; see figure 8), we wondered what the literature recommends that might help.

London offers a precaution about the link between the PA system and goal setting. He states, "Appraisal systems linked to goal setting must be an ongoing procedure.… If only an annual or semiannual review meeting occurs that covers only the most recent performance information, it may not be a valid system or one that provides acceptable justification for personnel decisions.… The supervisor should be clear with the subordinate at the outset of the performance period about how the appraisal will be used." He recommends holding meetings to review progress on goals "throughout the year." He includes the following in his list of "Helpful hints for giving feedback": "Provide feedback frequently.… performance feedback should not be saved up and dumped on a person once a year during an annual performance review." He explains that feedback has strong, research-supported effects on progress toward goals. First, it can motivate people to work harder toward goals if they learn that their goals are achievable with more effort. Second, it can demotivate people if they learn that they do not have the skills or competencies required for their goals. Finally, it can cause people to change their goals or to reduce them to make them more realistic.56

2. Library directors are lukewarm about the effectiveness of their PA system.

We designed choices for one of our questions about effectiveness to incorporate the definition from Bracken et al. of a successful 360-degree feedback process: "Creates focused, sustained behavior change and/or skill development in a sufficient number of people so as to result in increased organization effectiveness."57 We chose this definition because of its alignment with our Feedback research objective. Figure 8 shows that library directors rate their overall PA system as only moderately successful in accomplishing the individual, departmental/team, and librarywide goals and behavior/skill changes we inquired about. On a scale of 1 to 6 (not successful to very successful), the combined ratings for the two groups of libraries did not exceed 4.3 on any item. These lukewarm ratings did not surprise us, given the literature on behavior modification.
For example, Marshall Goldsmith and Mark Reiter wrote that "adult behavioral change is the most difficult thing for sentient human beings to accomplish." Goldsmith has worked as an executive coach for more than thirty-five years, and he obtains detailed information on his clients' behavioral challenges by using 360-degree feedback when he first starts working with them.58 Goldsmith and Reiter, as well as Carol Dweck, are firmly committed to the belief that, with focused, consistent effort, change can happen. Dweck's research on growth mindsets shows that individuals and organizations that believe their talents can be developed, rather than believing their talents are innate gifts, often accomplish more. This is especially likely if they recognize that an individual's thinking alternates between fixed and growth mindsets.59

3. Library directors at libraries with performance appraisals in addition to an annual evaluation rate their overall PA system as more effective.

Figure 8 shows that PA systems with performance appraisals in addition to an annual evaluation were rated as more effective in three of the five areas (creating behavior change in individual librarians, helping individual librarians reach job goals, and helping the library reach its organizational goals). One explanation for these higher ratings might be that, in these libraries, the annual evaluation gives librarians feedback on whether they are making sufficient progress toward successful reviews for their next multiyear evaluation (for instance, for reappointment, promotion, or tenure).

To see how libraries are allocating their PA time and effort between library work and academic work (our Balance research objective), we asked whether libraries have separate performance appraisals for academic events such as promotion, reappointment, tenure, post-tenure review, and review of teaching. Our data analysis shows that most libraries are not conducting separate evaluations for these events (see figure 6). This finding may be a reflection of the many different types of employment status that professional librarians now hold in academic libraries; their status may not require such reviews.60 However, as we discussed in the Results section, we also found that, when libraries do conduct these separate performance appraisals, they practically always evaluate library responsibilities (99% reappointment, 96% tenure, 92% salary adjustments, and 84% post-tenure).

Thus, there might be advantages to conducting separate multiyear reviews, even if the institution does not require them. Gail Munde wrote that reviews such as post-tenure review "allow for examination of longer-term outputs" and "establish a reliable mechanism to institute formative professional development to improve job performance."61

4. Libraries are missing out on feedback-collecting and feedback-sharing opportunities that are widely used outside academia and might improve their PA system.

Our results showing who provides input in performance appraisals demonstrate that the percentages of libraries that include feedback on the annual evaluation from sources other than the library director and supervisors/middle managers are quite low (see figure 5). The highest percentage is 30.5 percent for full-time permanent peers. The next highest is 19.5 percent for faculty library users. The percentages for the other categories of feedback givers are 17.5 percent or lower.
Given Brown's research finding that the largest complaint from employees during their exit interviews was that they received insufficient feedback,62 we believe these data could indicate an area for improvement.

We found that 86.1 percent of libraries include librarians' contributions to their department or team in the annual evaluation (see figure 3), yet 89 percent do not have performance appraisals for the department, team, or any other workgroups. We also found that library directors' ratings of the success of the overall PA system in helping departments, teams, or other groups improve their performance showed room for improvement (4.1, with 6 being very successful; see figure 8). In addition, we found that 30 percent of library directors said their libraries were partially or fully organized into teams (see figure 1). However, other surveys have shown that some evaluation of teams in libraries is being done. Lihong Zhu's survey of the use of teams in technical services in ARL libraries found that 181 libraries were fully or partially team-based and that, in 64 percent of them, the teams evaluated their own progress.63

Suggestions for Improving Library PA Systems

Our suggestions take into consideration the areas where our survey results indicated there was room for growth; our literature review (which focused on the library literature but also incorporates works on performance appraisal, job feedback, and positive psychology); and the constraints that we realize academic institutions place on PA systems. Further research, and our own additional data analysis, may lead to different suggestions. For now, however, we present ideas for what could be done to improve the imbalances our survey uncovered between the importance of the annual evaluation for several different management objectives (see figure 7) and the overall PA system's performance on similar objectives (see figure 8).

1. Accept that "there are no perfect feedback systems." Douglas Stone and Sheila Heen observe that organizations load a daunting combination of goals onto their PA systems, and no one system, or even combination of systems, can accomplish them all.64

2. Figure out whether more feedback is needed in your library's PA system. When we asked library directors if they thought their PA system provides individual librarians with enough job feedback, 50 percent said no, not sure, don't know, or prefer not to respond. This finding suggests that it might be worth asking the librarians being evaluated what they think. Libraries could begin with an anonymous in-house survey, asking questions such as the following: Is the feedback from our evaluations sufficient in quantity? Are you hearing from a broad enough range of feedback givers? Is the feedback frequent enough? Is it specific enough? Are there areas of your performance on which you'd like more feedback? Ask librarians to think about formal feedback (as part of the annual evaluation and other evaluations) as well as informal feedback. If your library does not hold multiyear reviews, ask librarians if they think these might be helpful. It might be that nonmanagerial librarians, managerial and administrative librarians, or both think that they need more or different feedback.

3. If you decide to add more feedback, start by offering training for both managers and librarians on how to give, receive, and solicit job feedback.
3. If you decide to add more feedback, start by offering training for both managers and librarians on how to give, receive, and solicit job feedback. Advantages to such training are numerous, including (as Gedeon and Rubin note) reducing attribution bias65 and (as James R. Detert and Ethan R. Burris recommend) encouraging employee voice.66 It might also help to offer training on how individuals react to “mood-altering feedback” and how greatly their reactions can vary. Stone and Heen explain, using research from neuroscience and social science, that there are three variables in our responses: baseline (our normal state of well-being), swing (how far up or down we move in response to feedback), and sustain or recovery (how long we feel uplifted by positive feedback or upset by negative feedback before returning to our baseline). They add that the length of time needed to return to baseline varies greatly among individuals.67 If librarians or library managers find themselves having strong emotional reactions to performance appraisals, they could use a research-based story-editing technique called “step back and ask why.” Timothy D. Wilson explains that people using this technique recall emotionally charged events as if they were neutral observers, watching themselves from afar. They ask themselves why they had the strong feelings and try to understand what the underlying causes might have been.68 Thus, prior training that explains feedback-related phenomena and prepares people for them might preclude or minimize feedback fallout.

4. If you decide to add more feedback, consider the following:
a. Introduce some 360-degree feedback. This might be part of the annual evaluation for individual librarians; an evaluation of the library director; an evaluation of departments, teams, or other workgroups; or an evaluation of individuals, using a self-directed format. G. Edward Evans, who believes that 360-degree feedback can be valuable at all levels of the library, explains why: “Using multiple raters… is one of the best methods for assuring fairness and accuracy.”69 (A minimal sketch of aggregating multisource ratings appears after this list.)
b. Evaluate departments, teams, and other workgroups. Our survey found that evaluation of departments, teams, and other workgroups in libraries is minimal. Evans, writing about methods for evaluating teams in libraries, similarly reported that he “found… very little illustrating the application on an ongoing basis in an organization.” He recommends that, when libraries evaluate teams, they use methods that “do not emphasize the individual” and evaluate teams on areas such as “trust, open/honest communication, conflict management, mutual decision-making, problem-solving,… and collaboration.” Key stages in the process include developing team-based performance standards and ensuring that team leaders receive formal training in the assessment process.70
5. Shift the balance of the PA system toward the positive. Even time-strapped libraries, and libraries that must conform tightly to their institution’s PA system, can make small positivity changes that stand to yield big attitudinal benefits. The literature of feedback documents some of the problems created by insufficient positive feedback; the literature of positive psychology documents the benefits of working to shift the balance. Stone and Heen explain that supervisors can habitually be so focused on higher-level problems that they forget to thank employees for what they do well, day in and day out. Employees notice the absence. Over time, both supervisors and employees feel unappreciated in their own ways. Stone and Heen call this phenomenon Mutual Appreciation Deficit Disorder (MADD) and write that it is all too common. They recommend that organizations have a “cultural norm of appreciation.”71 London explains some of the reasons that supervisors hesitate to give positive feedback: they worry about feeling embarrassed, seeming insincere, finding it harder later on if they have to give negative feedback, and having the employee expect a reward that is not in their power to give.72 The field of positive psychology offers realistic actions to take. Positive feedback can be introduced in ways that are small and informal, or larger-scale and carefully structured. Michelle Gielan cites positive psychology research from her consulting firm’s work with Fortune 100 companies. They found that three measures from their scale predict up to 75 percent of job successes. One of the three measures is work optimism, defined as “where you devote your mental resources …[whether] on the paralyzing or energizing aspects of work—and how strongly you believe good things will happen, which includes not only your own successes but also those of your colleagues and your organization.” Gielan writes that we can all learn specific ways to share positivity with others.73 By doing so, everyone (administrators, managers, and librarians) can help improve not just the library’s PA system but the job success of each individual.
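Our survey did not prescribe how 360-degree feedback (suggestion 4a above) should be administered. As one hedged illustration of a common reporting convention, the sketch below averages ratings by rater group and suppresses any peer or staff group with fewer than three raters so that no individual rater is identifiable; the group names, scores, and threshold are all hypothetical.

```python
# Aggregate hypothetical 360-degree ratings by rater group, suppressing
# small peer/staff groups to protect rater anonymity.
from collections import defaultdict
from statistics import mean

MIN_RATERS = 3  # hypothetical anonymity threshold for group sources

# (rater_group, competency, score on a 1-6 scale) -- invented examples
ratings = [
    ("peer", "communication", 5), ("peer", "communication", 4),
    ("peer", "communication", 5), ("staff", "communication", 4),
    ("staff", "communication", 3), ("supervisor", "communication", 4),
    ("self", "communication", 3),
]

by_source = defaultdict(list)
for group, competency, score in ratings:
    by_source[(competency, group)].append(score)

for (competency, group), scores in sorted(by_source.items()):
    # Self and supervisor ratings are attributable by definition;
    # group sources are shown only when enough raters responded.
    if group not in ("self", "supervisor") and len(scores) < MIN_RATERS:
        print(f"{competency} / {group}: suppressed (fewer than {MIN_RATERS} raters)")
    else:
        print(f"{competency} / {group}: mean {mean(scores):.1f} (n={len(scores)})")
```

Averages per source let the person being reviewed compare self-perception against each rater group without exposing any single rater.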
Suggestions for Further Research
Nonmanagerial librarians need to be surveyed for their impressions of library PA systems. Such a survey would fit with the interpersonal value of fairness as well as the academic value of shared governance. Individual librarians are the ones being evaluated, and they have the most at stake. Such a survey would also align with recommended practices for PA and feedback systems. Stone and Heen recommend that, when a new PA system is being selected or implemented, employees be given information on the goals of the system, why the system was chosen over other options, and what the costs and benefits will be. Employees should also be engaged in discussion and asked for feedback throughout the process.74 Such advice is prudent for modifying an existing system as well as for replacing it with a new one.

A survey of nonmanagerial librarians would give library directors quantitative and qualitative data from a broad cross-section of similar employees on what they need from their PA system. It would allow libraries seeking to change their PA system to begin the process with goodwill, equipped with research on the perspectives of those being evaluated. It would also provide a starting point on which aspects of PA systems to change in order to improve the feedback-giving opportunities of each component of the system. Because of our finding that only 50 percent of library directors think their PA system provides individual librarians with enough job feedback, particular attention should be paid to sufficiency of job feedback.

We are conducting additional data analysis for our Effectiveness research objective. It will include qualitative analysis of responses to our open-ended question about the library PA system’s strengths, weaknesses, and constraints to being changed.

Conclusion: From a Deadly Disease to a Preventive Health Checkup
In 1994, Aluri and Reichel provided a sharp critique of performance evaluation. They based their critique partially on the writings of W. Edwards Deming, regarded as “the master of continual improvement of quality” and known for his numerous management-related publications and for teaching and consulting on quality management, both in Japan and in the United States.75 Deming cited the annual performance appraisal as the third of five deadly diseases afflicting management.76

Our survey results and literature review led us to a more hopeful conclusion than Deming’s. We believe that it is more useful to view PA systems in libraries as a preventive health system (including an annual preventive health checkup) than as a disease. Both library managers and nonmanagerial librarians must take an active role in prevention. Library managers who, unlike many large corporations, have kept the annual performance evaluation can remember that many of their employees might want an annual checkup. Goler, Gale, and Grant explain that, for employees who tend to feel very anxious about their performance, the annual evaluation is more helpful than the uncertainty of not knowing where they stand. These employees, like patients who have white-coat hypertension, probably prefer annual ratings.77 Library managers, like doctors, can also follow some of our suggestions to strengthen their role in prevention. They can require that their professional librarians receive job feedback more than once a year, perhaps gathering some of it by evaluating groups. Managers can also learn more about effective delivery of feedback; learn to use coaching as a form of regular, ongoing feedback; and ensure that their feedback is not colored by biases such as attribution bias. Librarians, like patients, can strengthen their own preventive care by learning how to seek out feedback from managers, peers, and customers, and by improving their skills in receiving and responding to feedback.

We recognize, as did the hundreds of senior human resources professionals who gave their own HR systems a C, D, or F,78 that there are no miracle cures for the shortcomings of performance appraisal. Our survey results, and the literature we have reviewed, have clearly shown that performance appraisal, although too complex to ever get just right, is necessary and valuable and (as Aluri and Reichel79 also concluded) will not disappear. Our hope for academic libraries is that we can, as Wilson describes, rewrite the stories we tell ourselves80 about performance appraisal, learning to expect discomfort from the process, to view our reactions as normal, and to value the resulting guidance and insight.

Acknowledgements
We thank the following for their contributions: Chad Shelton and Jayme Tetro for their work on building the survey database and programming the survey; Larry Boyer, Virginia Branch, Don Frank, Ken Johnson, and Peg Werts for pilot-testing the survey; Mary Reichel and Joyce Ogburn for the survey’s advance e-mail; Rao Aluri for reviewing drafts of the manuscript; Amy Hudnall and Colin Tate for assistance with citations and notes; Katherine Alford for assistance with figures and tables; the PWWR (People Who Write Regularly) writing circle (Joe Gonzalez, Margaret Gregor, Mike Howell, Lynn Searfoss, and Kin-Yan Szeto) for feedback on the survey and manuscript from beginning to end; and the anonymous reviewers and the editor, whose questions, comments, and suggestions improved the manuscript.
Glenn Ellen Stilling also thanks Susan Robison for helping her strengthen and deepen her writing practice, and Roger Stilling for his sharp editorial eye and his unwavering love and support.

Appendix A: Survey instrument*
Survey title: Performance Appraisal Systems for Professional Librarians in Academic Libraries

Introduction
The purpose of this survey was to describe and assess the processes libraries use to evaluate the performance of their professional librarians at four-year (and higher) academic institutions. Library directors were asked for details about the annual evaluation process for professional librarians in their library, as well as about other occasions on which librarians, as well as library workgroups, might be evaluated. In addition, they were asked questions about the extent of librarians’ participation in the system, especially through peer-to-peer feedback; the degree to which the system addresses all aspects of librarians’ job responsibilities; and the directors’ perceptions of the system’s effectiveness. We estimated that the survey would take approximately 5-20 minutes, depending on the number of performance appraisal events in the library’s system.

Your library’s performance appraisal system
2. In your library’s performance appraisal system, are professional librarians evaluated on occasions in addition to their annual/periodic evaluation?
□ Yes
□ No (sent to an abbreviated version of the survey)

About your library
3. To whom do professional librarians in your library report directly? Check all that apply.
□ Library director
□ Assistant/associate director
□ Department/team head
□ Other, please explain (open-ended question)

4. How would you describe the overall personnel structure of your library?
□ Departments/areas
□ Teams (entire library is team-based)
□ Some areas are team-based; others are not
□ Other organizational arrangement; please explain (open-ended question)

5. Does your library offer promotion?
□ Yes
□ No

6. Does your library offer tenure?
□ Yes
□ No

7. Does your library offer continuing appointment?
□ Yes
□ No

Professional librarians: Annual/periodic appraisal
8. How long have the main components of this performance appraisal been in place?
□ Less than 2 years
□ 3-5 years
□ 6-10 years
□ 11 or more years

9. Who conducts this performance appraisal? Check all that apply.
□ Library director
□ Assistant/associate director
□ Supervisor/team leader
□ A human resources professional
□ A committee
□ Other, please specify (open-ended question)

10. Who can give input? Check all that apply.
□ Library director
□ Supervisors or other middle managers
□ Peers: Full-time permanent
□ Peers: Part-time permanent
□ Peers: Full-time temporary/adjunct
□ Peers: Part-time temporary/adjunct
□ Paraprofessional staff
□ Any other librarians
□ A committee
□ Faculty/staff library users (not employed in the library)
□ Library student assistants
□ A consultant/facilitator/trained observer or human resources professional
□ Student library users
□ Other, please specify (open-ended question)

11. Which of the following do you evaluate? Check all that apply.
□ Library responsibilities
□ Ability to work with others
□ Contributions to department/team and other work groups
□ Teaching/Library instruction/Information literacy instruction
□ Scholarship and research
□ Grant seeking
□ Professional service
□ College/university service
□ Service to the community
□ Other, please specify (open-ended question)

12. Do librarians have a written list of their job goals?
□ Yes
□ No

13. Are librarians evaluated on their progress towards the goals they listed?
□ Yes
□ No

14. Who can give feedback on the extent to which librarians met their goals? Check all that apply.
□ Library director
□ Assistant/associate director
□ Department head/team leader
□ Other librarians in their department or team
□ Paraprofessional staff in the department or team
□ Any other librarians
□ The librarian being evaluated
□ A human resources professional
□ Other, please specify (open-ended question)

15. What evaluation formats and methods are used? Check all that apply.
□ Rating scale (i.e., Likert scale or some variation)
□ “Ratingless” narrative (no numerical score or overall adjectival grade)
□ Librarian’s list/report of activities and accomplishments
□ Librarian’s self-appraisal (librarians rate their own performance)
□ Ranking system (librarians are ranked highest to lowest on job factors)
□ Feedback from a department/team, committee, or other workgroup
□ Weighted scales
□ Peer evaluation
□ 360-degree or multisource feedback (feedback from a variety of employees; can include self-ratings and ratings from internal and external customers)
□ Benchmarking
□ Other, please specify (open-ended question)

360-Degree Feedback
16. Which librarians receive formal 360-degree or multisource feedback?
□ Library director only
□ Library director and other administrators
□ Librarians in only certain areas
□ All librarians
□ None
□ Other, please specify (open-ended question)

17. How is the 360-degree feedback system administered, and how is the feedback used? Check all that apply.
□ Used for developmental purposes only [The librarian being rated is given a report of the feedback. It might be kept confidential (i.e., the raters and the supervisor might not see the report). The librarian decides how to use the feedback.]
□ Used in the librarian’s performance appraisal
□ Used for pay decisions
□ Used for staffing decisions
□ Results are shared with supervisor(s)
□ Results are shared with raters
□ An action plan is required as follow-up
□ Other, please specify (open-ended question)

Professional librarians: Annual/periodic appraisal, continued
18. How important is this performance appraisal in meeting the following performance management objectives?
Scale: Not at all important | Low | Slightly | Neutral | Moderately | Very | Extremely important
Clarify the library’s expectations of librarians □ □ □ □ □ □ □
Provide information to managers for coaching purposes □ □ □ □ □ □ □
Provide information to managers for making pay decisions □ □ □ □ □ □ □
Provide information to managers for making promotion/demotion decisions □ □ □ □ □ □ □
Provide information to managers for assigning librarians new responsibilities □ □ □ □ □ □ □
Provide information to librarians about perceptions of their performance □ □ □ □ □ □ □
Provide information to librarians about their development needs □ □ □ □ □ □ □

19. Which of the following are included or addressed in this performance appraisal? Check all that apply.
□ Promotion
□ Reappointment
□ Tenure
□ Post-tenure review
□ Salary adjustments
□ Review of teaching/library instruction/information literacy instruction
□ Graduate faculty membership
□ Other, please specify (open-ended question)

20. Outside of the annual/periodic evaluation, for which of the following does your library have a separate performance appraisal process? Check all that apply.*
□ Promotion
□ Reappointment
□ Tenure
□ Post-tenure review
□ Salary adjustments
□ Review of teaching/library instruction/information literacy instruction
□ Graduate faculty membership
□ Other, please specify (open-ended question)

* Note: For each of the first six options in Question 20, and for the first two options in Question 51, respondents who have the performance appraisal event are routed to a series of questions that repeat (with modifications and additions where needed) questions 8, 9, 10, 11, and 15.

Professional librarians’ involvement
49. Overall, how would you rate the amount of involvement that professional librarians can have in the various components of your performance appraisal system? Please consider any of the following that are components of your system: setting goals (individual, team/department, library); self-evaluation; input during the individual librarian’s performance appraisal session; feedback on the performance appraisal of peers, teams/departments, and working groups; and input on changes to the performance appraisal system. (help text)
□ Never
□ Rarely, less than 10%
□ Occasionally, about 30%
□ Sometimes, about 50%
□ Frequently, about 70%
□ Usually, about 90%
□ Every time

Groups, departments, teams
50. How often does your library use groups with broad-based membership, such as committees, task forces, working groups, or project teams? This question refers to the use of groups in addition to the library’s overall organizational structure. (help text)
□ Not at all
□ Rarely
□ Occasionally
□ Sometimes
□ Often
□ Usually
□ Extensively

51. Does your library have performance appraisals for groups? Check all that apply.
□ Yes, for departments or teams
□ Yes, for committees, task forces, work groups, or project teams
□ No
□ Other, please specify (open-ended question)

Training, follow-up, effectiveness
65. Where do professional librarians receive training in the following evaluation skills?
Scale: Conferences | Workshops | In-house training by a professional from inside the library | In-house training by a professional from outside the library | No training offered
Training on how to avoid errors and biases when evaluating someone else □ □ □ □ □
How to give and receive job feedback □ □ □ □ □
Interpersonal skills (i.e., communication, negotiation, conflict resolution) □ □ □ □ □

66. How does the library help librarians follow up on performance appraisal feedback and use it to improve their work and meet their job goals? Check all that apply.
□ Follow-up meeting with supervisor
□ Follow-up forms
□ Asking for reports
□ Providing funding for development
□ Coaching and counseling by supervisor
□ Work observation by supervisor
□ Other, please specify (open-ended question)

67. Overall, do you think the performance appraisal system provides individual librarians with enough job feedback?
□ Yes
□ No
□ Not sure

Effectiveness of your system
68. How successful do you perceive your current overall performance appraisal system to be in accomplishing the following?
Scale: Not at all successful | Low success | Slightly successful | Moderately successful | High success | Very successful
Creating behavior change in individual librarians □ □ □ □ □ □
Motivating individual librarians to develop new skills or improve existing skills □ □ □ □ □ □
Helping individual librarians reach their job goals □ □ □ □ □ □
Helping the library as an organization reach its goals □ □ □ □ □ □
Helping departments, teams, or other groups improve their performance □ □ □ □ □ □

69. Overall, how satisfied are you with the current overall performance appraisal system?
□ Completely dissatisfied
□ Mostly dissatisfied
□ Somewhat dissatisfied
□ Neither satisfied nor dissatisfied
□ Somewhat satisfied
□ Mostly satisfied
□ Completely satisfied

70. Please comment on your overall performance appraisal system’s strengths and weaknesses. Please comment, also, on constraints to making changes in the system. (open-ended question)

Demographics
71. How long have you worked as a librarian?
□ Less than 2 years
□ 3-5 years
□ 6-10 years
□ 11 or more years
□ I am not a librarian

72. What is your educational level? Check all that apply.
□ Masters in librarianship
□ Additional masters
□ Ph.D. in librarianship
□ Ph.D. in another discipline
□ Ed.D.
□ Other, please specify (open-ended question)

73. What is your gender?
□ Male
□ Female
□ Other
□ Prefer not to respond

74. How many professional librarians are employed at your library? (open-ended question)

75. What is the FTE enrollment of your institution?
□ Under 500
□ 500-1,000
□ 1,000-3,000
□ 3,000-5,000
□ 5,000-7,000
□ 7,000-10,000
□ 10,000-20,000
□ 20,000-30,000
□ Over 30,000

76. Overall, which best describes the degrees offered by your institution?
□ Diverse fields
□ Special focus (examples: law; medical and health; business and management; engineering or technology; art, music, and design; and theology and Bible studies)

77. Overall, which best describes the degree programs offered by your institution?
□ Exclusively undergraduate
□ Majority undergraduate
□ Majority graduate and/or professional
□ Exclusively graduate and/or professional

78. Is your institution
□ Public
□ Private non-profit
□ Private for-profit

79. Is your institution
□ Brick and mortar only
□ Primarily brick and mortar
□ Primarily online
□ Online only

80. Which of the following best describes your library?
□ Has physical collections and facilities (in addition to online resources)
□ Online only (no physical collections or facilities open to patrons)
□ Other, please specify (open-ended question)

81. Are you currently the library director? “Library director” includes other titles, such as Dean of Libraries, University Librarian, or College Librarian. (help text)
□ Yes
□ No (sent to end of survey)

82. Does the library director receive 360-degree feedback?
□ Yes
□ No

83. How long have you held your current position as library director?
□ Less than 2 years
□ 3-5 years
□ 6-10 years
□ 11 or more years

* Notes: This is an abbreviated version of the survey, due to skip patterns and repeated questions. Question numbers here also differ, in some cases, from the numbering in the full survey, due to a skip pattern (see Question 2) for libraries with just one performance appraisal event. Most survey questions included the options “Don’t know” and “Prefer not to respond”; these options were omitted here to save space.
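The tables in Appendix B, which follows, flag differences between the Annual Only and In Addition to Annual groups at p<.05, p<.01, and p<.001 (superscripts a, b, and c). The article does not show the underlying computations, so the following is only a sketch of two standard tests that fit data of this shape: a chi-squared test for the check-all-that-apply percentages and an independent-samples t-test for the Likert means. All counts and ratings below are invented for illustration (chosen only to resemble Table 2’s peer-evaluation row); they are not the study’s raw data.

```python
# Illustrative significance tests for two-group comparisons like those
# flagged in Appendix B. All numbers are hypothetical.
from scipy import stats

# Percentage tables (e.g., Table 2): a chi-squared test on a 2x2
# contingency table of yes/no counts for each group.
observed = [
    [20, 225],  # Annual Only: used peer evaluation yes/no (hypothetical)
    [67, 220],  # In Addition to Annual: yes/no (hypothetical)
]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"peer evaluation: chi2={chi2:.2f}, p={p:.5f}")  # mark a/b/c by p cutoff

# Likert-mean tables (e.g., Table 8): Welch's t-test on 1-6 ratings.
annual_only = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]
in_addition = [5, 4, 5, 4, 5, 5, 4, 5, 4, 5]
t, p = stats.ttest_ind(annual_only, in_addition, equal_var=False)
print(f"overall success: t={t:.2f}, p={p:.4f}")
```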
Appendix B. Tables

TABLE 1
What Is the Overall Personnel Structure of Your Library? (n = 554)
(Annual Only* | In Addition to Annual** | Total)
Departments: 66.8% | 72.2% | 70.0%
Teams (entire library team-based): 22.1% | 14.1% | 17.5%
Some areas team-based, some not: 11.1% | 13.7% | 12.6%
*Annual Only: Libraries that have only an annual evaluation in their PA system
**In Addition to Annual: Libraries that have one or more additional PA events in their PA system
Note: No statistically significant differences between the two groups were found.

TABLE 2
Annual/Periodic Evaluation: Formats Used for Evaluation (n = 532)
(Annual Only* | In Addition to Annual** | Total)
Rating Scale: 52.2% | 46.6% | 49.1%
Ratingless Narrative: 38.4% | 53.4%c | 46.8%
Librarian’s List of Accomplishments: 62.0% | 76.4%c | 70.0%
Librarian’s Self-Appraisal: 62.5% | 69.3% | 66.3%
Ranking (librarians ranked on job factors): 4.1% | 8.1% | 6.3%
Feedback from Librarian’s Dept./Team: 15.1% | 26.2%c | 21.3%
Weighted Scales: 4.9% | 6.5% | 5.8%
Peer Evaluation: 8.2% | 23.3%c | 16.6%
360-Degree Feedback: 2.8% | 6.2% | 4.7%
Benchmarking: 1.6% | 2.3% | 1.8%
a p<.05, b p<.01, c p<.001; marked percentages were higher than the comparison group’s at p<.05. Respondents could check all that apply, so total percentages exceed 100%.
*Annual Only: Libraries that have only an annual evaluation in their PA system
**In Addition to Annual: Libraries that have one or more additional PA events in their PA system

TABLE 3
Annual/Periodic Evaluation: What Is Evaluated? (n = 538)
(Annual Only* | In Addition to Annual** | Total)
Library Responsibilities: 90.2% | 96.7%c | 93.9%
Working with Others: 77.1% | 76.0% | 76.5%
Contributions to Department or Team: 84.9% | 87.0% | 86.1%
Teaching/Library Instruction: 75.9% | 89.0%c | 83.0%
Scholarship & Research: 29.0% | 71.8%c | 52.9%
Grant Seeking: 14.3% | 28.5%c | 22.2%
Professional Service: 60.0% | 84.8%c | 73.8%
College or University Service: 66.5% | 88.0%c | 78.5%
Service to the Community: 37.6% | 65.7%c | 53.3%
a p<.05, b p<.01, c p<.001; marked percentages were higher than the comparison group’s at p<.05. Respondents could check all that apply, so total percentages exceed 100%.
*Annual Only: Libraries that have only an annual evaluation in their PA system
**In Addition to Annual: Libraries that have one or more additional PA events in their PA system

TABLE 4
Annual/Periodic Evaluation: Who Conducts the Performance Appraisal? (n = 538)
(Annual Only* | In Addition to Annual** | Total)
Library Director: 79.6% | 81.9% | 80.9%
Associate Director: 19.6% | 27.5% | 24.0%
Supervisor/Team Leader: 26.5% | 40.1%c | 34.1%
HR Professional: 1.2% | 1.9% | 1.6%
Committee: 3.7% | 23.3%c | 14.6%
a p<.05, b p<.01, c p<.001; marked percentages were higher than the comparison group’s at p<.05. Respondents could check all that apply, so total percentages exceed 100%.
*Annual Only: Libraries that have only an annual evaluation in their PA system
**In Addition to Annual: Libraries that have one or more additional PA events in their PA system

TABLE 5
Annual/Periodic Evaluation: Who Can Give Input?
(n = 538)
(Annual Only* | In Addition to Annual** | Total)
Library Director: 78.8% | 83.8% | 81.6%
Supervisors/Middle Managers: 33.9% | 55.0%c | 45.7%
Full-Time Permanent Peers: 17.6% | 40.8%c | 30.5%
Part-Time Permanent Peers: 5.7% | 13.6%c | 10.1%
Full-Time Temporary Peers: 3.7% | 9.7%c | 7.0%
Part-Time Temporary Peers: 3.3% | 8.4%b | 6.1%
Paraprofessional Staff: 11.0% | 18.8%b | 15.3%
Any Other Librarians: 13.1% | 21.0%b | 17.5%
Committee: 2.9% | 19.1%c | 11.9%
Faculty Library Users: 12.7% | 24.9%c | 19.5%
Library Student Assistants: 4.9% | 6.4% | 5.7%
Consultant or Trained Observer: 0.0% | 1.3% | 0.0%
Student Library Users: 5.7% | 9.7% | 7.9%
a p<.05, b p<.01, c p<.001; marked percentages were higher than the comparison group’s at p<.05. Respondents could check all that apply, so total percentages exceed 100%.
*Annual Only: Libraries that have only an annual evaluation in their PA system
**In Addition to Annual: Libraries that have one or more additional PA events in their PA system

TABLE 6
Outside of the Annual/Periodic Evaluation, What Has a Separate Evaluation? (n = 281)
(Yes | No)
Promotion: 35.1% | 64.9%
Reappointment: 12.0% | 87.0%
Tenure: 20.1% | 79.9%
Post-Tenure: 10.8% | 89.2%
Salary Adjustments: 9.2% | 90.8%
Review of Teaching: 9.9% | 90.1%
Graduate Faculty Membership: 0.5% | 99.5%
Respondents could check all that apply, so total percentages exceed 100%.

TABLE 7
Annual/Periodic Evaluation: How Important Is This Performance Appraisal in Meeting These Management Objectives? (n = 516)
(Annual Only* | In Addition to Annual** | Total)
Clarifying library expectations (information for librarians): 5.8 | 6.0 | 5.9
Coaching (information for managers): 5.3 | 5.6b | 5.5
Pay decisions (information for managers): 4.2 | 4.5 | 4.4
Promotion/demotion decisions (information for managers): 4.1 | 5.4c | 4.9
Assigning new responsibilities (information for managers): 5.0 | 5.2 | 5.2
Perceptions of performance (information for librarians): 6.0 | 6.1 | 6.1
Developmental needs (information for librarians): 5.9 | 6.0 | 6.0
a p<.05, b p<.01, c p<.001; Likert scale 1-7, 1 = Not at all important, 7 = Extremely important; marked ratings were higher than the comparison group’s at p<.05.
*Annual Only: Libraries that have only an annual evaluation in their PA system
**In Addition to Annual: Libraries that have one or more additional PA events in their PA system

TABLE 8
How Successful Do You Perceive Your Overall Performance Appraisal System to Be in Accomplishing the Following? (n = 496)
(Annual Only* | In Addition to Annual** | Total)
Creating behavior change in individual librarians: 3.6 | 3.9a | 3.8
Motivating individual librarians to develop new skills or improve existing skills: 4.0 | 4.2 | 4.1
Helping individual librarians to reach job goals: 4.2 | 4.5c | 4.3
Helping the library as an organization reach its goals: 4.1 | 4.4a | 4.3
Helping departments, teams, or other groups improve performance: 4.1 | 4.1 | 4.1
a p<.05, b p<.01, c p<.001; Likert scale 1-6, 1 = Not successful, 6 = Very successful; marked ratings were higher than the comparison group’s at p<.05.
*Annual Only: Libraries that have only an annual evaluation in their PA system
**In Addition to Annual: Libraries that have one or more additional PA events in their PA system

Notes
1. See the Review of Selected Literature.
2. The Expert Library: Staffing, Sustaining, and Advancing the Academic Library in the 21st Century, eds. Scott Walter and Karen Williams (Chicago, Ill.: Association of College and Research Libraries, 2010), viii.
3. Chloe Mills, “The Librarianship Portfolio,” New Library World 116, no. 9/10 (2015): 528.
4. Robbie M. Sutton, Matthew J. Hornsey, and Karen M. Douglas, “Feedback: An Introduction,” in Feedback: The Communication of Praise, Criticism, and Advice, eds. Robbie M. Sutton, Matthew J. Hornsey, and Karen M. Douglas (New York, N.Y.: Peter Lang, 2012), 1.
5. Brené Brown, interview by Lillian Cunningham, Washington Post, October 3, 2012, available online at https://www.washingtonpost.com/national/exhaustion-is-not-a-status-symbol/2012/10/02/19d27aa8-0cba-11e2-bb5e-492c0d30bff6_story.html?utm_term=.44e122369e7c [accessed 14 February 2018].
6. Marcus Buckingham and Ashley Goodall, “Reinventing Performance Management: How One Company Is Rethinking Peer Feedback and the Annual Review, and Trying to Design a System to Fuel Improvement,” Harvard Business Review 93, no. 4 (2015): 42, 46, 48.
7. Peter Cappelli and Anna Tavis, “Assessing Performance: The Performance Management Revolution,” Harvard Business Review (Oct. 2016), available online at https://hbr.org/2016/10/the-performance-management-revolution [accessed 27 February 2018].
8. Samuel A. Culbert and Larry Rout, Get Rid of the Performance Review: How Companies Can Stop Intimidating, Start Managing—and Focus on What Really Matters (New York, N.Y.: Business Plus, 2010), 10, 11, 122, 146.
9. Barbara Williams Jenkins, Performance Appraisal in Academic Libraries, CLIP Note, 12 (Chicago, Ill.: Association of College and Research Libraries, 1990), 2. In late 1988, Jenkins surveyed libraries at colleges and small universities, both public and private. She distributed 250 surveys and had 208 returned, for an 83 percent response rate.
10. Rao Aluri and Mary Reichel, “Performance Evaluation: A Deadly Disease?” Journal of Academic Librarianship 20 (July 1994): 147.
11. Ronald G. Edwards and Calvin J. Williams, “Performance Appraisal in Academic Libraries: Minor Changes or Major Renovation?” Library Review 47, no. 1 (1998): Conclusion.
12. Julie A. Gedeon and Richard E. Rubin, “Attribution Theory and Academic Library Performance Evaluation,” Journal of Academic Librarianship 25, no. 1 (1999): 18.
13. Mills, “The Librarianship Portfolio,” 528.
14. Jen Stevens et al., “Revising Academic Library Governance Handbooks,” In the Library with the Lead Pipe (July 1, 2015), available online at www.inthelibrarywiththeleadpipe.org/2015/revising-academic-library-governance-handbooks/ [accessed 27 February 2018].
15. Mary K. Bolin, “Librarian Status at US Research Universities: Extending the Typology,” Journal of Academic Librarianship 34, no. 5 (2008): 418.
16. Lou Anderson and Donnice Cochenour, “Merit Salary Criteria: One Academic Library’s Experience,” portal: Libraries and the Academy 1, no. 4 (2001): 469.
17. Bolin, “Librarian Status at US Research Universities,” 418.
18. Frada L. Mozenter and Lois Stickell, “Without Merit: One Library’s Attempt to Put ‘Merit’ Back in ‘Merit Pay’,” College & Research Libraries 70, no. 1 (2009): 34–56.
19. Joan M. Leysen and William K. Black, “Peer Review in Carnegie Research Libraries,” College & Research Libraries 59, no. 6 (1998): 511–21.
20. Edward F.
Lener, Bruce Pencek, and Susan Ariew, “Raising the Bar: An Approach to Reviewing and Revising Standards for Professional Achievement for Library Faculty,” College & Research Libraries 65, no. 4 (2004): 287–300.
21. Mills, “The Librarianship Portfolio,” 527, 530, 533.
22. Threasa L. Wesley and Nancy F. Campbell, “Professional Librarian Performance Review: A Redesign Model,” Library Leadership & Management 24, no. 1 (2010): 12–17.
23. William E. Cashin, “Student Ratings of Teaching: Uses and Misuses,” in Changing Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions, ed. Peter Seldin (Bolton, Mass.: Anker, 1999), 25, 28.
24. Junlin Pan and Guoqing Li, “What Can We Learn from Performance Assessment? The System and Practice in an Academic Library,” Library Management 27, no. 6/7 (2006): 468.
25. Cheryl Middleton, “Evolution of Peer Evaluation of Library Instruction at Oregon State University Libraries,” portal: Libraries and the Academy 2, no. 1 (2002): 69–78; Loanne Snavely and Nancy Dewald, “Perspective On…: Developing and Implementing Peer Review of Academic Librarians’ Teaching: An Overview and Case Report,” Journal of Academic Librarianship 37, no. 4 (2011): 343–51; Alessia Zanin-Yost and Robert Crow, “From Traditional to Non-Traditional: An Adaptive Procedure for Assessing the Instruction Librarian,” Reference Librarian 53, no. 2 (2012): 206–18.
26. Maureen A. Beck, “Technology Competencies in the Continuous Quality Improvement Environment: A Framework for Appraising the Performance of Library Public Services Staff,” Library Administration & Management 16, no. 2 (2002): 69–72.
27. Mary Heinzman and David Weaver, “Floating an Idea: Peer Observations across the Mississippi,” Public Services Quarterly 2, no. 2/3 (2006): 33–46.
28. Anne Pemberton, Jerome Hoskins, and Caitlin Boninti, “Minding the Gap: Identifying Performance Issues Using the Human Performance Technology Model,” Reference Services Review 39, no. 2 (2011): 206–22.
29. Jonathan Miller, “A Method for Evaluating Library Liaison Activities in Small Academic Libraries,” Journal of Library Administration 54, no. 6 (Aug. 2014): 493.
30. Hilary M. Davis and William M. Cross, “Using a Data Management Plan Review Service as a Training Ground for Librarians,” Journal of Librarianship and Scholarly Communication 3, no. 2 (2015): 1–20.
31. Jennifer Lyn Soutter, “Academic Librarian Competency as Defined in the Library and Information Science Journal Literature of 2001–2005 and 2011,” Partnership: The Canadian Journal of Library & Information Practice & Research 8, no. 1 (Jan. 2013): 15, 16.
32. “ACRL Proficiencies for Assessment Librarians and Coordinators,” College & Research Libraries News 78, no. 3 (Mar. 2017): 160, 161.
33. Beck, “Technology Competencies in the Continuous Quality Improvement Environment,” 69–72.
34. Miller, “A Method for Evaluating Library Liaison Activities in Small Academic Libraries,” 493.
35. Beck, “Technology Competencies in the Continuous Quality Improvement Environment,” 69–72; Ben Johnson, “The Case of Performance Appraisal: Deming Versus EEOC,” Library Administration & Management 18 (2004): 83–86; Marquardt, “Managing Technological Change by Changing Performance Appraisal to Performance Evaluation,” 101–10; Shelley Phipps, “Beyond Measuring Service Quality: Learning from the Voices of the Customers, the Staff, the Processes, and the Organization,” Library Trends 49, no. 4 (2001): 635–61.
36. Laurel Crawford et al., “Fear of Negative Evaluation: Differences Amongst Librarians,” Library Leadership & Management 29, no. 3 (2015): 1 [13 pages]; Gedeon and Rubin, “Attribution Theory and Academic Library Performance Evaluation,” 19, 20; Richard McKay, “Understanding and Managing the Anxiety Surrounding Performance Evaluations: Considerations for the Supervising Librarian,” Library Leadership & Management 29, no. 3 (2015): 1–11; Melanie Clark, Kimberly Vardeman, and Shelley Barba, “Perceived Inadequacy: A Study of the Imposter Phenomenon among College and Research Librarians,” College & Research Libraries 75, no. 3 (2014): 255–71.
37. Johnson, “The Case of Performance Appraisal: Deming Versus EEOC,” 86.
38. Noor Harun Abdul Karim, “Investigating the Correlates and Predictors of Job Satisfaction among Malaysian Academic Librarians,” Malaysian Journal of Library & Information Science 13, no. 2 (2008): 69–88.
39. Jenkins, Performance Appraisal in Academic Libraries.
40. Leysen and Black, “Peer Review in Carnegie Research Libraries,” 513–14.
41. Edwards and Williams, “Performance Appraisal in Academic Libraries,” para. 8.
42. Gedeon and Rubin, “Attribution Theory and Academic Library Performance Evaluation.”
43. Crawford et al., “Fear of Negative Evaluation.”
44. McKay, “Understanding and Managing the Anxiety Surrounding Performance Evaluations.”
45. Sutton, Hornsey, and Douglas, “Feedback: An Introduction.”
46. Brown, interview by Lillian Cunningham.
47. In discussing our survey results, we use the term library directors rather than participants or respondents. Our survey included the question, “Are you currently the library director? Library director includes other titles, such as Dean of Libraries, University Librarian, or College Librarian.” Of the 496 responses to the question, only 50 said no.
48. Kim Bartel Sheehan, “E-mail Survey Response Rates: A Review,” Journal of Computer-Mediated Communication 6, no. 2 (Jan. 2001): Table 1.
49. Katja Lozar Manfreda et al., “Web Surveys versus Other Survey Modes: A Meta-Analysis Comparing Response Rates,” International Journal of Market Research 50, no. 1 (2008): 93, 97.
50. For a description of ACRLMetrics, see https://www.acrlmetrics.com/index.php?page_id=11 [accessed 27 February 2018].
51. Manuel London, The Power of Feedback: Giving, Seeking, and Using Feedback for Performance Improvement (New York, N.Y.: Routledge, Taylor & Francis Group, 2015), 41–42.
52. Ibid., 91, 92.
53. Lori Goler, Janelle Gale, and Adam Grant, “Let’s Not Kill Performance Evaluations Yet,” Leadership & Managing People (blog), Harvard Business Review (Nov. 2016), available online at https://hbr.org/2016/11/lets-not-kill-performance-evaluations-yet [accessed 27 February 2018].
54. “Corporate HR: The Real Impact of Removing Performance Ratings on Employee Performance,” CEB Blogs (blog), CEB (May 12, 2016), available online at
https://www.cebglobal.com/blogs/corporate-hr-removing-performance-ratings-is-unlikely-to-improve-performance/?business_line=human-resources [accessed 27 February 2018].
55. Jenkins, Performance Appraisal in Academic Libraries, 3.
56. London, The Power of Feedback, 9, 19, 124, 125.
57. David W. Bracken et al., “360 Feedback from Another Angle,” Human Resource Management 40, no. 1 (2001): 4.
58. Marshall Goldsmith and Mark Reiter, Triggers: Creating Behavior That Lasts—Becoming the Person You Want to Be (New York, N.Y.: Crown Business, 2015), Kindle locations 330, 346–47.
59. Carol Dweck, “What Having a ‘Growth’ Mindset Actually Means,” Harvard Business Review Digital Articles, 2–4.
60. Stevens et al., “Revising Academic Library Governance Handbooks,” Literature Review.
61. Gail Munde, “Tenure and Continuous Employment,” in Everyday HR: A Human Resources Handbook for Academic Library Staff (Chicago, Ill.: ALA Neal-Schuman, 2013), 146.
62. Brown, interview by Lillian Cunningham.
63. Lihong Zhu, “Use of Teams in Technical Services in Academic Libraries,” Library Collections, Acquisitions, & Technical Services 35 (2011): 74, 79.
64. Douglas Stone and Sheila Heen, Thanks for the Feedback: The Science and Art of Receiving Feedback Well (Even When It’s Off Base, Unfair, Poorly Delivered, and Frankly, You’re Not in the Mood) (New York, N.Y.: Viking Penguin, 2014), 291–94. Writing for librarians, Munde (Everyday HR) also points out what the annual evaluation process can do, across the campus or across a category of employees, and what it can’t do. Like Stone and Heen, Culbert, and others, Munde emphasizes that the annual evaluation “is not the best place for corrective feedback, as it is a summative evaluation,” but it can be a place for constructive feedback and offers of coaching. Like Culbert, Munde emphasizes the need for supervisors to provide formative evaluation throughout the year (Everyday HR, 104–05).
65. Gedeon and Rubin, “Attribution Theory and Academic Library Performance Evaluation,” 23.
66. James R. Detert and Ethan R. Burris explain that leader and manager behaviors, such as openness and the communication of psychological safety, are related to employee voice (defined as “speak[ing] up with potentially valuable information” that might improve the organization). They recommend that performance appraisal systems ask for feedback on whether managers’ behaviors merely espouse openness to input or “explicitly welcome voice.” They also note that “employees at all levels are likely to need training in both the delivery and receipt of upward information.” James R. Detert and Ethan R. Burris, “Leadership Behavior and Employee Voice: Is the Door Really Open?” Academy of Management Journal 50, no. 4 (2007): 869, 880, 882.
67. Stone and Heen, Thanks for the Feedback, 149–57.
68. Timothy D. Wilson, Redirect: Changing the Stories We Live By (New York, N.Y.: Little, Brown, 2011), 57–58, 72.
69. G.
Edward Evans, “Manager/Executive Appraisal,” in Performance Management and Appraisal: A How-To-Do-It Manual for Librarians (New York, N.Y.: Neal-Schuman, 2004), 145–46.
70. G. Edward Evans, “Team Methods,” in Performance Management and Appraisal: A How-To-Do-It Manual for Librarians (New York, N.Y.: Neal-Schuman, 2004), 138, 142.
71. Stone and Heen, Thanks for the Feedback, 37, 300.
72. London, The Power of Feedback, 17.
73. Michelle Gielan, Broadcasting Happiness: The Science of Igniting and Sustaining Positive Change (Dallas, Tex.: BenBella Books, 2015), 18–19, 28–106.
74. Stone and Heen, Thanks for the Feedback, 296–97.
75. “Dr. W. Edwards Deming,” The W. Edwards Deming Institute, available online at https://deming.org/deming/deming-the-man [accessed 14 February 2018].
76. “Management’s Five Deadly Diseases: A Conversation with Dr. W. Edwards Deming” (Chicago, Ill.: Encyclopaedia Britannica Educational Corp., [1984?]), YouTube video, 5:00–8:27, available online at https://youtu.be/ehMAwIHGN0Y [accessed 14 February 2018].
77. Goler, Gale, and Grant, “Let’s Not Kill Performance Evaluations Yet.”
78. “2010 Study on the State of Performance Management,” WorldatWork and Sibson Consulting (Oct. 2010), 5, available online at https://www.worldatwork.org/docs/research-and-surveys/2010-study-on-the-state-of-performance-management.pdf [accessed 14 February 2018].
79. Aluri and Reichel, “Performance Evaluation: A Deadly Disease?” 153.
80. Wilson, Redirect, 18. Wilson explains that story editing, “a family of approaches developed by social psychologists,” works by helping people change their own interpretations of themselves and the social world. Through story editing, “people end up with a more desirable way of viewing themselves that builds on and reinforces itself, leading to sustained change.”