Article

 

Educating Assessors: Preparing Librarians with Micro and Macro Skills

 

Rachel Applegate

Chair, Department of Library and Information Science
School of Informatics and Computing
Indiana University Purdue University Indianapolis

Indianapolis, Indiana, United States of America

Email: rapplega@iupui.edu

 

Received: 3 Feb. 2016     Accepted: 24 Mar. 2016 

 

 

2016 Applegate. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.

 

 

Abstract

 

Objective – To examine the fit between libraries’ needs for evaluation skills and the education and professional development opportunities available to librarians. Many library position descriptions and many areas of library science education focus on professional skills and activities, such as delivering information literacy, designing programs, and managing resources. Only some positions, some parts of positions, and some areas of education specifically address assessment/evaluation skills. The growth of the Library Assessment Conference, the establishment of the ARL-ASSESS listserv, and other evidence indicate that assessment skills are increasingly important.

 

Method – Four bodies of evidence were examined for the prevalence of assessment needs and assessment education: the American Library Association (ALA) core competencies; job ads from large public and academic libraries; professional development courses and sessions offered by ALA divisions and state library associations; and course requirements in ALA-accredited Master of Library Science (MLS) programs.

 

Results – While one-third of job postings made some mention of evaluation responsibilities, fewer than 10% of conference or continuing education offerings addressed assessment skills. In addition, management as a topic is a widespread requirement in MLS programs (71%), while research (58%) and assessment (15%) are far less common.

 

Conclusions – Overall, there seems to be more need for assessment/evaluation skills than there are structured offerings to educate people in developing those skills. In addition, roles are changing: some of the most professional-level activities of graduate-degreed librarians involve planning, education, and assessment. MLS students need to understand that these macro skills are essential to leadership, and current librarians need opportunities to add to their skill sets.

 


Introduction

 

Over the last twenty years, libraries in general and academic libraries in particular have experienced a significant pro-assessment (evaluation) cultural wave. Assessment is becoming the norm in academic accreditation generally and in the library field specifically. The question is whether current practitioners and current students have opportunities to acquire the relevant assessment skills, which differ both from what can be called the “practice” set (such as information assistance and instruction, and information organization) and from general professional values (such as knowledge of legal and ethical contexts, and advocacy).

 

In this study, the word “evaluation” is used throughout. In higher education, the word “assessment” is generally reserved for a specific subset of evaluation: the assessment of student learning outcomes. When assessment of other areas (such as student affairs) occurs, it is generally termed “evaluation.” Evaluation is also the more common term in K-12 education and social services contexts. Evaluation is distinct from research. According to the definitions governing the use of human subjects, research aims to produce generalizable knowledge. In the United States, the Code of Federal Regulations states that “Research means a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge” (U.S. Department of Health and Human Services, 2009).

 

Evaluation, on the other hand, is used for internal, organizational purposes, such as demonstration of value to stakeholders, improvement of existing functions, and design of new services, which have been collectively described as “the gathering of information for managerial decision-making” (Applegate, 2013, p. 1). For instance, an analysis of whether mathematics resources can support a new doctoral program in mathematics at University A is evaluation. An exploration of how mathematics researchers access scholarly communication would be research. The distinction between evaluation and research lies primarily in the ends to which the data is put, rather than in the specific techniques used to conduct the evaluation or research.

 

Higher education has placed increasing value on evaluation in accreditation, both institution-wide and for professional specializations. Educational associations seek to demonstrate the value of their work. The Council for Higher Education Accreditation’s statement on the value of accreditation (2010) spells out the goal of “promoting accountability and identifying successful improvement efforts” (p. 2). This followed changes in federal regulation based on the Higher Education Opportunity Act of 2008 and enacting regulations from 2010 and 2011 (Higher Learning Commission, 2014). State and federal governments are keenly interested in accountability, considering the significant funds provided directly to institutions or indirectly through student aid and loans, as shown in the Accrediting Agency Recognition Criteria (U.S. Department of Education, 2014).

 

The American Library Association’s (2009) Core Competences of Librarianship speak to the responsibilities of graduate-level librarians and spell out the importance of both research for understanding practice and evaluation for effective management of libraries. There are eight core areas, of which two (25%) mention evaluation; of 42 specific sub-points, four (10%) mention evaluation.

 

 

Evaluation received explicit prominence in the 2008 standards for accreditation of MLS programs, and even more emphasis in the 2015 Standards (ALA Office for Accreditation, 2008; 2015). From the preambles, both the 2008 and the 2015 documents state:

 

Systematic planning is an ongoing, active, broad-based approach to… (b) assessment of attainment of goals, objectives, and learning outcomes; (c) realignment and redesign of core activities in response to the results of assessment…

 

The Curriculum standard says:

 

(2008) II.7 The curriculum is continually reviewed and receptive to innovation; its evaluation is used for ongoing appraisal, to make improvements, and to plan for the future. Evaluation of the curriculum includes assessment of students' achievements and their subsequent accomplishments. Evaluation involves those served by the program: students, faculty, employers, alumni, and other constituents.

 

(2015): II.5 Procedures for the continual evaluation of the curriculum are established with input not only from faculty but also representatives from those served. The curriculum is continually evaluated with input not only from faculty but also representatives from those served including students, employers, alumni, and other constituents. Curricular evaluation is used for ongoing appraisal, to make improvements, and to plan for the future. Evaluation of the curriculum includes assessment of students' achievements.

 

And the Students standard (both 2008 and 2015) says:

 

IV.6 The school applies the results of evaluation of student achievement to program development. Procedures are established for systematic evaluation of the degree to which a program's academic and administrative policies and activities regarding students are accomplishing its objectives. Within applicable institutional policies, faculty, students, staff, and others are involved in the evaluation process.

 

The Institute for Museum and Library Services (2008) emphasizes outcomes-based evaluation for its grants, both directly and through Library Services and Technology Act funding to states. Its Webography on evaluation contains materials published from 1994 to 2004.

 

How do current and future librarians educate themselves to meet the need to evaluate (assess) library and information organizations? There is a micro-level of assessment that consists of understanding specific tools, such as survey design and data analysis, both generic (e.g., instructional testing) and library-specific (e.g., bibliometrics). There is also a macro-level that consists of understanding the role of assessment in managing libraries and in communicating with libraries’ users and parent institutions and communities.

 

In sum, assessment of libraries is something that a variety of stakeholders consider important. It is important internally for effective management; externally, funders, donors, and governments expect it.

 

This descriptive study examined the prevalence of micro- and macro-evaluation skills on two sides: the job side and the education side, for both pre-service and in-service librarians. By combining data to provide an overall view of this landscape, this study lays the groundwork for further examination of the most effective and efficient venues for achieving this essential competency for libraries and information agencies.

 

Methods

 

This study explores two descriptive, prevalence-related research questions. 

 

For each research question, the relevant data set comprised a population, a random sample, or a purposive sample of items; for each data set, qualitative coding was applied to arrive at a quantitative measure of prevalence. Table 1 summarizes these data sets.

 

RQ-1 Positions: Operationalization

 

There are two data sets for this research question. One is idealized or prescriptive, while the other is descriptive or actual. The first (Data Set A) is the set of core competences and sub-points laid out in the ALA Core Competences. The second (Data Set B) consists of job position advertisements retrieved from a random sample (n = 20 each) of member libraries of the Association of Research Libraries (ARL) and the Urban Libraries Council (ULC), as of spring 2014. Random selection of institutions, combined with use of each institution’s own job posting site, has been shown to provide better representation of job ads than job-ad sites such as ALA JobList or the Chronicle of Higher Education (Applegate, 2010). All full-time jobs were included, regardless of whether they were librarian-specific or required an MLS.

 

Twenty Urban Libraries Council institutions were selected by random number generation. Of these, five had no current job openings; the New York Public Library listed 55 openings, while the other 14 institutions listed 23 positions among them. Twenty Association of Research Libraries members were also selected by random number generation. Of these, five had no job openings listed, while the remaining 15 libraries had 50 job listings among them.
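The sampling step can be illustrated in a few lines of code. The Python sketch below is illustrative only: the membership rosters are invented placeholders (the study used the actual ARL and ULC member lists), and the seed is arbitrary.

```python
import random

# Placeholder rosters; the study used the actual ARL and ULC member lists.
arl_members = [f"ARL institution {i}" for i in range(1, 125)]
ulc_members = [f"ULC institution {i}" for i in range(1, 161)]

random.seed(2014)  # arbitrary seed, for reproducibility of the sketch

# Random number generation selected 20 institutions from each roster.
arl_sample = random.sample(arl_members, 20)
ulc_sample = random.sample(ulc_members, 20)

print(arl_sample[:3])
print(ulc_sample[:3])
```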

 

It is worth noting that the Boston Public Library (BPL) is a member of both the Urban Libraries Council and the Association of Research Libraries, and was selected in the ARL random sample. The New York Public Library (NYPL) is also a member of ARL but was selected in the ULC sample. One analysis kept the Boston Public Library positions in the ARL group (as sampled); another divided the libraries into three groups: public, public-research (BPL and NYPL), and research.

 

A total of 128 jobs were identified. The researcher then coded each job at one of three levels of evaluation skills or responsibilities, using coding level descriptions developed prior to coding. That is, the coding represented an a priori categorization rather than a grounded content analysis.
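As a minimal sketch of how a priori codes translate into the prevalence figures reported in the Results (the level labels mirror the categories in Table 2; the sample records are invented for illustration):

```python
from collections import Counter

# A priori coding levels, defined before coding began (see Table 2).
LEVELS = ["none", "minimal", "less than half", "more than half"]

# Invented examples standing in for coded job ads.
coded_ads = [
    {"title": "Public Services Librarian II", "evaluation": "none"},
    {"title": "Staff Secretary", "evaluation": "minimal"},
    {"title": "Assessment and Statistics Coordinator", "evaluation": "more than half"},
    {"title": "Library Services Manager", "evaluation": "less than half"},
]

counts = Counter(ad["evaluation"] for ad in coded_ads)
total = len(coded_ads)

# Prevalence = share of ads coded at each level.
for level in LEVELS:
    share = 100 * counts[level] / total
    print(f"{level:>15}: {counts[level]} ({share:.0f}%)")
```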

 

 

 

 

 

RQ-2 Education: Operationalization

 

This part of the study draws on three data sets concerned with education for professionals.

               

Data Set C: Professional Development Courses or Sessions Offered by the American Library Association

 

This data set consists of professional development courses or sessions offered by divisions of the American Library Association as of spring 2014. The set included all online courses, all webinars, and listed ALA Annual meeting sessions; the “archives” were not accessed. Each session was coded as either including or focusing on evaluation, or not.

 

Examples of sessions coded as “Evaluation-No” included:

 

 

Examples of sessions coded “Evaluation-Yes” included:

 

               

Data Set D: State Library Association Conference Presentations

 

This data set consists of sessions presented at state library association conferences, taken from a purposive sample of eight states’ conferences: seven states examined for 2014, and one state (North Carolina) examined for both 2013 and 2014.

 

 

A total of 496 sessions were included. These sessions were coded as Evaluation-No or into one of two Evaluation-Yes groups, either Results or Techniques. The line between Results and Techniques was somewhat fuzzy, and some analyses combine them.

 

Evaluation-No: These were primarily how-to and update programs. They included professional techniques (“Basics of Preservation”), content (“Mysteries Set in Florida”), management (“Revolutionize Your Library with Strong Partnerships!”), and the community (“Conversations with the Montana State Library Commission”).

 

Evaluation-Yes-Results: For these programs, it appears that data was gathered, but the primary focus of the session was on what the data told the researchers and evaluators to do next.

Example sessions:

 

·         Parents, Alumni and Libraries: What Customers Really Believe about the Library

·         Turning the Tables: Assessing Student Worker Satisfaction in Peer-Staffing Models

·         Rethinking Reference: If it's Broke, Fix it!

·         Patrons on Performance: The Library Web as Users See It

·         Redefining Outreach: Creating a Perception of Person Accessibility

·         Outsourcing? An Evaluation of Vendor Assistance in Tech Services

·         Hispanic Americans and Public Libraries: Assessing Health Information Needs and Working Together in an E-Health Environment

 

Evaluation-Yes-Techniques: These sessions were specifically about how to conduct evaluation/research and data collection techniques or, sometimes, explanations of why evaluation should be done. In these cases the focus was on how data is gathered, not on what gathered data showed. Example sessions include:

 

·         How to Listen to Your Patrons: Maximizing Value and Outcomes Through Community Insight

·         Excel With Excel

·         Google Analytic with How-to-Directions

·         Listening to Your Patrons: Tools and Approaches for Gathering Insight From Your Community

·         You've Got Data, Now Use It: Innovative Methods for Better Understanding Public Library Use

 

Data Set E: Courses That Are or Were Required in ALA-Accredited Master of Library Science Programs

 

This final data set consists of courses that are or were required in ALA-accredited MLS programs. These were examined at two time periods, 2005 and 2014, as reported to the Association for Library and Information Science Education (ALISE). The first time period was selected as occurring before the spike in emphasis on evaluation in the late 2000s discussed in the literature review; the second was the most recent data available at the time of the study. Three types of courses were captured: those about research, evaluation, and management. Management was included because of the tight integration of evaluation into the administration/management section of the ALA competencies. There were 48 degrees reported in 2005 and 50 in 2014. If a university offered multiple accredited degrees, the requirements for the degree closest to a general “master of library science” were examined.

 

Results

 

In 2014, evaluation appeared in roughly one-third of position descriptions but in only about 10% of education opportunities and requirements.

 

RQ-1: What is the prevalence of evaluation skills or responsibilities in library-based positions?

 

This research found that approximately 10-30% of positions expect evaluation skills or include evaluation responsibilities, with little difference by type of library (public or academic/research). In Data Set A, the ALA professional competencies mentioned some aspect of evaluation in 2 of 8 competencies (25%) and 4 of 42 sub-points (10%). In terms of the job postings in Data Set B, out of 123 total jobs posted, 32% made at least some mention of an evaluation role. For 15% of postings the mention was minor or in passing; 15% had a more explicit mention, but in less than half of listed responsibilities; and for 2% (2 positions) evaluation was the major role (more than half of duties). Conversely, the majority (68%) of listed positions made no mention at all of evaluation or data responsibilities. This included professional librarian positions, such as “librarian” or “public services librarian II.”

 

There was a huge range of levels of responsibility in the descriptions, and it did not seem related to whether evaluation was present. Two very different positions coded at the same “minimal” level for evaluation activity were “staff secretary—compiling and reporting statistics” and “library services manager…. Cost effectiveness, monitor expenditures, continually benchmark approaches.” The two positions for which evaluation was the primary role included one librarian position (University of Houston: Assessment and Statistics Coordinator) and one professional support position (New York Public Library: Business Analyst).

 

When analyzed by type of institution, public-research libraries (Boston Public Library and New York Public Library, members of both the Urban Libraries Council and the Association of Research Libraries) and research libraries (ARL libraries excluding ULC dual members) were the only institutions to list primarily-evaluation positions. However, these institutions were also slightly more likely to post descriptions with no mention of evaluation: research institutions listed 75% with no mention, public-research 68%, and public (ULC excluding ARL dual members) institutions only 63%.

 

Table 1
Data Sources by Research Question

RQ-1: What is the prevalence of evaluation skills or responsibilities in library-based positions?
    Data Set A: ALA Core Competences (8 core competency sets, 42 specific sub-competencies; population; 2009)
    Data Set B: Job postings at ARL and ULC institutions (128 job postings; random sample; spring 2014)

RQ-2: What is the prevalence of opportunities for education for librarians in evaluation skills?
    Data Set C: Professional development courses offered by ALA divisions (341 sessions; population; spring 2014)
    Data Set D: Professional development sessions at state library association conferences (496 sessions; purposive sample of eight states; 2013 and 2014)
    Data Set E: Required courses at ALA-accredited MLS programs (48 programs, 67 courses in 2005; 50 programs, 74 courses in 2014; population)

 

 

Table 2
Level of Evaluation Responsibility in Job Advertisements

Evaluation in described duties   Public (n)   Research (n)   Public (%)   Research (%)   Total (%)
None                             50           34             64%          76%            68%
Minimal                          11            7             14%          16%            15%
Less than half                   16            3             21%           7%            15%
More than half                    1            1              1%           2%             2%
Total                            78           45

 

 

RQ-2: What is the prevalence of opportunities for education for librarians in evaluation skills?

 

For professional development, fewer than 10% of offerings involved evaluation skills. For pre-professional education, “research” and “management” are common requirements, but evaluation is less present. Data Set C reveals that, as of spring 2014, there were 341 programs offered by 11 ALA divisions (all recorded online courses, live webinars, and conference sessions listed as “continuing education”), of which 24 (7%) were about evaluation generally or about a specific evaluation technique. Of the 11 divisions, five had relatively few professional development courses/sessions listed (42 total), none of which were about evaluation. YALSA had a large number of offerings at 38, of which only one was about evaluation. For the other divisions, evaluation offerings ranged from 6% to 18% of courses. Notably, the management-related division, the Library Leadership and Management Association (LLAMA), had the highest percentage at 18%.

 

Data Set D includes eight states’ professional conference programs, found using a maximum-variety purposive sample varying by state size, region of the country, and presence or absence of graduate library programs. One state (North Carolina) had two years examined (2013 and 2014). Out of 496 total sessions, only 29 (approximately 6%) had some relation to evaluation, either in terms of reporting results or of teaching evaluation techniques.

 

Graduate education for librarians typically consists of a wide variety of optional courses and some required courses. The balance between required and optional depends on the goals of individual programs, but the programs are unified here by the common factor of accreditation by the American Library Association. ALISE statistics cover most accredited library schools, though there are some gaps in the data for some programs in some years (Association for Library and Information Science Education, 2010, 2014). Programs are asked to describe course requirements for their accredited degrees. Both management and research course requirements remained stable across the two points in this ten-year period: management was required by 71% of programs in 2005 and 72% in 2014, and research methods by 58% and 60%, respectively. Evaluation showed a noticeable increase, from a low of 10% of programs in 2005 to 16% of programs in 2014.
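The percentages reported here and in Table 5 follow directly from the ALISE counts; the short Python sketch below simply reproduces that arithmetic using the reported figures.

```python
# Reported counts of programs requiring each course type (Table 5).
requirements = {
    "Management": {2005: 34, 2014: 36},
    "Research":   {2005: 28, 2014: 30},
    "Evaluation": {2005: 5,  2014: 8},
}
programs = {2005: 48, 2014: 50}  # accredited degrees reported each year

for course, by_year in requirements.items():
    for year, n in by_year.items():
        pct = 100 * n / programs[year]
        print(f"{course} {year}: {n}/{programs[year]} = {pct:.0f}%")
        # e.g., Evaluation 2005: 5/48 = 10%; Evaluation 2014: 8/50 = 16%
```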

 

 

Table 3
Continuing Education Offerings by ALA Division

Division (course/webinar involves evaluation)                          No    Yes   Total   Percentage Yes
American Association of School Librarians (AASL)                        6     0      6      0%
Association for Library Service to Children (ALSC)                      8     0      8      0%
Association of Specialized and Cooperative Library Agencies (ASCLA)     7     0      7      0%
Library and Information Technology Association (LITA)                  13     0     13      0%
United for Libraries                                                    8     0      8      0%
Young Adult Library Services Association (YALSA)                       37     1     38      3%
Association for Library Collections and Technical Services (ALCTS)    114     7    121      6%
Association of College and Research Libraries (ACRL)                   30     2     32      6%
Reference and User Services Association (RUSA)                         16     2     18     11%
Public Library Association (PLA)                                       55     7     62     11%
Library Leadership and Management Association (LLAMA)                  23     5     28     18%
Total                                                                 317    24    341      7%

 

Table 4
State Library Association Conference Sessions

State (session involves evaluation)    No    Yes-Results   Yes-Technique   Total   Percentage Yes
New York                               69    0             1                70      1%
Louisiana                              62    0             2                64      3%
New Hampshire                          31    0             1                32      3%
Alabama                                49    0             2                51      4%
Montana                                48    0             2                50      4%
North Carolina                        133    4             7               144      8%
Washington                             28    1             2                31     10%
Florida                                47    5             2                54     13%
Total                                 467   10            19               496      6%

 

Table 5
Required Courses for Master of Library Science Degrees

Courses      2005 Number   2005 Percentage   2014 Number   2014 Percentage
Management   34            71%               36            72%
Research     28            58%               30            60%
Evaluation    5            10%                8            16%
Programs     48                              50

 

 

 

Across programs, a management course was the most prevalent course requirement. Management courses had titles such as “Library/Management/Administration of/in Libraries/Information Organizations,” and frequently were by type of library (academic, school, etc.). Three others in 2014 were “Achieving Organizational Excellence,” “Management and Systems Analysis,” and “Organizational Management & Strategy / Management Without Borders.”

 

Almost all research courses had simple titles of “Introduction to Research/Methods” or “Research Methods.” Three others were “Contextual Inquiry and Project Management,” “Designing Principled Inquiry,” and “Educational Research & Measurement.”

 

Courses that were counted as focusing on evaluation included “Assessing Information Needs,” “Evaluation of Resources and Services,” “Evaluation of Information Systems,” “Evaluation Methods,” and “Library Planning, Marketing and Assessment.”

 

There was some overlap between categories. The course “Management and Systems Analysis” was counted as both a management course and an evaluation course. “Research & Evaluation for LIS” and “Research & Evaluation Methods” were counted in both the research and evaluation categories. Also, in some programs, students could take either research or evaluation courses.

 

Given that many, and probably most, program requirements involve options and substitutions, with differences by specialization and some variation in reporting, this is a very fuzzy data set. Nevertheless, evaluation appears in required coursework for at least some programs and has made slight gains over the past 10 years.

 

Table 6
Overall Results by Research Question

RQ-1 (Need): What is the prevalence of evaluation skills or responsibilities in library-based positions?
    A: ALA Core Competences: 10-25%
    B: Open jobs at ARL and ULC institutions: 32%

RQ-2 (Opportunity): What is the prevalence of opportunities for education for librarians in evaluation skills?
    C: Professional development courses offered by ALA divisions: 7%
    D: Professional development sessions at state library association conferences: 6%
    E: Required courses at ALA-accredited MLS programs: 15% (Evaluation), 58% (Research), 71% (Management)

 

Discussion

 

Within these data sets, and accounting for their limitations, there appears to be a mismatch between the need for evaluation (assessment) skills and the formal opportunities for librarians and library staff to obtain those skills. While few library positions, even at very large systems and institutions, are solely dedicated to evaluation activities, data collection and analysis are part of about one-third of positions advertised at these libraries. However, less than 10% of continuing education opportunities, whether offered by state associations or by American Library Association divisions, focus on evaluation skills or results.

 

Association events, conferences, and courses are an important way for current information professionals to keep up to date, especially when life-long learning is not just a motto but an essential part of an information professional’s life (Long & Applegate, 2008). There appears to be an opening for increased attention to this area of education. This is also an area for a cumulative virtuous circle: experts in evaluation can present results and instruction in techniques to a widening pool of practitioners, who in turn spread a culture, capability, and commitment to the use of data in decision-making. Over the years the ARL Library Assessment Conference has grown in prominence and size, supplemented by the launch of the ARL-ASSESS listserv in 2014 and the development of a public library assessment workshop.

 

Besides professional continuing education, there is pre-professional preparation: programs of library and information science have the responsibility to prepare graduates to perform, understand, and develop further in the principles and practices of their profession. Library education at the graduate level has had a high level of interest in, or requirements for, research-specific skills, undoubtedly influenced by the place of the MLS as a graduate or professional degree at universities. There is a perennial discussion about the relevance of the MLS to professional practice; this paper does not enter that broad debate.

 

There is, however, a specific issue that is relevant to understanding the place of evaluation education in professional preparation: the distinction between research and evaluation. Conceptually, are these the same, and pragmatically, does coursework in research methods prepare a student to conduct managerially-oriented assessment?

 

On the conceptual question, the Assessment in Higher Education listserv (ASSESS@LSV.UKY.EDU) has a user population made up primarily of people working at colleges and universities, in academic programs and in centralized assessment offices. One perennial question and debate in this forum is whether evaluation or assessment is “research” as defined by the federal government or by the institution’s Institutional Review Board (IRB) or other office for the protection of human subjects in research. Federal definitions tie “research” to generalizable knowledge, and on campuses that in turn can be operationalized as something to be published, presented, or disseminated to an external audience. In contrast, non-research evaluation is often treated as internally oriented: “If the investigator does not intend to use the information for publication or presentation outside of the investigator’s department or organization, the research will not contribute to generalizable knowledge and IRB review is not required” (Indiana University, 2014).

 

This leaves a gap in understanding the dissemination of methodology and of case instances that may contribute to a generalizable understanding. For example, suppose a librarian conducts a study of math majors’ use of the library’s mathematics e-book collection. This is for the library’s own use in collection management. Yet an audience may want to know how to conduct such studies, or another scholar may want to know the status of e-books in mathematics and other science areas: using the specific to illuminate the general. Methodologically, there can be important and useful overlaps between research and evaluation in techniques and data collection designs. Faculty in library programs that require or offer research methods courses can use the practical importance of evaluation to educate their students about the overall value of such courses: many library students believe they will never conduct formal “research” and so tend to think of such courses as entirely theoretical.

 

This prevalence study describes in part the role and place of evaluation in library practice, showing the degree of importance accorded to assessment skills in institutions and in library professional development. It forms part of a larger, ongoing conversation about the preparation and function of MLS-educated librarians in information organizations. The extent to which the MLS is managerial, evolving beyond, and perhaps away from, purely technical professional skills is reflected in the ALA competences’ description of evaluation as an essential component of leadership (component 8C).

 

Conclusion

 

It is hard to design a quantitative equation encompassing offerings and needs, where A equals B, or even where A results in B, for the concerns under consideration in this study. Even the percentages or prevalence of evaluation mentions in courses or in job ads are far from exact. The trend is clear, though: there seems to be more extensive need for evaluation skills than there are structured offerings educating people in those skills. When LIS educators organize their programs of study to prepare graduates to meet the needs of practice, they need to consider thoughtfully what the core requirements are. Evaluation is specifically mentioned, and indeed emphasized, in the ALA competences document, and is reflected in new job position descriptions.

 

For existing librarians, roles will change. Just as a wave of RDA and FRBR workshops, webinars, and books helped technical services librarians make the transition to newer forms of organizing information, opportunities are needed to continually enhance the ability of library leaders to manage and to meet external demands for accountability and improvement.

 

References

 

American Library Association. (2009). ALA’s core competences of librarianship. Retrieved from http://www.ala.org/educationcareers/careers/corecomp/corecompetences

 

American Library Association, Office for Accreditation. (2008). Standards for accreditation of master's programs in library and information science. Chicago: American Library Association, Office for Accreditation. Retrieved from http://www.ala.org/accreditedprograms/standards

 

American Library Association, Committee on Accreditation. (2015). Standards for accreditation of master’s programs in library and information studies. Chicago: American Library Association, Office for Accreditation. Retrieved from http://www.ala.org/accreditedprograms/standards

 

Association for Library and Information Science Education. (2010). Library and information science education statistical report 2009. Chicago: Association for Library and Information Science Education.

 

Association for Library and Information Science Education. (2014). ALISE statistical report, 2014 [data set]. Retrieved from http://www.alise.org/statistical-reports-2

 

Applegate, R. (2010). Job ads, jobs, and researchers: Searching for valid sources. Library & Information Science Research, 32(2), 163-170. http://dx.doi.org/10.1016/j.lisr.2009.12.005

 

Applegate, R. (2013). Practical evaluation techniques for librarians. Santa Clara, CA: Libraries Unlimited.

 

Council for Higher Education Accreditation. (2010). The value of accreditation. Retrieved from http://www.chea.org/pdf/Value%20of%20US%20Accreditation%2006.29.2010_buttons.pdf

 

Higher Learning Commission. (2014). Federal compliance program packet. Retrieved from https://www.hlcommission.org/Policies/federal-compliance-program.html

 

Indiana University, Office of Research Compliance. (2014). Human subjects: Levels of review. Retrieved from http://researchcompliance.iu.edu/hso/hs_level_review.html

 

Institute for Museum and Library Services. (2008). Webography [Outcomes Based Evaluations]. Retrieved from https://www.imls.gov/grants/outcome-based-evaluation/webography

 

Long, C. E., & Applegate, R. (2008). Bridging the gap in digital library continuing education: How librarians who were not “born digital” are keeping up. Library Administration and Management, 22(4), 172-18. https://journals.tdl.org/llm/index.php/llm/index

 

U.S. Department of Education. (2014). Accreditation in the United States. Subpart B: The criteria for recognition. Basic eligibility requirements. Retrieved from http://www2.ed.gov/admins/finaid/accred/accreditation_pg13.html

 

U.S. Department of Health and Human Services. (2009). Code of federal regulations, title 45 Public welfare, part 46: Protection of human subjects. Retrieved from http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html