Commentary

 

Collect with Intent: Craft Meaningful Questions that Drive Evidence Based Assessment Strategies

 

Melissa Goertzen

Information Management Consultant

Halifax, Nova Scotia, Canada

Email: goertzen.melissa@gmail.com

 

Received: 20 Feb. 2018   Accepted: 16 Apr. 2018

 

 

© 2018 Goertzen. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.

 

 

DOI: 10.18438/eblip29410

 


 

Librarians work in an information environment that is highly competitive and provides users with many alternatives to traditional library services. Despite the exponential growth of available information sources, collection budgets remain fixed or are reduced because of factors like the economy, greater competition for institutional resources, and assumptions that electronic content is low-cost or freely available (Goertzen, 2017). Information needs on university campuses surpass available resources, and librarians are required to justify annual collection budgets with evidence of use and overall value. Now more than ever, it is essential for professionals to demonstrate evidence-based collection practices that support users’ research, teaching, and learning needs.

 

Developing collection assessment strategies in the current landscape is both an exciting and a daunting task. The opportunities for experimentation are enormous, but the complexities involved, like the dynamic nature of formats and technologies, present significant challenges. On top of this, librarians face pressure from administrators to produce evidence that justifies collection decisions or demonstrates impact.

 

Data analysis is a relatively new skill set required of librarians. Many articles published over the past several years have focused on the fact that training opportunities are not widely available, and this gap has prevented the standardization of assessment practices within the profession. From usage statistics to impact factors, there are myriad tools available to help librarians understand the strengths and weaknesses of their collections. The problem is that there is no agreed-upon method to arrive at, compare, or act on assessment results (Schmidt, 2010).

 

I spent five years working as a Collection Development Librarian, and from my perspective, it seems that the profession directs its focus to solving a problem that has not been properly defined. There is pressure to present recommendations and evidence to administrators, but sustainable solutions will only come from well-defined assessment strategies, goals, and objectives. I propose that the key to developing sustainable assessment strategies is to first uncover the correct questions to guide investigations. The inquiry process provides focus for assessment work, ensures that the proper data is collected, and dictates how to conduct analysis activities in order to arrive at answers that support collection decisions. When librarians locate the central questions at the heart of evidence-based collection assessment, they create a roadmap that leads to correct answers and guides efforts to standardize assessment practices across the professional community as a whole.

 

Developing Questions that Drive Evidence Based Collection Practices Over Time

 

My experience developing evidence based collection strategies started five years ago when I was hired by Columbia University Libraries (CUL) to conduct the E-Book Program Development Study, a two-year assessment project that produced collection policies and best practices for e-book collections at CUL (Goertzen, 2016). When I read the project description, the work seemed both exciting and daunting; the opportunities for experimentation were enormous, but I knew that the complexities involved with e-book collection development would present significant challenges.

 

Several months into the study I realized that I was operating on the assumption that users prefer electronic content for research, teaching, and learning activities. However, as I started to collect usage statistics, examine search terms, analyze cost data, and speak with patrons, I realized that my initial impressions of content use were far too simplistic and did not tell the full story. I started to ask more and more questions about when, how, and why users gravitate towards certain formats (e.g., print, electronic, archival materials) to support scholarly activities and build knowledge around specific subject areas.

 

The inquiry process provided a focus and pulled everything I had observed into one overarching question: What is the intended use of e-book content? Users interact with information for a variety of reasons, including course use, research pursuits, and general reference. All of these activities serve different functions within a research community, rely on different levels of engagement with content, and support different information needs. In my investigation, identifying the intent of information use allowed me to provide the evidence required to allocate budgets, negotiate license agreements, and make cases for information product acquisition.

 

When I consider the question of intent of use from a collection development perspective, my mind automatically separates activities into the categories of ‘current use’ and ‘future use’ (Yale University Library, 2013). This seems appropriate as there are few business models or collection development strategies that address both requirements at once. I think this separation points to a general shift in the way libraries in the 21st century must approach collection development activities: successful initiatives rely on a balance between ‘just in case’ and ‘just in time’ strategies. This balance allows information professionals to determine when it makes economic sense to invest resources in high use materials for current users and when it is appropriate to purchase materials that may have low use but add to the long-term value and legacy of the collection (Yale University Library, 2013). Again, having a strong understanding of how patrons intend to use collection materials provides the insight required to make these decisions.   

 

Intent of Use within the Context of a Long-Term Collection Development Strategy

 

Essentially, “data stands in place of a reality we wish to study. We cannot simply know a phenomenon, but we can attempt to capture it as data which represents the reality we have experienced…and are trying to explain” (Matthews & Ross, 2010, p. 45). In the age of Big Data, there are seemingly endless data streams to examine and analyze. To prevent scope creep and collect evidence that is relevant to the needs of local user communities, it is essential to design a quantitative research framework around the central assessment question: What is the intent of information use? This process allows librarians to sketch a roadmap that leads to the intent of use and, ultimately, to present a case for budget requests and to support collection development decisions (Goertzen, 2017).

 

Intent of information use is not a static object of investigation. As new technologies are developed and users’ needs shift, intent of information use will evolve as well. Building an assessment strategy that informs evidence based collection decisions is similar to building a long-term relationship with the user community. Success relies on librarians’ abilities to create assessment plans that are flexible, sustainable, and replicable year after year. When this is accomplished, annual results provide evidence of trends that support ‘just in case’ and ‘just in time’ collection development decisions, especially in cases where information products do not support both simultaneously.

 

Time spent planning is never wasted. In fact, the time invested in developing a strategy, particularly during the first year of an investigation, will create efficiencies in the long run and develop baselines that provide evidence of collection use and value over time.

 

Clearly articulated objectives are the engine that drives the assessment process (Bakkalbasi, Sundre, & Fulcher, 2013). Below is a checklist that I used to sketch out a roadmap that answered my central assessment question: What is the intent of information use? A sketch of the checklist as a reusable plan template follows the list.

 

  1. Review strategic plans at the library system and host institution;
  2. State project goals and objectives;
  3. Create a list of internal and external stakeholders;
  4. Develop a project timeline around the annual budget cycle;
  5. Identify data sources that will answer the assessment question;
  6. Define data analysis methods;
  7. Develop a strategy to present results and evidence of collection value (Goertzen, 2017).
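
As an illustration only, the checklist might be captured as a reusable structure along the following lines. This is a minimal sketch under stated assumptions: the field names and sample values are my own, not part of any published standard, and would be adapted to local strategic plans and budget cycles.

```python
# A minimal sketch of an annual assessment plan template.
# All field names and sample values are illustrative assumptions;
# adapt them to local strategic plans and budget cycles.
from dataclasses import dataclass

@dataclass
class AssessmentPlan:
    central_question: str
    goals: list[str]
    stakeholders: list[str]    # internal and external
    timeline: dict[str, str]   # milestone -> target period in the budget cycle
    data_sources: list[str]    # e.g., COUNTER reports, ILS cost data
    analysis_methods: list[str]
    reporting_strategy: str

plan = AssessmentPlan(
    central_question="What is the intent of information use?",
    goals=["Align e-book spending with documented information needs"],
    stakeholders=["Subject librarians", "Acquisitions", "University administration"],
    timeline={"Data collection": "Q1-Q2", "Analysis": "Q3", "Budget report": "Q4"},
    data_sources=["COUNTER reports", "ILS cost data", "Turnaway statistics"],
    analysis_methods=["Cost-per-use analysis", "Usage trend analysis"],
    reporting_strategy="Annual collection value report mapped to the strategic plan",
)
```

Because the same structure is filled in each year, results can be compared across budget cycles, which is what makes the plan replicable rather than a one-off exercise.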

 

By developing a standardized template for collection investigations, librarians essentially create a bridge between the current information landscape and a future vision for collection development activities. Linking current work to future goals also allows librarians to effectively allocate budgets as research interests shift and ensure that information needs are met. Collection assessment becomes less about proving the value of the collection today, and more about demonstrating the impact of the collection over time.

 

Moving Beyond Data Analysis: Mapping Assessment Results to Collection Policies

 

When I consider how the intent of information use informs ‘just in time’ and ‘just in case’ collection decisions, my assessment activities take on new significance. I organize my activities so that results either confirm existing collection policies and practices, or flag areas where improvements can be made. By considering current best practices through the lens of assessment results, projects move the profession closer to standardized practices that benefit collection decisions over time.

 

With this in mind, I organize data around five performance measures in order to understand how I can measure return on investment, value, or impact (Goertzen, 2017). These measures are not confined to electronic resources and allow for assessment across the full collection, providing a more holistic view of trends and resource allocations. A brief worked example of a cost-effectiveness calculation follows the list.

 

  1. Domain Measures: Captures the user community served by the library. Includes data related to demographic information, population size, and documented information needs.
  2. Input Cost Measures: Demonstrates how funds are allocated across collections. Includes cost data pulled from the library’s integrated library system.
  3. Collection Output Measures: Relates to the quantity and quality of output, like the number of titles in a subscription package, or the number of bibliographic records acquired over a given time. Includes data from title lists, overlap data, or bibliographic records.
  4. Effectiveness Measures and Indicators: Accounts for both collection input and output. Includes data from COUNTER reports, resolver statistics, consortial reports, turnaway statistics, publication counts, or Google Analytics data that provide insight into discovery, access, or usage trends.
  5. Cost-Effectiveness Indicators: Documents the return on investment or perceived value of a collection. Includes data from COUNTER reports, turnaway statistics, publication counts, or citation analyses.
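
As that worked example, the sketch below computes cost per use from COUNTER-style usage counts and invoice data. The package names, costs, and usage figures are invented for illustration; in practice, usage would be parsed from vendor-supplied COUNTER reports and costs pulled from the integrated library system.

```python
# A minimal sketch of a cost-effectiveness calculation (cost per use).
# The package names, costs, and usage counts are invented for illustration;
# real figures would come from COUNTER reports and ILS invoice data.

annual_cost = {            # invoice data from the ILS
    "Package A": 12500.00,
    "Package B": 4200.00,
}

annual_uses = {            # e.g., COUNTER total item requests
    "Package A": 8300,
    "Package B": 350,
}

for package, cost in annual_cost.items():
    uses = annual_uses.get(package, 0)
    cost_per_use = cost / uses if uses else float("inf")
    print(f"{package}: ${cost_per_use:.2f} per use across {uses} uses")
```

Tracked year over year, a figure like cost per use turns raw usage statistics into the kind of trend evidence that supports both ‘just in case’ and ‘just in time’ decisions.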

 

When I organized data analysis activities around the abovementioned performance measures, I discovered an important trend regarding the intent of information use. Print and electronic materials supported different forms of reading activity: continuous reading (e.g., reading for extended periods of time, conducting in-depth research, exploring subjects in depth) or discontinuous reading (e.g., reference, citation confirmation, searching for keywords, skimming chapters). The results were consistent across the major disciplines observed during the study (i.e., humanities, social sciences, sciences, and fine arts).

 

Based on results organized around the five performance measures, I went back to CUL’s collection policies and recommended that print serve continuous reading needs and electronic serve discontinuous reading needs. Essentially, I recommended that ‘just in case’ collection development activities focus on electronic materials, and that ‘just in time’ activities focus on print materials. Finally, I mapped these policy recommendations against collection depth indicators (Goertzen, 2016), summarized below; a small sketch of the mapping follows the list.

 

  1. Basic Collection: E-books Recommended 
    Supports lower-division undergraduate research; includes the core of the discipline or sub-discipline as it relates to the curriculum. This level describes materials that serve to introduce and define subjects, including selected databases, fundamental materials, introductory works, historical surveys, and reference works.

  2. Extensive Collection: E-books Recommended
    Supports graduate course work; information is adequate to maintain knowledge of a subject required at less than research intensity. Examples of content include primary and critical resources, reference resources, specialized databases, and bibliographical resources.

  3. Research Collection: Print Recommended 
    Supports research leading to a doctorate, faculty research, or independent study. It includes resources supporting the framework for the methodology and implementation of original doctoral research.
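
The mapping itself is compact enough to encode directly. The sketch below is a hypothetical illustration: the function name and flagging logic are my own assumptions rather than part of CUL’s policies, but a routine like this could flag acquisitions whose format deviates from the recommendation at each collection level.

```python
# A hypothetical encoding of the format recommendations above.
# The level names follow the collection depth indicators; the
# function name and flagging logic are illustrative assumptions.

RECOMMENDED_FORMAT = {
    "Basic Collection": "electronic",      # e-books recommended
    "Extensive Collection": "electronic",  # e-books recommended
    "Research Collection": "print",        # print recommended
}

def deviates_from_policy(level: str, acquired_format: str) -> bool:
    """Return True when an acquisition's format differs from the recommendation."""
    recommended = RECOMMENDED_FORMAT.get(level)
    return recommended is not None and recommended != acquired_format

print(deviates_from_policy("Research Collection", "electronic"))  # True
print(deviates_from_policy("Basic Collection", "electronic"))     # False
```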

 

By framing my assessment strategy around a central question and placing results within the context of overarching collection development policies, CUL not only gained an understanding of how collections are valued today, but also implemented strategies to measure intent of information use over time.

 

Conclusion

 

When librarians challenge assumptions, look at issues from multiple perspectives, and test beliefs against performance measures, they pull back the layers of a problem to uncover the core issues that draw seemingly disconnected elements together through one investigation. In my work, this core issue has been identifying and understanding the intent that drives information use. By beginning assessment work with a strong research question, librarians provide a starting point for strategic plans and collaborative relationships that define how collections and services will be delivered in the future.

 

References

 

Bakkalbasi, N., Sundre, D., & Fulcher, K. (2013). Assessing assessment: A framework to evaluate assessment practices and progress for library collections and services. In S. Hiller, M. Kyrillidou, A. Pappalardo, J. Self, & A. Yeager, (eds.), Proceedings of the 2012 Library Assessment Conference: Building effective, sustainable, practical assessment, October 29-31, 2012 (pp. 533-547). Washington, DC: Association of Research Libraries.

 

Goertzen, M. (2017). Introduction to quantitative research and data. In M. Goertzen (ed.), Applying quantitative methods to e-book collections (pp. 12-18). Chicago, IL: American Library Association.

 

Goertzen, M. (2016). E-book program development study: Results and recommendations, 2013-2015. Retrieved from https://doi.org/10.7916/D81Z44C3

 

Matthews, B., & Ross, L. (2010). Research methods: A practical guide for the social sciences. Toronto: Pearson Education.

 

Schmidt, J. (2010). Musings on collection analysis and its utility in modern collection development. Evidence Based Library and Information Practice, 5(3). https://doi.org/10.18438/B8W330

 

Yale University Library. (2013, March 3). The eBook Strategic Plan Task Force: Report of findings and recommendations. Retrieved from http://www.library.yale.edu/departments/collection-development/Yale-ebook-task-force-rpt.pdf