Evidence Summary
Development of Deal- and Journal-Level Metrics and Methods Assists
Librarians in Evaluating Big Deals
A Review of:
Blecic, D. D., Wiberley, S. E., Jr., Fiscella, J. B., Bahnmaier-Blaszczak, S., & Lowery, R. (2013). Deal or no deal? Evaluating Big Deals and their journals. College & Research Libraries, 74(2), 178-193.
Reviewed by:
Kathleen Reed
Assessment & Data Librarian
Vancouver Island University
Nanaimo, British Columbia, Canada
Email: kathleen.reed@viu.ca
Received: 1 Jan. 2014 Accepted: 17 Jun. 2014
© 2014 Reed.
This is an Open Access article distributed under the terms of the Creative
Commons-Attribution-Noncommercial-Share Alike License 2.5 Canada (http://creativecommons.org/licenses/by-nc-sa/2.5/ca/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
Abstract
Objective –
To assess the value of aggregated journal packages (Big Deals) and to select
individual journal titles for continued subscription should a deal be
cancelled.
Design –
Case study.
Setting –
Doctoral research university library in the United States of America.
Subjects –
Three anonymous Big Deals.
Methods –
The authors define metrics at two levels (deal and journal) to evaluate Big Deal packages. The metrics rely heavily on the COUNTER JR1 measure of Successful Full-Text Article Requests (SFTARs).
Main Results –
The authors found that while 30% of journals provide 80% of SFTARs, subscribing to these journals individually would not save significant sums of money. Additionally, they speculate that library users would increase the number of interlibrary loan requests to access the 20% of SFTARs that would become inaccessible if a Big Deal were cut, resulting in increased costs.
Conclusion –
With no sign of publishers moving to change the price and conditions of Big Deals, these arrangements are becoming unsustainable for libraries. As this occurs, librarians require methods of assessing which deals to keep and which to cut, as well as evidence of which individual journals they should subscribe to. The authors of this paper set out one method of conducting these assessments that they have found useful at an academic library. They conclude by stating that even with SFTAR data, librarians must keep in mind the necessity of providing equitable access to all of a university community's user groups.
Commentary
In a climate of financial difficulty, there is a need for metrics and methods to assist librarians in evaluating Big Deals, “an online aggregation of journals that publishers offer as a one-price, one size fits all package” (Frazier, 2001). Since Big Deals consume significant financial resources of academic libraries, they remain under close scrutiny for their value to the institution. The authors present an approach used at one research library to assess Big Deals and to select individual journal titles for continued subscription should a deal be cancelled.
As an introduction to the assessment of Big Deals, this article is a must-read. It contains practical instructions for conducting assessment, interwoven with discussion of many of the critical issues and challenges librarians face when examining Big Deals. The authors' suggested assessment steps go beyond examining a single metric like COUNTER's JR1 (successful full-text article requests), yet are not so complex that they require difficult models or special training. Two processes are described in detail: establishing deal-level metrics and establishing journal-level metrics, as the sketch below illustrates.
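As an illustration only (not the authors' exact formulas), the following minimal Python sketch shows how deal-level and journal-level cost-per-use figures might be derived from JR1-style counts; all titles, counts, and prices are hypothetical.

    # Minimal sketch with hypothetical JR1-style data; not the authors' exact method.
    deal_cost = 250_000  # annual package price (invented)
    sftars = {           # journal title -> annual SFTARs (invented JR1 counts)
        "Journal A": 4200,
        "Journal B": 1100,
        "Journal C": 35,
    }

    # Deal-level metric: overall cost per successful full-text article request.
    total = sum(sftars.values())
    print(f"Deal-level cost per SFTAR: ${deal_cost / total:.2f}")

    # Journal-level metric: each title's share of use, a basis for ranking
    # candidates for individual subscription if the deal were cancelled.
    for title, n in sorted(sftars.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{title}: {n} SFTARs ({n / total:.1%} of deal use)")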
The authors give a thoughtful commentary on major pitfalls of their metrics, including devoting significant discussion to the importance of qualitative measures and individual judgment in addition to quantitative calculations. For example, assigning each journal title in the three Big Deals to one of four subject areas (humanities, social sciences, STEM, and health sciences), the authors show that if SFTAR numbers alone were relied upon, almost all access to humanities journals would have been cut. However, the authors do not formalize these qualitative steps in the Big Deal assessment model as clearly as the quantitative metrics steps. A subsequent paper that captures the qualitative process of assessing the value of add-ons to the packages, such as mobile accessibility or interface design, would be of use to those thinking about database assessment.
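To make the quantitative pitfall concrete, the sketch below ranks titles by SFTARs and keeps only the set that cumulatively supplies 80% of use, then reports what falls below the line. All titles, subjects, and counts are invented for illustration.

    # Hypothetical sketch: keep titles covering 80% of SFTARs, cut the rest.
    journals = [
        ("Cell Biology Letters", "STEM", 5200),
        ("Clinical Medicine Review", "health sciences", 3900),
        ("Sociology Quarterly", "social sciences", 800),
        ("Renaissance Studies", "humanities", 60),
        ("Medieval Poetics", "humanities", 25),
    ]
    journals.sort(key=lambda j: j[2], reverse=True)
    total = sum(n for _, _, n in journals)

    cumulative = 0
    for title, subject, n in journals:
        if cumulative / total >= 0.80:
            print(f"CUT:  {title} ({subject}, {n} SFTARs)")
        else:
            print(f"KEEP: {title} ({subject}, {n} SFTARs)")
            cumulative += n

    # Because use concentrates in STEM and health sciences, a pure SFTAR
    # cutoff tends to eliminate nearly all humanities titles.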
Additionally, the authors do not account for the value of having large numbers of resources discoverable to serve the long tail of user searches. A resource may go unused for several years and then prove useful to an individual. A comprehensive, discoverable collection adds value to a library, although it is difficult to assign a specific dollar value to this strength of a Big Deal.
A final gap in the authors' analysis relates to the journal prices used in the calculations. The authors concluded that the savings projected by the analysis were not significant, but that conclusion is based on the individual subscription prices listed on publishers' websites. It is not known whether a library would receive a discount on those list prices; if it did, the significance of the total savings might change, as the arithmetic below illustrates.
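A hypothetical illustration of this sensitivity, with invented figures:

    # Invented figures showing how a discount on list prices changes
    # the projected savings from cancelling a Big Deal.
    deal_price = 250_000        # annual Big Deal price (invented)
    core_list_prices = 230_000  # sum of list prices for retained titles (invented)

    for discount in (0.00, 0.10, 0.25):
        savings = deal_price - core_list_prices * (1 - discount)
        print(f"{discount:.0%} discount on list: savings ${savings:,.0f}")

    # At list price the savings are marginal; even a modest discount could
    # make cancellation financially attractive.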
Despite a few minor aspects that could be strengthened, overall this article is a useful and thoughtful contribution to the literature on assessing Big Deals. The authors provide helpful examples of metrics and methods, as well as a roadmap through a potential minefield of mistakes and oversights that could befall librarians doing this type of assessment.
References
Frazier, K. (2001). The librarians' dilemma: Contemplating the costs of the “Big Deal.” D-Lib Magazine, 7(3). Retrieved from http://www.dlib.org/dlib/march01/frazier/03frazier.html