Deal or No Deal? Evaluating Big Deals and Their Journals

Deborah D. Blecic, Stephen E. Wiberley Jr., Joan B. Fiscella, Sara Bahnmaier-Blaszczak, and Rebecca Lowery

Deborah D. Blecic is Bibliographer for the Life and Health Sciences and Associate Professor, Stephen E. Wiberley Jr. is Bibliographer for the Social Sciences and Professor, Joan B. Fiscella is Associate Professor Emerita, and Rebecca Lowery is Map and Data Services Librarian and Assistant Professor at the University of Illinois at Chicago; e-mail: dblecic@uic.edu, wiberley@uic.edu, jbf@uic.edu, rplowery@uic.edu respectively. Sara Bahnmaier-Blaszczak is Head of Electronic Resources Acquisitions and Licensing and Associate Librarian at the University of Michigan; e-mail: sarabahn@umich.edu. The authors wish to thank Kristin Martin and John Cullars for their helpful reviews of this manuscript. © 2013 Deborah D. Blecic, Stephen E. Wiberley Jr., Joan B. Fiscella, Sara Bahnmaier-Blaszczak, and Rebecca Lowery, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/3.0/) CC BY-NC

This paper presents methods to develop metrics that compare Big Deal journal packages and the journals within those packages. Deal-level metrics guide selection of a Big Deal for termination. Journal-level metrics guide selection of individual subscriptions from journals previously provided by a terminated deal. The paper argues that, while the proposed metrics provide helpful quantitative data for comparative analysis, selection of individual subscriptions must also involve informed judgment about a library's subject coverage needs and alternative sources of access. The paper also discusses how replacing a Big Deal with a reduced number of individual subscriptions may affect the collections budget, use of other resources, and interlibrary loan.

In 2001, Kenneth Frazier coined the term "Big Deal" to describe multiyear contracts in which a library purchases access to all or most of a commercial publisher's journals at a price based on the library's current subscription costs and pays annual price increases that are fixed at the outset of the contract.1 Four years later, Frazier predicted that libraries would not be able to sustain Big Deals in their current form because the annual price increases of such arrangements exceed the normal growth of collections budgets, consuming an ever larger portion of the budget until the expenditure entailed would prevent the purchase of other essential resources.2 Fowler has seconded that prediction, writing in 2009 that "the days of the Big Deal were numbered and were likely coming to a close sometime in the foreseeable future."3

In the wake of the most severe economic decline since the Great Depression, many academic libraries are facing funding cuts. These cuts are forcing libraries to reexamine their Big Deals. Continued financial constraints will likely speed the process of divesting of Big Deals, regardless of their value, or altering their structure. Although Big Deals remain widespread, especially among members of library consortia, there are reports of libraries pulling out of Big Deals due to pricing and funding.4 While it is possible that libraries and publishers will negotiate alterations to the Big Deal pricing structures to make them sustainable for both parties, such change was not reported through 2011.
Without concessions from publishers on base price and price increases, libraries that must terminate Big Deals due to decreasing purchasing power will benefit from methods and metrics to guide them both in determining which Big Deal(s) to terminate and then in selecting which individual journal subscriptions to continue from those offered by the Big Deal publisher. This paper will present such methods and metrics developed at one research library, and findings from testing these methods and metrics on three Big Deals.

Literature Review

Librarians use a variety of means to determine which journal subscriptions to retain in their collections, including impact factors, user ratings, programmatic coverage, and measures of use. Historically, for print journals, measures of use included counts of reshelving, point-of-use surveys in the library, citation analysis, and interlibrary loan and circulation data. While no one measure captured all use, each offered some indication of the utility of particular journals.5 Researchers realized that the various measures of use might reflect different levels of engagement with journal content. For example, Garfield found that Scientific American and New Scientist articles were read frequently for current awareness but seldom cited.6 In a health sciences library, Blecic found that clinical review health sciences journals had high in-house and circulation use but were not cited heavily in research articles, suggesting they were being used for educational and clinical purposes.7 Although counts of use varied, the counts from any type of study carried weight and offered at least a snapshot of a particular type of use. Some studies found positive Pearson and Spearman correlations between various measures of use of print journals, some correlations very high, indicating that for different methods of measuring use, similar use patterns emerged for many journals.8

Initial studies of electronic journal use often entailed comparison with use of print journals.9 As libraries have moved away from print toward electronic formats, and in many cases cancelled print, studies of journal usage have increasingly addressed usage of the electronic format only. Studies of electronic journal usage may cover both journals subscribed to individually and those subscribed to in bulk through packages. Because of the variety of subscription options, Peters has recommended comparison of performance of similar types of resources.10 Big Deals are one type of resource, and this study examines journal packages that fit the definition of a Big Deal.

In comparing Big Deals, librarians have the advantage of standardized measures of their use. From the inception of systems that distribute electronic journals, librarians and publishers recognized that these systems could capture evidence of use. The International Coalition of Library Consortia established the first important guidelines for measuring use.11 Project COUNTER followed, and COUNTER-compliant reports are now the gold standard for measurement of use.12 The key COUNTER metric for journal use is the Successful Full-Text Article Request (SFTAR), which is reported in the COUNTER Journal Reports. For a given journal subscription, the basic metrics for analysis are the number of SFTARs during a given time period and the cost of the journal, from which one can calculate the cost per SFTAR; SFTARs and cost per SFTAR are the most discussed metrics in the literature.
Cost per SFTAR is problematic, however, because the cost is for one given year while the retrievals reported in a year can derive from multiple volumes and years of the journal. For that reason, Norm Medeiros called cost-per-SFTAR "meaningless and error-laden data."13 Many caution that use statistics measure not value, but utility. For example, Boots et al. said that the number of SFTARs for a given journal "do not directly indicate the value or importance of that journal to our users. It is not yet fully understood how the behavior of our users is reflected in these statistics."14 Luther concurs, stating that "it is dangerous to assume that a popular title, which is used by many students is worth more than a research title that is used by only a few faculty members working in a specific discipline."15 But most authors do agree that SFTAR statistics are a very powerful starting point in the evaluation of journals. As Gatten and Sanville have noted, "While sheer volume of use (i.e., cost-use analysis) is not the only measure of value, to fail to recognize use as the dominant starting point is to deny reality."16 Even a critic like Medeiros, who stresses the shortcomings of usage statistics, asserts "they are still the best utility measure available to libraries."17 This paper develops metrics based on SFTAR statistics.

Present Study Overview

The sections that follow describe and discuss two topics: first, methods for developing metrics to analyze and compare Big Deals, called deal-level metrics; and second, metrics to rank journals from a Big Deal publisher, called journal-level metrics. Data derived from three of the study library's Big Deals illustrate the methods and metrics. Because rankings alone are not sufficient to determine the individual subscriptions a library should place after termination of a Big Deal, the paper also discusses other considerations that should be brought to bear on choice of subscriptions.

The authors presume that the most likely motivation for analyses of Big Deals is a lack of funds to pay for all anticipated expenditures for the next fiscal year. A library then decides it must terminate one of its Big Deals to use its limited funding for other resources. If termination is necessary, then deal-level metrics guide the choice of the deal to terminate and journal-level metrics guide the choices of individual subscriptions from the terminated deal.

In terms of degree of difficulty, the methods presented in the present article fall between using a single measure to compare journals, such as cost per SFTAR or total SFTARs per year, and complex calculations such as those used by the California Digital Library (CDL). The CDL evaluation of journal value employs a weighted value algorithm that assesses the value of a journal in three categories: utility, quality, and cost effectiveness; it includes external measures such as impact factor and Eigenfactor combined with local factors such as usage, citation behavior, cost per use, cost per impact, multiyear trends, and local authors and editors. These metrics are further separated into broad subject disciplines across all systemwide journal packages.18 The present study aims to provide evaluation metrics that are less complex than those of CDL and relatively easy to apply, and that add to the tools available to librarians. The article also suggests a path toward greater depth of analysis should time and resources allow.
Quantitative measures based on one or more types of use are, of course, not the only way to evaluate journals. A librarian can survey the library's user community for its ratings of journals or publishers. Also, using professional judgment, a librarian can map individual subscriptions and Big Deal packages against descriptions of academic programs and institutional research strengths to identify a desirable list of journals. Community opinion and professional judgment are both important and, ideally, come into play when managing Big Deals. Opinion, however, can reflect self-interest or bias, and Big Deals, especially those from the major publishers, are so large that it is virtually impossible to have an informed opinion about every title. SFTAR and other usage data and cost stand independent of self-interest and bias, apply to every title in a Big Deal and, if nothing else, enable librarians to sort journals so they can see which journals are clearly essential, which clearly unneeded, and which require in-depth evaluation.

Comparison of Big Deals: Deal-Level Metrics

The first step in comparing Big Deals is to obtain SFTAR data for the deals to be compared. The COUNTER Journal Report 1 (JR1) provides those data. One year of data is the minimum needed, but, if resources allow, the average of three years is recommended. Three years provides a better use profile than one or two years for titles that have wide variations in use from year to year. For example, at the study library, the journal Archaeometry had 88 SFTARs in 2006, 38 in 2007, and 151 in 2008, a difference of 397 percent between 2007 and 2008. The three-year average was more representative of the use of the journal than any one year of SFTARs. One could use more than three years to calculate average SFTARs, but doing so increases the time needed for analysis. More important, as time passes, the membership in a library's community changes, making it more likely that SFTARs from earlier years do not reflect current needs. The case study reported in the present paper uses data for 2006–2008. SPSS (used in this study) or other software can merge the data from the three reports, matching on ISSN. Downloading a year's data takes less than ten minutes. Merging files for three years, including reconciling anomalies, can take 30 minutes to a few hours, depending on the number of missing ISSNs or duplicate ISSNs.

Two cautions are in order in using JR1s. First, if a library has purchased backfiles (called an archive by COUNTER) from a Big Deal publisher, the JR1 will include SFTARs for the backfiles. Because backfiles represent separate purchases and are not part of the Big Deal's yearly renewal price, including SFTARs for backfile issues in addition to current issues in any cost per SFTAR analysis would be misleading. Also, if one Big Deal had backfiles and another did not, comparisons would then be based on unequal data. Thus, backfile SFTARs should be removed by subtracting the backfile reports from the JR1s. Since for most Big Deals the backfile coverage ends at least ten years in the past, removing it still leaves most journal titles with a base of ten or more years of content on which current use is measured. A second caution is that, if a journal has moved into a deal during the time period studied, previous years' SFTARs will be missing from the data. Analysts should calculate the multiyear average for each journal title only on years that actually have data.
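This merge-and-average step lends itself to scripting. Below is a minimal sketch in Python with pandas rather than the SPSS used in the study; the file names and the column labels "ISSN", "Title", and "SFTARs" are illustrative assumptions, since JR1 export layouts vary by publisher and COUNTER release.

```python
# Sketch: merge three years of COUNTER JR1 exports on ISSN and compute
# each title's multiyear SFTAR average, excluding backfile use.
import pandas as pd

years = [2006, 2007, 2008]
frames = []
for year in years:
    jr1 = pd.read_csv(f"jr1_{year}.csv")            # publisher JR1 export
    backfile = pd.read_csv(f"backfile_{year}.csv")  # archive report, if purchased
    merged = jr1.merge(backfile[["ISSN", "SFTARs"]], on="ISSN",
                       how="left", suffixes=("", "_backfile"))
    # Subtract backfile retrievals so only current-content use remains.
    merged["SFTARs_backfile"] = merged["SFTARs_backfile"].fillna(0)
    merged[f"sftars_{year}"] = merged["SFTARs"] - merged["SFTARs_backfile"]
    frames.append(merged[["ISSN", "Title", f"sftars_{year}"]])

# An outer merge keeps titles that entered or left the deal mid-period.
data = frames[0]
for frame in frames[1:]:
    data = data.merge(frame.drop(columns="Title"), on="ISSN", how="outer")

# Average only over years that actually have data for each title,
# implementing the caution above about titles that moved into a deal.
year_cols = [f"sftars_{y}" for y in years]
data["avg_sftars"] = data[year_cols].mean(axis=1, skipna=True)
```

The mechanical part is quick; as the authors note, reconciling duplicate or missing ISSNs is the step that still takes human time.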
After obtaining SFTAR data, the second step is to assign a subscription status code to each journal. The status code should be set in terms of the year that a library plans to terminate a deal because it reflects the relation of a given journal to the Big Deal publisher in the year that the library will renegotiate with that publisher. In the case described in the present paper, 2009 was the year chosen. Coding took about 20 seconds per title. The status code categories are:

Subscribed: journals that the library subscribed to in the status code year as part of the Big Deal. Subscribed journals determined the price of the Big Deal.

Add-on: journals to which the library did not subscribe, but to which the publisher provided access in the status code year as part of the Big Deal at no additional charge or for a fraction of the list price.

Other: all other journals, including ceased titles, title changes, and titles not included in the deal or no longer part of the deal.

After assigning status codes, the next step is to determine the e-only cost of the Big Deal for a library for each year studied. The e-only cost is not the sum of the list prices of all of the journals involved in the deal, but the negotiated Big Deal price, which can differ from one library to another depending on the number of subscribed titles and on negotiated factors such as price caps on inflation. To calculate e-only cost, a library should be careful not to include any print costs in the cost of the Big Deal. In many cases, print charges are deeply discounted additions to the contract that are easily located and removed. To include print charges for one deal and not another would make the comparison unequal.

With SFTAR data and the e-only cost of a Big Deal in hand, a library is ready to calculate deal-level metrics. The metrics recommended are:

a. Cost per SFTAR for each Big Deal for each year studied
b. Median and mean of SFTAR averages for all years studied of both subscribed and add-on titles
c. Percentage of subscribed and percentage of add-on titles with SFTAR averages at or above a chosen threshold (the study library used ≥ 25 for the years studied)
d. Percentage of subscribed and add-on titles combined that accounts for 80 percent of the SFTARs

Most of the deal-level metrics apply only to the subscribed and add-on journals for each Big Deal because those journals are current and active and are the titles that would need to be evaluated for individual subscriptions if the deal were cancelled. The following comments are in order about the different metrics:

a. For the cost per SFTAR of each Big Deal for each year studied, the SFTARs total used should include all journals, not just the subscribed and add-ons. The SFTARs total for all journals is used because the status codes used for one year do not necessarily describe the status of a journal in earlier years. For example, a subscribed title may have ceased or changed publisher by the status code year, but two years earlier may have been a key title in the deal generating many SFTARs. If many key titles move out of a deal, the cost per SFTAR of a Big Deal may show a change from year to year that may be helpful for evaluation.

b. For the percentage of subscribed and percentage of add-on titles with SFTAR averages at or above a chosen threshold, a library may choose its threshold.
Choice of threshold should depend on two factors: 1) a library's judgment of what level of inconvenience to impose upon its users and 2) the projected cost of substituting ILL for SFTARs. For example, in using 25, the study library decided that users' loss of more than two downloads per month was too great and, given the cost of ILL (at $12 per transaction) and copyright clearance fees (estimated as high as $40 per article), replacement of 25 SFTARs with ILL could reach $1,300 per year (25 × ($12 + $40)).

c. The percentage of subscribed and add-on titles combined that accounts for 80 percent of the SFTARs relates to Trueswell's well-known 80/20 rule. Trueswell used data from several studies to illustrate that often 80 percent of library use would be observed from about 20 percent of the items in a library collection.19 Variations on the ratio are quite common, however. For example, Botero et al. found that, for Big Deals at Florida State University Libraries in 2004 and 2005, 80 percent of the retrievals came from between 30 and 40 percent of the journals, not 20 percent.20 Calculation of the percentage of subscribed and add-on titles that account for 80 percent of SFTARs provides insight into the concentration of use. For this metric, a better rank goes to the deal with a higher percentage of titles that supplied 80 percent of SFTARs, as this indicates that retrievals are not concentrated in just a few titles but that there are many titles that meet the needs of the users at the study library. In other words, if the deal were cancelled, then the library would need to subscribe to a greater number of individual titles to supply 80 percent of its users' retrievals.

Each of the metrics enables a comparison of the Big Deals on a key indicator of value or use. A rank of one (1) indicates the best score on a metric; the worse a deal scores on a metric, the higher its number. In the present study, three (3) is the worst rank. The ranks for each metric are then totaled to obtain a Deal Composite Rank Score (DCRS). The lowest total indicates the highest value to the library. The DCRS can be used to identify which deal is least valuable to a library, and that deal should be cancelled if budget shortfalls necessitate such action. Applications of deal-level metrics for this study are illustrated in table 1.

Table 1. Deal-Level Metrics and Rankings for Three Big Deals

Deal-Level Metric | Big Deal 1 | Rank | Big Deal 2 | Rank | Big Deal 3 | Rank
All Journal Titles:
Cost per SFTAR, 2006 | $2.62 | 1* | $8.51 | 3 | $4.89 | 2
Cost per SFTAR, 2007 | $2.14 | 1 | $6.59 | 3 | $4.48 | 2
Cost per SFTAR, 2008 | $2.53 | 1 | $5.82 | 3 | $5.36 | 2
Subscribed Journal Titles:
Average of SFTARs** | 213.1 | 2 | 100.1 | 3 | 335.7 | 1
Median of SFTARs | 84.7 | 2 | 38.7 | 3 | 147 | 1
% with Average SFTARs per Year of 25 or Higher | 75.1 | 2 | 58.5 | 3 | 77.2 | 1
Add-on Journal Titles:
Average of SFTARs | 79.9 | 1 | 30.9 | 3 | 77.6 | 2
Median of SFTARs | 26.5 | 1 | 9.3 | 3 | 25.3 | 2
% with Average SFTARs per Year of 25 or Higher | 52.1 | 1 | 28.7 | 3 | 50.2 | 2
Subscribed and Add-on Titles:
% that Accounts for 80% of Subscribed and Add-on SFTARs | 26.7 | 1 | 23.4 | 3 | 23.6 | 2
Deal Composite Rank Score | 13 | | 30 | | 17 |

* For all deal-level metrics and the Deal Composite Rank Score, the lowest number is best and the highest number is worst.
** All averages and medians are based on the three-year average of SFTARs for each journal title.
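Before turning to the results, note that once each title carries a status code and a multiyear SFTAR average, the four metrics above reduce to a few aggregate calculations. The pandas sketch below continues the earlier example; the column names (status, avg_sftars) and the function shape are assumptions for illustration, not the study's actual SPSS procedure.

```python
# Sketch: deal-level metrics for one Big Deal. "data" is the per-title
# table built above, with an added hypothetical "status" column holding
# "subscribed", "add-on", or "other".
def deal_level_metrics(data, yearly_cost, yearly_sftars, threshold=25.0):
    metrics = {
        # (a) cost per SFTAR uses SFTARs from ALL titles, since status
        # codes for one year do not describe earlier years.
        "cost_per_sftar": yearly_cost / yearly_sftars,
    }
    for status in ["subscribed", "add-on"]:
        group = data.loc[data["status"] == status, "avg_sftars"]
        metrics[f"{status}_mean"] = group.mean()                      # (b)
        metrics[f"{status}_median"] = group.median()                  # (b)
        metrics[f"{status}_pct_at_threshold"] = (
            100 * (group >= threshold).mean())                        # (c)
    # (d) percentage of subscribed and add-on titles that together
    # supply 80 percent of those titles' SFTARs.
    active = data[data["status"].isin(["subscribed", "add-on"])]
    ordered = active["avg_sftars"].sort_values(ascending=False)
    cumulative = ordered.cumsum() / ordered.sum()
    titles_needed = (cumulative < 0.80).sum() + 1
    metrics["pct_titles_for_80pct"] = 100 * titles_needed / len(ordered)
    return metrics
```

Computing these values for each deal, ranking the deals 1–3 on every metric, and totaling the ranks yields the DCRS.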
Deal-Level Metrics Results with Discussion

Findings summarized in table 1 show that Big Deal 2 scored the worst on every measure and had the highest score possible, 30. Though by 2008 the cost per SFTAR of Big Deal 2 had come very close to Big Deal 3, Big Deal 2 had the lowest percentage of subscribed titles with 25 or more SFTARs per year. If Big Deal 2 were cancelled, the library would need to subscribe to only 58.5 percent of the subscribed titles to retain access to journals with two or more SFTARs per month. Big Deal 2 also has the lowest percentage of add-on titles with 25 or more SFTARs per year, 28.7 percent. High-use add-on titles would need to be considered for subscription if a Big Deal is cancelled.

All of the Big Deals came close to the 80/20 ratio when subscribed and add-on titles were analyzed as a group. Each deal had between 20 and 30 percent of titles that supplied 80 percent of use. That indicates that, no matter what the portfolio, if the study library had to drop all Big Deals but could afford to subscribe to around 25 percent of the most retrieved journals in each deal, it would be able to supply 80 percent of users' retrievals through subscriptions.

For an initial analysis of Big Deals, the metrics in table 1 give measures with which a library can compare deals. Each of the metrics calculated in table 1 was helpful in comparing Big Deals at the study library, but each Big Deal's DCRS was in the same rank order as the cost per SFTAR for each year and the 80/20 ratios as well: Big Deal 1 was first, Deal 2 third, and Deal 3 second on each of these metrics. Further testing may reveal whether this pattern holds for other deals. If the pattern holds, the calculation of only those metrics will be necessary.

The deal-level metrics are a way to compare several Big Deals in a given library. If a library has only one Big Deal, some of the deal-level metrics may still inform a library's decision and may be telling in comparison with a smaller journal package.

Comparison of Journals within a Big Deal: Journal-Level Metrics

Once a library decides to cancel a Big Deal, the next step is to evaluate the journals in the deal to determine which titles must be continued as individual subscriptions. Because most Big Deals have hundreds of journals, evaluation of all can, at first, seem overwhelming. Use of metrics can rank journals, making evaluation less daunting.

The key elements of metrics for journal evaluation are current list prices and one or more years of SFTARs. Individual current list (not discounted) prices are needed to calculate the cost per SFTAR per title and to determine how much a library will spend by subscribing to selected individual subscriptions. Ideally, publishers' websites contain current list prices for e-only subscriptions with ISSNs. Software (for example, SPSS) can merge these prices into the database created for deal-level analysis, a step that takes about 30 minutes. Although the database contains historical, not current, SFTARs, current prices are needed because they, not historical prices, best predict future library payments. For the present study, list prices for 2009 were used in the journal-level analysis because it was begun in 2009. While list prices were readily available in 2009, publishers have since then begun to adopt tiered pricing and other alternatives to list prices. Without readily available current list prices, calculating cost per SFTAR per title will become complicated.
Two basic journal-level metrics are average SFTARs per year and the cost per average SFTAR. Both measures need to be considered when making subscription decisions; while cost per SFTAR is commonly discussed, overall use is important also. In the case study presented here, cost per SFTAR was determined by taking the current e-only list price for 2009 and dividing it by the three-year average of SFTARs (in this case, 2006–2008). Using these two metrics, the authors then ranked from high to low all the subscribed and add-on journals in a Big Deal in terms of each journal's average SFTARs per year and also ranked from low to high the cost per average SFTAR for each journal. Both ranks were then added together to get a Journal Combined Rank Score (JCRS) that was then ordered lowest (best) to highest. Table 2 shows a portion of the calculations for one Big Deal.

Table 2. Individual Journal Rankings by Journal Combined Rank Score with Cumulative Price for One of the Big Deals

Journal Title | List Price 2009 | Cumulative List Price | 3-Year Average of SFTARs per Year | Rank of Average SFTARs | Cost per Average SFTAR | Rank of Cost per SFTAR | Journal Combined Rank Score | Combined Rank Order
BJOG: An International Journal of Obstetrics and Gynaecology | $567 | $567 | 2,627.3 | 1 | $0.22 | 2 | 3 | 1
Child Development | $190 | $757 | 1,577.7 | 7 | $0.12 | 1 | 8 | 2
Journal of the American Geriatrics Society | $748 | $1,505 | 1,535.0 | 8 | $0.49 | 6 | 14 | 3
Journal of Obstetric, Gynecologic, and Neonatal Nursing | $426 | $1,931 | 1,057.3 | 14 | $0.40 | 4 | 18 | 4
Journal of Personality | $969 | $2,900 | 1,480.0 | 9 | $0.65 | 9 | 18 | 4
Journal of Child Psychology and Psychiatry and Allied Disciplines | $773 | $3,673 | 925.0 | 17 | $0.84 | 13 | 30 | 6
Public Administration Review | $347 | $4,020 | 651.7 | 29 | $0.53 | 7 | 36 | 7

In this sample, Child Development ranked 7th in the number of SFTARs it had in its Big Deal, with an average of 1,577.7 SFTARs per year. But it had a list price of $190, so the cost per SFTAR was only twelve cents, which ranked 1st for the deal. The JCRS was 8 (7+1), which then had a combined rank order of 2nd overall. Ranking the journals by JCRS accounts for both average SFTARs and cost per average SFTARs in one number and brings all the data together in one table. The JCRS gave each of the two other metrics equal effect in the calculation. Ranking individual journals by JCRS can guide but not dictate selection of the individual subscriptions that a library will place after it terminates a Big Deal.
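The JCRS itself is two rankings and a sum, followed by a running price total. A sketch under the same assumptions as before (a table of subscribed and add-on titles with hypothetical columns list_price_2009 and avg_sftars; the file name and budget figure are invented):

```python
# Sketch: Journal Combined Rank Score (JCRS) with cumulative list price.
import pandas as pd

titles = pd.read_csv("deal_titles.csv")  # subscribed and add-on titles

titles["cost_per_sftar"] = titles["list_price_2009"] / titles["avg_sftars"]
# Rank average SFTARs high-to-low and cost per SFTAR low-to-high;
# method="min" gives tied titles the same rank, as in table 2.
titles["rank_sftars"] = titles["avg_sftars"].rank(ascending=False, method="min")
titles["rank_cost"] = titles["cost_per_sftar"].rank(ascending=True, method="min")
titles["jcrs"] = titles["rank_sftars"] + titles["rank_cost"]

# Order best (lowest JCRS) first and cumulate list prices so a budget
# line of demarcation can be drawn through the running total.
ranked = titles.sort_values("jcrs").reset_index(drop=True)
ranked["cumulative_price"] = ranked["list_price_2009"].cumsum()
budget = 250_000  # hypothetical amount the library can afford
print(ranked.loc[ranked["cumulative_price"] <= budget,
                 ["title", "jcrs", "cumulative_price"]])
```

For Child Development in table 2, rank 7 by average SFTARs plus rank 1 by cost per SFTAR gives the JCRS of 8 in exactly this way.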
The journals that are at the top of the combined rank order merit subscriptions; those at the bottom of the combined rank order deserve cancellation. But it is a matter of judgment where the top ends and the bottom begins. And for all journals between the top and the bottom—what might be called journals in the middle ground—selectors must carefully weigh both quantitative data and qualitative characteristics.

Cumulating list prices in the combined rank table can aid in determining at a glance the costs of the journals at certain points in the table. In table 2, the cumulative (or total) list price for the seven most highly ranked journals in this particular Big Deal is $4,020. Given that libraries will probably cancel Big Deals because they have less to spend than publishers demand, libraries can use the amount they can afford as a line of demarcation amid a running total of list prices for journals ranked by the JCRS. This line serves as the center of the middle ground in the ranking. The higher a journal is above the line, the more likely a library will subscribe to it after termination of the deal. The lower a journal is below the line, the less likely is a subscription.

Complicating Factors Affecting Use of Journal-Level Metrics

Several factors can complicate analysis of journal-level data. First, title changes can cause a journal to have apparent low use because some of its SFTARs were reported under a previous title. Second, publisher changes cause some SFTARs to be reported on a JR1 from the previous publisher. Proposed cancellations should be checked for both possibilities. More complex complicating factors are overlap of holdings between the publisher's and other platforms and subject coverage. Both require additional data collection for assessment.

Overlap of Holdings

Overlap occurs when a certain year of full-text access to a journal is supplied by more than one provider: for example, both the publisher platform and an aggregated full-text database such as those supplied by EBSCO, ProQuest, and Gale. Ascertaining how much a library's users rely on the publisher's or on other providers' platforms for article retrievals from Big Deal journals can influence choice of individual subscriptions from a Big Deal. This section first discusses identification of journals supplied by more than one provider and then analyzes incidence of SFTARs from publisher and selected overlap platforms.

To identify journals supplied by more than one provider, librarians can turn to Electronic Resource Management Systems (ERMSs) for overlap reports. Table 3 shows a very small portion of an overlap analysis report from the study library's ERMS (SerialsSolutions).

Table 3. Extract from an Overlap Analysis Report

Journal Title | ISSN | Dates of Coverage | Provider | Type of Overlap
Abacus (Sydney) | 0001-3072 | 1997–present | Blackwell-Synergy | Partial Overlap
Abacus (Sydney) | 0001-3072 | 1998–2002 | Electronic Collections Online | Partial Overlap
Abacus (Sydney) | 0001-3072 | 03/01/1985–1 year ago | Business Source Elite | Partial Overlap
Accounting and Finance (Parkville) | 0810-5391 | 07/01/1998–present | Blackwell-Synergy | Partial Overlap
Accounting and Finance (Parkville) | 0810-5391 | 05/01/1993–1 year ago | Business Source Elite | Partial Overlap
Acta Anaesthesiologica Scandinavica | 0001-5172 | 01/01/1999–present | Blackwell-Synergy | Full Overlap
Acta Anaesthesiologica Scandinavica | 0001-5172 | 1999–present | Electronic Collections Online | Full Overlap
Acta Anaesthesiologica Scandinavica | 0001-5172 | 01/01/1999–1 year ago | Academic Search Premier | Full Overlap

The ease with which a library can use such systems' reports depends on their structure and the software available to the library. A spreadsheet (such as Excel) can sort and count SerialsSolutions overlap listings. In general, studying all of the overlapping providers may be prohibitive in terms of time and effort.
To begin with, the basic work of identifying the extent of overlap (without later merging and analyzing JR1s from publisher and overlap providers) can take as much as six hours per 1,000 publisher titles. Furthermore, overlap can be extensive. For example, for the present study, one of the deals overlapped with thirty-four other providers. Nearly 70 percent (69.2) of titles in Big Deal 1 overlapped with content on another platform studied. For Big Deals 2 and 3, overlap affected 19.5 percent and 16.1 percent of titles respectively. Ironically, Big Deal 1 scored the best in the deal-level metrics even with a high percentage of titles with content overlap with other providers.

A publisher's Journal Report 1 records only the SFTARs executed on the publisher's platform. For SFTARs retrieved from overlap providers, one must turn to those providers' JR1s and merge them with the Big Deal publishers' JR1s. The present study merged, using SPSS, the COUNTER JR1 reports from those overlap providers with 30 or more overlapping titles. Some overlap providers were more prone to duplicate or missing ISSNs than others, so the time needed for a successful merge varied from 15 minutes to several hours depending on the provider. For Big Deal 1, data from six other providers were merged with the publisher data; for Big Deal 2, three other providers; and for Big Deal 3, two other providers.

The authors compared average SFTARs per year for 2006–2008 for a given title on the publisher's platform to average SFTARs per year for that title from all overlap providers for the same date range. If JR1s from other providers were available for only one or two years in the 2006–2008 range, the single year or the two-year average was used. Several comparisons were striking. For example, the 2006–2008 SFTAR average for The Journal of School Health on the publisher's platform was 280, but the average on all overlap providers studied was 1,036. For the year 2008, The Journal of School Health had 371 SFTARs on the publisher's platform and had 901, 171, and 128 SFTARs respectively on each of three overlap providers' platforms. If only the publisher data had been examined, the vast majority of SFTARs for this journal would have been missed. Table 4 illustrates for the three Big Deals the impact of overlap on subscribed and add-on titles in aggregate. Titles in Big Deal 1 were impacted the most by overlap: 39.8 percent of SFTARs were from the overlap provider platforms that were analyzed.

Table 4. Overlap Analysis Summary for Overlap Providers with 30 or More Titles That Overlap Publishers' Titles, Subscribed and Add-on Titles Only

| Big Deal 1 | Big Deal 2 | Big Deal 3
Average SFTARs per Year on Publisher's Platform | 106,620 | 61,537 | 73,980
Average SFTARs per Year on Overlap Providers' Platforms | 70,552 | 7,377 | 5,390
Total of Average SFTARs on All Platforms Studied | 177,172 | 68,914 | 79,370
% of Average SFTARs on Publisher's Platform | 60.2% | 89.3% | 93.2%
% of Average SFTARs on Overlap Providers' Platforms | 39.8% | 10.7% | 6.8%

Developments in ERMS software may make the process of capturing all SFTARs from all platforms for a given title much easier in the future. Overlap analysis is essential at the journal title level for titles in the middle ground. It is also advisable for any title that is a candidate for cancellation, although time may not allow overlap analysis for the lowest ranking titles. One problem with the overlap data is that an overlap provider may cover years that differ from the years covered by the publisher. Analysis of overlap at the journal level must take years of coverage into account. For the present study, the ERMS overlap report provided the years of coverage from each provider.
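In pandas terms, the provider merge mirrors the earlier JR1 merge. The sketch below, with invented file names and the same assumed columns, totals overlap use per ISSN and flags how much of a title's use happens off the publisher's platform:

```python
# Sketch: combine publisher SFTAR averages with overlap providers' averages.
import pandas as pd

publisher = pd.read_csv("publisher_avg.csv")  # ISSN, Title, avg_sftars
provider_files = ["aggregator_a_avg.csv", "aggregator_b_avg.csv"]

overlap = pd.concat([pd.read_csv(f) for f in provider_files])
# Sum across providers so each ISSN has one overlap total.
overlap = (overlap.groupby("ISSN", as_index=False)["avg_sftars"].sum()
                  .rename(columns={"avg_sftars": "overlap_avg_sftars"}))

combined = publisher.merge(overlap, on="ISSN", how="left")
combined["overlap_avg_sftars"] = combined["overlap_avg_sftars"].fillna(0)
combined["total_avg_sftars"] = (combined["avg_sftars"]
                                + combined["overlap_avg_sftars"])
# Flag titles, like The Journal of School Health above, whose use falls
# mostly outside the publisher's platform.
combined["pct_on_publisher"] = (100 * combined["avg_sftars"]
                                / combined["total_avg_sftars"])
```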
In deciding which individual journals to subscribe to, overlap analysis can be used in two ways. It can give a library SFTAR counts for titles from overlap providers that can be added to counts from the publisher's platform to build an argument for subscribing to a title from the Big Deal publisher, particularly if there is reason to think the other provider may lose rights to supply the title. Overlap analysis can also identify titles that a library will not subscribe to from the publisher because access is available from another provider. The value-added features of the respective platforms, such as the combining of the full-text journal articles with an index, or mobile accessibility, may also contribute to a library's decision.

Problematic sources of overlap are institutional and subject-based repositories that are free to users and offer the full text of journal articles or their preprints or postprints but do not provide libraries with data on retrievals by their user community.
These free (to the reader) repositories grow larger every day and, in some subjects, are dominant players. Subject-based free digital repositories include ArXiv (arxiv.org), PubMed Central (www.ncbi.nlm.nih.gov/pmc/), the Social Sciences Research Network (www.ssrn.com), and RePEc: Research Papers in Economics (http://repec.org). An article could get high use on one of these sites, but the users' library would never know. Davis and Fromerth's study of mathematics journal articles found that articles available both on the publisher's platform and in ArXiv had fewer SFTARs on the publisher's platform than articles only available on the publisher's platform. However, the articles available on both were cited more, suggesting that users were utilizing the free database.21 Data from the PIRUS2 project may eventually offer a measure of a journal's use from the publisher's platform and all repositories, institutional and subject-based.22

Subject Coverage

Ideally, an academic library will provide all the journals its community needs across the range of subjects that students and faculty address. Earlier studies have pointed to differences in usage of journals in different subject areas and to the practical need to analyze use in relation to local budget allocations.23 Broad analysis of SFTAR data for the study library showed the difficulty of covering all subjects adequately and the importance of not relying on quantitative measures alone to identify individual subscriptions from a terminated Big Deal.

Analysis of SFTAR data by subject begins with assignment of subject categories to subscribed and add-on journals. The present study used 1) humanities; 2) social sciences; 3) sciences, technology, engineering, and mathematics (called STEMath for this study); and 4) health sciences. These categories fit the programs of the study library's university. In Carnegie classification terms, it is a doctoral research university extensive, with six health sciences colleges as well as clinical services that include a hospital; five other professional schools that have a social sciences/humanities knowledge base; and colleges of engineering and liberal arts and sciences. Guided by subject headings merged from the study library's ERMS into each Big Deal's database, Ulrichsweb Global Serials Directory, and journal websites, three authors independently coded each subscribed and add-on journal to one of the four categories, reconciled their differences, and then analyzed the data. When subject headings accompany title listings, coding most titles goes quickly. An individual can enter initial codes at a rate of 15 to 20 titles per minute. Coders agree on most titles, so reconciliation of differences involves an estimated one in ten titles. Additional research in Ulrichsweb Global Serials Directory or on publishers' websites takes one person approximately 2 minutes per title.

Table 5 summarizes the analysis and shows five main findings:

1. The percentages of humanities SFTARs are fractions of the small percentages of humanities journals.
2. The percentages of social sciences and STEMath SFTARs are, in all but one case, less than their percentages of journals, though not as extreme as humanities journals.
3. The percentages of health sciences SFTARs are much greater than the percentages of health sciences journals. A very high percentage of health sciences journals had an average of 25 or more SFTARs, around 90 percent in Big Deals 1 and 3.
4. For each Big Deal, more than 50 percent of the publisher SFTARs were in the health sciences, and more than 50 percent of the overlap SFTARs were in the social sciences.
5. Humanities journals cost less than social science journals, which cost less than health sciences journals, which cost less than STEMath journals.

Table 5. Analysis of SFTAR Distributions by Subject Area before and after Overlap Provider Data Is Added, Subscribed and Add-on Titles Only

| Big Deal 1 | Big Deal 2 | Big Deal 3
Humanities:
% of Journals | 7.6 | 3.7 | 0.5
% of Publisher SFTARs | 2.2 | 1.1 | 0.1
% of Overlap Provider SFTARs | 6.9 | 0.9 | 0
% of Publisher and Overlap SFTARs | 4.1 | 1.1 | 0.1
% of Titles with ≥ 25 Average SFTARs per Year (Publisher Data) | 40 | 13.6 | 50
% of Titles with ≥ 25 Average SFTARs per Year (Publisher and Overlap Combined) | 61.7 | 18.2 | 50
Average List Price | $407 | $590 | $335
Social Sciences:
% of Journals | 37.3 | 18.1 | 22.4
% of Publisher SFTARs | 19.7 | 16.3 | 11.5
% of Overlap Provider SFTARs | 60.2 | 55.3 | 62.3
% of Publisher and Overlap SFTARs | 35.8 | 20.4 | 14.9
% of Titles with ≥ 25 Average SFTARs per Year (Publisher Data) | 49 | 38.6 | 51.7
% of Titles with ≥ 25 Average SFTARs per Year (Publisher and Overlap Combined) | 62.2 | 47.9 | 61.8
Average List Price | $521 | $661 | $1,030
STEMath:
% of Journals | 22.3 | 53.6 | 48.7
% of Publisher SFTARs | 17.5 | 31.8 | 37.6
% of Overlap Provider SFTARs | 10.9 | 10 | 0.5
% of Publisher and Overlap SFTARs | 14.8 | 29.5 | 35.1
% of Titles with ≥ 25 Average SFTARs per Year (Publisher Data) | 52.5 | 27.2 | 51.5
% of Titles with ≥ 25 Average SFTARs per Year (Publisher and Overlap Combined) | 54.2 | 29 | 51.5
Average List Price | $1,356 | $1,785 | $3,184
Health Sciences:
% of Journals | 32.8 | 24.2 | 28.4
% of Publisher SFTARs | 60.6 | 50.8 | 50.9
% of Overlap Provider SFTARs | 22.1 | 33.8 | 37.2
% of Publisher and Overlap SFTARs | 45.3 | 49 | 49.9
% of Titles with ≥ 25 Average SFTARs per Year (Publisher Data) | 86.9 | 64.8 | 86.7
% of Titles with ≥ 25 Average SFTARs per Year (Publisher and Overlap Combined) | 89.2 | 66.2 | 90.3
Average List Price | $992 | $1,171 | $2,325
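Once each title carries a subject code, the distributions in table 5 reduce to grouped arithmetic. A brief sketch, again with assumed column names (subject, avg_sftars, list_price) on a hypothetical export of the combined publisher-plus-overlap table:

```python
# Sketch: subject-level distributions in the style of table 5.
import pandas as pd

combined = pd.read_csv("combined_with_subjects.csv")  # hypothetical export

summary = combined.groupby("subject").agg(
    pct_of_titles=("subject", lambda s: 100 * len(s) / len(combined)),
    pct_of_sftars=("avg_sftars",
                   lambda s: 100 * s.sum() / combined["avg_sftars"].sum()),
    pct_titles_25_plus=("avg_sftars", lambda s: 100 * (s >= 25).mean()),
    avg_list_price=("list_price", "mean"),
)
print(summary)
```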
The findings, of course, reflect conditions at the study library's university, where the health sciences constitute a very large part of activity. In a different university, Botero et al. found that basic sciences journals had more SFTARs than clinical medicine.24 Nevertheless, given these findings, one can imagine that, if the study library used just the average SFTARs and cost per SFTAR, it might eliminate almost all access to humanities journals and limit subscriptions to social sciences and STEMath journals in favor of massive subscriptions to health sciences journals. Such elimination and limitations would not serve the university well. Scientists and social scientists cite journals heavily.25 While humanities scholars cite books more than journals, journal use is evident in their citations.26 Doing justice to all areas of scholarship will not be easy, even if good journal-level metrics help substantially in sorting through all the journals in a Big Deal. Journals in the middle ground will require careful review. If few humanities, social science, and STEMath journals are near the top of the combined rank order and many are near the bottom, then the study library will need a large middle ground. At any library, if one subject area dominates at the top or bottom of the combined rank order, then a large middle ground will need to be considered for adequate subject coverage.

Possible Impact of Terminating a Big Deal

To gauge the impact of terminating a Big Deal, the authors projected how much the study library might save from the collections budget at 2009 pricing if it decided to cancel a Big Deal and subsequently subscribe individually to the journals in the deal that provided 80 percent of the SFTARs.
For simplicity of analysis, the three-year average of total publisher SFTARs for each subscribed and add-on journal in the deal was used to determine individual subscriptions, not the JCRS ranking and other factors. Thus, this simplified analysis does not take into account subscriptions chosen with regard for subject coverage, overlap, and other considerations. Table 6 shows that, in two of three cases, the study library would avoid approximately $40,000 to $60,000 in subscription expenditures at 2009 list prices, although, for the Big Deal with the worst deal-level metrics, access to 80 percent of the SFTARs would cost $60,000 more than the projected cost of the deal.

Table 6. Deal-Level Analysis with List Price Data, Subscribed and Add-on Titles Only

| Big Deal 1 | Big Deal 2 | Big Deal 3
Actual Cost of the Big Deal, 2008 | $288,295 | $496,390 | $394,389
Projected Cost of the Big Deal, 2009, for the Same Title Mix | $302,709 | $521,210 | $414,108
Cost of Individual Subscriptions to Journals That Provide 80% of SFTARs, at 2009 List Prices | $262,992 | $583,339 | $351,062
Savings: Projected Big Deal Costs Less Individual Subscription Costs | $39,717 | ($62,129) | $63,046
Number of SFTARs That Account for 20% of the SFTARs in Each Big Deal and That May Require Interlibrary Loan, Use of Perpetual Access Rights, and Other Modes of Access | 21,324 | 12,267 | 14,771

If a Big Deal is cancelled and a library's access to that publisher's journals decreases, users who need to read articles from journals no longer accessible through a Big Deal will have to turn to other avenues of access. (Again, for simplicity, the present analysis ignores the likelihood that some SFTARs are never read.) Articles in journal issues to which the library has perpetual access rights will still be available through the Big Deal publisher. Libraries typically have perpetual access rights to subscribed journals but not add-on journals. In some cases, full-text aggregator databases and open access databases that overlap journals in the Big Deal, and free content available after an embargo period, will supply articles. Once users exhaust such options for obtaining needed articles, they may turn to interlibrary loan, so libraries that terminate a Big Deal need to be prepared for increased use of that service.

Conclusions

In the second half of the twentieth century, academic libraries faced journal prices that rose at a rate higher than inflation in the rest of the economy and higher than increases in library acquisitions budgets.27 Near the end of the century, Big Deals offered, at slightly greater cost, much more access to journal literature. But the prices for those deals provided a new base for escalation in the cost of journals that is higher than inflation in the rest of the economy and greater than increases in library acquisitions budgets. Unless publishers relent, libraries will have to terminate Big Deals in the future, just as they cancelled journals in the past.

The present paper attempts to add methods and metrics to the librarian's toolbox to help determine which Big Deals merit retention, which termination, and what journals from terminated deals deserve individual subscription. In terms of degree of difficulty, the methods in the present article fall between using a single measure and employing complex calculations. The metrics presented do ignore issues such as value-added platform features, backfile purchases, and the fact that Big Deal pricing can have many facets and complicating factors.
But by keeping the analysis basic, the metrics can be applied to any Big Deal.

Results from the present study suggest both good news and bad. The good news is that 80 percent of SFTARs from Big Deals may derive from fewer than 30 percent of the journals in those deals. The bad news is that, after subscribing to journals that supply 80 percent of the SFTARs, savings are not large; also, SFTARs from Big Deals are so numerous that obtaining the other 20 percent may lead to increases in interlibrary loan costs. The really bad news is that, lacking sufficient funding, libraries will eventually have to terminate Big Deals, and they and their communities will have to cope with the consequences.

For decades, academic librarians analyzed several measures of use of print journals to help them decide which journals to cancel and which to retain. Today, for online journals, methods and metrics are based in publisher SFTAR data found in COUNTER Journal Report 1s. The deal-level and journal-level metrics discussed in the present article can help librarians analyze and apply these data. But, because publisher data is not the whole story of SFTARs, as time allows, librarians must also pay attention to SFTARs for Big Deal journals from overlap providers' platforms. Other well-established measures such as faculty citation counts and impact factor can also be considered. Finally, even with well-ordered SFTAR data in hand, individual judgment, often on qualitative grounds, must be brought to bear to provide equitable access to journals among all subjects covered by the library's community.

Notes

1. Kenneth Frazier, "The Librarian's Dilemma: Contemplating the Costs of the 'Big Deal'," D-Lib Magazine 7, no. 3 (Mar. 2001), available online at www.dlib.org/dlib/march01/frazier/03frazier.html [accessed 22 January 2013].
2. Kenneth Frazier, "What's the Big Deal?" The Serials Librarian 48, no. 1/2 (2005): 49–59.
3. David Fowler, "The Bundling and Unbundling of E-Serials: Introduction," The Serials Librarian 57, no. 4 (2009): 350–52.
4. University of Virginia Library Collections Management FAQ, available online at www2.lib.virginia.edu/collections/index.php?/pages/collections-management-faq/ [accessed 22 January 2013]; T. Scott Plutchak, Incentives, blog entry of Feb. 9, 2010, available online at http://tscott.typepad.com/tsp/libraries [accessed 22 January 2013].
5. Deborah D. Blecic, "Methods of Measurement of Journal Use," in Encyclopedia of Library and Information Science, ed. Allan Kent (New York, N.Y.: Marcel Dekker), vol. 70 (2002): 294–99.
6. Eugene Garfield, "Citation Analysis as a Tool in Journal Evaluation," Science 178, no. 4060 (1972): 471–79.
7. Deborah D. Blecic, "Measurements of Journal Use: An Analysis of the Correlations Between Three Methods," Bulletin of the Medical Library Association 87, no. 1 (Jan. 1999): 20–25.
8. Blecic, "Methods of Measurement," 297–99.
9. Brinley Franklin, "Managing the Electronic Collection with Cost per Use Data," IFLA Journal 31, no. 3 (2005): 241–48.
10. Thomas A. Peters, "What's the Use? The Value of e-Resource Usage Statistics," New Library World 103, no. 1/2 (2002): 39–47.
11. International Coalition of Library Consortia, Revised Guidelines for Statistical Measures of Usage of Web-based Information Resources (Oct. 4, 2006), available online at http://icolc.net/statement/revised-guidelines-statistical-measures-usage-web-based-information-resources [accessed 22 January 2013].
12. COUNTER: Counting Online Usage of NeTworked Electronic Resources, available online at www.projectcounter.org/ [accessed 22 January 2013].
13. Norm Medeiros, "Uses of Necessity or Uses of Convenience? What Usage Statistics Reveal and Conceal About Electronic Resources," in Usage Statistics of E-Serials, ed. David C. Fowler (New York, N.Y.: Haworth Information Press, 2007), 233–43.
14. Angela Boots, Julia Chester, Emma Shaw, and Chris Wilson, "E-Journal Usage Statistics in Action: A Case Study from Cancer Research UK," in Usage Statistics of E-Serials, ed. David C. Fowler (New York, N.Y.: Haworth Information Press, 2007), 183–98.
15. Judy Luther, "White Paper on Electronic Journal Usage Statistics," Council on Library and Information Resources, 2nd ed. (2001), available online at www.clir.org/pubs/reports/pub94/pub94.pdf [accessed 22 January 2013].
16. Jeffrey N. Gatten and Tom Sanville, "An Orderly Retreat from the Big Deal: Is It Possible for Consortia?" D-Lib Magazine 10, no. 10 (Oct. 2004), available online at www.dlib.org/dlib/october04/gatten/10gatten.html [accessed 22 January 2013].
17. Medeiros, "Uses of Necessity," 242.
18. Jaqueline Wilson, "Journal Value Metrics Assessment," online announcement on March 30, 2010, available online at www.cdlib.org/cdlinfo/2010/03/30/journal-value-metrics-assessment/ [accessed 22 January 2013]; Ivy Anderson, "Systemwide Library License Reductions in a Time of Fiscal Challenge (Public Letter)," May 16, 2011, available online at www.cdlib.org/services/collections/current/publicbudgetletter2011.html [accessed 22 January 2013].
19. Richard W. Trueswell, "Some Behavioral Patterns of Library Users: The 80/20 Rule," Wilson Library Bulletin 43, no. 5 (Jan. 1969): 458–61.
20. Cecilia Botero, Steven Carrico, and Michele R. Tennant, "Using Comparative Online Journal Usage Studies to Assess the Big Deal," Library Resources & Technical Services 52, no. 2 (2008): 61–68.
21. Philip M. Davis and Michael J. Fromerth, "Does the ArXiv Lead to Higher Citations and Reduced Publisher Downloads for Mathematics Articles?" Scientometrics 71, no. 2 (May 2007): 203–15.
22. The PIRUS2 Project, available online at www.cranfieldlibrary.cranfield.ac.uk/pirus2/tiki-index.php [accessed 22 January 2013].
23. Angela Conyers and Pete Dalton, NESLi2 Analysis of Usage Statistics Summary Report, available online at www.jisc.ac.uk/uploaded_documents/nesli2_usstudy.pdf [accessed 22 January 2013]; Botero, Carrico, and Tennant, "Using Comparative Studies," 61–62; Judith L. Wulff and Neal D. Nixon, "Quality Markers and Use of Electronic Journals in an Academic Health Sciences Library," Journal of the Medical Library Association 92, no. 3 (July 2004): 315–22.
24. Botero, Carrico, and Tennant, "Using Comparative Studies," 65–66.
25. Julie M. Hurd, Deborah D. Blecic, and Rama Vishwanatham, "Information Use by Molecular Biologists: Implications for Library Collections and Services," College & Research Libraries 60, no. 1 (Jan. 1999): 31–43; Robin N. Sinn, "A Local Citation Analysis of Mathematical and Statistical Dissertations," Science & Technology Libraries 25, no. 4 (2005): 25–37; Diana Hicks, "The Four Literatures of Social Science," in Handbook of Quantitative Science and Technology Research, ed. Henk Moed (Dordrecht: Kluwer Academic, 2004), 473–96.
26. Jennifer E. Kneivel and Charlene Kellsey, "Citation Analysis for Collection Development: A Comparative Study of Eight Humanities Fields," The Library Quarterly 75, no. 2 (Apr. 2005): 142–68.
27. Monograph and Serials Costs in ARL Libraries, 1986–2005, available online at www.arl.org/bm~doc/monser05.pdf [accessed 22 January 2013].