versity of Michigan cataloging staff in the editing of the resultant RLIN records. Marko concludes her piece with an outline of the "issues that are applicable to the bibliographic description of all computer files," followed by a short paragraph on the project's benefits for the University of Michigan library.

Katherine Chiang's "Computer Files in Libraries: Training Issues" is an inventory of the skills and expertise required to incorporate electronically stored information into the library. Like the Marko piece, it is rather brief, but substantive even so. Chiang focuses on the unique knowledge demanded for the tasks of selecting, acquiring, cataloging, and servicing machine-readable files. She then addresses central issues related to the training of library staff to meet the demands of managing computer files, stressing level of service, structure of service, service novelty and its relation to existing staff competencies, and staff learning styles as key points for special attention.

The inclusion in this volume of discussion summaries from the RLG workshop is particularly welcome because these are at least somewhat visionary in articulating the formidable array of tasks facing the broader research library community as it begins to integrate computer data files into its collections. In fact, the most telling aspect of the discussions is that they are far less tentative than the four articles in setting an agenda for making computer data files a central resource in the research library of the near future.

The result of these efforts is a more than adequate primer for librarians just beginning to think about computer file management and access. But collective thought about "the big picture" may be what most of us need quite urgently at this moment. There is, in fact, something frightening about the pace with which the national information infrastructure is evolving. Two recent examples make this clear: the anarchic expansion of information resources on the Internet and the proliferation over the past half-year of government information distributed on CD-ROM. Each of these developments has serious implications for any discussion of computer data file management and access in the research library context, but neither is mentioned anywhere in this volume. Still, I learned much by reading this RLG publication, although I am concerned that the information it provides may be of only limited value, given the velocity of change in the current electronic information environment.—Joseph Lucia, Lehigh University, Bethlehem, Pennsylvania.

Van House, Nancy, and others. Measuring Academic Library Performance: A Practical Approach. Chicago: American Library Assn., 1990. 182p. (ISBN 0-8389-0529-3). LC 89-77253.

When drafting this review, I was prompted by some misguided stylistic conceit to seek the grabbing quote. The beautiful phrase "shut up in measureless content" in Macbeth provides a backdrop for my ambivalence toward the work under review.

Some eight years ago, I gave a workshop on the bibliographer's craft—including collection evaluation—to collection development librarians at a large upper-midwestern research library. I recall two pieces of advice I gave to that workshop group. First: "beware the fetish of mensuration"; that is, for a significant part of selectors' work, empirical measurement and quantification are of use only in the largest sense.
Second: regard measurement, quantitative norms or standards, algorithms, and partial or full-blown models of collection development as heuristic exercises rather than empirical tools for decision making; that is, one should assess and, if necessary and relevant, perform such measurements as exercises in informed persuasiveness and the art of the exposition and interpretation of the mostly undemonstrable. On the one hand, measurement and measures have their greatest social utility as a form of argumentation that complements subjective judgment and experience. On the other hand, they are least useful when reified and put forth as objective determinants of human action or policy or when regarded as an intrinsic part of something called "the science of management"—whether of libraries, physical facilities, or McDonald's.

How, then, should one approach the manual under review? Perhaps the basic attitude should be that struck in the work's preface: "What difference will it make for us to have this information?" My ambivalence toward the work under review derives from the value that I, as a student and teacher of politics, place on the empirical and positivistic side. And as a social sciences curator and "house" survey researcher, I find descriptive statistics useful in explicating and informing the library policy-planning process. However, it is necessary in all truly "applied" work to be extremely cautious about making claims regarding conclusiveness, generalizability, and replicability and not to dress up that work in scientistic or Taylorist garb.

The authors recognize that "measurement is not an end in itself." They also acknowledge that "good measures are valid, reliable, practical, and useful." Any measure, supported by data that are not only unreliable but—even worse—that do not, in fact, measure, for example, the in-library use of materials or reference satisfaction, must be invalid and can hardly be useful in a sane or minimally moral universe. But while the authors are at some pains to insert disclaimers with regard to comparability—the third elementary benchmark of measurement beyond reliability and validity—both the foreword of the ACRL Ad Hoc Committee on Performance Measures and the authors' preface specifically refer to the goal of replicability from one institution to another.

While we are not exactly talking about cold fusion here, to speak of being able to replicate these measures at an infinite number of local units without being able to interpret them comparatively "across libraries, or even across units within the same library or library system" suggests questions about utility, whether practical or theoretical. This difficulty becomes especially acute when we are told that "management needs objective, standardized data on which to base decisions [and] on the extensiveness and effectiveness of library services" for the purposes of "accountability" and to "quantify services." While the authors believe that "little is known about the factors that affect output measures results," they also believe these measures can be used to "monitor performance [and] help libraries to allocate resources and plan operations and services." Help!

In spite of my philosophical and methodological reservations, there is, in fact, much good in the manual for line professionals, unit heads, middle managers, and directors of college libraries, especially if one can get by or ignore the "M.B.O."
talk and get at that which is practical. There is much of use here for those who have never run a survey or, if they have, are unsure about what they found out. The measures are well presented and unburdened by the heavy hand of technical language; indeed, one wonders whether the novice would even be able to carry out data analysis, not to mention interpretation.

Librarianship and libraries are neither full of "measureless content" nor full of that which is measurable. In choosing, employing, and interpreting measures, one should surely follow the authors' own dictum that "interpreting and using output measures ... requires a full understanding of the data's meaning and limitations."—Tony Angiletta, Stanford University, Stanford, California.