College & Research Libraries

Designing and Evaluating a Programmed Library Instruction Text

Patricia Ann Kenney and Judith N. McArthur

Patricia Ann Kenney teaches technical mathematics and statistics at the University of Houston-University Park, and Judith N. McArthur is head, Collection Development, and coordinator of bibliographic instruction, University of Houston-Victoria.

The upper-division institution of higher education presents new challenges to librarians in the area of bibliographic instruction. At a number of universities, older student populations with varying backgrounds often commute to use library facilities. Faced with the task of providing a course-integrated library instruction module for such students, the authors report on their efforts, which resulted in a programmed textbook. An analysis of a statistical study, which compared test scores between a group using the programmed text and another using only lecture material, demonstrates that the programmed text was the more effective teaching tool for lower-level cognitive skills and preferred by students. The authors conclude by describing a method of teaching the upper-level cognitive skills, using the programmed text as a foundation.

Probably every academic librarian who has attempted to take library instruction into a college classroom has experienced the familiar frustrations described in the bibliographic instruction literature.1 Students tend to be unmotivated and inattentive, faculty members frequently resent giving class time to "library periods," and the librarian, like the full-time classroom instructor, must work with groups of students whose skill levels range from relatively sophisticated to barely functional. Not surprisingly, the better-prepared students are bored by the presentation of material already familiar to them; the less-experienced ones are confused and frustrated by the traditional "book barrage" style of teaching that most librarians use on the classroom lecture circuit.

We faced such a problem at the University of Houston-Victoria, where students receive library instruction as part of Communications 3330, a required junior-level course in advanced composition. Since a research paper is assigned as one of the central requirements of the course, librarians had been granted classroom time to explain source materials. Traditionally, a librarian spent three class periods, at staggered dates, discussing library research tools, and students completed practice questions in the library based on the lecture and accompanying handouts. The practice questions constituted 10 percent of the course grade. The situation was not particularly satisfying from anyone's point of view. Instructors often relinquished class time grudgingly, and students complained on evaluation forms that the exercises were difficult and very time-consuming.

Our search for a better way to teach library skills naturally led us to the extensive literature on bibliographic instruction. We chose to center our research on the self-paced workbook as an instructional delivery method, and here again we encountered several problems. Dozens of libraries across the country have used the model pioneered by Miriam Dudley at UCLA to design workbooks tailored to their own requirements.2
While they have the advantage of allowing students to proceed at their own pace, thus compensating somewhat for the wide variation in ability inherent in a heterogeneous student population, these workbooks are almost invariably designed for freshman orientation. We needed to teach at a level above call numbers, Britannica, and Readers' Guide: our goal was to discuss more complex problems such as citation indexing, finding statistics, and deciding whether to request a computer search.

We were dissatisfied as well with the form-over-function approach to library research tools that these workbooks adopt. They discuss books by categories (encyclopedias, handbooks, biographical dictionaries, and so on), just as their librarian authors were taught in their graduate reference courses. Librarians assume that inexperienced students will intuitively know how to apply these sources to their own research problems. This approach presupposes both that the professional librarian's paradigm for organizing information is an appropriate one for the undergraduate student and that a junior majoring in marketing has the same motivation to memorize this artificial structure as does the aspiring future reference librarian. We suggest that if these assumptions were true, library instruction would not be so widely regarded by students as boring and useless.

Having decided that none of the existing text models would serve our purpose, we set about designing our own, based on two concepts: the presentation of sources emphasizing function over form, and the necessity of building in a method for students to assess their understanding of the material as they work through it. A programmed-text format seemed an efficient way to let students monitor their understanding, but a search for models was fruitless. It quickly became apparent that the term "programmed instruction" has been loosely applied to a variety of workbooks of the Dudley format, which are self-paced and include practice questions to be answered in the library but do not provide immediate feedback to the learner on how well he has understood the material before he attempts to transfer it to an actual library situation. Programmed texts in the cybernetic perspective defined by Rao Aluri,3 which present the material to be learned in logical sequences with appropriate branches for the learner's errors and level of knowledge, have seldom been used in bibliographic instruction. The few examples we were able to locate were small-scale attempts to teach specific concepts such as Wilson indexes or the card catalog. Despite the prospective amount of work involved, we decided to invent our own programmed workbook.

Our interest in experimenting with programmed instruction was the direct result of dealing with a diverse and nontraditional student population. The University of Houston-Victoria is an upper-division branch campus, and all students commute, some from several counties away. They are older (the average age is thirty-one), and most have family responsibilities and work part- or full-time. Many spend essentially no time on campus outside of class hours. It was our hope that a programmed text that these busy people could digest before they made a trip to the library to complete practice exercises would alleviate previous complaints about the burdensomeness of library work.
Since a substantial proportion of these students return to college after an absence of several years, we faced the additional problem of providing a comprehensive review of the library for those whose skills are rusty, without alienating their more practiced classmates in the process. A programmed text deals with this situation by allowing students to pass quickly over material that may already be familiar, or to spend as much time as necessary to master new concepts. Each chapter of the text that we designed contains several self-testing problems to familiarize the student with the bibliographical sources before he attempts to answer a set of practice questions in the library. The library then functions as a laboratory experience that helps assure the transfer of learning, and the sets of questions provide the librarians and instructors with an evaluative instrument.

In addition to its programmed-learning format, the text emphasizes function over form, presenting library research tools in the context of the kinds of problems that they can solve rather than stressing their physical formats or their scope. Thus, book reviews and citation indexes are combined in one chapter discussing the evaluation of books and journal articles; Statistics Sources, Statistical Abstract of the United States, American Statistics Index, and Statistical Reference Index converge in a chapter on uncovering statistical data. As a conceptual framework, we adopted the systematic literature-searching model described by Kobelski and Reichel, who point out that this approach can be used for both beginning and advanced students.4 We began our text, entitled Researching a Paper in the Library, with a chapter on using subject encyclopedias to help focus on a term-paper topic. In nine succeeding units, we outline a search strategy through the standard bibliographic access tools, emphasizing that finding information is a logical process that can be applied, in whole or in part, to any topic. The aim is to make library instruction relevant to what students actually do, not to give them a minicourse in reference tools.

A draft version of Researching a Paper in the Library was submitted to the course instructors for review, and their suggestions were incorporated into the final format. A feature of the text that proved to be extremely popular with the students (and an awesome amount of work for the authors) was the decision to write four different versions geared to the university's primary degree-granting programs: business, education, psychology, and humanities. This subject-specific approach also obviated the artificiality of stressing the value of all sources equally. In the business version, for example, the chapters on statistics and government documents are long and detailed; in the humanities version this kind of information is covered briefly, and extra space is devoted to biography and literary criticism.

With the programmed text in hand, we confronted one final question: could we reliably determine its effectiveness as a teaching instrument? Although several studies have demonstrated that workbooks can transmit some kind of library skills as efficiently as human teachers,5 attempts to measure the effectiveness of programmed texts against classroom lectures have only recently been undertaken.
Phillips and Raup found no significant difference in the achievement of two groups of students who learned to use periodical literature by programmed text or lecture, but their flawed methodology casts doubt on their findings.6 No pretest was given to determine whether the prior knowledge levels of the two groups were statistically equal, the students knew that they would be required to take a posttest, and not all faculty gave credit for completing the exercise, a factor that may have resulted in a lower level of motivation for some classes. A more recent experiment by Thomas Surprenant has shown that an experimental group of college freshmen using a programmed library-skills text to learn the use of Wilson indexes and the card catalog scored significantly higher on an evaluative posttest than did a control group that received the same instruction by classroom lecture.7

With these conflicting findings in mind, we prepared to evaluate the effectiveness of our instruction program. Using pre- and posttest scores as measurement tools, we hoped to determine (1) whether students' library skills would improve as a result of instruction and (2) whether there would be any appreciable difference in the levels of achievement between two samples: an experimental group using the programmed text and a control group receiving the same information through classroom lectures. Essentially, we wanted to find out if students learn anything at all from bibliographic instruction, and whether one method works better than another.

After investigating sample library-skills tests, both in-house and standardized, we created our own testing instrument. Realizing that only the lower to middle cognitive skills involving factual recall, rather than analytical problem solving, could be tested in this situation, we adopted a ten-question, multiple-choice format, which used concepts common to all four versions of the textbook. Areas covered included the card catalog, Library of Congress subject headings, periodical indexes (Wilson type, abstract, newspaper, statistical), and government documents.

The test results provided data for the reliability coefficient as well as for hypothesis testing. The students were not forewarned of the testing situation and were not told that they were subjects in an experiment.

In conjunction with the posttest, we formulated and administered an attitude questionnaire. Nonparametric statistical evaluation methods were used to analyze the data for significant differences. Also, questions such as year in college and grade point average provided additional data for a group composition study.

The five sections of Communications 3330 scheduled for fall semester 1982 were randomly divided into three experimental and two control groups. Each student in an experimental section chose a version of the programmed text according to his major. One librarian became the liaison to each class, meeting the students to answer questions and to return corrected library practice exercises. Since the experimental-group students received the programmed text in three segments, the students in the control groups received the same information, but in the form of three classroom lectures. The same librarian planned and delivered the lectures to both sections of the control group, making every effort to duplicate the substance of the programmed text through subject-specific examples.
The students in the control group also completed the set of nine library practice exercises.

At the conclusion of the experiment, the participants responded to a posttest and an attitude questionnaire. The questionnaires were specifically designed to measure attitudes for each group's situation, depending upon the method of instruction. For example, question #5 on the experimental group's attitude questionnaire read, "The programmed textbook took too long to read." The corresponding question on the control group's questionnaire was, "The lectures took too much class time."

The experiment terminated during the first week in November 1982, but not without a casualty. Midway through the semester, it was discovered that one of the experimental classes was working under a different set of constraints than the other groups, and the test scores and attitude responses from this class had to be dropped from the study. A last-minute change of instructors resulted in these students believing that the library practice questions counted for no credit toward final grades. Negative attitudes ensued, and the librarian assigned to this section found out too late to completely rectify the situation.

Using the results from the remaining two experimental and two control sections, we centered our initial efforts on determining the internal reliability of the testing instruments. According to the Kuder-Richardson 21 formula, the in-house test showed a coefficient of 0.603. While not outstanding, the reported coefficient met the 0.60 recommendation for a teacher-made test.8 Since this was the only estimate available, we decided to adopt the testing instrument for measurement purposes.

Forty-six students in four sections successfully completed the requirements of the library portion of Communications 3330. Their pretest scores were used to test the following hypothesis: no significant difference exists between the population pretest mean scores of the experimental and control groups (H0: μ1 - μ2 = 0). Assuming known and equal population variances, we analyzed the scores and obtained the data shown in table 1. At the designated α level of .05, we failed to reject the hypothesis and reported a p-value (level of significance) of 0.904. We concluded that the four groups began the experiment with equal knowledge of lower-level cognitive library skills.

TABLE 1
ANALYSIS OF PRETEST SCORES

                        Number of Students   Mean (x̄)   Variance (s²)
Experimental group              22             4.273         3.063
Control group                   24             4.333         3.309
z = 0.1151

Assuming normal parent populations, we used the matched-pairs t-test to find any significant differences between the pre- and posttest mean scores of the individual control, individual experimental, combined control, and combined experimental groups. The hypothesis for each test was: no significant difference exists between the population pretest mean and the corresponding posttest mean (H0: μ2 - μ1 = 0). The alternative hypothesis in each case was: the posttest mean score is significantly greater than that of the pretest mean (Ha: μ2 - μ1 > 0). Table 2 summarizes the results of this hypothesis testing. These tests showed that students in all sections, regardless of the method of instruction, increased their lower-level cognitive library skills, as measured by the testing instrument. We rejected the null hypothesis (H0) in all cases at a high level of significance.

TABLE 2
ANALYSIS OF PRE- AND POSTTEST SCORES

                          Pretest Mean   Posttest Mean      t      Level of Significance
Experimental group 1          4.462          7.769        8.628        α < .0005
Experimental group 2          4.000          6.778        3.283        .0005 < α < .01
Control group 2               3.846          6.308        5.18         α < .0005
Combined experimental         4.273          7.364        6.584        α < .0005
Combined control              4.333          6.50         7.583        α < .0005
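For readers who wish to see the mechanics of the tests reported above, the following sketch shows how a Kuder-Richardson 21 reliability estimate, a two-sample z-test on pretest means with assumed known variances, and a one-tailed matched-pairs t-test can be computed with standard tools (numpy and scipy are assumed). This is an illustration only, not the authors' original computation: the raw item-level scores were never published, so the score arrays below are hypothetical ten-point test scores invented for the example; only the group sizes from table 1 are carried over, and the printed output will not match tables 1 and 2.

```python
# Illustrative sketch; the score arrays are hypothetical, not the study's data.
import numpy as np
from scipy import stats

def kuder_richardson_21(total_scores, k):
    """KR-21 reliability estimate for a k-item test scored 0..k."""
    m = np.mean(total_scores)
    var = np.var(total_scores, ddof=1)
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var))

def two_sample_z(x1, x2, var1, var2):
    """z-statistic and two-tailed p-value for H0: mu1 - mu2 = 0,
    treating the supplied variances as known."""
    z = (np.mean(x1) - np.mean(x2)) / np.sqrt(var1 / len(x1) + var2 / len(x2))
    return z, 2 * stats.norm.sf(abs(z))

def paired_t_one_tailed(pre, post):
    """Matched-pairs t-test of H0: mu_post - mu_pre = 0 against
    Ha: mu_post > mu_pre; returns t and the one-tailed p-value."""
    t, p_two = stats.ttest_rel(post, pre)
    return t, (p_two / 2 if t > 0 else 1 - p_two / 2)

rng = np.random.default_rng(1984)
pre_exp = rng.integers(2, 7, size=22)                        # hypothetical pretest, experimental (n = 22)
post_exp = np.minimum(pre_exp + rng.integers(1, 5, 22), 10)  # hypothetical posttest, experimental
pre_ctl = rng.integers(2, 7, size=24)                        # hypothetical pretest, control (n = 24)

print("KR-21 reliability:", round(kuder_richardson_21(pre_exp, k=10), 3))
z, p = two_sample_z(pre_exp, pre_ctl, np.var(pre_exp, ddof=1), np.var(pre_ctl, ddof=1))
print(f"pretest equivalence: z = {z:.3f}, two-tailed p = {p:.3f}")
t, p1 = paired_t_one_tailed(pre_exp, post_exp)
print(f"experimental gain:   t = {t:.3f}, one-tailed p = {p1:.5f}")
```

The point of the sketch is only to make the test procedures concrete; substituting real item-level scores would reproduce analyses of the kind summarized in tables 1 and 2.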
In the evaluation of the hypothesis that tested for significant differences between the posttest mean scores of the experimental group (x̄ = 7.364) and the control group (x̄ = 6.50), the z-statistic showed a significant difference for any α > .08 for a two-tailed test. Within the parameters set initially, we rejected the hypothesis that there was no significant difference between the posttest mean scores. Based on these results, we further concluded that in our situation the programmed text was the more effective method of presentation.

The attitude questionnaire provided even more data for our analysis. These data, although not conducive to inferential statistics, gave an insight into the composition of the two groups, their attitudes toward the instruction methods, and the differences between those who listened to the lectures and those who worked with programmed instruction. The nonparametric chi-square statistic evaluated responses to the questionnaires.

Of the ten questions asked, only one showed a significant difference between the groups. Question #7 on the experimental group's attitude questionnaire was, "I would have preferred class lectures instead of the programmed textbook." The control group's version read, "I would have preferred using a workbook or media version of the lecture material." A chi-square statistic with four degrees of freedom yielded a score of 9.612, significant at .01 ≤ α ≤ .05. The responses showed that those students who used the programmed text strongly preferred this method and were reluctant to use any other. The control group was not as positive about the lecture method. However, the novelty of the programmed text could partially account for the significant difference, since only 23 percent of those in the experimental group reported having used one before. Continued measurement of student reaction to programmed instruction, as it becomes an integral part of the course, could disclose any signs of the "halo effect," the tendency to react to overall initial impression rather than to an objective investigation of content.
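A minimal sketch of this kind of contingency-table comparison follows, again as an illustration rather than the authors' analysis. The article reports only the chi-square value and its degrees of freedom, so the 2 x 5 table of hypothetical Likert-style responses below is invented; only the group sizes (22 experimental, 24 control) are taken from table 1, and scipy is assumed.

```python
# Illustrative chi-square test of independence on hypothetical questionnaire data.
from scipy.stats import chi2_contingency

# Rows: experimental group, control group.
# Columns: strongly disagree ... strongly agree with preferring the other method.
observed = [
    [9, 7, 4, 1, 1],   # experimental group responses (hypothetical), n = 22
    [3, 5, 6, 6, 4],   # control group responses (hypothetical), n = 24
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```

With two groups and five response categories the test has (2 - 1)(5 - 1) = 4 degrees of freedom, matching the analysis described above.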
The last part of the attitude questionnaire gave students the opportunity to offer comments and/or suggestions. Generally, the comments of the experimental group were favorable, as shown in these examples:

I thoroughly enjoyed this unit. Although it was extra work, I had plenty of time to do it at my own pace.

Very well developed. I learned a lot from this.

I feel there is no more efficient and effective way to get helpful information across. Because of this study, I have had no problems finding the materials for my research paper.

The control group's comments, although also favorable to the idea of bibliographic instruction, centered on the fact that too much information was crammed into too few lectures. Some representative comments follow:

Class presentations were good, but a little rapid; soak-in time needed.

The only improvement I can suggest would be to increase the number of lectures and discuss each topic in detail. There is so much to cover.

I have had several research papers to write prior to taking this course. It would have been helpful . . . to have a manual available.

The responses of the control group mirror a similar finding by Timothy D. Jewell, who compared attitude questionnaires of students instructed by the lecture method against a group who used a nonprogrammed workbook. The lecture students were substantially more likely to report that their instruction was not "clear and understandable."9

By allowing students to take control of the learning situation by working at their own pace and concentrating on information that they could see as directly useful to their course work, we had hoped to increase motivation and avoid the common perception of library exercises as drudgery or busywork. Student responses on the attitude questionnaires confirmed success in this respect, but unsolicited input was even more revealing. Due to internal budgetary constraints, students were asked to return the chapters of the programmed text as they finished them. After a number of students appealed to the instructors and librarians, the ruling was changed to permit them to keep the text. The importance that students attach to subject-specific instruction became evident when the computer science majors began to complain that their field had been slighted. (We had assumed that the business version could accommodate their needs, since computer science majors take a number of courses in business administration.) In response to demand, a computer science version will be forthcoming.

Faculty attitudes have improved as well. Surprenant has pointed out that since assigning programmed texts requires less classroom time, they tend to be more acceptable (and less threatening) to faculty.10 In our own case, we have found the course instructors' reactions to Researching a Paper in the Library to be overwhelmingly favorable. It has been adopted as a required textbook in all sections of the advanced writing course.

The greatest inherent drawback in using a programmed text is the possibility that students will procrastinate and cram the work into the last few weeks of the semester. We have avoided this problem by instituting variable pacing. The library exercises are collected three chapters at a time, with staggered due dates for each set. Students are permitted to turn in the work sheets at any time prior to the cutoff date.

While the results of the study demonstrated the superiority of programmed learning as applied to library skills, it should be stressed that the library skills taught and tested for were factual ones at the lower to middle range of Bloom's well-known taxonomy of cognitive objectives for teaching.11 While these knowledge, comprehension, and application skills can be readily taught by programmed instruction, the upper-level cognitive processes (analysis, synthesis, and evaluation) are less easily adaptable to a programmed format. Furthermore, as Richard Werking's review article makes clear, there are inherent limitations in evaluating library research skills by means of objective testing.12 Such tests can measure mastery of only the most fundamental skills, and the ability to recall facts and principles on a multiple-choice test cannot guarantee that students will be successful in finding materials in an actual library situation.
In light of these limitations, we are not suggesting that a programmed text is the whole solution to the bibliographic instruction problem. Obviously there is more to educating library users than teaching them to distinguish between a superintendent of documents number and a Washington, D.C., zip code in a Monthly Catalog entry (a question actually missed by 36 percent of all students on our pretest). Nor are we suggesting that librarians have no function in the classroom. We do believe, however, that Surprenant's findings and the results of our own experiment indicate that programmed texts can teach basic skills more effectively and less tediously than a librarian with a truckful of books. If programmed texts were to take over the gritty details of demystifying subject headings, decoding thesauri, and making sense of superintendent of documents numbers, librarians might be able to devote their time to teaching at the higher end of Bloom's taxonomy.

At the University of Houston-Victoria, we are now proposing a two-tiered approach to library instruction. To help students develop the more sophisticated reasoning skills of the upper cognitive range, we plan to incorporate a postworkbook simulation exercise, using the classroom time saved by abandoning the previous labor-intensive approach to teaching basic skills. Guided by a librarian, the students will work in groups to analyze their own term-paper topics and determine the most promising library search strategies. By combining the successful programmed approach to learning the basics with a classroom simulation structured to develop analytical and problem-solving skills, we hope to encourage maximal learning with minimal investment of scarce time: the students', the instructors', and our own.

REFERENCES

1. Mignon Adams, "Individualized Approach to Learning Library Skills," Library Trends 29:83 (Summer 1980).
2. A number of libraries have reported on their use of self-designed workbooks: Beverly L. Renford, "A Self-Paced Workbook Program for Beginning College Students," Journal of Academic Librarianship 4:200-203 (Sept. 1978); Clark N. Hallman, A Library Instruction Program for Beginning Undergraduates (Omaha: University of Nebraska Library, 1980) ED 188 633; Gwen Gittens and Carolyn Dusenbury, Library Survival Workbook in Library Skills: A Self-Directed Course in the Use of the Marriott Library. General Studies 101 (Salt Lake City: University of Utah, Marriott Library, 1978) ED 176 792; Patricia Gebhard and Barbara Silver, Library Skills: A Self-Paced Workbook (Santa Barbara: University of California at Santa Barbara Library, 1978) ED 167 133; Shelley Phipps and Ruth Dickerson, "The Library Skills Program at the University of Arizona: Testing, Evaluation, and Critique," Journal of Academic Librarianship 4:205-14 (Sept. 1979). Miriam Dudley described her prototype in "The Self-Paced Library Skills Program at UCLA's College Library," in John J. Lubans, Jr., ed., Educating the Library User (New York: Bowker, 1974), p.330-35. For an overview of the place of workbooks in library instruction, see Shelley E. Phipps, "Why Use Workbooks? Or, Why Do the Chickens Cross the Road? And Other Metaphors, Mixed," Drexel Library Quarterly 16:41-53 (Jan. 1980).
3. Rao Aluri, "Application of Learning Theories to Library-Use Instruction," Libri 31:140-52 (Aug. 1981).
4. Pamela Kobelski and Mary Reichel, "Conceptual Frameworks for Bibliographic Instruction," Journal of Academic Librarianship 7:75 (May 1981).
5. See, for example: Susan H. Edlsbury, Library Instruction Workbook for the Sciences for Use in Mitchell Memorial Library, Mississippi State University. Pilot Study, Final Report (Mississippi State University Library, 1980) ED 210 028; and Scott Stebelman, Evaluation of Self-Paced Library Instruction at the University of Nebraska-Lincoln Libraries (Lincoln, Neb.: University Libraries, 1980) ED 197 742.
6. Linda L. Phillips and E. Ann Raup, "Comparing Methods for Teaching Use of Periodical Indexes," Journal of Academic Librarianship 4:420-23 (Jan. 1978).
7. Thomas T. Surprenant, "Learning Theory, Lecture, and Programmed Instruction Text: An Experiment in Bibliographic Instruction," College & Research Libraries 43:31-37 (Jan. 1982).
8. Jon C. Marshall and Loyde W. Hales, Classroom Test Construction (Reading, Mass.: Addison-Wesley, 1971), p.199-221.
9. Timothy D. Jewell, "Student Reactions to a Self-Paced Library Skills Workbook Program: Survey Evidence," College & Research Libraries 43:374 (Sept. 1982).
10. Surprenant, "Learning Theory," p.36.
11. For a detailed description of these cognitive objectives, see Benjamin S. Bloom, J. Thomas Hastings, and George F. Madaus, Handbook on Formative and Summative Evaluation of Student Learning (New York: McGraw-Hill, 1971), p.141-224.
12. Richard Hume Werking, "Evaluating Bibliographic Education: A Review and Critique," Library Trends 29:159-61 (Summer 1980).