Automating the Archives: A Case Study

CAROLE PRIETTO

Abstract: The establishment of an archival automation program requires that the archivist address issues of both a technical and a managerial nature. These issues include needs assessment, selection of hardware and software to meet identified needs, redesigning archival tasks in light of the system selected, and ongoing maintenance of that system. The present article discusses the issues Washington University Archives staff members faced in developing an automation program and the solutions they adopted. It concludes with a brief discussion of possible future directions for the automation program.

About the author: Carole Prietto holds a B.A. in history from the University of California at Santa Barbara and an M.A. in history from UCLA. From 1986 to 1989 she was the assistant in the UCLA University Archives; since 1990, she has been university archivist at Washington University, St. Louis.

MUCH DISCUSSION IN THE LITERATURE about archival automation concerns implementation of the USMARC AMC format and development of national descriptive standards.[1] This discussion is both useful and necessary, but it examines only one side of the automation issues archivists face. Without physical and intellectual control over collections, MARC records cannot be created and collections cannot be used. Discussions of the application of personal computers and commercial software packages to archival processing tasks are scarce. A search of the American Archivist and Archival Issues (formerly the Midwestern Archivist) going back to 1980 revealed only two such articles. In a 1990 article in the Midwestern Archivist,[2] Richard J. Hite and Daniel Linke outlined the use of a personal computer and WordPerfect in a team approach to processing at the Western Reserve Historical Society. In a 1991 American Archivist article,[3] James G. Carson outlined his repository's use of WordPerfect and Minaret.

[1] For discussion of the MARC AMC format and the development of descriptive standards, see especially the papers of the Working Group on Archival Description, reprinted in the American Archivist 52 (Summer 1989) and 53 (Winter 1990), with extensive bibliography; and David Bearman, ed., Toward National Information Systems for Archives and Manuscript Repositories: The National Information Systems Task Force (NISTF) Papers, 1981-1984 (Chicago: Society of American Archivists, 1987). Anne J. Gilliland, ed., "Automating Intellectual Access to Archives," Library Trends 36 (Winter 1988) is devoted entirely to microcomputer applications in an archival setting, as is American Archivist 47 (Summer 1984).

[2] "Teaming Up with Technology: Team Processing," Midwestern Archivist 15, no. 2 (1990): 91-98.

[3] "The American Medical Association's Historical Health Fraud and Alternative Medicine Collection: An Integrated Approach to Automated Collection Management," American Archivist 54 (Spring 1991): 184-91.

Even more scarce are discussions of the decision-making process that results in the implementation of an automation program. My purpose here is to discuss the implementation of the automation program at the Washington University Archives.
This process consisted of a number of steps over a three-year period: evaluating the existing hardware and software; selecting a new database management package;[4] installing and setting up the new software and training staff in its use; adding OCLC and NOTIS access to facilitate MARC AMC cataloging; and, finally, adding our first MARC AMC records to a national bibliographic database. The discussion will conclude with an outline of some possible future directions for our automation program.

[4] The commercial database management packages discussed here are trademarks of their respective manufacturers. The author has no connection with any of the manufacturers whose products are discussed here.

The Beginning of Special Collections Automation

The Washington University Special Collections Department purchased its first personal computers in 1988, two years before my arrival. At that time, the department's hardware consisted of four IBM-compatible computers and two printers. The printers were shared by way of a local area network and a network program, LANtastic. WordPerfect was chosen for word-processing needs. In addition, a database management software package was needed for archives and manuscripts processing. The program chosen was Marcon, then manufactured by AIRS, Incorporated.

My predecessor did not make automation a high priority, and the personal computer in University Archives, when it was used at all, was used for correspondence. Item-level finding aids, accession registers, and statistics were prepared, as they always had been, on a typewriter. The result was a small archives staff overburdened with clerical tasks while facing both a large backlog and a heavy reader services load.

Soon after my arrival in 1990, it became apparent that the time had come to take a critical look at archives procedures, with an eye toward streamlining them. Automation offered a means to do this. We had the computers and the software; what we now needed was a plan to exploit our personal computers' capabilities to the fullest.

Needs assessment came first: what activities should be automated? The activities best suited for automation were those frequent and repetitive in nature, heavily paper-based, and involving a great deal of word processing. In examining our operations, we found the activity that best fit these criteria was the creation of archival and manuscript finding aids. Once we decided what to automate, we had both the tools (personal computers and software) and a clear sense of what we wanted to accomplish.

Before any further progress could be realized, I had to learn to use Marcon Plus, the database package I had inherited, and then train the archives staff in its use. My strategy for learning Marcon was to find a test collection, design a database structure for that collection, enter data into Marcon, and generate a finding aid. These steps would provide the training I needed, which I could then pass on to the staff. The database file and finding aid that resulted could be used to evaluate Marcon's indexing, searching, and reporting capabilities.

The test collection was a group of audiotapes documenting Washington University's ongoing lecture program, the Assembly Series. The Assembly Series was an ideal test collection because of its size (about 1,000 tapes) and the need to increase the number of access points to the collection. The only finding aid available for the collection was a typed list of the lectures in rough chronological order; cross-indexes by speaker's name, lecture title, or sponsoring organization did not exist. Putting this information into a database structure would enable us to generate these cross-indexes easily and to update them as more information was added to the database.
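To make the idea concrete, here is what such cross-indexing looks like expressed in dBASE III+ terms, the package the archives ultimately adopted (the test database itself was built in Marcon, whose command syntax differed). The file and field names are invented for illustration:

    * ASSEMBLY.DBF holds one record per tape, with fields for the
    * speaker, lecture title, sponsoring organization, and date.
    USE assembly

    * Build one index file per access point. Any index that is open
    * during data entry is updated automatically as records are added.
    INDEX ON speaker TO byspeak
    INDEX ON ltitle TO bytitle
    INDEX ON sponsor TO byspons

    * Print an up-to-date cross-index by speaker's name.
    SET INDEX TO byspeak
    LIST speaker, ltitle, ldate TO PRINT

Each printed list is simply a by-product of the database, so producing a current cross-index costs no more than reissuing the commands.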
Unfortunately, Marcon had a number of drawbacks. The first I noticed was that the program's processing speed slowed dramatically after we entered one collection of approximately a thousand records. Over time, thousands of records would be entered into the database, and I did not want a program that would bog down in the presence of large files. We also discovered other problems. Printed reports—an important component, because the program would have to produce not only finding aids but also the results of on-line searches—were difficult to set up and difficult to modify. The data structure could not be modified except by completely erasing the file and reentering the data. For no apparent reason, indexes became corrupt, and on several occasions hundreds of records were lost.

Because of these problems, I proposed dBASE III+ for archives use. Concerns were raised about dBASE's lack of compatibility with the MARC format and the feasibility of having two database management systems within Special Collections. I was assigned to investigate database management systems used in archival and manuscript repositories and make recommendations to the library. The head of Special Collections and the associate dean for Collections and Services gave me permission to use dBASE on a trial basis, pending the outcome of my investigation.

Investigating the Options

The investigation of database management programs began in February 1990. The first step was ascertaining what programs were used in archival repositories. To find out, I queried archival colleagues in the St. Louis area. At this beginning stage, I was interested only in basic information: what programs were used, who the manufacturers were, how much the programs cost, and the strengths and weaknesses of the respective systems. Follow-up contacts were made with manufacturers, who provided sales literature, user-group information, and demonstration disks. Several additional programs came to my attention through a software review column in the Midwestern Archivist.[5]

[5] Glen McAninch, ed., "Reviews: Computer Applications Programs," Midwestern Archivist 11, no. 1 (1986): 69-83. The programs reviewed were dBASE III, PFS File/PFS Report, DataEase, Savvy PC 4.0 and 5.1, Marcon II, PC File III, and DB Master 4 Plus.

The search resulted in a preliminary list of seventeen database management programs. Based on information from software reviews and comments from users, the initial field of seventeen was narrowed to seven: Advanced Revelation, dBASE III+, Georgetown Archives Management System, Marcon Plus, Workflow, MicroMARC:amc, and Minaret. The 1990 Society of American Archivists (SAA) meeting in Seattle played an important role in the database management project because it provided an opportunity to obtain detailed information about all seven programs.
The Marcon, MicroMARC, and Minaret user groups would be meeting, and information-sharing sessions (called "swap shops") for users of Advanced Revelation, dBASE, and Minaret were part of the program. In preparation for the Seattle meeting, I reviewed the literature and user comments I had received for our final group of seven programs and worked out, in consultation with other staff in Special Collections, the criteria to be used for selecting our database management system. They were as follows:

• Reliability. Had other users experienced loss of data or system crashes while using a program?

• Ease of use. Factors to be considered included ease of installation and setup; the amount of time and level of technical knowledge needed to learn the program; the extent to which the program would allow modifications in either the data structure or the data itself; and the quality of the user interface. A related factor was that no one on the Special Collections staff, and few people within the Olin Library System, had expertise in computer programming. Because of this, it was important that our database management system not be dependent on such expertise.

• Adaptability. The system should be adaptable to the needs of both the University Archives and the Manuscripts Section. The primary concern was whether the program could accommodate both folder- and item-level description.

• Quality of documentation. Is it easy to understand? Does the program come with a tutorial, either print or on line? If so, how useful is it in learning the program? A related factor was the availability of resources beyond those provided by the manufacturer: are there user groups, classes, or books available to assist the user?

• Manufacturer's support of the product. Does the customer have to pay for technical support? How much support, if any, is included in the purchase price, and what is the cost of ongoing support? Do users have difficulty getting through to the manufacturer? Are they happy with the service they get? In the case of programs developed by an individual, how much technical support could we expect from the developer, and how much would it cost?

• Cost implications. Cost was interpreted not only as the cost of the program itself but also as the costs associated with technical support and the level of hardware needed to run the program. It was important to have a program that would run on our existing hardware with little or no sacrifice of performance. Could the program run on our local area network?

Comments from the Marcon Users Group at the Seattle meeting confirmed what I had experienced. Others had encountered problems such as sudden and unexplained locking of the keyboard, corrupted indexes, data loss, and report forms that produced garbage text. The comments related to performance alone were enough to remove Marcon from contention, but the users had other concerns: a poorly written manual, lack of a tutorial, and poor technical support from the manufacturer.
Marcon's manufacturer, Interactive Support Services, did not send a representative to the meeting, with the result that the chair of the user group had the unenviable task of addressing the concerns of a hostile group of Marcon users. The manufacturer was working on a new release of Marcon that would fix the many bugs in the program, but the release had no definite shipping date. It was also announced that all work on the development of MARC-MARCON, a Marcon Plus utility that would have given the program the capacity to create MARC records, was being abandoned because the archival market was too small to warrant the costs involved. By the end of the meeting, many user-group members were speaking openly about plans to abandon Marcon. Their comments made it plain that we, too, would be best served by moving in a new direction. Fortunately, we were in a position to do so because our investment in Marcon had been small.

Workflow and the Georgetown Archives Management System (GAMS) are derived from dBASE III+. Workflow is written in the dBASE programming language; GAMS is written using the dBASE language and the Clipper compiler.[6] Both were designed to meet the needs of specific institutions (UCLA and Georgetown University, respectively) by staff from those institutions.

[6] A compiler is a program that converts a user's program files to stand-alone applications that do not require the presence of a particular program to run and can be legally distributed. The presence of Clipper means that GAMS, unlike Workflow, does not require the presence of dBASE to run. In fact, GAMS makes use of features found only in Clipper that prevent it from running directly in dBASE III+ or dBASE IV. One such feature allows GAMS to accommodate up to 64 kilobytes of free-text description per folder, bypassing dBASE's limit of 254 characters per field.

The Georgetown system recognizes three levels of hierarchical description used in archives and manuscripts: collection, box, and folder. Data for each level is linked to the next with a machine-generated ID number. Index terms, filled in by the user during data entry, may be linked to each folder record and can be searched. The results of searches can be displayed on screen or printed, and the system can generate finding aids at the folder level.

In creating the Workflow system, the developers began with the premise that processing requires a number of products above and beyond the finding aid, such as gift acknowledgements, monthly and annual statistics, inventories of archival supplies, and, of course, the MARC record.[7] Workflow consists of a series of databases and programs designed to track the actions taken on a collection, beginning with the initial contact with the donor and continuing with accessioning, creating a finding aid, and cataloging. Information pertaining to a given collection exists independently in the various databases until the programs format the data into whatever product is desired: accession register, gift acknowledgement (including news releases), finding aid, or MARC record, as appropriate.

[7] For a detailed outline of the Workflow system, see Dan Luckenbill, "Using dBASE III+ for Finding Aids and a Manuscripts Processing Workflow," Rare Book and Manuscript Librarianship 15, no. 1 (1990): 23-31.
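Neither program's internal design is reproduced here, but the linking technique the Georgetown system uses can be sketched in the dBASE language from which both programs derive. The file names, field names, and ID value below are invented, and the sketch shows only two of the three levels:

    * COLL.DBF holds one record per collection; FOLDER.DBF holds one
    * record per folder. A shared COLLID field, assigned by the
    * system, ties each folder to its parent collection.
    SELECT 2
    USE folder
    INDEX ON collid TO collidx

    SELECT 1
    USE coll
    SET RELATION TO collid INTO folder

    * Moving the record pointer in COLL now positions FOLDER at the
    * first folder belonging to that collection. To list every folder
    * for a single collection, query the child file directly:
    SELECT 2
    LIST box, heading FOR collid = "C0042" TO PRINT

A box level between collection and folder, linked the same way, would give the three-tier structure the Georgetown system describes.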
For both programs, my primary concern was adaptability: neither program was able to accommodate the item-level description needed by our manuscripts curator. Another concern was the availability of technical support. Ashton-Tate, then the manufacturer of dBASE, had a policy of not providing assistance to users of customized dBASE applications. The developers of Workflow and GAMS were full-time archivists in Los Angeles and Washington, D.C., respectively. The distance to St. Louis from either location would make site visits prohibitively expensive and difficult to arrange, and telephone support would have to be scheduled around the developers' work schedules.

Advanced Revelation (A-Rev.), manufactured by Revelation Technologies, was the most powerful program I saw. A-Rev. consists of an array of programming tools that allow the user to custom design a complete database management system without having to write programming code. These tools allow the developer to paint data entry screens (fields can be placed anywhere on screen, and the developer can determine how much or how little data shows on screen), develop multiple levels of menus, develop pop-up windows that provide the user with lists of options at any point, and employ multiple levels of data verification. Data fields are stored in a central data dictionary, allowing changes to be made in the database structure without requiring that the developer restructure the entire data file or modify an entire application. A-Rev. allows variable-length description and can accommodate records up to 64 kilobytes in size. Boolean and proximity searches are both possible; report forms used for finding aids can be developed and stored in a centralized reports library. If the tools provided are not adequate, the developer can create others, thanks to the presence of a programming language, R-BASIC, and an internal compiler and debugger.

All the A-Rev. users I met commented that the user pays a price for A-Rev.'s power in the form of a steep learning curve: the program is difficult to learn. Having read the program's sales literature, I had to agree with their assessment. Clearly, the program's power was going to present a significant obstacle for us. There were no A-Rev. user groups in the St. Louis area. No one in Olin Library had heard of the program, much less knew how to use it. Thus, we would be faced with mastering a difficult program with few local resources to draw on. Although the training issues were serious, even more serious was the discovery, gained from conversations with other A-Rev. users, of two hardware limitations. The first was that A-Rev.'s file management system could not handle volumes of data larger than 32 megabytes, thereby putting an upper limit on the amount of information we could store in our computers. The second was that A-Rev. would not run on our local network without significant reconfiguration of all our existing hardware. For those two reasons, A-Rev. was not considered the best option.

The database management program that emerged as the best option for Special Collections use was dBASE III+. Its cost was the lowest,[8] and its performance was the least affected by having to run on older, slower computers.
Unlike A-Rev., dBASE III+ could easily run within our network, and it set no limits on how much data could be stored: dBASE III+ can handle as much as the computer's hard disk can hold.[9] Unlike GAMS and Workflow, dBASE III+ could accommodate the differing descriptive needs of the University Archives and the Manuscripts Division. Because dBASE III+ was a commercial product, rather than the product of an individual developer, customer service was a phone call away at any time. The software was widely used within the archival community, and the many tutorial books, reference guides, and third-party utilities designed for it constituted a virtual dBASE industry.[10] It had the additional advantage of strong institutional support. During the course of the database management study, the library administration had selected dBASE as the officially supported database manager. This meant that on-site service (if needed) and upgrades could be easily obtained. It also meant that, in terms of information sharing with other units, dBASE would not isolate us from the rest of the library system.[11]

[8] The low cost was due in part to the fact that dBASE III+ was beginning to give way to dBASE IV. Although dBASE IV was the newer product, I never considered it for our use because the early versions of dBASE IV received poor reviews in the popular personal computer journals. dBASE III+, on the other hand, was a product with a proven track record, and Ashton-Tate had no plans to stop supporting it. Since that time, Ashton-Tate has been taken over by Borland International, and dBASE III+ is no longer manufactured or supported. Borland has made a number of improvements to dBASE IV, and we will be upgrading our database manager in the near future.

[9] dBASE III+ has a limit of one million records per file but no limit on the number of files that can be created. In effect, dBASE can handle as much data as can fit on the hard disk. With Advanced Revelation, 32 megabytes is all the program can handle, even if the hard disk has 200 megabytes of free space. This problem can be solved using multiple DOS partitions, but such partitioning is not possible with later versions of DOS.

[10] A related consideration at the time was that both Ashton-Tate and dBASE had remained stable for many years; thus we could be reasonably confident that Ashton-Tate and dBASE would be stable entities over the long term. Within six months after the conclusion of the database management study, Ashton-Tate became a subsidiary of Borland International, one of dBASE's former competitors.

[11] It should be noted that the Special Collections Department was not at any time forced to use dBASE. The library administration encouraged us to look at a number of options and propose the solution we felt was best.

One thing dBASE III+ could not give us was the ability to create MARC records that could be loaded into the OCLC database and our local NOTIS catalog. To that end, Minaret and MicroMARC were explored. We were evaluating not only the usefulness of these programs for creating and exporting MARC records, but also whether one of them would substitute for, or serve as an adjunct to, dBASE III+.
I preferred Minaret to MicroMARC because it had a more user-friendly interface, could work with word-processing programs such as WordPerfect to create finding aids and catalog cards, could read dBASE files, and was more widely used by archival colleagues in the St. Louis area. While in Seattle, I spoke with representatives from the manufacturer and attended the Minaret users group meeting. On returning to St. Louis, I obtained demonstration disks, tried out the program, and consulted with Minaret users in the St. Louis area. In the end, we decided against a PC-based MARC AMC utility and in favor of a modem with OCLC's Passport software.[12] Besides being a more cost-effective solution, Passport would allow us to enter MARC AMC records directly into OCLC without having to convert data into a format OCLC could read, as would be necessary with Minaret or MicroMARC.[13] Another benefit would be access to the OCLC authority file and other utilities we would need for our cataloging.

[12] Passport is the terminal emulation software for OCLC's PRISM system. In layman's terms, Passport enables a personal computer to function as an OCLC terminal.

[13] Minaret could send records to OCLC over telephone lines using the ProComm telecommunications package and a third-party utility; however, this required a number of data conversions. For a discussion of Minaret's uploading procedure, see Carson, "American Medical Association." MicroMARC had, at the time, no way to send AMC records to OCLC via telephone lines. MicroMARC users had to copy completed records to a floppy disk and send them to Michigan State University, where the records were tape-loaded into OCLC via the university's mainframe. Both MicroMARC and Minaret have since added modules for importing and exporting MARC records.

The study of database management needs in Special Collections ended in October 1990 with a two-part recommendation to the library administration: full access to OCLC and NOTIS for cataloging needs, and dBASE III+ for in-house database management functions, including the preparation of the finding aids that make a MARC record possible. This recommendation was accepted, and work with dBASE began in November 1990.

Implementing the System

Once dBASE was installed, the next tasks were staff training and putting dBASE to use in the archives. The Assembly Series database, created during the initial trials with dBASE and originally designed with that specific collection in mind, was modified so that it could accommodate audiovisual materials from all collections. Databases for folder-level description of paper records and item-level description of printed items were added. Because our databases are grouped along lines of format (paper, audiovisual, or print), it is possible for items from the same collection to appear in three different databases. To keep track of where information was stored, and to assign classification numbers, two more databases were created for collection-level data: one for university records and one for our St. Louis-area manuscripts.
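The resulting arrangement can be pictured with a short hypothetical session; the file names, field names, and record group number are invented. A collection-level record carries the classification number, and the same record group number retrieves related material from each format-based file:

    * RECGRPS.DBF holds collection-level data for university records;
    * each format lives in its own file, tied together by a shared
    * record group number.
    USE recgrps
    LIST rg, title FOR rg = "0057"

    * Pull the same record group out of each format-based file.
    USE papers
    LIST box, folder, heading FOR rg = "0057"
    USE audiovis
    LIST tapeno, speaker, ltitle FOR rg = "0057"
    USE printed
    LIST itemno, title FOR rg = "0057"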
The implementation of dBASE was accompanied by changes in our processing procedures for paper records. Emphasis was placed on folder-level, rather than item-level, processing. This change in emphasis maintains intellectual control over collections while reducing the amount of staff and student time needed to process them. Another change is that we no longer create draft finding aids at various stages of processing. As part of the arrangement and description process, we review old folder headings and assign new ones when appropriate, as we did in the past. Once the arrangement and description are complete, the processor, working from the folders themselves, enters the finished folder-level data into the computer database. When all folders have been entered, the processor generates the finding aid using the dBASE report form. Audiovisual materials and printed items are described at the item level, but they are checked in using the computer rather than manually. The computer greatly simplifies the process of shelving an item, updating box and folder numbering, and producing a corrected finding aid.

Of course, dBASE could accomplish nothing unless the staff and student assistants were trained to use it. Like all university archivists, I am faced with a constant turnover of student assistants, so dBASE training is ongoing. It is also incremental. In teaching a student the basics of dBASE, I begin with an introduction to our databases and the types of materials they describe. This introduction serves a dual purpose: in explaining what the various database fields mean, I am also giving a primer on the principles of archival arrangement. Because the manuscripts, publications, and audiovisual databases have similar data structures, learning to navigate one file means that the others can be quickly mastered. The students are first taught the four most basic commands used in data entry: USE, EDIT, APPEND, and QUIT. Once those commands are mastered, I move on to searching commands, such as LOCATE, LIST, and DISPLAY, and global-replace commands such as REPLACE WITH, which expedite data entry by reducing the amount of repetitive typing. With few exceptions, students are comfortable with the computer and have little trouble with dBASE commands.
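A training session might walk through a sequence like the following: a hypothetical dot-prompt session, with invented file and field names, ending with the report form a processor would use to produce a finding aid.

    * Data entry: APPEND opens the full-screen form for new folder
    * records; EDIT reopens an existing record for correction.
    USE papers
    APPEND
    EDIT

    * Searching: find, display, and list matching records.
    LOCATE FOR "Eliot" $ heading
    DISPLAY rg, box, folder, heading
    LIST heading FOR rg = "0057" .AND. box = 4

    * Global replacement cuts down repetitive typing.
    REPLACE ALL series WITH "Correspondence" FOR series = " "

    * When every folder has been entered, a stored report form
    * (built earlier with CREATE REPORT) prints the finding aid.
    REPORT FORM findaid FOR rg = "0057" TO PRINT

    QUIT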
Once the students are familiar with both dBASE and basic archival hierarchy, they are introduced to actual processing of collections. For them, processing is a welcome addition to the more routine tasks of refoldering, paging and retrieval, and photocopying. For me, there is the challenge of assigning appropriate work. As a recent American Archivist article rightly points out,[14] student assistants cannot substitute for staff, and their work assignments must reflect that fact. When I assign processing projects, the students are given collections that need little rearrangement but do need review of folder headings and input into the computer. Collections that require complex rearrangement, require access decisions, or have significant preservation problems are processed by my assistant. The staff consists of one half-time paraprofessional and two undergraduate student assistants, and it is not unusual to have three or four projects in progress simultaneously. The volume of materials processed with the same number of staff has increased dramatically.

[14] Barbara L. Floyd and Richard W. Oram, "Learning by Doing: Undergraduates as Employees in Archives," American Archivist 55 (Summer 1992): 440-52.

MARC records, however, are not a student task. Because I was to be responsible for creating catalog records for archival collections, I, too, needed training, some of which was provided by technical services staff in Olin Library. Like many archivists, I have taken the SAA workshop on Archives, Personal Papers, and Manuscripts; I have also taken a course on OCLC authority files sponsored by our regional OCLC office. Additional workshops will be necessary in order to keep up with current cataloging practice.

The procedures used for creating MARC records take full advantage of OCLC's ability to copy screens to a personal computer's hard disk. First, the OCLC workform for AMC records is copied to the computer's hard disk and saved as a WordPerfect file. This procedure allows extensive editing of the record without incurring large amounts of connect time. Once the basic information (main entry, title, physical description, organization and arrangement, restrictions, scope and content, biographical/historical note) is in final form, the WordPerfect file containing the filled-in workform is printed out. Data from the printout is keyed into OCLC and added to the OCLC save file. Potential subject and added entries are searched in the OCLC authority file, and appropriate entries are added to the saved record. Technical services staff review the completed record for conformity with both OCLC conventions and AACR2; then the record is added to the OCLC database. OCLC records are loaded into the library's NOTIS system via the library's weekly tape load, without the need for further intervention on our part.

As of February 1994, our databases contain approximately 25,000 collection-level, folder-level, and item-level records spanning over 70 record groups. As the databases grow in size and scope, we gain an increasingly useful on-line searching tool. Already we have gained greater productivity from the same number of staff. Now that the various components of our automation program are up and running, keeping it running smoothly is an important activity. To that end, we perform regular backups of our data and regular checks of our hardware for viruses and signs of impending hard-disk failure.

Planning for the future is already taking place in a number of areas. A number of special collections departments are now using the Internet communications protocol known as Gopher to make finding aids available over the Internet. We are exploring ways to do the same.[15]

[15] Seventeen special collections departments have set up Gopher servers as of February 1994, and the number continues to increase. An important part of making our finding aids accessible over Gopher will be retrospective conversion of older finding aids that exist only in typed form.

Planning and developing an automation program taught me two important lessons. The first was that developing an automation program has both technical and managerial components, and neither is more important than the other. In fact, the two are constantly overlapping. The second lesson was that an automation program is a continually developing process. In the beginning, I assumed that selecting a system would mean the end of my work. I now know it was only the beginning of an ongoing, long-term process.