Question Master: An Evaluation of a Web-Based Decision-Support System for Use in Reference Environments

John V. Richardson Jr.

John V. Richardson Jr. is an Associate Professor in the Department of Library and Information Science of the Graduate School of Education and Information Studies at UCLA; e-mail: jrichard@ucla.edu.

Designed for librarians, Question Master (QM) (at http://purl.org/net/Question_Master) is a decision-support system automating some of the more routine, fact-type reference questions encountered in libraries. A series of Web pages guides librarians through a set of clarifying questions before recommending an appropriate electronic or relevant print resource from WorldCat, the OCLC Online Union Catalog. The goal is to improve the accuracy of reference transactions, which in turn should lead to increased end-user satisfaction. Based on usability studies of QM's biographical module, this study found that although the system already was easy to use, its usability could be improved in several ways. QM answered 100 percent of the test questions, with an accuracy rate of 66 percent compared to Weil's 64 percent. In addition, QM's accuracy was substantially better than that reported in most studies of real reference environments, and certainly better than the Internet results of 20 percent for HotBot and 30 percent for AltaVista.

Writing in 1964, Jesse Shera said: "The popular conception of automation as applied to the work of the reference librarian suggests a mechanical marvel from which accurate and authoritative answers to questions will be disgorged in immediate response to a push of the proper button."1 In the future, intelligent technology could answer many types of questions from anywhere in the world. For example, both reference librarians in any type of library and end users at school, work, or home could query an intelligent (a.k.a. expert or knowledge-based) front end to their local OPAC. Such queries could draw on the profession's database of shared cataloging records; the practical benefits are obvious.

More specifically, the OCLC Online Computer Library Center in Dublin, Ohio, provides access to 36 million OCLC/MARC-formatted bibliographic records, including reference works, via WorldCat (the OCLC Online Union Catalog). The reference records are accessible by the 049 field, and local holdings by the LCC (050 field) or DDC (082 field) classification number, delimited by the double dagger symbol (‡). Furthermore, the 6xx field, subfield x, provides access to the standard form subdivision2 that further identifies types of reference sources (e.g., biographical sources, dictionaries, encyclopedias, and indexes).
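To make that record structure concrete, the sketch below shows how such form subdivisions might be read programmatically. It is illustrative only, not QM's own selection logic: it assumes the third-party pymarc library and a hypothetical local file of OCLC/MARC records ("worldcat_sample.mrc"); the field tags follow the description above.

```python
# Illustrative sketch only; assumes the third-party pymarc library and a
# hypothetical local file of OCLC/MARC records. Field tags (049 holdings,
# 050 LCC / 082 DDC class numbers, 6xx subfield $x form subdivision)
# follow the description in the text.
from pymarc import MARCReader

BIOGRAPHICAL = "biography"  # form subdivision to look for, e.g., 650 $x

def is_biographical_reference(record):
    """True if any 6xx field carries a form subdivision naming biography."""
    for tag in ("600", "610", "650", "651"):
        for field in record.get_fields(tag):
            if any(BIOGRAPHICAL in s.lower() for s in field.get_subfields("x")):
                return True
    return False

with open("worldcat_sample.mrc", "rb") as handle:
    for record in MARCReader(handle):
        if record is None:  # pymarc yields None for unparseable records
            continue
        if is_biographical_reference(record):
            title = record["245"]                       # title statement
            holdings = record.get_fields("049")         # local holdings code
            call_nos = record.get_fields("050", "082")  # LCC / DDC class numbers
            print(title.value() if title else "(no title)",
                  [f.value() for f in holdings + call_nos])
```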
Considering that more than fifty extant first-generation expert systems for reference service have been reported in the library and information science literature, proof of concept clearly exists.3 Not only does proof of concept exist, but the author also has articulated the architectural logic of doing reference work, and there are more than 1,500 additional rules available for building a second-generation knowledge-based system.4

The Research Problem

Stated formally, the problem is that more than 250 million reference questions are asked in U.S. public libraries every year, according to the National Center for Education Statistics.5 Many more questions in business and at home go unanswered every year due to information technology barriers that need not exist. Moreover, research suggests that the accuracy of the librarian's response is only about 50 percent: one out of every two questions is answered correctly.6 Therefore, an intelligent decision-support system (IDSS) could serve librarians well. An IDSS could free up the valuable time of reference librarians so that it could be spent answering the more demanding research-type questions. In the long run, an IDSS would reduce a library's costs of providing access to information. Such a system could be available full-time from any Internet-accessible computer, could recommend the single best source regardless of language, and could record the complete transaction.7

Research Goals and Objectives

One way to bring the present reality and the future closer together would be to implement a second-generation, question-answering prototype using the World Wide Web. Thus, the threefold, overarching goals of this research project are to support the decision-making process of librarians by automating some of the more routine, fact-type biographical reference questions, to improve the accuracy of reference transactions, and to increase end-user satisfaction. The project's three specific objectives are:

1. to implement a Web-based system that will select the single most appropriate resource, either print-based or electronic, regardless of language, in order to answer the end user's query;
2. to evaluate its usability;
3. to test its accuracy (and then compare and contrast its results with earlier studies of human reference librarians and computer-based systems).

Related Research

Based on an extensive review of the professional literature, the author discovered at least three prior efforts to develop a biographical question-answering system.8 These are reviewed in chronological order.

In 1967, Cherie B. Weil, a student in the Graduate Library School at the University of Chicago, undertook her master's thesis, entitled "Classification and Automatic Retrieval of Biographical Reference Books," under the direction of Professor Victor Yngve.9 Writing her program in COMIT, a list-processing language, she designed a mainframe batch program, at an estimated cost of $900, to make use of 234 biographical reference sources, which she characterized on the basis of eight points: living/dead, nationality, gender, occupation, religion, race, memberships, and date. Arguing that "there are not enough reference librarians who have perfect recall of their collections," she tested her system with fourteen test questions randomly drawn from an advanced reference syllabus and discovered that it could answer eight (66.6%) of those questions accurately.10 This figure became the unofficial goal to beat.
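Weil's eight-point characterization amounts to a small facet-matching exercise: profile each source on the eight points, then keep the sources whose profiles do not contradict what is known about the person in question. Her COMIT program is not reproduced here, so the sketch below is a loose reconstruction; the two profiled titles and the matching rule are illustrative assumptions.

```python
# Loose reconstruction, not Weil's actual COMIT program: the profiles and
# matching rule are illustrative assumptions. Each source is profiled on
# her eight points; any source contradicting a known facet is dropped.
EIGHT_POINTS = ("living", "nationality", "gender", "occupation",
                "religion", "race", "memberships", "date")

SOURCES = {
    # An omitted facet means the source does not restrict it.
    "Dictionary of American Biography": {"living": False,
                                         "nationality": "American"},
    "Who's Who in America": {"living": True, "nationality": "American"},
}

def candidates(known_facets):
    """Return titles whose profiles are consistent with what is known."""
    hits = []
    for title, profile in SOURCES.items():
        if all(profile.get(facet) in (None, value)
               for facet, value in known_facets.items()):
            hits.append(title)
    return hits

# A question about a deceased American keeps only the first title:
print(candidates({"living": False, "nationality": "American"}))
```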
The next reported system using biographical sources is the Biographical Reference Advisor, developed in 1987 at the Decker Center for Information Technology at Goucher College (Maryland) by Robert Lewand, professor of mathematics and computer science, and Larry Bielawski, director of the center. They relied on Yvonne Lev and Barbara Simons as their domain experts. Using an IBM personal computer (PC) as their platform, they selected shell software, called First Class, to implement a menu-driven program of 680 nodes on a total of eighty-five decision trees. Biographical sources were characterized by their coverage of contemporary versus historical figures; fourteen different occupations; twelve nationalities; and gender. Development required five months and cost approximately $2,500. Informal evaluation reports indicate that "students rarely use the system," which underscores the importance of usability testing.

During the fall quarter of 1987, the author introduced an expert system assignment in sections of the required reference services course sequence at UCLA's Graduate School of Library and Information Science. Over the next several years, the author's graduate students developed a variety of modules, including several biographical ones, using ESIE, a shareware backward-chaining shell. In addition to positive class evaluations and a multistudent presentation at the 1988 ASIS midyear meeting, several students reported on their experience in the LIS literature.11 This experience contributed significantly to the author's knowledge of the methodological issues involved in developing and evaluating first-generation systems.

Research Questions and Provisional Hypothesis

Based on the preceding literature and objectives, three questions arise: First, how easy is QM to use? Second, can QM answer questions, and how accurately? Third, how does QM compare to Cherie Weil's pioneering 1967 work and to brute-force searching of the Internet for biographical information (in other words, have we made any progress in the field of intelligent question answering)? Finally, the author proposes the null hypothesis that there is no statistically significant difference in accuracy between QM, Weil's system, and what one can find on the Internet. Answers to these questions will tell the profession about the promise of intelligent question-answering systems and give system designers insight into promising approaches.

Methodology

This section covers the construction of QM, the design of its usability testing, and the scoring of its accuracy.

HTML Pages

QM is a series of HTML pages that guide librarians through a set of clarifying questions before recommending the single most appropriate electronic or relevant print resource from OCLC's WorldCat. Since December 1996, QM has been available for public use and testing at http://purl.oclc.org/net/Question_Master (see figure 1).

[FIGURE 1. QM Opening Page]

At present, the biographical module contains 159 reference sources (see figure 2).

[FIGURE 2. Biography Module Opening Page]

This approach assumes that the reference problem/solution boundaries (i.e., the search space of all reference questions and reference sources) are finite. Furthermore, the implementation is predicated on the so-called Mudge Method, or Hutchins Heuristic: that each and every reference source can be classified into a finite number of categories for cognitive efficiency.12 Cognitive effectiveness is achieved by classifying reference questions by format and then by specific source. To determine their utility in the reference environment, the author used a modified distinctive feature analysis to categorize the sources. Hence, the interface poses questions much as a reference librarian should in order to reach the correct conclusion.

As alluded to above, the actual implementation of the author's intelligent question-answering system is based on decision rules using a multiple-choice classification process (a sketch of the idea appears below). Without a doubt, the theory is a reductive transformation of the reference librarian's complex decision-making task; nonetheless, the advantage is that it converts this complex task into a much more manageable one for a computer-mediated environment.
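The following minimal sketch illustrates that multiple-choice classification process. It is not QM's actual page logic or knowledge base: the clarifying questions and titles below are invented for illustration, but they show how each answer prunes the finite search space until a single recommendation remains.

```python
# Minimal sketch of the multiple-choice classification idea, not QM's
# actual page logic: questions, answers, and titles are invented for
# illustration. Each answer walks one branch of a finite decision tree
# until a single recommended source (a leaf) remains.
TREE = {
    "question": "Is the person living or deceased?",
    "living": {
        "question": "What is the person's nationality?",
        "American": "Who's Who in America",
        "unsure": "Biography and Genealogy Master Index",
    },
    "deceased": {
        "question": "What is the person's nationality?",
        "American": "Dictionary of American Biography",
        "unsure": "Biography and Genealogy Master Index",
    },
    "unsure": "Biography and Genealogy Master Index",
}

def recommend(node, answers):
    """Follow the clarifying answers down the tree to a single title."""
    while isinstance(node, dict):
        answer = answers.get(node["question"], "unsure")
        node = node.get(answer, node["unsure"])  # "unsure" is always an option
    return node

print(recommend(TREE, {"Is the person living or deceased?": "deceased",
                       "What is the person's nationality?": "American"}))
# -> Dictionary of American Biography
```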
Evaluation

The detailed procedure for evaluating the accuracy of expert systems has been posited in the literature by John V. Richardson and Rex Reyes.13 Simply put, it consists of employing a set of validating test questions that might be encountered in a typical academic or large public library and scoring the answers on an eight-point scale (see table 1). In essence, this scheme rewards efficiency, following Cutter's maxim that one does not want to waste the user's time.

TABLE 1
Taxonomy of Potential Responses

Score | Range of Responses | Service Quality
5.0  | Referred to a single source with a complete and correct answer | Excellent
4.0  | Referred to several sources, one of which gave a complete and correct answer | Very good
3.0  | Referred to a single source that does not lead directly to an answer but serves as a preliminary source | Good
2.0  | Referred to several sources, none of which leads directly to an answer but one of which serves as a preliminary source | Satisfactory
1.0  | No direct answer; referred to a specific source/person/institution | Fair/poor
0.0  | No answer; no referral (e.g., "I don't know") | Failure
-1.0 | Referred to a single inappropriate source | Unsatisfactory
-2.0 | Referred to several sources, none of which answers | Most unsatisfactory

Source: Suggested by Ralph Gers and Lillie J. Seward, "Improving Reference Performance," Library Journal 110 (Nov. 1, 1985): 32–35; and Cheryl Elzy, Alan Nourie, F. W. Lancaster, and Kurt M. Joseph, "Evaluating Reference Service in a Large Academic Library," College & Research Libraries 52 (Sept. 1991): 454–65.
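Table 1's scale is mechanical enough to encode directly, which helps when scoring many transcripts consistently. The snippet below is a hedged illustration: the outcome labels paraphrase the table's rows, and the function names are invented; the worked example reproduces QM's totals reported later in table 2.

```python
# Table 1 encoded as a lookup so transcripts can be scored consistently.
# Outcome labels paraphrase the table's rows; function and variable names
# are illustrative assumptions. The example reproduces QM's result in
# table 2: 33 of 50 possible points, a mean of 3.3 per question.
SCALE = {
    "single source, complete and correct":        5.0,
    "several sources, one complete and correct":  4.0,
    "single source, preliminary only":            3.0,
    "several sources, preliminary only":          2.0,
    "no direct answer, specific referral":        1.0,
    "no answer, no referral":                     0.0,
    "single inappropriate source":               -1.0,
    "several sources, none answers":             -2.0,
}

def score(outcomes):
    """Return (total, mean) for a list of Table 1 outcome labels."""
    points = [SCALE[o] for o in outcomes]
    return sum(points), sum(points) / len(points)

qm_outcomes = (["single source, complete and correct"] * 4
               + ["single source, preliminary only"] * 5
               + ["several sources, none answers"])
print(score(qm_outcomes))  # (33.0, 3.3)
```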
For this study, Weil's original 1967 questions were selected initially (see her appendix E, "Results of Accuracy Test") so that a comparison and contrast could be made with her study. Note that her questions are strong on deceased, foreign individuals, much as one would encounter in an academic library setting. Unfortunately, four of her original questions (numbers 4, 9, 10, and 11) had to be eliminated: the first was more strictly genealogical in nature, and the next was no longer a valid contemporary question. The final two did not actually use biographical sources (one involved pronunciation, which she more properly answered using a dictionary, and the other required an encyclopedia, handbook, or yearbook to answer). To enlarge the set for mathematical purposes, more public library-type questions should be added in the future from the biographical module of the OCLC Reference Collection Development Module, which logs users' questions. In this case, the questions could involve more living Americans. The author scanned the log, which represents more than two hundred users, looking for typical questions.

Finally, starting in mid-February 1997, the ten questions were used for two brute-force searches of the Web in order to see how well an unmediated search might perform in finding answers. The two searches were conducted using the largest index (31 million pages), created by Digital Research Laboratory's AltaVista Scooter®, as well as Inktomi Corporation's HotBot Slurp®, which searches at a deeper level than AltaVista.

Usability

In late March 1997, the OCLC Usability Laboratory (Ulab)14 recruited four test users to evaluate the quality of QM.15 Selected by the Ulab, the four users (three women and one man) were typical of the LIS community: white, middle-class individuals with corporate, public, and academic library experience. Their positions ranged from library clerk to former head of a large academic reference department, and each had worked with reference questions on a daily basis. These individuals were asked to use QM to find the answers to the set of test questions mentioned above. Each task was to be accomplished in two minutes, which is the average time spent on ready-reference-type questions. While being videotaped for subsequent analysis,16 users were asked to "'think aloud,' verbalizing what they are thinking and problems they encounter while doing the tasks."17 After the test, each user completed a questionnaire and was interviewed by the principal investigator and a member of OCLC's Human-Computer Interaction Team.

Research Findings

Usability

Based on the Ulab testing, QM scored an average of 4.5 on a five-point scale where 1 was "very difficult" and 5 was "very easy to use."18 In addition, several unique items were found that could be used to improve QM. In particular, its usability was increased in the following six ways. First, one page was added to define the function of each format; second, the query box was moved to the top of the biographical module; third, one page was rewritten to clarify that the system, at this time, recommends the single best source regardless of language19; fourth, another page was reformatted to indicate more clearly that "brief versus long" refers to the biographical entry in the source rather than to the bibliographic record; fifth, several pages (the ethnicity, religion, and occupational pages, specifically) were merged into a single screen following the selection of either living or deceased individuals; and sixth, additional space was added on all pages to make it clear that "unsure" is an option everywhere. Overall, users understood the screens and the terminology and knew what was going to happen next. As mentioned earlier, these changes probably account for at least 75 percent of the difficulties any user might have with the system. And even prior to these changes, all the test users thought the system was "easy to use." Now, almost any reference librarian or staff member should be able to use QM with great ease.

Evaluation of Accuracy

Based on the results presented in table 2, QM is able to answer 100 percent of all biographical questions put to it. In several instances, it could provide a single source with the complete and correct answer. However, an equal number of times, it failed rather miserably because it would recommend Biography and Genealogy Master Index when there was incomplete information about a subject, and that individual would not be listed in this source. Overall, QM scored thirty-three out of fifty points (66%), or 3.3 per question, on average. In qualitative terms, that would mean its service quality was somewhat better than good.

TABLE 2
Scoring of QM and Weil's Reference Book System

QM No. | Weil No. | QM   | Weil | Total Possible
1      | 1        |  5.0 | 2.0  | 5
2      | 2        |  5.0 | 4.0  | 5
3      | 3        |  3.0 | 4.0  | 5
4      | 5        |  5.0 | 2.0  | 5
5      | 6        | -2.0 | 2.0  | 5
6      | 7        |  5.0 | 4.0  | 5
7      | 8        |  3.0 | 4.0  | 5
8      | 9        |  3.0 | 4.0  | 5
9      | 13       |  3.0 | 4.0  | 5
10     | 14       |  3.0 | 2.0  | 5
Grand Total       | 33 (66%) | 32 (64%) | 50 (100%)
Mean Score        | 3.3  | 3.2  |
In one sense, Weil's system performed more consistently, in a narrower range, by recommending more sources each time; but although these might be judged good sources, usually only one would lead to the correct answer. For a rather long time, the user would be looking for the answer in one of those recommended sources, whereas QM would recommend only one title, and the user would know immediately upon checking the source that the answer was not there.

Compared to the Internet searches, QM is superior for several reasons (see table 3). First, many of the Internet searches yielded no results at all or, when they did, returned exceptionally large, hence useless, retrieval sets. Of course, these large sets could be reduced by using quotation marks or other techniques, although naïve users may not know of or use the advanced searching options. Second, many of the pages retrieved are no longer available. Of course, persistent uniform resource locators (PURLs) are one solution to this difficulty.20 However, when a page does return useful information (20 to 30% of the time), it is highly satisfying and supports the principle of least effort: why shouldn't relevant information be at one's fingertips?

TABLE 3
Question Master Versus Internet Search Engines

QM No. | Internal No. | QM              | AltaVista    | HotBot
1      | Weil, 1      | DSB             | 1*; 3rd–4th  | 0; 1/6 options/1000
2      | Weil, 2      | CWW; SDCB; MDCB | 0/155,264    | 0/29,888
3      | Weil, 3      | BGMI            | 0/38,882     | 0
4      | Weil, 5      | BLKO            | 0            | 0
5      | Weil, 6      | AO or BDUB      | 0/70,919     | 0
6      | Weil, 7      | MEL             | 1/7522; †    | 4/50; 1/4
7      | Weil, 8      | GDMM            | 1/10,492; ‡  | ?/173
8      | Weil, 9      | BGMI            | 0            | 0/4
9      | Weil, 13     | WWWAA           | 0/30,743     | 3–4/238
10     | Weil, 14     | BGMI            | 0/15,558     | 0/4
Success Rate          |                 | 30%          | 20%

Note: In the AltaVista and HotBot columns, the figure before the slash indicates the location of the most useful hit within the retrieval set, and the second number indicates the overall size of the retrieved set. Figures after a semicolon indicate the results of an advanced search. * = a server error; † = not found; ‡ = page returned did not contain an address.

For the Future

First, despite QM's exceptionally easy-to-use interface, the author would like to implement a form-based approach to asking the user questions about the query. Second, adding more titles could increase QM's ability to answer additional questions that might be encountered in a real reference environment. Third, deducing additional facets of biographical questions might increase QM's accuracy. And finally, someone should evaluate the Internet's accuracy more broadly, using the same criteria as discussed here.

Conclusions

Intelligent question answering is making progress. In one sense, we see technological differences, having moved from an overnight batch environment in 1967 to on-demand answers via the Web. QM is available twenty-four hours a day, seven days a week, whereas Weil's system operated in a batch mode. Nonetheless, there has been no significant difference in accuracy since 1967, although QM is more usable, more efficient, and less wasteful of the user's time. However, there is a big difference between mediated and unmediated searching of the Internet.
Brute-force searching of the Internet is still not viable, at least for this set of test biographical questions. Intelligent, mediated searching by human or computer is still necessary; however, the ability to reduce human error in the reference transaction seems especially noteworthy. Although QM is "easy to use" and slightly more accurate than what could be done in 1967, there is obvious room for improvement, as mentioned above. Perhaps the reference theory, the so-called Mudge Method or Hutchins Heuristic, is unsatisfactory, and a better theory of how to answer reference questions is needed. In summary, this is a situation not unlike that of the now-familiar SDC Orbit or Lockheed Dialog online searching systems in the early 1960s: research prototyping. Perhaps in another thirty years, we will have what Jesse Shera described as the popular conception of automation as applied to reference work. The ideal of accurate and authoritative information from a single computer interface is not that far away.

The author would like to thank UCLA for his year-long sabbatical during the 1996–1997 academic year, as well as OCLC for offering him the position of Visiting Distinguished Scholar. In the course of this research project, the author enjoyed the support and encouragement of many individuals at OCLC, including: Terry R. Noreault, Director, Research and Special Projects; Keith E. Shafer, Senior Research Scientist; Bradley C. Watson, Consulting Systems Analyst; Patrick McClain, Systems Analyst; Vincent Tkac, Senior Programmer/Analyst; Mike Prasse, Head, and Chris Vavro, Analyst, of the OCLC Usability Laboratory, as well as the four test subjects; Susanne Krouse, Administrative Coordinator; and, last but not least, the help desk folks, including Kevin Ball and Bruce Goll. Original funding for the knowledge base in QM was provided by UCLA's Academic Senate Committee on Research and the Council on Library Resources, Grant Number 8027.

Notes

1. Jesse Shera, "Automation and the Reference Librarian," RQ 3 (July 1964): 4.
2. Library of Congress, Subject Cataloging Manual: Subject Headings, vol. 1 (Washington, D.C.: Cataloging Distribution Service, 1991), section H180, II, 3.
3. John V. Richardson Jr., "A Review of KBS Applications in General Reference Work," in Knowledge-based Systems for General Reference Work: Applications, Problems, and Progress (San Diego, Calif.: Academic Pr., 1995), chapter 8.
4. ———, "The Development of a Knowledge Base for an Expert System in Reference Work," in Knowledge-based Systems for General Reference Work: Applications, Problems, and Progress (San Diego, Calif.: Academic Pr., 1995), chapter 5.
5. National Center for Education Statistics, Public Libraries in the United States, 1997 (Washington, D.C.: GPO, Feb. 1997).
6. Matthew Saxton, "Reference Service Evaluation and Meta-Analysis: Methodological Issues in Summarizing Data from Multiple Studies," Library Quarterly 67 (July 1997): 267–89.
7. A more complete list of the actual requirements of an essential system is given in John V. Richardson Jr., "Understanding the Reference Transaction: A System Analysis Perspective," in progress.
8. See table 8.1, entries 1 and 24, as well as page 272, in Richardson, Knowledge-based Systems for General Reference Work.
9. Cherie B. Weil, "Automatic Retrieval of Biographical Reference Books," Journal of Library Automation 1 (Dec. 1968): 239–49.
10. On page 106 of Weil's study, she indicates that the scoring of the computer's response was "A = It has the answer or at least part of it; B = Good choice but does not have the answer." She scored eight questions as A and two as B. In her final scoring, she reported 66 percent accuracy, whereas the author scored her system as 64 percent accurate. The author interprets these results as having no significant difference after having dropped four of her original fourteen questions.
11. Deborah Henderson, Patti Martin, Lauren Mayer, and Pamela Monaster, "Rules and Tools in Library Schools," Journal of Education for Library and Information Sciences 30 (Winter 1989): 226–27.
12. John V. Richardson Jr., "Teaching General Reference Work: The Complete Paradigm, 1890–1990," Library Quarterly 62 (Jan. 1992): 55–89.
13. John V. Richardson Jr. and Rex Reyes, "Expert Systems for Government Information: A Quantitative Evaluation," College & Research Libraries 56 (May 1995): 235–47. In the near future, the author expects that Matthew Saxton will present a methodology (e.g., a seven-point Likert scale modifying Childers's work) in his dissertation, tentatively titled "Evaluation of Reference Service in Public Libraries Using Structural Equation Modeling: The Role of Multivariate Analysis in Testing Theory."
14. Michael J. Prasse and R. Tigner, "The OCLC Usability Lab: Description and Methodology," in 13th National Online Meeting Proceedings—1992 Meeting, New York, May 5–7, 1992, ed. Martha E. Williams (Medford, N.J.: Learned Information, Inc., 1992), 255–61.
15. Using four test subjects will uncover at least 75 percent of a system's problems, according to Robert A. Virzi, "Refining the Test Phase of Usability Evaluation: How Many Subjects Is Enough?" Human Factors 34 (Aug. 1992): 457–68.
16. Michael J. Prasse, "The Video Analysis Method: An Integrated Approach to Usability Assessment," in Proceedings of the Human Factors Society 34th Annual Meeting: Orlando '90 (Santa Monica, Calif.: The Human Factors Society, 1990), 400–404.
17. The OCLC Usability Lab: Supporting Efficient Development of Excellent Products (Dublin, Ohio: OCLC Inc., 1996), 1.
18. The videotape of the user sessions, along with Dr. Michael Prasse's Usability Evaluation Summary Report, is filed in the OCLC Information Center.
19. For a commercial system, this requirement could be modified to recommend more sources or simply those in a single language, such as English.
20. For more on persistent uniform resource locators (PURLs), point the browser to http://purl.oclc.org.