MANAGING EXPERT SYSTEMS: A Framework and Case Study

Rob R. Weitz
Department of Information, Operations, and Management Sciences
Leonard N. Stern School of Business, New York University
44 West 4th Street, New York, NY 10012

Center for Digital Economy Research
Stern School of Business
Working Paper IS-90-06

Abstract

This paper addresses the problem of managing the development and implementation of a large expert system in an organization. A traditional systems analysis and design methodology is used as a framework to highlight similarities and differences in managing large scale traditional computer based projects and large expert systems. As a non-technical, prescriptive guide, this article focusses on defining, at each stage in the project, the tasks to be accomplished, resources required, impact on the organization, likely benefits and potential problems. The case of a large expert system implemented by a multi-national corporation across several European sites is used to clarify and expand upon the management guidelines provided.

1. Introduction

Research in the field of Artificial Intelligence (AI) signals great promise for the next generation of hardware and software. At the present time, expert systems are arguably the most commercially successful product of Artificial Intelligence research; they have crossed the threshold of the laboratory and are beginning to make their presence felt in real world applications. While others have described applications, suggested opportunities for the technology, or emphasized organizational considerations (Leonard-Barton and Sviokla 1988, Luconi et al. 1986, Leonard-Barton 1987), to date the management of their introduction into the workplace and their impact once there still remain largely unexplored.

This paper presents a prescriptive guide for managing large scale expert systems from inception through implementation and maintenance. It is motivated by experience with the development of a sizable expert system by a large, multinational company, the growing literature of cases describing expert systems in practice, and the belief that management and organizational considerations (as opposed to technical wizardry) must remain paramount for the success of such systems to be achieved. The thrust of this paper is to focus awareness on 1) the processes and resources required for an expert system project, 2) the costs and benefits of such an undertaking, and 3) organizational and task changes likely to result from the introduction of the system. This paper is not a technical guide for building expert systems; technical concerns are expressed here only insofar as they are inextricably linked to the management of such systems.

A great deal of experience has been gathered regarding the design, implementation and maintenance of "traditional" computer based systems, from both technical and management viewpoints. The pitfalls, players and positive practices have been identified in a large body of existing research, and are well described in the information systems literature. (See Senn 1984, and Burch and Grudnitski 1986, for example.) The management of expert systems does not lie completely independent from this previous computer based project management experience. Indeed, one trend is towards "invisible" expert systems - that is, expert systems which are integrated into conventional data processing hardware and software (Kozlov 1988).
This paper builds on the existing groundwork by taking as a framework a traditional systems analysis and design (SA&D) methodology and adapting it for application to expert systems.

2. Expert Systems

Expert systems (ES) are computer programs for solving difficult, "fuzzy" problems in domains where human expertise is normally associated with a great deal of training and experience. Application areas to date include fault diagnosis, tax planning, credit evaluation, geological prospecting, chemical analysis and medical diagnosis. (See Kupfer 1987 and Kneale 1986 for popular-press descriptions of expert systems operating in business environments; Waterman 1986 provides an extensive overview of ES applications.) Expert systems are typically characterized by:

- The utilization of large amounts of domain specific knowledge.
- The ability to use incomplete or uncertain information.
- The capacity to explain their behavior (a kind of self-knowledge).
- Symbol manipulation, that is "reasoning" about objects, as opposed to numerical manipulation (which typifies traditional computer programs).
- Performance levels at, or exceeding, those of experts in the problem domain.

Typically, sizeable expert systems are built in an iterative, incremental fashion via repeated interviews between one or more domain experts and a "knowledge engineer". Briefly, the task of the knowledge engineer is to elicit the expert knowledge, map the knowledge into a suitable structure and actually code the knowledge using appropriate software and hardware. The process tends to be tedious and time consuming, and has in fact been referred to as the "knowledge engineering bottleneck". A good overview of expert systems in general, and of the building process, is provided by Hayes-Roth et al. (1983) and Harmon and King (1985). For ES the importance of early development of working prototypes is stressed. The prototype is incrementally improved and its capabilities expanded by repeated trials with the domain expert and actual use in a test environment. In fact, several versions of the prototype are typically successively developed, until a sufficiently evolved version is realized for possible release.

Building and implementing a large expert system is both time consuming and resource intensive. While improved software environments have helped speed development, and recent experience has suggested some guidelines for easing the overall process somewhat, existing verifiable ES applications suggest that, for large systems, the time required to go from prototype to implementation is typically on the order of person-years, with costs measured in at least the tens of thousands of dollars. For a case in point, see Linden (1982). Clearly, larger systems (as measured by the amount of knowledge acquired and by the number of users) require more effort than smaller ones. In any case, it seems that under the appropriate circumstances the value of automated expertise supports the magnitude of these efforts.
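The characteristics listed above are easiest to see in miniature. The sketch below is a toy forward-chaining rule interpreter in Python, written for this paper rather than drawn from any particular ES shell; the rules, certainty factors and fault names are all hypothetical. It illustrates three of the traits just listed: domain knowledge held as discrete rules, reasoning under uncertainty via simple certainty factors, and an explanation trace recording which rules fired.

```python
# Toy forward-chaining rule interpreter (illustrative only; not any real shell).
# Each rule: if all premises are believed, conclude a fact with a certainty factor.

RULES = [
    # (premises, conclusion, certainty factor) - hypothetical diagnosis knowledge
    (["no_video", "power_ok"], "suspect_video_board", 0.8),
    (["suspect_video_board", "voltage_low_pin7"], "fault_video_regulator", 0.9),
]

def run(facts):
    """Chain rules until no new conclusions; return beliefs and an explanation trace."""
    beliefs = {f: 1.0 for f in facts}   # observed facts are taken as certain
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion, cf in RULES:
            if conclusion in beliefs:
                continue
            if all(p in beliefs for p in premises):
                # Belief in conclusion = rule CF scaled by weakest premise (a common heuristic).
                beliefs[conclusion] = cf * min(beliefs[p] for p in premises)
                trace.append(f"{' & '.join(premises)} -> {conclusion} (cf {beliefs[conclusion]:.2f})")
                changed = True
    return beliefs, trace

beliefs, trace = run(["no_video", "power_ok", "voltage_low_pin7"])
print(beliefs["fault_video_regulator"])  # 0.72
for step in trace:                       # the "explanation": which rules fired, in order
    print(step)
```

Real shells are, of course, far richer than this, but the shape - a declarative rule base interpreted by a generic inference engine - is what permits the incremental, rule-at-a-time growth described above.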
3. A Traditional Systems Analysis and Design Framework

Creating a large computer based system for more than personal use is a complex task requiring technical skills, creativity and good management of resources. A guide for this process has been established through experience and, while details vary from source to source, the overall thrust is fairly standardized in the information systems literature. One such systems analysis and design framework is provided in Figure 1, and is due to Lucas (1982). As indicated previously, this paper adapts the traditional SA&D procedures for use with expert systems.

Figure 1. about here.

The next section traces this outline as it applies to expert systems; fundamental variations of the traditional SA&D process are noted, and described or referenced. The resultant SA&D process for expert systems is summarized in Figure 2.

Figure 2. about here.

After detailing the SA&D framework for expert systems, a "real world" example is presented. It should be stressed that the case description serves to illustrate the framework and not define it. The Comet expert system was developed without benefit of the normative model outlined here - in fact the project was in large part a motivating influence in development of the framework. It should be clear that the case description not only clarifies the model, but also serves to demonstrate the benefits of following the proposed SA&D framework, and the disadvantages of deviating from it.

4. Framework

4.1 Inception

Inception refers to the realization that a computer based technology or application can be of value to an organization. The idea may refer to investigating a technology in an exploratory sense, or more specifically applying a technology to an existing or proposed procedure within the organization. At this stage the envisioned system is naturally somewhat murky in its details, though the overall goals - to reduce costs, ease a production bottleneck, maintain a competitive position, or better manage a process, for example - are more clear. Several variations of possible systems are likely entertained at this point. The question proposed here is, why should the envisioned system (or one of the envisioned systems) be an expert system? It should be kept in mind that expert systems are not cheaper or easier to build than conventional systems; it is more realistic to assume the contrary. Therefore, there should be strong, positive reasons for proposing an expert system solution.

Selecting the right tool (or technology) is always a question of matching the characteristics of the problem with the capabilities of the tool. Criteria for tasks well suited for expert systems have been enumerated elsewhere (Bobrow et al. 1986, Qm 1987, for example). These criteria are briefly summarized below. Expert systems are used to replace or assist experts. (A rough definition of an expert is someone who can solve difficult problems more quickly and with less effort than a novice, or "average" person. Typically, expertise is acquired through lengthy training and experience, and is limited for an individual to a particular field of knowledge.) Insofar as technical criteria are concerned, for applying an expert system to a problem, the task should clearly:

- Have semi-structured solution processes. Expert systems were created in an attempt to model problems that do not have algorithmic or "step-by-step" solutions. (Traditional mathematical models or decision tree programs are appropriate for these types of problems.)
Expert systems are well suited for tasks where a body of loosely structured knowledge exists; this technology was designed for capturing the heuristics or "rules of thumb" for arriving at good solutions.

- Involve a well circumscribed, limited field of knowledge. Expert systems are not general problem solvers. (As a case in point, each ES application in medicine focusses on one specialty; for example, MYCIN's domain is bacterial infections, while INTERNIST covers the domain of internal medicine.) Moreover, problems requiring "common sense" are not well suited for an ES solution.

- Be difficult, but not too difficult. As a general guideline the task should take a human expert somewhere between minutes and hours.

Given the above necessary conditions, an expert system is more strongly recommended for tasks that:

- Require explanation to the user of the system's reasoning in solving the problem.
- Involve reasoning with uncertain or incomplete information.
- Entail reasoning about symbols, or "objects", as opposed to mathematical manipulations.

The above criteria are purely technical requirements, and define the minimum considerations for further pursuing the process of thinking about applying expert system technology. Related to the above technical concerns are more organizational requirements for an expert system to be envisioned as a solution. These include that:

- Human experts exist and are willing and able to participate in the project.
- Human expertise is scarce and/or expensive.
- Management is likely to commit the required resources to the project. (This implies that the task is viewed as an important one.)

At the inception stage, some indication that these non-technical concerns are satisfied must be sensed, or at least their negation must not be evident.
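As a back-of-the-envelope aid, the criteria above can be applied as a simple screening checklist. The sketch below is a minimal Python illustration of that idea, written for this paper; the criterion names, weights and threshold are hypothetical choices, not part of any established methodology.

```python
# Hypothetical screening checklist for candidate ES tasks, following the
# necessary/desirable criteria discussed above. Weights and the threshold
# are illustrative, not validated figures.

NECESSARY = [
    "semi_structured_solution",   # no step-by-step algorithm exists
    "limited_domain",             # well circumscribed field of knowledge
    "minutes_to_hours",           # expert takes minutes to hours, not seconds or weeks
    "experts_available",          # experts exist and will participate
    "expertise_scarce",           # expertise is scarce and/or expensive
    "management_commitment",      # resources are likely to be committed
]

DESIRABLE = {                     # strengthen, but do not by themselves justify, an ES
    "needs_explanation": 2,
    "uncertain_information": 2,
    "symbolic_reasoning": 1,
}

def screen(task):
    """task: dict of criterion -> bool. Returns (recommend, score)."""
    if not all(task.get(c, False) for c in NECESSARY):
        return False, 0            # any failed necessary condition rules the task out
    score = sum(w for c, w in DESIRABLE.items() if task.get(c, False))
    return score >= 3, score       # threshold of 3 is an arbitrary illustration

ok, score = screen({**{c: True for c in NECESSARY},
                    "needs_explanation": True, "uncertain_information": True})
print(ok, score)                   # True 4
```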
For both traditional and expert systems, the inception stage produces a proposal outlining the reasons for the project, its goals, expected benefits, possible risks, alternative approaches, time frame, and required resources. The entire inception process serves to better define the project, and to garner organizational support. The inception phase ends with a decision to either stop consideration of the project, or to continue further investigation and evaluation.

In traditional SA&D, the next step is a full-fledged feasibility study. (The prototype proposal may in fact be viewed as a "first draft" feasibility study.) For expert systems, successful completion of the inception stage means approving the development of a demonstration prototype, and concurrently undertaking the feasibility study. Typically, the inception report includes an estimate of the resources which will be required, and an approximate schedule for the entire project. However, since approval at the end of the inception phase means commitment to the project solely through the feasibility study (for traditional SA&D) or through prototyping and feasibility study (for expert SA&D), the inception report should specify in detail a timetable and the resources required for these next steps. A decision to go ahead with the full system will be made when the feasibility study, or in the case of expert SA&D, the demonstration prototype and feasibility study, are ready. Finally, the inception report for an ES may include evaluation criteria for the prototype. While criteria may be difficult to specify, what constitutes "success" for the prototype should be defined in advance. (Determining evaluation criteria is discussed in the next section.)

For many organizations, a process for gaining a basic understanding of what ES technology has to offer, in general or with respect to the application domain, is required in order to compile the inception proposal. This may entail formal training, attendance at seminars, transfer within the organization of individuals experienced with ES technology, contact with ES software firms, or the hiring of consultants. The objective is to acquire the capability of fully assessing the potential project, and to begin to coalesce the resources for its undertaking.

4.2. Feasibility Study

As with traditional SA&D, the feasibility study is comprised of a comparative evaluation of all conceivable alternative systems. (At minimum a single new system is proposed and evaluated with respect to the existing procedures.) The ultimate purpose of the study is to select the best of all possible alternatives, but this part of the analysis serves many other purposes. Among these, it helps solidify ideas, provides a common source of reference, and serves as a focus for gaining commitment from users and management. The feasibility study is a fuller, more detailed version of the inception report. It should be ready for release when the demonstration prototype is ready, or shortly thereafter. The contents of a traditional SA&D study report are outlined below. Following each directive is a description of how that directive applies to an expert system feasibility study. The report should:

Describe the current system, including a rationale for changing it. As the saying goes, "If it ain't broke, don't fix it!" Shortcomings in the current system and improvements provided by the suggested system should be clearly specified. Expert system proposals based on rationales of selecting a "high tech, state-of-the-art solution" should be rejected. Explain why the particular solution is proposed, as opposed to some other alternative; already existing, similar systems both inside and outside the organization should be referenced if possible. Why is an expert system appropriate? The previous section describes problems and environments which suggest an expert system solution.

Lay out the goals, scope and objectives of the envisioned system. It is important to set realistic expectations for the proposed system. Terms such as "artificial intelligence" and "expert system" tend to spur the imagination. Bear in mind that this technology typically cannot do what a human expert cannot do and that, in any case, the breadth of knowledge encompassed by the system will be limited. Specify the bounds of the proposed system; i.e., what are not the goals, objectives and expected capabilities of the system.

Describe what the proposed system will do. Include in this description how the system will work, who will use it, and how the task itself and related tasks will change. The following questions should be addressed. Will "experts" themselves be using the system (i.e., doctors using a medical diagnosis system) or will "technicians" use the system? What skills will be required of the users? Will the users be new employees or current workers already doing the task under the present system?
If current workers, how will their input into system design, and their goodwill in general, be solicited? Will the system "de-skill" their task, and if so how do they feel about it? How will job responsibilities and lines of communication change? The overall technical specifications should be outlined; i.e., how the system will interface with existing information systems, data needs, input/output mechanisms, performance requirements, etc.

Specify a timetable for the project, including periodic reviews and expected performance of the system at each review. As mentioned above, expert systems are developed incrementally. The first review should occur at the unveiling of the demonstration prototype system. (This review should take place two to eight months from the start of the development work.) Feedback from this review should direct further technical work on the system, while any organizational difficulties which have surfaced should be aired and attended to. Attendees at the meeting should include those involved with development, representatives of the ultimate users of the system, and those responsible for the direction of the project. At minimum, one more review will be required (on the order of six months after the first meeting) to decide to either test release, delay or cancel the project. Typically another review is held after test release, and just prior to full release.

Detail the resources required, when they will be required and from whom they will be required. For the proposed project, allocations should be included for knowledge engineers, domain experts, programmers, hardware, software, training, and travel expenses. Note that the participation of domain experts is crucial, while their time is typically at a premium. (Hence the value of automating their expertise.) Provisions for re-specifying their job description to include work on the new system should be detailed. Commonly, programmers are required to write user, and system to system, interfaces. A fuller description of all the players required for such a project and how they differ from those involved in developing a traditional system is given at the end of this section. As with traditional systems, the choice of hardware and software is one of making trade-offs. Indeed, many of the criteria (price, speed, compatibility, reputation of supplier, for example) are the same. Hardware selections fall into three categories: microcomputers, minis/mainframes, and workstations. Special purpose expert system development software (typically called "shells" or "environments") exists for each category of hardware and eases the programming task considerably. It is typically the knowledge engineer's responsibility to choose appropriate hardware and software for the project. (Gilmore and Howard 1986 and Mettrey 1987 discuss ES software selection. For details on ES hardware and software, and more on the specifics of the knowledge engineering process, see Harmon and King 1985, Holsapple and Whinston 1987, Waterman 1986 and Hayes-Roth et al. 1983.) The resources required to achieve the demonstration prototype should be made explicit, as typically after the initial approval, the next "go - no go" decision point is at the prototype review.

Provide a cost-benefit analysis.
This analysis should include both tangibles and intangibles; typically some estimates are required, but the process should be supported with a sensitivity analysis. Benefits commonly ascribed to expert systems include: preservation and dissemination of scarce expertise, relieving experts of tedious tasks and thereby allowing them to concentrate on more difficult/more interesting problems, speedier solutions and more consistent problem solving. Huber (1984) identifies knowledge as a key, strategic resource in the "post-industrial" organization. Important costs include: personnel (i.e., knowledge engineer and expert), software and hardware (perhaps specifically for expert system development and use), and those expenses associated with training, operations and updating.

Define, as best as possible, evaluation criteria and agree on how the success of the system will be measured. The evaluation criteria should be utilized during the periodic reviews. Expert systems will typically be evaluated on a host of criteria. These include the quality of the solution, the speed of solution, the manner in which the solution is reached (transparency), breadth of knowledge, explanation and help facilities, user satisfaction, and the ease with which the system can be transported, modified and updated. The relative importance placed upon each of these criteria is determined by the project's goals and objectives. The criteria should be established by the ultimate users of the system and the managers of the task in question, in conjunction with the developers of the system.

Assign ownership of the system over the course of its lifetime. In many applications, updating of the knowledge encompassed by the system is a large, ongoing job due to the nature of the task in question. Consider, for example, a diagnosis expert system for some large piece of machinery. Frequent updates will be required if new models of the machine are produced, parts (or part numbers) change, and new faults (and procedures for finding them) appear as the machine ages. Responsibility after release of the system may or may not rest with the developers, but in any case this responsibility and the mechanisms for undertaking the maintenance task should be made explicit. Many of these points will be elaborated upon in the subsequent sections.

4.2.1 Personnel

Some comments about the players involved in expert systems development, and how they compare with those for traditional systems, are in order. In traditional systems, three major players are identified: users, the information systems (IS) department, and management. Much has been written of the responsibilities of each of these agents over the course of the life of the information system project (see Lucas 1982, for example). Briefly stated, ideally the user participates in the design process and may in fact originate the project idea. The user is best equipped to understand the workings of the present system and therefore to provide input for system specifications, and good test examples. Clearly user satisfaction with the system is an important criterion for success. The IS department reviews the feasibility study, designs the system, specifies possible alternatives and the trade-offs they imply (languages, "off-the-shelf" or special purpose programs, use of service bureaus, batch versus time sharing mode, hardware, etc.),
handles the required programming, documentation, training, conversion and maintenance. Management responsibilities are overall approval and direction of the project, and providing commitment in terms of resources and recognition.

Expert system projects include additional players. First, the domain expert is not necessarily a typical user of the system. If the expert is in fact just an "average user", then his/her involvement in the ES development will, in any case, be much more intimate than that required of a user in traditional systems. Second, the position of knowledge engineer requires skills that are not necessarily found in IS departments. This role may be filled by grooming in-house personnel, or through external services. These services may be obtained via independent consultants, and expert system software or hardware companies. In traditional system development, the IS department has the responsibility for choosing outside vendors and services. The situation is analogous here, with many of the trade-offs (such as reputation of the company, cost, type of service provided) being similar. However, the IS department must be knowledgeable about ES technology and the market in order to evaluate these trade-offs. Further, the IS department will have to work with the external organization to promote a smooth interface between existing systems and procedures and the new system. If an outside organization is contracted for developing the system, arrangements for transferring ownership to the IS department (and what that ownership entails) should be specified.

4.3. Prototype

The use of prototypes has been referred to several times. Prototype in this sense means a scaled-down (in scope, power or both) version of the fully envisioned system. While a working prototype phase may exist in the development of traditional information systems (Alavi 1984, Janson 1986), it is not an inescapable part of the standard system building methodology; further, for traditional systems prototyping may refer simply to creating screen "mock ups" in order to improve the user interface. The formal prototype stage in ES development is suggested because 1) it may be hard to know what is feasible when it comes to automating expertise without attempting a working, trial version, 2) given the newness of the technology (and the hype that surrounds it) a prototype can help set realistic expectations, 3) a demonstration can serve to garner and solidify enthusiasm for the project, and 4) a phased approach involves less risk. Finally, due to the incremental nature of building expert systems and the software development tools created to support this development, prototypes may be built rather quickly.

4.4. Analysis, Design, Specifications, and Programming

In developing traditional information systems, each of the steps of analysis, design/specification and programming is ideally separate and distinct; one does not proceed to a subsequent step until finished with the previous one. In the analysis phase, the current system is studied; transactions, data volumes, decisions made, etc., are detailed. Design/specification of the new system involves enumerating hardware and software, input and output files, media, procedures for use, security and error control. Writing and testing the program follows the design phase.
For expert systems, some analysis, design and detailing of specifications is usually required for the inception phase, and a large part of these tasks, plus some programming, is completed as part of the prototype system and feasibility report. These processes, however, are far less differentiated in ES development than for traditional systems. In fact, these processes more closely resemble a paradigm for decision support system development. (See Sprague and Carlson, 1982, and Keen and Scott Morton, 1978, for example.) The incremental, iterative procedure for building ES has already been described. This is a function of the fact that experts simply cannot sit down and fully and completely specify their own problem solving behavior. Expert systems are constructed as a collaborative and iterative effort between one or more knowledge engineers and one or more domain experts. The knowledge engineer is experienced in eliciting knowledge from an expert, structuring knowledge such that it can be programmed, and then coding the knowledge. Working from the first prototype, the knowledge engineer can improve, refine and expand the capabilities of the system through further interaction with the expert. This loop - eliciting, structuring and coding knowledge - is traversed many times before a suitable version for release is produced. Here, programming may be considered to be a function of the analysis, design and specification of the system, and vice versa.

At the beginning of the development of the system, infrequent contacts between the knowledge engineer and the expert will suffice; as development proceeds, more frequent and more lengthy meetings will be required. In any case, it is crucial that the expert has sufficient time and incentive to work with the knowledge engineer on the system.
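In terms of the toy interpreter sketched earlier, each pass through this loop simply grows the rule base: knowledge elicited in a session is structured as one or more rules, coded, appended, and re-tested against the expert's cases. A hypothetical continuation of that earlier sketch:

```python
# Hypothetical: one elicitation session yields new rules that are coded and
# appended to the knowledge base, then re-tested with the expert's examples.
session_3_rules = [
    (["fan_spinning", "overheat_alarm"], "suspect_airflow_blockage", 0.7),
]
RULES.extend(session_3_rules)   # RULES is the rule base from the earlier sketch
```

The declarative rule base is what makes this incremental style workable; the inference engine itself is untouched from one iteration to the next.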
4.5. Testing

For both traditional and expert SA&D, the feasibility study should specify a timetable for release of the system, first as one (or more) trial or test versions, and later as the "production" version. The test version receives limited distribution and is used to fine tune the system for general release. For each release, the system should pass the minimum requirements set up in advance in the feasibility report. Disagreement and uncertainty concerning the evaluation procedure is likely to exist. Developers may be most concerned with technical criteria, and users with quality interfaces, while managers worry about how the system will help solve "business" problems. (For ES even a purely technical evaluation based on the quality of solutions provided by the system may be difficult to conduct. For example, experts themselves may have differing opinions as to what constitutes a "right" answer.) While disagreement may exist as to how to evaluate the system, the time to resolve such differences is prior to development. (See Hayes-Roth et al. 1983, for a good discussion of evaluating expert systems.) In any event, expert systems must be evaluated under multiple criteria. Possible evaluation criteria include whether use of the system:

- Results in better solutions.
- Results in faster solutions.
- Results in more consistent solutions.
- Provides greater worker satisfaction.
- Is easy to use.
- Reduces training time (for the human users).
- Reduces dependence on scarce individuals.
- Has resulted in greater insight into the problem.
- Reduces the number of extremely bad solutions.

These criteria should be easy to derive from the goals of the system as stated early in the project. The more difficult part is measuring how well the system fulfills these criteria. While some data collection procedures may already exist (time or quality standards for diagnosis tasks, as an example), data collection may have to be initiated as part of the project to allow for a "before and after" study. While the level at which the system satisfies each criterion may be flexible, other requirements may be both easy to determine and rather fixed, for example:

- Time to repair must be reduced by half.
- Relatively novice technicians must be able to do the task (with the aid of the system) as opposed to the experts who do it now.
- Calls to the resident expert for help on difficult problems must be all but eliminated.
- The number of people doing the task should be reduced by 10%.

During testing, the system should be pushed to discover its limits, in terms of the range of its knowledge, but also on very traditional dimensions: security, back-up procedures, and error handling, for example. User and/or management satisfaction with the system can be assessed via formal questionnaires or interviews. Acceptance will be predicated on not just how well the system does what may be narrowly defined as its task, but also on ease of use, appropriate help facilities, speed, and on how well it supports the user in doing the task as (s)he sees it. A more fundamental question regards not the system per se, but rather the individuals for whom the system is designed. Knowledge workers may or may not want a machine looking over their shoulders while they do their jobs, particularly if they view the system as skill or prestige reducing. (Doctors are the classic case in point.) This possibility should be addressed early in system development, so that disaffection due to resentment does not show up among users at the evaluation stage. As with traditional systems, both tangible and intangible outcomes will likely need to be measured; release of the system is, of course, contingent on the benefits outweighing the costs.

4.6. Training, Conversion and Installation

Training, conversion and installation are similar to those for traditional systems. Appropriate documentation should be developed by the knowledge engineer, in conjunction with the IS department if necessary. Training procedures are set up in cooperation with the users and management. Hopefully, conversion may be phased in, with the date for conversion agreed upon by all those involved. As with traditional systems, the question of compatibility of hardware and software is important. With expert systems, compatibility may focus on the use of AI languages or specialized workstations which need to interface with more conventional hardware and software. Methods for accessing company data, and generally interfacing with existing systems, are frequently a necessity. These issues should have been planned for in the design phase. Problems can occur involving the conversion and installation phase of ES if development of the system was performed by personnel outside the organization's MIS function. This outside development may happen, given that the skills and training required to develop an ES are rather specialized.
Cooperation of the MIS group is essential for installing the ES as part of the organization's overall network; their "buy in" should be managed from the start of the project.

4.7. Operations

The operations phase for traditional SA&D involves fixing errors and, when necessary, making changes to the system. For both traditional information systems and expert systems, assuming an outside agent has been involved in the development phase, the arrangement should include provisions for ongoing maintenance of the system in the traditional sense (i.e., information "hot-line", software revisions, etc.). Maintenance duties which are the responsibility of the IS department should be clarified. For ES, particular attention must be paid to the maintenance task as these systems work in knowledge intensive areas, and problem solving knowledge in practical applications frequently changes. Updating may be required for knowledge proper, for data the system uses, or both. Some assessment of the extent of updating should be made early in the project, based on the nature of the task. For the XCON system, Digital Equipment Corporation's ES for configuring their Vax family of computers, some eight individuals are employed full time on the updating, that is, collecting and encoding task. This huge effort is due, at least in part, to the fact that the system must constantly be modified to work on new products (McDermott 1984). An estimate of the frequency of knowledge updating and the assignment of updating responsibilities should be part of the project report.

New knowledge (and alterations to the system independent of the knowledge base) will likely be suggested by users of the system, particularly if they are experts. Some formal mechanism should be established to capture these suggestions, and system performance in general. Unfortunately the knowledge encapsulated in expert systems cannot be updated simply by adding knowledge. (Automated updating, that is, updating performed entirely by users suggesting new knowledge to the system, is an important area of research in AI. Of course, systems which learn from their own mistakes, another research area, would solve this problem.) Aside from having to put the new knowledge into a form the system can understand, the new knowledge must be tested for its effects on the existing knowledge. This is to say that individuals familiar with the details of structure and operation of the system must be involved in the maintenance task, in addition to domain experts. If the system was developed by knowledge engineers outside the organization, updating will have to be performed with the assistance of these knowledge engineers. Alternatively, as part of the project, sufficient expertise must be brought in-house (i.e., to the IS department). Again, depending on the nature of the task, its projected future requirements, and the data involved, maintenance may be a considerable job.
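Testing new knowledge for its effects on existing knowledge amounts to regression testing the knowledge base. A minimal illustration of the idea appears below; the Python code and the case library format are hypothetical constructions for this paper, not a description of how any particular ES shell handles updates.

```python
# Hypothetical regression check: rerun a library of previously solved cases
# after the knowledge base changes, and flag any case whose diagnosis moved.

def regression_check(diagnose, knowledge_base, new_rules, case_library):
    """diagnose(kb, symptoms) -> diagnosis; case_library: list of (symptoms, expected)."""
    updated_kb = knowledge_base + new_rules
    regressions = []
    for symptoms, expected in case_library:
        before = diagnose(knowledge_base, symptoms)
        after = diagnose(updated_kb, symptoms)
        if after != expected:
            # The update broke (or failed to preserve) a previously correct answer.
            regressions.append((symptoms, expected, before, after))
    return regressions

# Usage sketch: adopt the new rules only if no stored case regresses;
# otherwise route the failures back to the knowledge engineer and domain expert.
# problems = regression_check(diagnose, kb, proposed_rules, solved_cases)
```

A case library of this sort is itself a maintenance asset, and collecting it is one more reason to capture user suggestions and system performance through a formal mechanism.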
Finally, at some point after the system has been in regular operation, some thought should be given to how well the system performs. Are the projected cost savings, or productivity improvements, being realized? Does the system satisfy current and projected demands? How well does the project contribute to the overall strategy of the organization?

5. Case Study

5.1 Inception

Digital Equipment Corporation, one of the world's largest computer manufacturers, is perhaps best known for its Vax series of minicomputers. An internal report describing expert system technology was circulated within Digital's manufacturing/repair engineering group in Nijmegen, Holland in early 1985. The report defined the technology and its capabilities, and suggested a range of possible application areas within the group's purview. An ES approach for these applications was supported by the following rationales:

- Knowledge, currently, in and of itself is a vital commodity for high technology companies. Expert systems are a means of managing knowledge. In particular, the knowledge required for the manufacture and repair of computer hardware will likely increase in scope and complexity as the products themselves become more complex and product life cycles shorten.
- A forecasted high demand and increasing cost for human experts.
- A long term need to increase productivity and to reduce costs.
- A competitive incentive to maintain and develop expertise in the area of AI.

The document recommends one application in particular, an ES for assisting in the task of hardware module ("board") diagnosis and repair. This domain was suggested firstly on economic grounds, and secondly on the rationale that relatively well defined, diagnosis type problems are typically well handled by ES technology. A pilot project is called for in order to both more fully explore the application area, and to provide support towards a long term strategy. The creation of a project team is stipulated, composed of a knowledge engineer, domain expert and local process manager, who would receive overall direction from a management steering committee. The report appointed individuals to the steering committee. Internal consulting support (from Digital's AI group) and external consulting support (from local university and government projects) were suggested. Finally, the required funding, and a timetable for forming the project team, evaluating resources, and developing the pilot (prototype) system(s), was delineated.

Upon approval of the report, the two individuals whose responsibilities were to include the knowledge engineering function were sent to internal training courses covering ES technology and its application within Digital. This education phase was necessary in order for the individuals, who work in the manufacturing function and whose professional training is in engineering, to 1) become fully acquainted with the opportunities and methodologies associated with ES, 2) provide expertise in the selection of an appropriate application domain, 3) contribute to the feasibility study and 4) ultimately, act as knowledge engineers during development. Lastly, at this point, liaison with Digital's internal AI consulting group was established for maintaining ongoing assistance.

The timetable provided by the project report specified approximately six months for the process of education of the knowledge engineers, further specification of the domain application, and development of a demonstration prototype.
The full feasibility study would be required within a month of the prototype.

5.2 Feasibility Study

The feasibility study (or "Project Plan" as it is called within Digital) was prepared by the manager of the strategic engineering function at the Nijmegen facility, the two knowledge engineers assigned to the project, and a representative of Digital's internal AI consulting group. The authors did not include a domain expert.

Like all computer vendors, Digital is concerned with assuring reliability of its products and, in the case of hardware failure, minimizing customer downtime. When a hardware fault occurs at a client site in Europe or the Middle East, the offending board (or boards) are isolated and removed from the computer installation by a field technician, who then replaces them with functioning equipment. The faulty boards are then sent to one of 17 field service sites located throughout Europe, or to the one in Israel. At the field sites, the boards are either repaired, scrapped, or sent to Digital's central repair facility (CRF) in Nijmegen, Holland for more extensive testing. The CRF has more sophisticated test equipment and more experienced personnel than the field service sites. Major costs are incurred in this process due to the inventory of boards required to support such a system, shipping fees, expensive test equipment and the training of repair personnel. Training costs are particularly important as diagnosing procedures may be different for different boards, and hardware is constantly changing. Additional costs arise when technicians replace sets of components on a board unnecessarily. This occurs when a problem area is isolated, but determining exactly the component responsible is especially difficult and time consuming. There are currently over 500 different boards, each one containing some 150 - 250 electronic components. Repair times range from 30 minutes to several hours, depending on the board itself, the equipment available at the repair site, and the expertise of the technician. Test equipment varies from simple voltage and current meters, to custom-built devices which automatically run a series of tests on the suspect module, to full-fledged, high-end computers. Many thousands of modules pass through the system per month, each module typically costing in the hundreds of dollars. Digital's repair function can be considered quite a sizeable business in its own right.

The system will be available at each repair site, in the form of terminals at the workbench of the repair technician. Through interactive dialogue with the technician, the system will direct the repair process by suggesting appropriate test procedures and interpreting the results of the tests. An ES solution is proposed because there is a good match between the characteristics of the problem and the capabilities of expert systems. From a technical point of view, the problem is difficult but limited in breadth, experts exist, and the task involves symbolic, not mathematical, manipulation. Moreover, this is a standard diagnosis problem and as such tends to be well suited for an ES application. From a practical/business point of view, experts are available, and the possible tangible benefits are sizable.
Further, Digital, as a major computer company and one involved in AI research and products, has a practical interest in learning about this technology through its own experience. According to the report, the Knowledge-Based Board Diagnosis System (KBBDS) will reduce costs by increasing local repair, thereby minimizing "in-pipeline inventory", speeding repair, distributing scarce expertise, lowering test equipment requirements, cutting training time, and decreasing component usage. Further, the knowledge and records developed through the system may provide fault information useful for the design of future hardware products. Diagnosing all conceivable faults or incorporating all possible repair information are not goals of the project.

Digital's motives in this project are, firstly, to implement this particular expert system. However, their goals extend beyond this immediate task towards two others: 1) the development of a mechanism for evaluating the feasibility of this technology in other applications, and 2) the creation of a set of generic software modules that can serve as a basis for future expert systems in the manufacturing/repair environment.

The feasibility study begins with a description of the current system and a discussion of the opportunities for improvement. A single alternative, the expert system, is proposed; therefore the analysis compares current procedures to the proposed one.

5.2.1 Benefits

Cost reduction in inventory was estimated via a simple model which estimates the savings in boards "in the pipeline" expected via use of the KBBDS. The following input to the model is required; input estimates were varied in order to provide a sensitivity analysis (a sketch of such a calculation appears below).

- The total volume of boards in the system.
- The fraction of boards sent for repair to the central repair facility.
- The carrying cost of inventory.
- The "turn around time" of boards sent to the central facility.
- The "turn around time" of boards in local repair.
- The cost of a module.

Additional expected benefits were approximated by providing estimates of the impact of the system on each of the following categories:

- Decrease in average time to diagnose.
- Reduction in learning curve cycle.
- Lowering of depreciation costs.
- Reduction in component usage.

Actual percentages were proposed for each category, based upon judgments of the project team. No dollar value was provided for these savings, though such a transformation could be made.
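The report does not reproduce the model itself, so the sketch below is a plausible reconstruction, in Python, of how a pipeline-inventory savings estimate with a sensitivity analysis might look. The formula (pipeline stock as monthly volume times turnaround time, valued at module cost and carrying rate) and every figure in it are illustrative assumptions, not Digital's numbers.

```python
# Illustrative pipeline-inventory model (hypothetical figures throughout).
# Pipeline stock ~= monthly volume * turnaround time (in months); the ES saves
# money by shifting boards from the slow central facility to faster local repair.

def annual_pipeline_cost(volume_per_month, frac_central, tat_central_months,
                         tat_local_months, module_cost, carrying_rate):
    central = volume_per_month * frac_central * tat_central_months
    local = volume_per_month * (1 - frac_central) * tat_local_months
    return (central + local) * module_cost * carrying_rate  # annual carrying cost

base = dict(volume_per_month=10_000, tat_central_months=1.5,
            tat_local_months=0.25, module_cost=400, carrying_rate=0.25)

before = annual_pipeline_cost(frac_central=0.40, **base)
after = annual_pipeline_cost(frac_central=0.20, **base)   # ES doubles local repair share
print(f"estimated annual savings: ${before - after:,.0f}")

# Simple sensitivity analysis: vary one input at a time, as the report did.
for frac in (0.10, 0.20, 0.30):
    saving = before - annual_pipeline_cost(frac_central=frac, **base)
    print(f"frac_central={frac:.2f}: savings ${saving:,.0f}")
```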
5.2.2 Costs

Costs for the system are broken down into three categories: hardware, software and personnel. The system will be developed on a micro-Vax (a stand-alone Vax workstation), and will run on any Vax under the VMS operating system. At the repair workbenches, access to the system is to be assured through hardware and software connections to Vaxes already on site; if a Vax is not already available on site, use of the system will require the purchase of a micro-Vax. Software required includes the OPS-5 shell, the knowledge base, the knowledge updating mechanism, and the interface code, all of which is considered part of the KBBDS. (See the next sections on prototype development, and systems analysis and design, for more on hardware and software.)

Personnel requirements include the project manager, two knowledge engineers, internal consultants, domain experts, and users. While specific individuals were identified and assigned the tasks of manager, knowledge engineer and consultant, only the eventual necessity of recruiting articulate, qualified, and willing domain experts and users was recognized.

A quarterly budget extending over the length of the project (a period of almost two years) included costs for personnel, travel, training, hardware and software.

5.2.3 Project Planning and Specifications

The following time frame for the project was suggested.

I. Quarters 1 and 2.
- Select a computer and several of its boards on which to focus. The application should be directed towards a stable, well known product. The Vax 11/750 (Comet) was suggested.
- Develop prototypes. The first demonstration prototype is to be tested at Nijmegen. A second prototype is to be field tested at a local repair facility.
- Evaluations of each prototype are to be conducted. Revision and improvement of the system should be expected at each evaluation.
- The system is to be distributed to those local facilities which request copies.

III. Quarters 3, 4, 5 and 6.
- Select and develop an ES for a new and lesser known product.
- Create additional module repair systems for additional products.
- Develop, based on experience gained thus far, procedural guidelines and generic software tools for further use in the repair process development environment.

The project plan clarifies the limits of the project: the system should not be expected to diagnose all conceivable faults, or to incorporate all possible information concerning board repair. System specifications include: 1) capability to aid in the diagnosis of 30% - 50% of the faults, 2) a response time varying from 10 - 40 seconds, 3) a context sensitive "help" function, and 4) a "history file" option for recording the diagnosis procedures used by the technicians for problems not solvable via use of the KBBDS. Detailed criteria by which the system would be evaluated were not cited in the report.

Technicians currently doing the repair task will still do the task under the new system; that is, no change in the individuals doing the task is expected. The task itself will change in the sense that a new piece of test equipment (the ES) will be added to the tools available to the technician. It is expected that fewer boards will have to be sent to central repair locations, fewer boards will be scrapped, fewer consultations with other technicians (on difficult problems) will be required, and the learning curve for repair will shorten. No special training is expected to be needed, other than minimal on the job training. Some concern was expressed about the KBBDS acting to de-skill the task of board diagnosis, and a concomitant resentment of the system on the part of the technicians. Solutions to this possible eventuality were to a) design the system to avoid de-skilling, b) encourage technicians to move on to more complicated products, and c) encourage technicians to take on updating and expanding the KBBDS as part of their jobs.

5.3 Prototype

A number of expert system shells were evaluated with the assistance of Digital's internal AI group. The first prototype was developed with a commercially available PC-based product.
This initial prototype contained knowledge about a subset of possible faults for a single Comet board. A decision was then made to move to the OPS-5 language on a Vax-based system. It was felt that this platform allowed for the greatest potential in terms of communication and networking within the Digital manufacturing and repair environment. Digital policy directs the use of only ES software already "proven" in applications. While ES within Digital have employed a variety of software tools, OPS-5 was ultimately chosen for its flexibility and long history of use. The prototype served to: 1) deepen the project team's understanding of the technical and organizational issues surrounding the board diagnosis problem, 2) gather technical support from the internal AI consulting group, 3) gather project support from the repair facilities, and 4) overall, demonstrate the technical feasibility of the application.

5.4 Analysis, Design, Specification, and Programming

Development of the system progressed in stages, from prototype to test-release version to full-release version. While the prototype contained limited knowledge about one board, the test release version could reason about faults on four. Knowledge was incrementally added to the system, broadening and deepening its capabilities through full release. Aside from some commonly encountered problems (i.e., the experts had difficulty expressing their knowledge, the process was extremely time consuming), the project team acknowledged their frustration in accessing the experts. The team members emphasized the importance of gaining commitment from verifiable experts earlier in the project.

Each use of the system during test and full release automatically sent a "use report" electronically back to the project team at the Nijmegen facility. The report recorded the type of board under test, and a rating on a seven-point scale of how well the KBBDS was able to diagnose the fault. (This scale, described in the next section, was used for evaluation purposes.) The technician could also enter free form comments regarding performance. Thus, if the system could not find the fault, information was collected regarding the type of fault which occurred, and how the technician proceeded to diagnose the problem. This knowledge, promptly passed back to the project team in Nijmegen, could then be incorporated into the system by the knowledge engineers.
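The use report is essentially a small structured record. A hypothetical rendering of its contents in Python appears below; the field names and sample values are invented for illustration, not taken from Digital's implementation.

```python
# Hypothetical structure of a "use report" sent back to Nijmegen after each use.
from dataclasses import dataclass

@dataclass
class UseReport:
    board_type: str        # which Comet module was under test
    rating: int            # 0-6 scale: how well the KBBDS diagnosed the fault
    comments: str = ""     # free-form notes, e.g. how the technician proceeded
                           # when the system could not find the fault

report = UseReport(board_type="L0011", rating=2,
                   comments="Open trace near J4; found by visual inspection.")
```

However it is represented, the important design choice is that the feedback loop is automatic: every use of the system generates data for both evaluation and knowledge base maintenance.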
5.5 Testing

5.5.1 Test Release

Several months after the demonstration prototype was completed, an updated system with a more extensive knowledge base was ready for evaluation. A successful review at this stage would mean an official test release at one repair facility. (This facility was selected based on the willingness of the plant manager to participate in the test.) In order to allow for more complete evaluation, the system was installed at the proposed test site, in Evry, France, where local staff could somewhat informally examine it. This feedback could then be utilized in the review for test release. Members of the test release review team included the Evry facility plant managers, a repair engineer from Evry who had had access to the system, a representative from Digital's internal AI group, and two representatives from the advanced manufacturing group who had experience with another ES project.

The day long meeting's agenda began with an historical overview of the goals and objectives of the project, an outline of its current and planned status, and a demonstration, including "hands-on" usage of the system. Overall the project was on schedule, the designers having created a system containing knowledge regarding four Comet modules. In the process, they had gained significant understanding of the board diagnosis task, and solid experience in knowledge engineering. Problems regarding accessing domain experts and the task of gleaning expert knowledge were discussed. After the presentation by the development team and an opportunity to try the system themselves, the review members broke into groups, spending an hour and a half discussing the project. The team then regrouped to discuss their conclusions with each other and the project team, and subsequently provide feedback to the project team concerning further development and implementation.

From both the response of the technicians who had tried the KBBDS on site, and from the demonstration at the review meeting, it was felt that the system was ready for field testing. While no technical redirection was suggested, several proposals (made by the Evry test site management) concerning organizational issues were entertained and ultimately incorporated into further development plans. The first of these concerned better orientation of the personnel at the test facilities concerning the nature, scope and requirements of the project. It was suggested that the technicians, engineers and managers at the repair sites needed to be better informed prior to release of the system as to how the system works, the hardware required, who will use it, and what, if any, training would be required. It was agreed that a steering committee should be set up at the repair facilities, composed of project members, and facility managers and technicians; the objective of these steering groups would be to better manage the transition and maintenance of the KBBDS at each site. It was also suggested that several technicians be brought more closely into collaboration with the development team in their work towards enhancing the system.

Further, the nature of a formal evaluation mechanism was debated. The project team proposed that the system be evaluated during the test release phase according to the following mechanism. Each use of the system could produce one of seven possible results, listed below. The project team had specified the maximum or minimum percentage of boards to fall in each category; these are listed in the right hand column.

0) No Response                      < 30%
1) Totally Incorrect Response       <  5%
2) Unclear and Misleading Response  <  5%
3) Neutral Response                 < 10%
4) Somewhat Helpful Response        > 15%
5) Very Helpful Response            > 20%
6) System Found Fault               > 15%

The review team, in particular the manager of the Evry test site, proposed alternative criteria by which the system should be measured for successful exit from the test release phase. As opposed to specifying minimum or maximum percentages for each category, it was suggested that at least 60% of the trials produce results greater than 3, with an average performance of at least 4.5. Moreover, it was pointed out that of major concern to management was that average time to repair (ATTR) be reduced; a goal for success would be a reduction in this measure by 33%.
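Criteria of this kind are mechanical to check once the use reports are collected. The short Python sketch below illustrates the review team's proposed exit test against a batch of hypothetical ratings; the sample figures are invented for illustration, not taken from Digital's actual data.

```python
# Check of the test-release exit criteria proposed by the review team:
# >= 60% of trials rated above 3, and an average rating of at least 4.5.
# (The 33% ATTR reduction goal would be checked separately, against repair-time logs.)

def passes_exit_criteria(ratings, min_frac_above_3=0.60, min_average=4.5):
    """ratings: list of 0-6 scores, one per use report."""
    frac_above_3 = sum(r > 3 for r in ratings) / len(ratings)
    average = sum(ratings) / len(ratings)
    return frac_above_3 >= min_frac_above_3 and average >= min_average

sample = [6, 5, 5, 4, 6, 2, 5, 6, 4, 5, 0, 6]   # invented use-report ratings
print(passes_exit_criteria(sample))              # True: 83% above 3, average 4.5
```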
Finally, it was noted that no measure of operator satisfaction with the user interface had been established. It was proposed that feedback from the technicians regarding satisfaction with the user interface be obtained. These proposals were approved by the review and project teams as suitable criteria for evaluating the system during test release, in anticipation of full release.

5.5.2 Full Release

The system was in test release for some three months. At the end of this period, an evaluation was held to determine if and how a full release should be undertaken. This evaluation concluded that:

- Overall, the system performance generally met the technical standards required for full release. (Reliable ATTR data were not obtainable, however; it was agreed that these data would be appropriately determined and made available in the near future.)

- The system, from both the technicians' and managers' viewpoints, was extremely valuable as a learning tool; that is, as a means for novice technicians, or more experienced technicians unfamiliar with Comet boards, to "get up to speed" quickly and with minimum frustration. Novice technicians working with the KBBDS could maintain output at about the same level as very experienced technicians.

- Some concern was expressed by the manager of the repair facility concerning the time spent by his technicians in working with the knowledge engineers in updating the knowledge base. It was suggested that this be considered formally as an investment by Digital, as this time could not, under present corporate guidelines, be counted as "productive time" vis-a-vis the established repair metrics.

- Finally, the experience during the test phase reinforced the belief that careful preparation should be made at each repair facility prior to introducing the KBBDS. Personnel at the facilities should be made aware of the capabilities, requirements, limitations, and benefits of the system before it comes on site.

5.5.3 Phased Release

Full, incremental release to the other repair facilities followed. Six months from the start of this full diffusion of the system, a final evaluation was performed, based both on the criteria established at the review for test release and on a cost savings analysis.

Performance data were collected at the start, mid-point, and end of the six-month full release evaluation period. Continuous improvement vis-a-vis the seven-point rating scale was observed, though the level and improvement of performance varied between the boards. With respect to the average time to repair metric, as compared to process standards defined corporate-wide, the reduction in ATTR ranged between 6% and 20%, depending on the board. More impressive were the "real" ATTR improvements for novice technicians; using the system reduced the ATTR by approximately 50% for two of the boards. The feedback from both technicians and managers was very positive. Again, the big advantage of the system was perceived to be the sharp rise it induced in the learning curve, and the improved overall performance of novice technicians.

5.6 Training, Conversion/Installation

After successful evaluation of the test release system, the KBBDS was released to the other Comet repair facilities, one at a time. The project team in each case was responsible for presenting the system to the facility staff.
Training required was minimal.

5.7 Operations

The automatic, electronic forwarding of diagnostic reports provided a mechanism for keeping track of needed updates. These could be incorporated into the system by the knowledge engineers at the Nijmegen facility, and the updated knowledge bases tested. At regular intervals, the revised system could be transmitted to the local repair sites.

As described in the inception phase, the Comet KBBDS was seen as an early stage of an overall knowledge engineering process at Digital. With the future development of additional ES in mind, a rough analysis was performed comparing the estimated cost savings of the Comet system with the estimated costs of developing and maintaining a similar system. Based on the Comet project experience, the estimated total costs for development and maintenance (i.e., knowledge acquisition, validation, hardware, etc.) of another similar system would amount to between $25,000 and $50,000. Estimated annual savings due to the Comet system, based solely on the reductions in ATTR (relative to process standards) and in the learning curve, more than offset this cost. (These savings do not include those due to inventory or scrap reduction, nor those due to reductions of boards "in the pipeline".)

It should be noted, however, that the managers of the repair process find that the greatest value of the system is in the increased flexibility it provides. In bringing novices "up to speed" quickly, and in general allowing less experienced technicians to perform more proficiently, the Comet ES has allowed for 1) peak work periods to be handled with relative ease, 2) decreased dependence on the constant availability of expert technicians, and 3) greater freedom for the experts to work on the more difficult problems. Overall, the system was seen to have achieved its original objectives. Since the implementation of the Comet ES, the system has been expanded to include diagnostic knowledge about modules on Micro-Vax 2000 workstations. Current discussions focus on future directions of expert system technology as part of the overall strategy in Digital's repair and manufacturing environment. Figure 3 presents an overview of milestones of the Comet expert system project.

Figure 3 about here

The task of managing the development and implementation of a large scale expert system is in many ways similar to, but in substantive ways different from, that for large, traditional computer based systems. This paper has served to highlight the similarities, describe the differences and provide a rationale for the contrasts. Figure 4 summarizes the differences in the SA&D process for the two systems.

Figure 4 about here

Among the differences with important managerial implications are that an ES:

- Captures and manipulates knowledge, as opposed to information.
- Includes a working prototype phase.
- Is developed in an incremental, iterative style.
- Requires additional players; in particular, at least one knowledge engineer and one domain expert.
- May involve non-traditional software and hardware.
- Will likely involve significant revisions (updating) of the system once in operation.
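As a rough numerical illustration of the cost comparison reported under Operations above, the sketch below computes a simple payback period. The development cost range is the one given in the paper; the annual savings figure is an assumed placeholder, since the paper states only that savings more than offset the cost.

```python
# Back-of-the-envelope payback arithmetic. The cost range comes from
# the paper; ASSUMED_ANNUAL_SAVINGS is a hypothetical figure.
LOW_COST, HIGH_COST = 25_000, 50_000    # development + maintenance ($)
ASSUMED_ANNUAL_SAVINGS = 75_000         # placeholder: ATTR + learning curve

for cost in (LOW_COST, HIGH_COST):
    months = 12 * cost / ASSUMED_ANNUAL_SAVINGS
    print(f"cost ${cost:,}: payback in roughly {months:.0f} months")
# Under this assumption even the high-cost case pays back within a
# year, consistent with the paper's qualitative claim.
```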
Successful management of an ES project requires:

- Clearly recognizing when an ES solution is appropriate to the problem at hand, and when it is not.
- Understanding the likely financial resources required.
- Identifying attainable goals for the system, and the associated benefits.
- Specifying evaluation criteria at several phases of the project.
- Setting an appropriate timetable.
- Identifying an expert, and ensuring his participation.
- Enlisting or training a knowledge engineer.
- Caution concerning the choice of ES hardware and software, vis-a-vis maintenance and interfacing with existing systems.
- Understanding that conventional programming resources will no doubt be necessary.
- Expecting knowledge engineering to be tedious, time consuming, iterative and incremental.
- Managing expectations and skepticism.
- Considering the organizational and/or task changes which are likely to result from implementation of the system.
- Allotting resources and mechanisms for ongoing updating.

This paper has addressed each of these issues in detail by describing the life cycle of an expert system. Each phase has been defined, and a prescriptive guide toward managing the resources and responsibilities required over the course of this life cycle was presented. An example of the development, implementation and maintenance of a large-scale, multi-site expert system served to illustrate the conceptual framework.

References

Alavi, M., "An Assessment of the Prototyping Approach to Information Systems Development", Communications of the ACM, vol. 27, no. 6, June 1984.

Bobrow, D.G., Mittal, S. and Stefik, M.J., "Expert Systems: Perils and Promise", Communications of the ACM, vol. 29, no. 9, 1986.

Burch, J. and Grudnitski, G., Information Systems: Theory and Practice (4th ed.), Wiley, 1986.

Dym, C.L., "Issues in the Design and Implementation of Expert Systems", Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AI EDAM), vol. 1, no. 1, 1987.

Gilmore, J. and Howard, C., "Expert System Tool Evaluation", Proceedings of the Second Annual Conference on Expert Systems Tools and Applications, Avignon, France, 1986. (Authors' address: Artificial Intelligence Branch, Georgia Tech Research Institute, Atlanta, Georgia 30332, USA.)

Harmon, P. and King, D., Expert Systems: Artificial Intelligence in Business, John Wiley and Sons, 1985.

Hayes-Roth, F., Waterman, D. and Lenat, D., Building Expert Systems, Addison-Wesley, 1983.

Holsapple, C. and Whinston, A., Business Expert Systems, Irwin, 1987.

Huber, G., "The Nature and Design of Post-Industrial Organizations", Management Science, vol. 30, no. 8, August 1984.

Janson, M., "Applying a Pilot System and Prototyping Approach to Systems Development and Implementation", Information & Management, vol. 10, no. 4, 1986.

Keen, P. and Scott Morton, M., Decision Support Systems: An Organizational Perspective, Addison-Wesley, 1978.

Kneale, D., "How Coopers & Lybrand Put Expertise Into Its Computers", Wall Street Journal, p. 33, November 14, 1986.

Kozlov, A., "Rethinking Artificial Intelligence", Technology Business, May 1988.

Kupfer, A., "Now, Live Experts on a Floppy Disk", Fortune, October 12, 1987.

Leonard-Barton, D., "The Case for Integrative Innovation: An Expert System at Digital", Sloan Management Review, Fall 1987.
Leonard-Barton, D. and Sviokla, J., "Putting Expert Systems to Work", Harvard Business Review, March-April 1988.

Linden, E., "Intellicorp: The Selling of Artificial Intelligence", High Technology, March 1985.

Luconi, F., Malone, T. and Scott Morton, M.S., "Expert Systems: The Next Challenge for Managers", Sloan Management Review, Summer 1986.

Lucas, H., Information Systems Concepts for Management, McGraw-Hill, 1982.

McDermott, J., "R1 Revisited: Four Years in the Trenches", AI Magazine, Fall 1984.

Mettrey, W., "An Assessment of Tools for Building Large Knowledge-Based Systems", AI Magazine, vol. 8, no. 4, Winter 1987.

Senn, J., Analysis and Design of Information Systems, McGraw-Hill, 1984.

Sprague, R. and Carlson, E., Building Effective Decision Support Systems, Prentice-Hall, 1982.

Waterman, D., A Guide to Expert Systems, Addison-Wesley, 1986.

Inception
Feasibility Study
Systems Analysis
Design
Specifications
Testing
Training
Conversion/Installation
Operations

Figure 1. A Traditional Systems Analysis and Design Framework (from Lucas 1982)

Inception
Prototype Proposal
Approval of Prototype Development
Working Versions (Incremental Improvements)
Demonstration Prototype
Feasibility Report
Approval of Project
Development of System for Test Release
Approval of Test Release System (Evaluation Criteria Passed)
Test System Released
Evaluation of Test System Performance
Improvement of Test System for General Release
General Release (perhaps phased in)
Evaluation of General Release System Performance
Maintenance/Updating

Figure 2. Life Cycle of an Expert System

Knowledge Engineer brought on board; coordination started with external consultants; selection of Comet application.

April 1986: Evaluated ES shells; coordination with internal AI group.

July 1986: Developed first prototype (for one Comet module) on a PC using Personal Consultant Plus from Texas Instruments; decision to go to an OPS-5-based system on Vax; transport of prototype to Vax.

September - October 1986: Prototype reviewed; project approved; direction defined to include four Comet modules; second Knowledge Engineer brought on board.

November 1986 - May 1987: System capabilities expanded; review for test release; evaluation criteria better defined; problems/successes isolated.

June - September 1987: Trial implementation; approval for phased-in full release.

1987 - March 1988: Phased release to Comet field repair sites; remote update acquisition; project evaluation.

Figure 3. Milestones of the Comet ES project

Inception
  Traditional system: Approval for: feasibility study. Task selected: information-based.
  Expert system: Approval for: feasibility study and demonstration prototype. Task selected: knowledge-based.

Prototype
  Traditional system: Not applicable.
  Expert system: Working, limited version, incrementally developed.

Feasibility Study
  Expert system: Includes evaluation of prototype; ES-specific costs and benefits.

Analysis, Design, Specifications
  Traditional system: Consecutive, well defined stages.
  Expert system: Iterative, incremental process. Additional players required: knowledge engineer and domain expert. "Right" answer may not exist.
  Experts may disagree.

Testing
  Traditional system: Well established procedures.
  Expert system: Likelihood that users are skilled "knowledge workers". Possibility of non-standard hardware/software.

Operations
  Traditional system: Fixing errors. Occasional updates.
  Expert system: Likelihood of frequent updates.

Figure 4. Systems Analysis and Design: Contrasting Traditional and Expert Systems