INTELLIGENT INTERACTION IN DIAGNOSTIC EXPERT SYSTEMS

Y. LIROV and S. RAVIKUMAR
AT&T Bell Laboratories, Holmdel, NJ 07733, U.S.A.

Abstract--Advisory systems help to improve quality in manufacturing. Such systems, however, both human and computerized, are less than perfect and frequently not welcome. Sharp separation between working and learning modes is the main reason for the apparent hostility towards advisory systems. Intelligent interaction deploys computerized advisory capabilities by merging working and learning modes. We have developed a knowledge-based interactive graphic interface to a circuit pack diagnostic expert system. The graphic interface integrates both the domain knowledge (i.e. circuit pack) and the troubleshooting knowledge (i.e. diagnostic trees). Our interface dynamically changes the amount of detail presented to the user as well as the input choices that the user is allowed to make. These changes are made using knowledge-based models of the user and of the circuit pack troubleshooting domain. The resulting system, McR, instead of guiding the user by querying for input, monitors user actions, analyzes them and offers help when needed. McR is able both to advise "how-to-do-it" by reifying shallow knowledge from the deep knowledge, and to explain intelligently "how-does-it-work" by abstracting deep knowledge from the shallow knowledge. McR is used in conjunction with the STAREX expert system, which is installed at an AT&T factory.

1. INTRODUCTION

In this paper we discuss the development of advisory interfaces for electronic circuit pack troubleshooting. Advisory interface is a term used to describe collectively any training and reference material, on-line help and guidance, and any other user support means. A diagnostic expert system is an example of a computer-based advisor.

Computerized advisory systems are hard to build and difficult to use. Since a consultation is a break in work continuity, new users try to skip around in a training sequence or dismiss training altogether [1]. Additionally, as Ref. [2] noted: "Studying advisory problems and developing advisory solutions for the leading edge of human-computer interaction is hampered by the fact that the leading edge must have already been codified and deployed before advisory problems can even exist."

Another basic difficulty in developing advisory interfaces is the lack of understanding of coaching and interacting. For example, people often seek advice by making claims about the possible answer to their (unannounced) query. Does this mean that in such cases a direct question is less efficient? Another important issue is the frequency of advice: how often should advice be offered? A good parent does not teach the children on every occasion when something is not up to the standards of the parent. How detailed should the advice be? When is it more important to provide the declarative advice (i.e. how-does-it-work) and when the procedural advice (i.e. how-to-do-it)? In addition, it is important to note that the advisor is constantly under suspicion of giving the wrong advice. The advisor loses credibility after displaying even the smallest deficiencies. What are the techniques to restore credibility?
And what are the answers to all these questions when the advisory interface is a computer program?

1.1. Research and Development Topic

In this subsection we refine the above outlined research topic to a task: we specify what exactly we want a computer to do. Having developed an expert system for a manufacturing facility (STAREX, see Lirov [3]), we experiment with a possible solution to the most important question: how to deploy a less than perfect computerized advisor? Basically, we are trying to improve the classical expert systems which patronize the user by issuing queries, explaining only when asked, and giving the result only at the end of the session. Such systems treat the user as an object for providing the details which are necessary to complete the reasoning done by the machine. These systems also maintain a sharp separation between the learning and working modes.

1.1.1. The task

Carroll and Aaronson [2] address the same question by simulating an intelligent advisory system for an interactive software design package (the "Wizard of Oz" technique). We build on their work in that we consider the same issue, but we restrict ourselves to a relatively narrow application domain. By considering a narrow application domain we expect to be able to go a step further from simulating an interface to actually writing its code and observing its behavior. We build an add-on software module, called McR, which merges working and learning modes.

1.1.2. The domain

We chose electronic circuit pack diagnostics as our application domain for two reasons: first, diagnostics is the widest expert systems application domain; and second, diagnostics of electronic circuit packs is the better understood diagnostic problem because of the availability of deep knowledge [3, 4]. We build on the STAREX expert system in that we regard circuit pack diagnostic models as available. Since STAREX is a deployed expert system, we expect our actual intelligent advisory system to be deployable at the factory floor level. We note here that, to ease our programming efforts, we have used a simplified STAREX version which does not include truth maintenance. This issue is postponed for future research. On the other hand, we reuse the software modules performing the optimization of the diagnostic sequences [5].

For the sake of better paper readability, we digress briefly to explain the basic concepts in electronic circuit pack diagnostics. For an overview of the topic the reader is referred to Ref. [4].

1.1.2.1. Diagnosis. Diagnosis of an electronic circuit usually refers to the process of determining the faulty component(s) that cause an undesired behavior (output) of the given circuit for some (correctly) given input. Diagnosis can be regarded as a problem of economic optimization: there is a value associated with every component in the pack, as well as a fiscal value associated with the process of assembly and soldering of the pack. Thus, when a pack is declared faulty, it is desirable to replace only the faulty components. Moreover, the process of identification of the faulty components (the diagnostics) consists of a series of tests, each of which has an associated cost, expressed in such parameters as test setup time, component destruction, etc. The successful troubleshooter must be able to select an appropriate strategy, conduct measurements and replace components.
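To make the economic view concrete, here is a minimal Prolog sketch of cost-ranked test selection. It is an illustration only, not STAREX code; the predicate names and the example costs are hypothetical.

    % Hypothetical test inventory: test(TestName, Cost), where Cost lumps
    % together setup time, risk of component destruction, etc.
    test(probe_ic9_pin8, 2).
    test(probe_demux_input, 3).
    test(replace_c4, 10).        % replacement is itself a costly "test"

    % cheapest_test(-Test): select the candidate test with the lowest cost.
    cheapest_test(Test) :-
        findall(Cost-T, test(T, Cost), Pairs),
        keysort(Pairs, [_-Test|_]).

In practice the selection must also weigh how much each test narrows the suspect set; that sequencing optimization is the subject of the reused module of Ref. [5].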
Morris and Rouse [6] report that humans are not good at judging failure rates, that human performance degrades as systems become larger and more complex and/or in the face of time constraints, that presentation of theory of operation does not improve performance, and that proceduralization improves performance. Diagnosis is a difficult problem for beginners. An unskilled troubleshooter has difficulties in making an indictment, doing it correctly and doing it sufficiently fast. As a result, a beginner causes buildup in work in process, unnecessary test-repair cycles, unresolved manufacturing problems (e.g. a faulty robot or a bad batch of components), and even, sometimes, overlooked design problems. Therefore, the construction of automated means for troubleshooting guidance is justified both as an intellectual and as an economic challenge. However, the diagnostics problem (i.e. the construction of a computer algorithm which is capable of performing diagnostics) is NP-hard [7, 8].

1.1.2.2. Troubleshooting strategies. Two basic approaches and their combinations are currently utilized in industry for conventional diagnostic software development: the fault dictionary approach and the guided probe approach. The fault dictionary approach requires simulation of the faulty behavior of the system and subsequent storage of the simulation results (as well as the fault assumptions) in the fault dictionary. A "misbehaved" output of the pack under test can therefore be looked up in the fault dictionary. Fault dictionaries are not widely used since they are usually incomplete and ambiguous. The guided probe approach requires only simulation of the correctly functioning circuit pack. The basic tool in utilizing this approach is the blame-shifting mechanism applied after each measurement. The components upstream of the test point in the circuit are blamed only when the measurements do not agree with the expected results. The efficient sequencing of measurements becomes the main problem when implementing the guided probe approach.

The troubleshooter must employ some kind of strategy in searching for the source of the difficulty. Ref. [9] noted that poor troubleshooters made fewer tests before accepting a hypothesis as correct, they had more incorrect hypotheses, and they pursued incorrect hypotheses longer than did the better troubleshooters. Glaser and Phillips [10] associated more than 20% of strategic shortcomings (e.g. insufficient testing) with faulty inferences (e.g. misinterpretation of a test). Additionally, poor troubleshooters tend to have incomplete lists of hypotheses and to be frequently overconfident about their completeness [11].

1.1.3. The view

McR behaves like a diary for the user, where the user enters the measurements and their results. The system collects this data and tries to reconstruct the reasoning of the user. If McR discovers a significant reasoning fault on the user's behalf, then it offers guidance. If the fault is a technical fault (e.g. using the wrong measurement device), then the advice is a low-level, "how-to-do-it" advice. Otherwise, it is a "how-does-it-work" advice. Such advice can be about the test strategy (e.g. a diagnostic tree), or about the circuit pack (e.g. signal path).
This combination of two approaches (the procedural and the declarative system explanations) is most likely to be the most effective means of troubleshooting advice [6], since the system merges the Socratic and the "learning by doing" methods of instruction.

1.1.4. Additional benefits

An important expert systems characteristic is its flexibility to acquire knowledge. Expert systems usually mimic the behavior of experts, which combines knowledge about several domains. Experienced circuit pack troubleshooters, for example, have some understanding about the circuit pack, know how to use measurement equipment (e.g. oscilloscope), have knowledge about the manufacturing process and its weak spots (e.g. "that robot loses its calibration frequently and thus inserts wrong components", or "this transistor is often bad"), and know about generally good troubleshooting strategies (e.g. "divide and conquer"). Shallow knowledge bases contain rules which represent the combined expert knowledge. It is difficult to maintain such knowledge bases, as it is difficult to comprehend all the interdependencies between the rules. Deep knowledge bases, on the other hand, promote easier maintenance by segregating different kinds of knowledge in separate knowledge bases. The price for this convenience is the need for an integrated inference engine, which may take the form of a meta-interpreter [3, 4]. Construction of user interfaces for such knowledge bases is complicated by the necessity to maintain all the knowledge bases simultaneously. Our system is able to acquire different kinds of knowledge (e.g. electronic signal path or troubleshooting tree) and immediately show the implications in the compiled (shallow) form. We demonstrate this flexibility using an integrated graphics interface.

From the methodological point of view, this experiment helps to understand better the taxonomy of problems related to artificially intelligent diagnostics. In particular, we show that the contemporary subdivision of the issue into seven subproblems [4] is rather superficial: the user interface problem is not a stand-alone problem, to be solved separately. Its solution includes solving all the typical problems for knowledge-based applications, e.g. choosing knowledge representation, acquiring knowledge, etc.

1.2. The Approach

1.2.1. User model

Roughly, we develop the intelligence of the advisory system by creating a user model. The model can be used as a reference with which the actual user performance can be compared. The task of building an intelligent advisory system through a user model seems to be tractable for three reasons: first, we can inventory the errors of the analysts; second, we expect to develop shallow knowledge disassembly mechanisms to map from observations to user errors; and finally, we have developed in Ref. [3] a multi-source deep knowledge integration mechanism (reification), which we use to check the success of the disassembly of knowledge. Systematic shallow knowledge disassembly (extracting deep knowledge) is a new and interesting problem having much in common with machine learning.
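As a first step towards such a model, the inventory of analyst errors can be held as plain Prolog facts. The sketch below is ours, not STAREX code; the predicate names are hypothetical, and the entries anticipate the error-stereotype table presented later (Table 1, Section 2.3.1).

    % user_error(ObservedError, Stereotype): hypothetical inventory entries
    % pairing an observable analyst error with its interpretation.
    user_error(unnecessary_measurement, lack_of_plan).
    user_error(wrong_measurement, misunderstanding_of_schematic).
    user_error(early_conclusion, laziness).
    user_error(no_conclusion, lack_of_persistence).

    % classify(+Error, -Stereotype): map an observed error to a stereotype.
    classify(Error, Stereotype) :-
        user_error(Error, Stereotype).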
1.2.2. Reification and abstraction

Coding an intelligent advisory system is a large project requiring solutions to a variety of problems: designing knowledge representation and control strategy, building the knowledge base, integrating multiple sources of deep knowledge about electronic circuitry, developing the user model, acquiring knowledge and displaying advice in a variety of ways. Our previous report [3] addressed the first four problems, while in this paper we deal with the last three. We show that the algorithms for knowledge reification and abstraction are the cornerstones of intelligent advisory systems. This view holds promise for further development because it provides better understanding of both reification and its inverse, abstraction. Furthermore, we introduce a new use for abstraction: we view it as a tool to test user behavior patterns.

1.2.3. Performance criteria

The performance of the proposed method can be evaluated by testing it for correct and timely user error classification. While correct classification depends on the user model and diagnostic algorithm, the timeliness of classification depends on how much of the relevant information has already been preprocessed. Being able to preprocess most of the information beforehand holds promise of being able to deliver timely and correct advice.

1.2.4. Programming techniques

The programming approach is similar to that of knowledge-based programming, since we use knowledge to supplement the observed user actions to generate our interpretations. The difference, of course, is that the product of the system is not program code, but advice. We use object-oriented logic programming to develop the graphic interface, and metaprogramming for reification and abstraction.

1.2.5. The scope

Our method might scale up to bigger systems, if deep and differentiable shallow kinds of knowledge can be integrated, and a complete and finite inventory of human errors is available. Of course, certain types of intelligent help will always be missing because of the inherent brittleness of such systems. The system is honest about its limitations when it knows them: the system will tell that something is wrong but will not extend a temporary solution. We advocate such an approach for all advisory systems (machine and human) in order to maintain their credibility.

Our next question is: how well is the method understood? We confess that we do not understand the method very well. The reason is that the method depends on having complete knowledge about the system under diagnosis and about the human errors. We are unable to prove that we have completed the acquisition of either kind of knowledge, let alone the mapping between the human error inventory and the electronic circuitry deep knowledge. Thus, paradoxically, the only way to complete knowledge acquisition is to write down a program with incomplete knowledge, to deploy it, to provoke the difficulties, and to learn experimentally about the limitations of the method. As a result, we expect not only to expand the scope of the knowledge base but also to refine the knowledge about human analyst errors.
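The metaprogramming mentioned in Section 1.2.4 starts from the standard Prolog metainterpreter idiom; the reification and abstraction interpreters of Ref. [3] can be read as elaborations of it. For orientation only, the textbook "vanilla" metainterpreter is:

    % solve(+Goal): the vanilla Prolog metainterpreter. Metaprograms for
    % reification and abstraction are typically obtained by extending such
    % clauses with extra arguments (e.g. to collect a proof tree or a path).
    solve(true) :- !.
    solve((A, B)) :- !, solve(A), solve(B).
    solve(Goal) :- clause(Goal, Body), solve(Body).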
1.3. Design for Experimentation

Once we decided to use programming as the main exploration tool, we must verify that our program is going to support experimentation in addition to having the required performance level. To ensure "experimentability", we write the code in Prolog, in order to have all the inference mechanisms readily available for alteration. We will be able to observe the program's behavior both externally, via graphics display and textual messages, and internally, via Prolog execution trace. We demonstrate its correct knowledge through a set of test cases corresponding to the user faults. The program generally supports both "how-to-do-it" and "how-does-it-work" kinds of advice; it is not tuned to one particular kind of help. Performance of the program has two aspects: its functional and its time characteristics. Functionally, we expect to observe an increased number of rules and improved troubleshooting procedures.

Our understanding of interaction, together with a short review of some previous relevant work, is laid out in Section 2. An interface implementing our ideas of interaction is developed by efficiently combining knowledge and metaknowledge in Section 3.

2. INTERACTIVE APPROACH

A computer program is considered interactive when it requires some user participation in order to conclude its processing. Diagnostic expert systems are obviously interactive systems both at the knowledge acquisition and at the diagnosis phases. The degree of interactivity, the amount of freedom that the user is allowed, varies depending on the skill of the user. If the user is a programmer, then an editor is sufficient. Otherwise, more sophisticated tools are required. Some guidelines for constructing human-computer interfaces have been published by the DOD [12-15], but they do not provide the necessary information for determining the effectiveness of specific interfaces and their sophistication. As a rule of thumb, the program sophistication level is directly proportional to the user's familiarity with computer programming techniques.

SOPHIE (SOPHisticated Instructional Environment) is probably the best example of an intelligent interactive computer-based instructional system used for teaching electronics troubleshooting [16]. The system can be used in two modes: a team troubleshooting game and an interaction with an expert. In the game mode the players of one team inflict faults into a simulated electronics system and troubleshoot the simulated faults of the other team. SOPHIE is an example of an experimentally implemented learning-by-doing environment with a taxonomy of user errors and an advice feedback mechanism to the user. Unfortunately, the system lacks a graphic interface, does not handle significantly complex electronics equipment, and does not evaluate troubleshooting strategies.

2.1. Graphic Interaction

Interactive graphics systems require that both the inputs and outputs be specified graphically. Graphic editing of the knowledge base has been proposed recently [17-19] as a means of interaction with the knowledge base. An expertise transfer system (ETS) [20] has been proposed to generate the rules from the user-supplied elements and their attributes and corresponding ratings.
The system identifies conflicts and ambiguities in the rules (but not omissions), and asks the expert for modifications. Since ETS does not use a model of the unit under test, numerous similar rules may need to be entered (e.g. pertaining to the same faults on different channels) and at the same time ETS may still not notice some of the missing rules. SandKAST (Sandia Knowledge Acquisition System, Hill et al. [19]) uses a directed acyclic graph structure as a formalism for knowledge acquisition. In this graph, a node represents a state of the troubleshooting session with an associated set of possible faults. An arc represents a test to be applied in the context of the source node. The graph structure provides the means for viewing the knowledge acquisition and troubleshooting processes as well as for graphic editing of the knowledge base. SandKAST, although it alleviates most of the problems that arise with knowledge base maintenance, runs only in the KEE environment on a Symbolics computer and relies heavily on KEE's graphics utilities and object orientation. IMPULSE [18] permits the user to interact with the knowledge base via multiple windows and graphic displays of the knowledge. Editing, however, is allowed only at the textual level and not graphically. The use of sophisticated graphics interfaces in computer aided design, simulation, or instruction systems has also been suggested in STEAMER [21], Omega [22], Pecan [23], Garden [24], Wlisp [25], PV [26], PAW [27], ThingLab [28] and GUIDON-WATCH [17]. All of the above systems emphasize and make use of the abilities of the brain to detect spatial patterns and reason upon them [29].

2.2. Flexible Interaction

Few of the above mentioned systems, however, call attention to the questions of evaluating advisory strategies and adapting them, and all leave those questions open. Consequently, all of the above systems lack the flexibility of adjusting the user interface to the sophistication level of the user. As a result, the systems are either too complex at the initial stages or too detailed at the subsequent stages. In either case the users become annoyed with the system and frequently stop using it.

The concept of an adaptive interface [30-32] is an extension of the interactive user interface idea. An interface system becomes an adaptive system in two ways: active and passive. The passive way allows users to modify the interface, so it is tailored to meet the specific user's needs. Although resulting in a more suitable interface, the burden of adapting it is left to the user. An active interface modifies itself. Architectures of active interfaces [33-35] address explicitly the issues of dialogue [36, 37] and user modeling [38, 39]. McR is an active user interface.

Crockford [40] identified user involvement as the most important principle in the design of interactive programs. He characterized user involvement as having "more to do with taking part than in making decision". The user choices must affect the presentation. The user is viewed as a part of the program. Following this approach, our system maintains a model of the user. According to the user model, the system defines the kind of data and rules to operate on.
Such an approach allows the user to enter incomplete specifications of the problem and lets the system make knowledge-based interpretations of user intentions. Thus, instead of a one-directional interface at a single level of complexity, the system interface is flexible. Consider, for example, arcade games. Depending on the number of accumulated points, the speed and the complexity of the game increase. But, contrary to arcade games, where the options given to the user to input into the system remain fixed, we require the user options to be dynamic and fit the user's needs. The user model is constantly reevaluated by challenging the user at every step. Such a dynamic approach not only keeps the program updated about the level of the user's proficiency, but it also allows some doubt in the user's mind about the outcome of the interaction. Therefore, the user remains interested in the interaction. The final diagnosis then is the "happy end" which reinforces the user's interest in using the program.

Recently Fischer et al. [41] reported on the need for combining advice-giving strategies. They distinguish between active strategies, where the system provides advice by interrupting the dialogue, and passive strategies, where the user must explicitly ask for advice. Passive strategies usually employ a Socratic style of interaction where the system poses questions and the user is expected to provide answers. Such systems often patronize the users, treating them as information-providing tools, needed only to conclude the reasoning by the computer. Active strategies often employ learning-by-doing environments, where the user's actions are compared with the ideal actions and feedback is provided to improve the user's responses towards the expert prototype. McR combines the strategies of interaction.

2.3. Adaptive Interface Architecture

Knowledge-based adaptive interfaces include four kinds of knowledge [36, 42, 43]: (a) user model; (b) interaction and dialogue management; (c) knowledge of the task and (d) system characteristics. In the next section we describe user model and interaction management.

2.3.1. User model

Kass and Finin [44] claim that individualized user models are essential for good explanations when the users differ in their knowledge of the domain. Loosely speaking, they perceive the user model as a knowledge source containing explicit assumptions on all aspects of the user that may be relevant for the interaction of the system with the user. Any interactive system has a user model. However, most systems maintain an implicit user model because of the assumptions about the user made during system design. Representing the user model explicitly allows it to be maintained dynamically during the program execution, and thus to achieve the desired level of system interface flexibility (cf. Coombs and Alty [45], who stored error patterns as a normative user model for users of Prolog). The simplest technique for building user models involves classifying users as novices and updating their status as they demonstrate improvements [46]. A more discriminating technique, allowing more efficient teaching, involves comparing the user's performance with that of the expert.
It is assumed that the user knows about the underlying concept if its derivative is used correctly [47, 48]. On the other hand, by classifying the errors that are made by the user, the underlying deficiencies may be uncovered. The most sophisticated technique is stereotype user modeling, which involves describing the user by a set of characteristics [49]. Examples of such systems include a bibliographical system GRUNDY [39], and a real estate recommendation system [50].

Table 1. Typical analyst errors and corresponding user stereotypes

    Error                               Interpretation
    Unnecessary measurement             Lack of plan
    Wrong measurement                   Misunderstanding of schematic, lack of skill to use equipment
    Wrong conclusion                    Misunderstanding of schematic
    Early conclusion                    Laziness
    Late conclusion                     Lack of self-confidence
    No conclusion                       Lack of persistence
    Lack of technological suggestions   "Ivory tower" complex, lack of understanding of manufacturing process

McR uses a troubleshooter's model which is a combination of the stereotyping and user error classification techniques. The user model is useful during the troubleshooting session to select the way in which troubleshooting advice is constructed and displayed. Once such an error-stereotype table (Table 1) is constructed, the troubleshooters can be ranked, depending on a linear combination of their scores in the table. When a novice analyst is interacting with the program, the entire advice is displayed, including the component location, description of the test equipment used, measurement procedure and the meaning of the readouts. As the user becomes more proficient in the use of the system, such a detailed display becomes annoying. A part of the advice is now sufficient. At the next level of proficiency, it is enough to display just the location of the measured component instead of the complete advice. This information can now be supplemented by the display of the relevant part of the fault tree. Finally, for the expert, it is enough to display just the list of suspected components and a scaled down fault tree. We notice that, as the user sophistication increases, we may condense more and more information at increasingly abstract levels.

We note here that in order to diagnose correctly the user's errors, it is not sufficient to observe the last user action. The trace of the entire process that the user traversed in arriving at the current situation is needed in order to make a correct conclusion about the user's errors [51-53]. For example, if a user, troubleshooting a signal path consisting of the components C1, C2, C3, C4, C5 in that order, and having observed good input to C1 and bad output from C4, measures the output from C5, then the user most likely misunderstood the signal path. On the other hand, if the user measures the input to C2, then it is most likely that she does not have a good troubleshooting plan.

2.4. Logic Programming Implementation

Before continuing with the discussion about user modeling and interaction, we digress briefly to highlight several points on system implementation.
Our graphic interactive interface has been constructed by efficiently combining knowledge and metaknowledge. Aiello et al. [54] describe three approaches to embedding metaknowledge in a system. The most primitive approach is to "hardwire" metaknowledge by simply writing it as pieces of code in the system. Such an approach results in extending the implementation of the system with the procedures that actually instantiate the variables of metaknowledge. An example of such an approach is the early rule-based expert systems, where each separate case has to be covered by a specific rule instance. A second approach to combining knowledge and metaknowledge is the metalanguage approach, as in ML [55, 56]. However, using this approach prevents the higher levels of metareasoning. A third approach allows the user to access both the object and the metalevel simultaneously, as in FOL [57]. This approach requires that both the language and the metalanguage have the same form of expression. Another requirement in the third approach is that both levels have access to the inference engine, allowing for proving both theorems and metatheorems. The user is referred to Ref. [54] for further discussion about the ways of amalgamating knowledge and metaknowledge.

It suffices to note here that McR implements the third approach using the metaprogramming techniques of Prolog. Accordingly, we may state that a subcircuit X is faulty as follows:

    indict(X, Z) :-
        suspect(X),
        generate_suspects(X, Suspects),
        generate_ideas(Suspects, Ideas),
        try_ideas(Ideas, Suspects, Z).

The predicate suspect/1 is true if its argument X has good input and bad output (this fact may be acquired from the user). The predicates generate_suspects/2 and generate_ideas/2 are the knowledge base access predicates, where the first is used to subdivide the circuit X into a set of faulty subcircuits, and the second to derive the set of indictment methods associated with the subcircuits. Here the object-level fact about the component Z being the reason for circuit X to fail is derived provided that the metalevel condition of provability holds between the set of relevant facts and the goal try_ideas(Ideas, Suspects, Z):

    try_ideas([Idea|Rest], Suspects, BadGuy) :-
        do_or(Idea, Suspects, BadGuy),
        try_ideas(Rest, Suspects, BadGuy).

    do_or(A, [X|Y], Z) :-
        T =.. [X, A, Z],
        call(T),
        do_or(A, Y, Z).

The meta-predicate try_ideas/3 creates and executes (do_or/3) the goals required to indict the suspected subcircuits, as long as there are ways to indict (ideas) and as long as all the subcircuits are not indicted. Note that a circuit pack can be faulty for different reasons, and hence we do not declare X indicted by the following object-level sentence:

    indict(X, Z) :- suspect(X), idea(Z).

The reader is referred to Ref. [3] for further information about the code of the circuit pack diagnostic expert system.

Our interface consists of two basic modules: the test tree manager and the user model manager. The test tree manager consists of the modules to reify the diagnostic tree and to handle the knowledge bases of advice and of signal path and pins in the circuit pack.
It also manages the database of relations which define the mapping between the nodes of the test tree and the graphical objects representing the top view of the components [58]. The user model manager maintains a counter representing the level of the user's proficiency and the conditions for achieving the next level of proficiency. When the value of the counter exceeds a preset number of allowable errors, the system issues the corresponding advice.

2.5. Adaptive Troubleshooting Interface

Any user modeling system must provide the following important functions: a user model representation scheme, an interface between the user model and the rest of the expert system, and user model acquisition. Our computational paradigm for implementing intelligent user interfaces is a loop consisting of the following four activities: monitor, abstract, compare, and react. Therefore, McR consists of a monitor to acquire knowledge about the user, a fault identifier to map from user actions to user descriptions, and a proficiency table to map from user model to advice level.

The monitor simply presents the user with the top view of the circuit pack and accepts from the user the information about the troubleshooting session. This information contains the locations of the measurements and their results. The user fault identifier matches the above information to one of the diagnostic trees developed at the knowledge acquisition stage. When a significant discrepancy is identified, the user is offered help and the proficiency counter is decremented.

Fig. 1. Adaptive troubleshooting knowledge base schematic. [Figure: user actions enter the user interface; diagnostic tree decomposition, signal path manipulation and user model rules produce the troubleshooting advice.]

Fig. 2. Advice at elementary level. [Figure: top view of the circuit pack with an advice panel listing the test location, the expected value and the measuring device.]

Table 1 lists the possible user faults. We differentiate between three basic kinds of troubleshooting errors: misunderstanding of the schematic, misunderstanding of the troubleshooting strategy and misinterpretation of the measurement. The proficiency level is matched also with the appropriate level of abstraction which should be used when presenting information to the user. A low proficiency level corresponds to a high level of detail and vice versa. The schematic and measurement related errors can be identified by comparing the true signal path with the signal path abstracted from observations of the user's measurement sequence. The abstraction rules are presented in Section 2.6. The troubleshooting strategy related errors can be identified by comparing the measurement sequence with the ideal troubleshooting tree.

Using a rule-based knowledge representation, we may subdivide our system into a hierarchy of rules (Fig. 1). At the first level are the rules which decompose the knowledge base into a sequence of components corresponding to the path of signal flow in the unit under test. At the second level are the rules which use the user-supplied measurements to manipulate the sequence of components obtained at the first level. Every manipulation triggers a rule in the third set of rules, which describe the human troubleshooter's behavior patterns. Every rule in the third set may issue a troubleshooting advice.

2.6. Diagnostic Tree Decomposition Rules

The diagnostic tree decomposition rules are used to generate an abstract signal path which serves as a model of the actual signal path. This model serves the purpose of monitoring and analyzing, in a convenient way, the sequence of actions of the human troubleshooter.

Rule 2.6.1. If the current diagnostic rule has subrules, then invoke Rule 2.6.1 for each of the subrules, collect the current rule id in the abstract signal path, and invoke Rules 2.6.2 and 2.6.3.

Rule 2.6.2. If the current diagnostic rule is a replace rule, then tag the corresponding entry in the signal path as a replace entry.

Rule 2.6.3. If the current diagnostic rule is a test rule, then tag the corresponding entry in the signal path as a test entry.

Note. The resulting signal path may contain non-unique elements, which are ignored by Rules 2.6.2 and 2.6.3.
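Rules 2.6.1-2.6.3 amount to a post-order traversal of the diagnostic tree. The following Prolog sketch is our reading of them, under an assumed tree representation rule(Id, Kind, Subrules), where Kind is test or replace; the actual STAREX data structures may differ, and the placement of the current rule id after its subrules' entries is our assumption.

    % decompose(+DiagnosticRule, -AbstractSignalPath)
    % Rule 2.6.1: recurse over the subrules, then collect the current rule id,
    % tagged as a test or replace entry (Rules 2.6.2 and 2.6.3).
    decompose(rule(Id, Kind, Subrules), Path) :-
        maplist(decompose, Subrules, SubPaths),
        append(SubPaths, Upstream),
        append(Upstream, [entry(Id, Kind)], Path).

For example, decompose(rule(r0, test, [rule(r1, test, []), rule(r2, replace, [])]), P) binds P = [entry(r1, test), entry(r2, replace), entry(r0, test)].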
2.7. Abstract Signal Path Manipulation Rules

The abstract signal path manipulation rules maintain the current status of the model derived by the previous set of rules. The model is updated either by deleting those of its parts which become obsolete due to the results of test observations, or by replacing it with a new model due to a change in the troubleshooting strategy.

Rule 2.7.1. If the outcome of the test is good, then delete from the current abstract signal path all upstream entries.

Rule 2.7.2. If the outcome of the test is bad, then delete from the current abstract signal path all downstream entries.

Rule 2.7.3. If the current test point is the root of the diagnostic tree, then select a new diagnostic tree.

2.8. User Modeling Rules

The last set of rules actually selects the appropriate troubleshooting advice. These rules are used to decide whether to dispense a high level advice, due to a minor troubleshooting strategy mistake, or to advise at a more detailed level. Such advice is given because of a significant lack of circuit pack understanding on the part of the human troubleshooter, discovered by one of the rules.

Rule 2.8.1. If the current test point does not belong to the abstract signal path, then display the signal path.

Rule 2.8.2. If the current test point differs from the root of the current diagnostic tree, then display the diagnostic tree.

Rule 2.8.3. If Rule 2.8.1 or 2.8.2 has been activated more than three times, then dispense a conventional troubleshooting advice.

2.9. Intelligent Interaction Example

The following paragraphs illustrate the different kinds of advice the system can issue depending on the user. The most detailed advice, which would be given to a novice user, is shown in Fig. 2. In this mode the system advises the user on what to do by telling her the location to be tested, the value to be expected, the measuring device to use and the procedure to be followed for making the measurement. This advice would be given to a user who does not have a troubleshooting strategy and whose proficiency counter value is low.

The next, more advanced level of advice, which would be given to a troubleshooter who has a basic understanding of the test set, the various measuring devices, and the circuit pack, is shown in Fig. 3. Now the system displays the entire diagnostic tree, which was constructed based on good troubleshooting strategies (e.g. "divide and conquer") for this particular pack. The system identifies the user as one who doesn't have a good troubleshooting plan and who might need a longer time to diagnose a fault. The tree displays only the location to be tested.

Fig. 3. A more advanced advice. [Figure: diagnostic tree display with the message "Error in test strategy - consult diagnostic tree".]

Finally, the most advanced level of advice is shown in Fig. 4. Such advice would be given to an experienced troubleshooter. In this mode the system gives advice on how-does-it-work by displaying the signal path. The user is responsible for the troubleshooting strategy and also for completing any necessary details.

3. DISCUSSION

Since consultation interrupts working, advisory systems, both human and computerized, are frequently not welcome.
Intelligent interaction deploys computerized advisory capabilities by merging working and learning modes. In this paper we explore the user modeling technique to implement intelligent interaction. If explanation (communicating knowledge to the user by the program) is viewed as a process of human knowledge acquisition, then a user model must be maintained by the program as a presumed human knowledge representation scheme. Carroll and Aaronson [2] showed how it is possible both to frustrate and to help people by providing "intelligent" help. We are dealing with the question of how to deploy a less than perfect advisory capability without having a sound theoretical ground. We believe that a flexible interaction environment, based on a human model maintained by the computer program, can alleviate some of the user frustrations. The flexibility of the interaction is achieved by allowing the advisory knowledge base to be queried at different levels of detail, depending on the level of user sophistication. To obtain realistic experience and insight we chose to work in an area of engineering, mental, and economic significance: circuit pack diagnostics. In this paper we describe the technical aspects of what we perceive to be an intelligent user interface to an advisory system of sufficient economic impact. McR is a knowledge-based interactive graphic interface to a circuit pack diagnostic expert system.

Fig. 4. The most advanced and the least detailed advice. [Figure: signal path display of bipolar and demux components with the message "Schematic error - consult signal path".]

McR integrates three kinds of knowledge: domain knowledge (i.e. circuit pack), troubleshooting knowledge (i.e. diagnostic trees) and user knowledge (i.e. stereotype table). McR is an active interface which dynamically changes the amount of detail presented to the user as well as the input choices that the user is allowed to make. These changes are made using a knowledge-based model of the user and of the circuit pack troubleshooting domain. McR combines the strategies of interaction: it is able to advise both on "how-to-do-it" and on "how-does-it-work". While "how-to-do-it" advice is concerned with the measurement procedures [e.g. instrument (scope, etc.) and location (ic, pin)], the "how-does-it-work" advice deals with the deeper knowledge representing the flow of the signal and the sequence of measurements. We have demonstrated that the analysis of user actions requires disassembly of observations along the different kinds of knowledge. Thus we develop special knowledge abstraction algorithms along with reification algorithms. The resulting system, instead of guiding the user by querying for input, monitors user actions and offers help when needed. McR is used in conjunction with the STAREX expert system, which is currently installed at an AT&T factory.

Methodologically, we have shown a deep relationship between the problem of user interface construction, the problem of knowledge acquisition, and the problem of efficient diagnostic reasoning. All of these problems involve optimal selection of the sequences of measurements and, in fact, all of our implementations use the same software module for this purpose [5]. We have also proposed the basic intelligent interaction paradigm to be a loop consisting of monitor, abstract, compare, and react activities (the resulting system, McR, demonstrates its software implementability, and its name is a mnemonic for the paradigm).
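As a schematic summary, the monitor-abstract-compare-react loop fits in a few lines of Prolog. This is only a sketch of the paradigm, not McR source; all four predicates are hypothetical names standing for the corresponding McR modules.

    % mcr_loop(+Model0): one cycle of the intelligent interaction paradigm;
    % the recursion carries the updated user model into the next cycle.
    mcr_loop(Model0) :-
        monitor(Action),                  % observe one user action
        abstract(Action, Step),           % lift it onto the abstract signal path
        compare(Step, Model0, Verdict),   % match it against the user model
        react(Verdict, Model0, Model),    % advise, update the proficiency counter
        mcr_loop(Model).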
Additionally, we have demonstrated that metaprogramming techniques have great potential in implementing intelligent interaction systems. McR, in particular, has been developed entirely in Prolog, a logic programming language with an especially convenient environment for developing metainterpreters.

An important area for future research is that of improving the communication backchannel to allow the user to respond to the help in a less constrained way. Another possible way to improve our system is to develop an automated user activities monitoring system. We also foresee future intelligent advisors being able to perform "what if" analysis of user actions. A way to do such a task is to substitute the "correct" parts of the model with the assumed user actions and results, reify the knowledge base and compare the resulting tree with the ideal one. Additionally, we plan to incorporate truth maintenance mechanisms in intelligent user interfaces. And finally, we foresee using intelligent user interfaces for knowledge acquisition.

Acknowledgements--We gratefully acknowledge the support of Walt Lawrence, who made many invaluable comments and suggestions; On-Ching Yue for guidance and insight; and Leon Levy for patient reading and comments.

REFERENCES

1. J. Carroll and J. McKendree, Interface design issues for advice-giving expert systems. Commun. ACM 30(1), 14-31 (1987).
2. J. Carroll and A. Aaronson, Learning by doing with simulated intelligent help. Commun. ACM 31(9), 1064-1079 (1988).
3. Y. Lirov, STAREX - simultaneous test and replace circuit pack troubleshooting expert system prototyping and implementation. Engng Applic. Artif. Intell. 2, 3-18 (1989).
4. Y. Lirov, Circuit pack diagnostic expert systems - a survey. Computers Math. Applic. 18(4), 381-398 (1989).
5. Y. Lirov and O. Yue, Circuit pack troubleshooting via semantic control I: goal selection. Proc. IEEE Wkshp Artificial Intelligence for Industrial Applications, Hitachi, Japan, pp. 118-122 (1988).
6. N. Morris and W. Rouse, Review and evaluation of empirical research in troubleshooting. Human Factors 27(5), 503-530 (1985).
7. L. Hyafil and R. L. Rivest, Constructing optimal binary decision trees is NP-complete. Inf. Process. Lett. 5(1), 15-17 (1976).
8. O. H. Ibarra and S. Sahni, Polynomially complete fault detection problems. IEEE Trans. Comput. C-24(3), 242-250 (1976).
9. J. L. Saupe, Troubleshooting electronic equipment: an empirical approach to the identification of certain requirements of a maintenance occupation. Ph.D. Dissertation, University of Illinois (1954).
10. R. Glaser and J. Phillips, An analysis of proficiency for guided missile personnel: III, patterns of troubleshooting behavior. Technical Bulletin 55-16, American Institute for Research, Washington, D.C., Aug. (1954).
11. C. Gettys, C. Manning, T. Mehle and S. Fisher, Hypothesis generation: a final report of three years of research. Technical Report 15-10-80, Decision Process Laboratory, University of Oklahoma, Norman, Okla, Oct. (1980).
12. Human engineering criteria for military systems, equipment, and facilities. Report MIL-STD 1472C, Department of Defense, Washington, D.C. (1981).
13. W. Buxton, M. R. Lamb, D. Sherman and K. C. Smith, Towards a comprehensive user interface management system. Comput. Graph. 17(3), 35-42 (1983).
14. S. L. Smith and J. N. Mosier, Design guidelines for the user-system interface software. The Mitre Corporation Technical Report ESD-TR-84-190, Bedford, Mass. (1984).
15. A. F.
Norcio and J. Stanley, Adaptive human-computer interfaces. Technical Report NRL-9148, Naval Research Laboratory, Washington, D.C., Sept. (1988).
16. J. Brown, R. Burton and J. deKleer, Pedagogical, natural language, and knowledge engineering techniques in SOPHIE I, II and III. In Intelligent Tutoring Systems (Eds D. Sleeman and J. Brown), pp. 227-282. Academic Press, New York (1982).
17. M. H. Richer and W. J. Clancey, GUIDON-WATCH: a graphic interface for viewing a knowledge based system. Technical Report STAN-CS-85-1068, Stanford University, Calif., Aug. (1985).
18. E. Schoen and R. G. Smith, IMPULSE: a display oriented editor for STROBE. Proc. AAAI'86, pp. 356-358 (1986).
19. F. N. Hill, J. D. Ward and A. L. Yates, SandKAST: an automated knowledge acquisition system. SAND-87-0364C, Sandia, Dec. (1987).
20. J. H. Boose, Personal construct theory and the transfer of human expertise. Proc. AAAI'84 (1984).
21. A. Stevens, B. Roberts and L. Stead, The use of a sophisticated graphics interface in computer-assisted instruction. IEEE Comput. Graph. Applic. Mar./Apr., 25-30 (1983).
22. M. L. Powell and M. A. Linton, Visual abstractions in an interactive programming environment. Proc. ACM SIGPLAN, Sigplan Notices, Vol. 18, pp. 14-21, Jun. (1983).
23. S. P. Reiss, Graphical program development with PECAN program development systems. Proc. ACM SIGSOFT/SIGPLAN Software Engineering Symp. Practical Software Development Environments, Apr. (1984); also printed as Sigplan Not. 19(5), 30-41 (1984).
24. S. P. Reiss, A conceptual programming environment. Proc. 9th Int. Conf. Software Engineering, Monterey, Calif., Mar./Apr. (1987).
25. C. Rathke, Human-computer communication meets software engineering. Proc. 9th Int. Conf. Software Engineering, Monterey, Calif., Mar./Apr. (1987).
26. G. P. Brown, R. T. Carling, C. F. Herot, D. A. Kramlich and P. Souza, Program visualization: graphical support for software development. Computer Aug., 27-34 (1985).
27. B. Melamed and R. J. T. Morris, Visual simulation: the performance analysis workstation. Computer Aug., 87-94 (1985).
28. A. Borning, The programming language aspects of ThingLab, a constraint-oriented simulation laboratory. ACM Trans. Progrng Lang. Syst. 3(4), 353-387 (1981).
29. T. Dudley, Graphics in software design. Comput. Graph. Wld Feb., 35-42 (1986).
30. S. Greenberg and I. H. Witten, Adaptive personalized interfaces - a question of viability. Behaviour Inf. Technol. 4(1), 31-45 (1985).
31. E. A. Edmonds, Adaptive man-computer interfaces. In Computing Skills and the User Interface (Eds M. J. Coombs and J. L. Alty), pp. 389-426. Academic Press, London (1981).
32. M. V. Mason and R. C. Thomas, Experimental adaptive interface. Inf. Technol. Res. Des. Applic. 3(3), 162-167 (1984).
33. W. Sherman, SAUCI: self-adaptive user-computer interfaces. Ph.D. Dissertation, University of Pittsburgh, Pittsburgh, Pa. (1986).
34. R. Reichman-Adar, Extended person-machine interface. Artif. Intell. 22, 157-218 (1984).
35. W. B. Rouse, Human-computer interaction in the control of dynamic systems. Computing Surv. 13, 71-100 (1981).
36. E. Rissland, Ingredients of intelligent user interfaces. Int. J. Man-Mach. Stud. 21, 377-388 (1984).
37. W. Wahlster and A. Kobsa, Dialogue-based user models. Proc. IEEE 74(7), 948-960 (1986).
38. H. Mozeico, A human/computer interface to accommodate user learning stages. Commun. ACM 25(2), 100-104 (1982).
39. E. Rich, User modeling via stereotypes. Cogn. Sci. 3, 329-354 (1979).
40. D.
Crockford, Stand by for fun. In Interactive Multimedia (Eds S. Ambron and K. Hooper). Microsoft Press (1988).
41. G. Fischer, A. Lemke and T. Schwab, Knowledge-based help systems. Proc. CHI'85 Human Factors in Computing Systems, San Francisco, Calif., 14-17 Apr., pp. 161-167. ACM, New York (1985).
42. I. Monarch and J. Carbonell, CoalSORT: a knowledge-based interface. IEEE Expert 2(1), 39-53 (1987).
43. W. B. Croft, The role of context and adaptation in user interfaces. Int. J. Man-Mach. Stud. 21, 283-292 (1984).
44. R. Kass and T. Finin, The need for user models in generating expert system explanations. Int. J. Expert Syst. (in press).
45. M. J. Coombs and J. L. Alty, Expert systems: an alternative paradigm. Int. J. Man-Mach. Stud. 20, 21-43 (1984).
46. D. A. Norman, Design rules based upon analyses of human error. Commun. ACM 26, 254-258 (1983).
47. M. Matz, Towards a process model for high school algebra errors. In Intelligent Tutoring Systems (Eds D. Sleeman and J. S. Brown). Academic Press, New York (1982).
48. R. Burton and J. S. Brown, A tutoring and student modeling paradigm for gaming environments. Proc. ACM SIGCSE/SIGCUE Joint Symp. (1976).
49. T. Finin and D. Drager, GUMS: a general user modeling system. Technical Report MS-CIS-86-35, University of Pennsylvania School of Engineering and Applied Science, May (1986).
50. K. Morik and C.-R. Rollinger, The real estate agent - modeling users by uncertain reasoning. AI Mag., pp. 44-52 (1985).
51. J. F. Allen and C. R. Perrault, Analyzing intentions in utterances. Artif. Intell. 15, 143-178 (1980).
52. R. Burton and J. S. Brown, An investigation of computer coaching for informal learning activities. In Intelligent Tutoring Systems (Eds D. Sleeman and J. S. Brown), pp. 79-98. Academic Press, New York (1982).
53. M. R. Genesereth, The role of plans in intelligent teaching systems. In Intelligent Tutoring Systems (Eds D. Sleeman and J. S. Brown), pp. 137-155. Academic Press, New York (1982).
54. L. Aiello, C. Cecchi and D. Sartini, Representation and use of metaknowledge. Proc. IEEE 74(10), 1304-1321 (1986).
55. M. Gordon, R. Milner and C. Wadsworth, Edinburgh LCF: a mechanized logic of computation. Lecture Notes in Computer Science 78. Springer, New York (1979).
56. A. Wikström, Functional Programming Using Standard ML. Prentice-Hall, Englewood Cliffs, N.J. (1987).
57. R. Weyhrauch, Prolegomena to a theory of mechanized formal reasoning. Artif. Intell. 13, 133-170 (1980).
58. Y. Lirov, Computer aided software engineering of expert systems. Expert Systems with Applications. Pergamon Press, Oxford (in press).