Mikrochim. Acta [Wien] 1991, II, 493-503
Mikrochimica Acta © Springer-Verlag 1991. Printed in Austria

Expert Systems for Method Development and Validation in HPLC

Mary Mulholland1,*, N. Walker1, J. A. van Leeuwen2, Luitgard Buydens2, F. Maris3, H. Hindriks3, and Peter J. Schoenmakers4

1 Philips Scientific, Cambridge CB1 2PQ, UK
2 University of Nijmegen, Nijmegen, The Netherlands
3 Organon International, Oss, The Netherlands
4 Philips Research Laboratories, NL-5600 JA Eindhoven, The Netherlands

Abstract. ESCA (Expert Systems Applied to Chemical Analysis) started its research in March 1987, with the aim of building prototype expert systems for HPLC method development. Results of this research have been published as the work has progressed. The project is now completed and this paper summarises some of the overall project conclusions. Seven different expert systems have been built which tackle problems throughout the process of method development: four stand-alone systems and three integrated systems. The object of ESCA was to evaluate the applicability of expert-system technology to analytical chemistry, and not all the systems were built for commercial use. Many of the systems tackle problems specific to one or more of the partners and thus may not be useful outside this environment. However, the results of the work are still pertinent to analysts wishing to build their own systems. These results are described, but the emphasis of the paper is on the systems developed for method validation.

Method validation for HPLC is a complex task which requires many characteristics of the method to be tested, e.g. accuracy and precision. The expert systems built within ESCA concern the validation of precision. Two systems were developed, for repeatability testing and for ruggedness testing. The method validation process can be divided into several discrete stages: (1) the selection of the method feature to test, for instance which factors can influence the ruggedness of a method; (2) the definition of a test procedure, for instance an efficient statistical design; (3) the execution of experiments and the interpretation of results; (4) a diagnosis of any observed problem. This paper describes these two systems in some detail and summarises some of the results obtained from their evaluation. It concludes that expert systems can be useful in solving analytical problems and that the integration of several expert systems can provide extremely powerful tools for the analyst.

* To whom correspondence should be addressed: Department of Chemistry, The University of New South Wales, Sydney, NSW 2033, Australia

Key words: expert systems, method development, method validation, HPLC.
ESCA started its research in March 1987, with the aim of building prototype expert systems for HPLC method development and validation. The starting point of the project involved two separate tasks. Firstly, expert-system development software was evaluated to enable the project to select the best available packages for our application [1]. Secondly, the chemical application area had to be defined [2, 3].

Initially, four separate expert-system prototypes were built using various expert-system development software. The first system tackled the problem of selecting the best starting column and mobile phase for the HPLC of central nervous system drugs [4]. The second system uses expertise to recommend the best criterion for selectivity optimisation [5]. The third system optimises the physical parameters of an HPLC method, e.g. flow rate and column dimensions [6]. The final system tackled the problem of ruggedness testing of HPLC methods [7].

As the project progressed it became apparent that, although each of the four systems performed well in its application, analysts require a combination of these tasks. Therefore, the expert systems were integrated in such a way that communication between the various stages of method development was possible. Three integrations were proposed: an integration of the stages of method development, an integration of the ruggedness test with system optimisation, and an integration of the repeatability test with system optimisation.

The purpose of this paper is firstly to summarise the objectives of each integration and then to describe the results of testing these systems. Each team is preparing publication of the detailed results from the evaluation of the individual systems. However, some conclusions were common throughout the work. It is the aim of this paper to describe these, as they reflect both the successes and limitations of current expert-system technology.

Integration of the Method Development Stages

The first step in the definition of an integrated system for HPLC method development was to identify areas of knowledge which were missing from the existing prototype expert systems. The existing prototype for column and mobile phase selection only concerned the specific application of central nervous system (CNS) drugs. In order to widen the scope of this system, two other applications were added. Prior to the ESCA project, the Free University of Brussels had developed an expert system which could define methods for label claim analysis; this involved developing methods for a very wide range of drug formulations to ascertain the correctly labelled dosage content [8]. This system was implemented in the same expert-system shell as the CNS drugs prototype and thus they could be combined in an integrated system. A second system was also added which could refine methods taken from the literature. The addition of these two modules considerably expanded the application scope of the first stage in the method development process.

In order to combine the first-guess systems with a selectivity optimisation stage, it was necessary to build an extra system which optimised the retention range of the sample components.
The selectivity optimisation stage was expanded by including modules for the optimisation of solvent composition and pH. A simplex procedure was used to optimise solvents and the Doehlert design was used to optimise pH. Finally, the system optimisation prototype could be integrated to define the physical parameters of the method. These systems were integrated such that communication between the stages was possible via a supervisor system with a common database [9]. This system is illustrated in Fig. 1.

Fig. 1. The integrated system for method development. [Diagram: an ESCA (Expert Systems Applied to Chemical Analysis) supervisor linking a first-guess stage (CNS actives, label claim, literature), a retention optimisation stage, a selectivity optimisation stage (selection of criteria, solvent optimisation, pH optimisation) and a system optimisation stage (flow rate, column, flow cell).]

Integration of the Method Validation Stages

The aim of validating a method is to identify any sources of error in the method and to check whether these errors are within acceptable ranges. However, sometimes a method can show errors which are unacceptable for its application. It is at this stage in the validation process that a link to a re-optimisation module is required. To provide this link, the ruggedness test system was integrated with the system optimisation program. Earlier in the project a small expert system for repeatability testing had been built [10]. This system was used as a starting point for a third integrated system, which combined a repeatability test with the system optimisation program.

For both these systems the process of validation was divided into four stages, which are illustrated in Fig. 2. The first stage required the definition of the characteristic to test; for both repeatability and ruggedness testing this was precision. Next, the test was defined as either ruggedness or repeatability and an experimental procedure was recommended. Finally, the results were processed in a diagnosis module. This included a pass/fail criterion, a method for interpreting and curing any problems and a means to re-optimise a failed method. The integration of the system optimisation program in the diagnosis module allowed the resolution of a critical pair of peaks to be improved.

Fig. 2. The four stages of method validation. [Diagram: define the characteristic (accuracy, precision, sensitivity); define the test (repeatability, ruggedness); experimental; diagnosis (pass/fail, cure problem, re-optimise method).]

Fig. 3 illustrates the repeatability test integrated system. The method feature to be tested for repeatability could be the sample preparation or the injection procedure. The experimental design proposed could test the repeatability across a concentration range or at a single concentration point. The measurements made were the relative standard deviations of concentration, peak heights, retention times and peak areas.

Fig. 3. The repeatability test expert system. [Diagram: method feature (sample preparation, injection); test design (single point, concentration range); test measure; diagnosis (outliers, drift), with routes to a reduced test and to re-optimisation.]
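The paper does not give the formulas behind these test measures. Purely as an illustration, the relative standard deviation (RSD) of a series of replicate results is the sample standard deviation expressed as a percentage of the mean; the minimal sketch below computes it for hypothetical replicate injections (the data and the function name are invented, not taken from the ESCA software).

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation in percent: 100 * sample standard
    deviation / mean (also called the coefficient of variation)."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Invented replicate injections of one standard solution (peak areas):
peak_areas = [10523, 10487, 10551, 10498, 10512, 10540]
print(f"RSD of peak area: {rsd_percent(peak_areas):.2f}%")
```

The same statistic would be reported for concentrations, peak heights and retention times, and then compared with a pass/fail limit in the diagnosis module.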
These results could then be checked for any outliers or drift. The diagnosis module could then identify any problems and suggest a cure. When the problem was due to inadequate resolution between a pair of peaks, the system routed to the re-optimisation module. The new method was then re-tested for repeatability.

The ruggedness test is illustrated in a similar way in Fig. 4. A ruggedness test normally required the testing of between two and ten features of the method. These were chosen by considering their potential influence on the ruggedness of the method. Factorial designs were employed to test the effect of changing these method features, and standard errors, main effects and interaction effects were measured. The diagnosis module then checked these measurements against pass/fail criteria and identified potential problems. When required, it routed to the system optimisation module to improve the resolution. An example diagnosis rule is shown in Fig. 5.

Fig. 4. The ruggedness test expert system. [Diagram: method feature (flow rate, column manufacturer, solvent); test design (factorial design); test measure (standard errors, main effects, interaction effects); diagnosis; report.]

Fig. 5. An example diagnosis rule:
    If column temperature causes a large effect on retention times
    Then conclude: recommend frequent recalibration
    If a factor causes loss of resolution
    Then conclude: consult system optimisation to increase resolution
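The ESCA systems were implemented in expert-system shells and their code is not reproduced in the paper. Purely as an illustration of the calculations and the rule type named above, the sketch below derives main and interaction effects from a hypothetical two-level full factorial ruggedness experiment and then applies a rule of the kind shown in Fig. 5; all factor names, responses and thresholds are invented.

```python
from itertools import product
from statistics import mean

# Hypothetical 2^2 full factorial ruggedness experiment: each factor is
# varied between a low (-1) and a high (+1) level around the nominal method.
factors = ["column temperature", "flow rate"]
runs = list(product([-1, +1], repeat=len(factors)))
# Invented retention times (min) of one peak for the four runs:
retention = {(-1, -1): 6.10, (+1, -1): 5.62, (-1, +1): 5.95, (+1, +1): 5.49}

def effect(columns):
    """Main effect (one factor index) or interaction effect (several):
    mean response at the +1 contrast minus mean response at the -1 contrast."""
    def sign(run):
        s = 1
        for c in columns:
            s *= run[c]
        return s
    plus = [retention[r] for r in runs if sign(r) == +1]
    minus = [retention[r] for r in runs if sign(r) == -1]
    return mean(plus) - mean(minus)

effects = {name: effect([i]) for i, name in enumerate(factors)}
effects["temperature x flow rate"] = effect([0, 1])
for name, e in effects.items():
    print(f"{name}: {e:+.3f} min")

# A diagnosis rule of the kind shown in Fig. 5 (the threshold is an
# assumed pass/fail criterion, not one taken from the ESCA systems):
LARGE_EFFECT = 0.2  # min
if abs(effects["column temperature"]) > LARGE_EFFECT:
    print("Column temperature causes a large effect on retention times:")
    print("-> recommend frequent recalibration")
```

In the full systems, standard errors for such effects (Fig. 4) would also be estimated, for example from replicated experiments, so that effects can be judged against experimental noise; this point returns in the evaluation below.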
Evaluation of the Validation Expert Systems

The expert systems required two stages of testing: validation and evaluation. Validation of the software was performed by the immediate team members, and the object was to test the correct operation of the software and to check that the decisions made by the expert system agreed with those of the expert. Evaluation was performed by other team members, who were not directly involved with the development of the software, and by third parties external to ESCA. The objective of the evaluation was to determine how well the expert system performed in practice.

The validation of the software involved checking the system for logical reasoning and testing for bugs. Any inconsistencies or bugs could then be removed at this stage. The expert then selected a number of problems which demonstrated a variety of situations. The expert predicted the answers and then tested the problems on the software. If the expert and expert system disagreed, the reason needed to be identified and a solution found. Knowledge could be incorrect, in which case it should be corrected; knowledge could be missing, in which case it should be added. In this way, the validation process showed whether the systems provided good answers to problems falling within the intended scope of expertise.

In order to validate the ruggedness test expert system, the expert selected ten HPLC methods representing a variety of pharmaceutical applications. The expert selected the factors to test and a statistical design to perform the experiments. These methods were then applied to the expert system and the answers compared. Several modifications and additions were made to the software until the agreement between consultations was acceptable. The expert and expert system always agreed on the factors to be tested within a difference of two factors, i.e. there were never more than two factors over which the expert and expert system did not agree.

When the software was successfully validated and its performance was considered sufficiently consistent with that of the expert, it could be given to external evaluators. The evaluation then involved putting problems to the systems in a practical laboratory environment. Two types of evaluators could be distinguished: those who were themselves experts in the application area, and those who had HPLC experience but did not have the specific expertise contained in the software. It was important to involve both types of evaluator, to assess both the quality and the overall usefulness of the advice given by the expert systems.

A list of evaluation criteria was proposed by the knowledge engineers and experts within ESCA. This list is shown in Table 1. The evaluators were then allowed to select relevant criteria from this list to test the software against. The list consisted of three types of criteria, concerning the user interface, the consistency of advice and the limitations of the system. These criteria allowed the expert evaluators to contribute to identifying the system limits, whereas the non-experts could evaluate the ease of use and consistency of the software.

Table 1. List of evaluation criteria

Man-machine interface (user interface):
  Choice of phrases
  Explanation
  Operation (mouse, keyboard, etc.)
  Ease of use
Consistency testing:
  Accuracy (correct answers, quality of advice)
  Robustness (does the system crash or lock up)
  Reproducibility (same input, same output)
System limits:
  Conflict (do answers provide conflicting advice)
  Missing knowledge
  Strange or inexplicable advice
  Technical content

The time scales of the evaluations varied from a couple of weeks to over a month, so some evaluations were more thorough than others. Because the repeatability system was built at such a late stage in the project, there was some overlap between the validation and evaluation of this package.

Results of the Evaluation

The Repeatability Expert System

This system was evaluated in two laboratories, at Organon International and at Philips Research in Eindhoven.

The evaluation at Organon was carried out in three separate ways:

1. Method development and validation of new HPLC methods.
2. Repeatability testing of validated methods.
3. Troubleshooting.

For each type of application, different experiments were selected to perform the evaluation and to demonstrate the applicability of this system to the pharmaceutical industry. A full description of this evaluation and its results is currently in preparation for publication.
However, a summary of the problems identified and the subsequent action taken is shown in Table 2. This shows the type of problems which were identified; it also demonstrates how the validation of the software had to be combined with the evaluation. Ideally, problems such as incorrect calculations should have been identified and fixed during a separate validation stage.

Table 2. Summary of some of the problems observed

Problem description              Action
Usage requirement unclear        Text altered
Inaccurate retention times       Calculation corrected
Inaccurate RSD calculation       Calculation corrected
Poor file handling               Load options added
No outlier test                  Outlier test added
Drifting baseline not observed   In study
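One of the actions in Table 2 was the addition of an outlier test. The paper does not say which test was implemented; a common choice for small replicate series in analytical work is Dixon's Q test, sketched below purely as an illustration (the critical values are the widely tabulated 90%-confidence ones, and the data are invented).

```python
# Dixon's Q test for a single suspect value in a small series (n = 3..10).
# Q = gap between the suspect value and its nearest neighbour, divided by
# the total range; the suspect is rejected if Q exceeds the critical value.
Q_CRIT_90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560,
             7: 0.507, 8: 0.468, 9: 0.437, 10: 0.412}

def dixon_q(values):
    """Return (is_outlier, suspect, Q) for the more extreme end of the series."""
    xs = sorted(values)
    rng = xs[-1] - xs[0]
    q_low = (xs[1] - xs[0]) / rng     # suspect at the low end
    q_high = (xs[-1] - xs[-2]) / rng  # suspect at the high end
    q, suspect = max((q_low, xs[0]), (q_high, xs[-1]))
    return q > Q_CRIT_90[len(xs)], suspect, q

# Invented replicate peak areas with one suspect value:
print(dixon_q([10523, 10487, 10551, 10498, 10512, 11240]))
```

A corresponding drift check, the other diagnosis mentioned earlier, could be automated in the same spirit, e.g. by testing the trend of results against injection order; Table 2 shows that the detection of a drifting baseline was still under study.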
The second evaluation of the repeatability system applied the software to an HPLC method for the determination of caffeine in coffee. The method was tested at two different injection volumes. In the process of this evaluation several limitations of the system were highlighted.

(1) The diagnosis of problems by the system was not completely correct. However, the evaluators understood that this was probably inevitable, because a real expert would also be expected to come up with several possible causes for a single observed problem. Hence the criteria used by the evaluators were (a) whether the suggestions were sensible and (b) whether the real cause of the problem was among those suggested. Put in this perspective, the diagnosis module was considered to function correctly.

(2) A design editor which would allow changes to the suggested experiments was requested.

(3) For the system to be used in a practical environment it would need to be linked to a chromatography data station.

(4) The system cannot be used by chromatographers without a certain amount of knowledge. It was not a tool to educate novices.

During the evaluation of this software many of the identified problems were rectified and requested additions were made. This demonstrated the contribution made to the overall value of the software by the evaluation process.

The Ruggedness Test Expert System

The ruggedness test system was evaluated in several pharmaceutical laboratories. The detailed results will be published in a separate paper. The evaluations were divided into three stages:

1. The input of a variety of methods to the system.
2. A critical evaluation of the factors selected.
3. The collection of data and an evaluation of the diagnosis.

The purpose of applying several different methods to the expert system was to allow the evaluators to find any method features which could not be defined by the system. Also, any missing knowledge could be identified. Table 3 summarises some of the results from this stage of testing. The first column lists the method features where knowledge was missing. The second column lists some of the method features which could not be defined in the input module. It was recognised that to some extent this was inevitable, as HPLC is very complex and it was difficult to program all the available expertise. However, it was recommended that modules be added to the system at a later date with knowledge on other types of detection. The ruggedness test system did allow the user to use his own knowledge to select factors when necessary. This was invaluable in the cases where knowledge was missing.

Table 3. Results on the input of methods

Missing knowledge        Method features not defined
Fluorescence detection   Dual detection
Ion chromatography       Solid phase extraction
Gradient methods         Both sonicate and shake
Chiral chromatography    Sample solvent composition

The selection of factors module was critically evaluated by comparing the factors selected with the opinions of the evaluator. The following list shows examples of the comments made:

1. The selection of centrifuge time was not necessary.
2. In some instances the levels selected for the solvent mix were too extreme.
3. Column batch variation was often preferred to column manufacturer variation.

The selection of factors module was very flexible, and any changes in factors or levels required by the user could be made. The analysts used this system in a completely different manner to a conventional software program. They examined the advice and the explanations and accepted or rejected suggestions as they required. This is exactly the same as a typical consultation with an expert, and it emphasized the importance of flexibility in expert systems. It was not always possible to overcome conflicts of opinion between evaluators and expert. For instance, some evaluators felt restricted by the allowed statistical designs. However, the expert felt that too much flexibility could lead to errors.

The diagnosis module of the system was also found to have missing knowledge, and some differences in opinion were noted. The following list summarises some comments:

1. The main effects were not tested for their statistical significance (a sketch of such a test follows below).
2. The pass/fail levels were sometimes too strict.
3. The handling of internal standard methods was inadequate.
4. The system suitability criteria did not include noise.

To some extent the flexibility of the system could overcome these problems. However, it was clear that additional knowledge is required, particularly for handling internal standards and noise measurements.
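Regarding the first comment, the paper does not prescribe how significance should have been tested. One hedged possibility is to compare each factorial effect with its standard error using a t statistic, as in this minimal sketch (all numbers are invented):

```python
# Compare a factorial effect with its standard error via a t statistic.
# The standard error is assumed to come from replicated experiments;
# 2.78 is the two-sided 95% t value for 4 degrees of freedom.
def effect_is_significant(effect, se_effect, t_crit=2.78):
    """Flag an effect as significant if |effect| exceeds t_crit standard errors."""
    return abs(effect) / se_effect > t_crit

# Invented example: a -0.47 min temperature effect with a 0.08 min SE.
print(effect_is_significant(-0.47, 0.08))  # True -> treat as a real effect
```

Such a test would let the pass/fail criteria act on statistically supported effects rather than on raw effect sizes.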
Conclusions

The Repeatability Test Expert System

It was clear that there was insufficient time to evaluate the complete system, and especially the link from the diagnosis module to the system optimisation module. This part of the system would only be consulted if the diagnosed problem was due to inadequate resolution, which did not occur for any of the test cases. For this reason, it was difficult to draw conclusions on the usefulness of the integrated expert system. Although the evaluators did not use the re-optimisation module, they agreed that it could prove invaluable for the relatively few times it would be required.

Initially, the software had several problems with flexibility and missing knowledge, but these were solved in later updates of the software. This process of evaluation, recommendations and improvements was invaluable to the quality of the final software. The user interface of this package was considered excellent, with good edit and explain facilities. The quality of advice was good and the system could deal with relatively difficult problems. The evaluators concluded that the system was of use to analysts with little software knowledge. However, some experience of HPLC was required and training in the use of the package would be necessary.

The Ruggedness Test Expert System

As for the repeatability system, the integrated module for re-optimisation was not fully evaluated: none of the test cases required an improvement in resolution. The evaluation of this expert system clearly illustrated the difference between an expert system and a conventional software package. Analysts were happy to accept advice, or to modify it when required. This is exactly how a typical consultation with an expert would proceed. Generally, the expert system was found to be useful, particularly for analysts who had not been involved in the method development process. There were areas of missing knowledge which should eventually be added to enhance its usefulness.

General Conclusions

As the ESCA research progressed, the team built a total of seven different expert systems, and it was concluded that these systems provided solutions to many problems in HPLC method development. They were successfully used at each stage of the method development process, and an integration of these stages provided useful communication links.

When the evaluation of these packages began, many problems were predicted. This was due to the unique nature of expert systems: they do not give right or wrong answers, but good or bad advice, and the quality of this advice is very difficult to measure. Users are expected to interact with expert systems in a completely different way to other software packages. They need to question the system and confirm or modify its advice; in some cases they were required to supplement the system with their own knowledge. In fact, we found that analysts welcomed this kind of interaction and were perfectly capable of critically evaluating the advice given.

The overall conclusion was that expert systems do not replace analysts, just as experts cannot directly replace analysts. However, they provide easy access to expertise which can help analysts work more efficiently.

References

[1] J. A. van Leeuwen, B. G. M. Vandeginste, G. Postma, G. Kateman, Chemom. Intel. Lab. Syst. 1989, 6, 239.
[2] D. Goulder, T. Blaffert, A. Blokland, L. Buydens, A. Chabra, A. Cleland, N. Dunand, H. Hindriks, G. Kateman, J. A. van Leeuwen, D. Massart, M. Mulholland, G. Musche, P. Naish, A. Peeters, G. Postma, P. Schoenmakers, M. de Smet, B. Vandeginste, J. Vink, Chromatographia 1988, 26, 37.
[3] P. J. Schoenmakers, M. Mulholland, Chromatographia 1988, 25, 737.
[4] H. Hindriks, F. Maris, J. Vink, A. Peeters, M. de Smet, D. L. Massart, L. Buydens, J. Chromatogr. 1989, 485, 255.
[5] A. Peeters, L. Buydens, D. L. Massart, P. J. Schoenmakers, Chromatographia 1988, 26, 101.
[6] P. J. Schoenmakers, N. Dunand, A. Cleland, G. Musche, T. Blaffert, Chromatographia 1988, 26, 37.
[7] M. Mulholland, N. Dunand, A. Cleland, J. A. van Leeuwen, B. G. M. Vandeginste, J. Chromatogr. 1989, 485, 283.
[8] M. de Smet, A. Peeters, L. Buydens, D. L. Massart, J. Chromatogr. 1989, 485, 283.
[9] P. Conti, H. Piryns, N. Van den Driessche, M. de Smet, T. Hamoir, F. Maris, H. Hindriks, P. Schoenmakers, D. L. Massart, Chromatographia (accepted).
[10] M. Mulholland, J. A. van Leeuwen, B. G. M. Vandeginste, Anal. Chim. Acta 1989, 223, 183.

Received August 31, 1990. Revision February 12, 1991.