[Pre-print version], please cite as: Ishizaka A., Labib A. Review of the main developments in the analytic hierarchy process, Expert Systems with Applications, 38(11), 14336-14345, 2011

Review of the main developments in the Analytic Hierarchy Process

Alessio Ishizaka and Ashraf Labib
University of Portsmouth, Portsmouth Business School, Richmond Building, Portland Street, Portsmouth PO1 3DE, United Kingdom
Alessio.Ishizaka@port.ac.uk
Ashraf.Labib@port.ac.uk

ABSTRACT. In this paper the authors review the developments of the Analytic Hierarchy Process (AHP) since its inception. The focus is a neutral review of the methodological developments rather than a report of the applications that have appeared since its introduction. In particular, we discuss problem modelling, pair-wise comparisons, judgement scales, derivation methods, consistency indices, incomplete matrices, synthesis of the weights, sensitivity analysis and group decisions. All have been important areas of research in AHP.

Keywords: AHP, Multicriteria Decision Making, Review

1. Introduction

The Analytic Hierarchy Process (AHP) is a multi-criteria decision making (MCDM) method. The oldest reference that we have found dates from 1972 (T. Saaty, 1972). A subsequent paper in the Journal of Mathematical Psychology (T. Saaty, 1977) described the method precisely. The vast majority of applications still use AHP as described in this first publication and are unaware of later developments. This paper provides a sketch of the major directions in methodological developments (as opposed to a discussion of applications) and of further research in this important field.

AHP was inspired by several earlier discoveries. The use of pair-wise comparisons (called paired comparisons by psychologists), the essence of AHP, instead of direct allocation of weights had been used long before by psychologists, e.g. (Thurstone, 1927; Yokoyama, 1921). The hierarchic formulation of the criteria, a major feature of AHP, was first proposed by Miller in his 1966 doctoral dissertation (J. Miller, 1966) and applied in (J. Miller, 1969) and (J. Miller, 1970). The 1-9 scale is based on psychological observations (Fechner, 1860; Stevens, 1957). The number of items in each level is inspired by (G. A. Miller, 1956), who recommends seven plus or minus two items.

Since its introduction, AHP has been widely used, for example in banks (Haghighi, Divandari, & Keimasi, 2010; Seçme, Bayrakdaroglu, & Kahraman, 2009), manufacturing systems (Iç & Yurdakul, 2009; T.-S. Li & Huang, 2009; Yang, Chuang, & Huang, 2009), operators evaluation (Sen & ÇInar, 2010), drug selection (Vidal, Sahin, Martelli, Berhoune, & Bonan, 2010), site selection (Önüt, Efendigil, & Soner Kara, 2009), software evaluation (Cebeci, 2009; Chang, Wu, & Lin, 2009), evaluation of website performance (Liu & Chen, 2009), strategy selection (M. K. Chen & Wang, 2010; S. Li & Li, 2009; Limam Mansar, Reijers, & Ounnar, 2009; Wu, Lin, & Lin, 2009), supplier selection (Chamodrakas, Batis, & Martakos, 2010; A. W. Labib, 2011; H. S. Wang, Che, & Wu, 2010; T.-Y. Wang & Yang, 2009), selection of recycling technology (Y.-L. Hsu, Lee, & Kreng, 2010), firms competence evaluation
(M. Amiri, Zandieh, Soltani, & Vahdani, 2009), weapon selection (Dagdeviren, Yavuz, & KilInç, 2009), underground mining method selection (Naghadehi, Mikaeil, & Ataei, 2009) and its sustainability evaluation (Su, Yu, & Zhang, 2010), software design (S. H. Hsu, Kao, & Wu, 2009), organisational performance evaluation (Tseng & Lee, 2009), staff recruitment (Celik, Kandakoglu, & Er, 2009; Khosla, Goonesekera, & Chu, 2009), construction method selection (Pan, 2009), warehouse selection (W. Ho & Emrouznejad, 2009), technology evaluation (Lai & Tsai, 2009), route planning (Niaraki & Kim, 2009), project selection (M. P. Amiri, 2010), customer requirement rating (Y. Li, Tang, & Luo, 2010; C.-L. Lin, Chen, & Tzeng, 2010), energy selection (Kahraman & Kaya, 2010), university evaluation (Lee, 2010) and many others. Several papers have compiled AHP success stories (E. H. Forman & Gass, 2001; Golden, Wasil, & Harker, 1989; W. Ho, 2008; Kumar & Vaidya, 2006; Liberatore & Nydick, 2008; Omkarprasad & Sushil, 2006; T. Saaty & Forman, 1992; Shim, 1989; Sipahi & Timor, 2010; Vargas, 1990; Zahedi, 1986). However, AHP has also received strong criticisms. Despite the demise predicted by some researchers, there has been a strong response leading to a steady increase in its usage.

2. The AHP method

AHP is a multi-criteria decision making (MCDM) method that helps a decision-maker facing a complex problem with multiple conflicting and subjective criteria (e.g. location or investment selection, project ranking, etc.). Several MCDM methods have been developed (e.g. ELECTRE, MacBeth, SMART, PROMETHEE, UTA, ...; see (Barthélemy, 2003; V. Belton & Stewart, 2002)) and all are based on four steps: problem modelling, weights valuation, weights aggregation and sensitivity analysis. In the next sections, we review these four steps as used by AHP and their evolution.

2.1 Problem modelling

As with all decision-making processes, the facilitator will sit a long time with the decision-maker(s) to structure the problem. AHP has the advantage of permitting a hierarchical structure of the criteria (figure 1), which provides users with a better focus on specific criteria and sub-criteria when allocating the weights. This step is important, because a different structure may lead to a different final ranking. Several authors (Pöyhönen, Hamalainen, & Salo, 1997; Stillwell, von Winterfeldt, & John, 1987; Weber, Eisenführ, & von Winterfeldt, 1988) have observed that criteria with a large number of sub-criteria tend to receive more weight than when they are less detailed. Brugha (2004) has provided a complete guideline for structuring a problem hierarchically. A book compiling hierarchies used in different applications has been written (T. Saaty & Forman, 1992). When setting up an AHP hierarchy with a large number of elements, the decision-maker should attempt to arrange these elements in clusters so that they do not differ in extreme ways (Ishizaka, 2004a, 2004b; T. Saaty, 1991).

Figure 1: Example of a hierarchy (Akarte, Surendra, Ravi, & Rangaraj, 2001)

2.2 Pair-wise comparisons

Psychologists argue that it is easier and more accurate to express one's opinion on only two alternatives than simultaneously on all the alternatives. Pair-wise comparisons also allow consistency cross-checking between the different judgements (see section 2.5).
AHP uses a ratio scale, which, contrary to methods using interval scales (Kainulainen, Leskinen, Korhonen, Haara, & Hujala, 2009), requires no units in the comparison. The judgement is a relative value or a quotient a/b of two quantities a and b having the same units (intensity, metres, utility, etc.). The decision-maker does not need to provide a numerical judgement; instead, a relative verbal appreciation, more familiar in our daily lives, is sufficient. Comparisons are recorded in a positive reciprocal matrix (1). In special cases, such as currency exchange, non-reciprocal matrices can be used (Hovanov, Kolari, & Sokolov, 2008).

A =
| 1        a_12     ...   a_1n |
| 1/a_12   1        ...   a_2n |
| ...      ...      ...   ...  |
| 1/a_1n   1/a_2n   ...   1    |     (1)

where a_ij is the comparison between elements i and j, and a_ji = 1/a_ij.

If the matrix is perfectly consistent, then the transitivity rule (2) holds for all comparisons:

a_ij = a_ik · a_kj (2)

For example, if team A beats team B two-zero and team B beats team C three-zero, then it is expected with the transitivity rule (2) that team A beats team C six-zero (3 · 2 = 6). However, this is seldom the case because our world is inconsistent by nature. As a minimal consistency is required to derive meaningful priorities, a test must be done (see section 2.5). Webber et al. (1996) state that the order in which the comparisons are entered in the matrix may affect the successive judgements.

2.3 Judgement scales

One of AHP's strengths is the possibility to evaluate quantitative as well as qualitative criteria and alternatives on the same preference scale. These can be numerical, verbal (table 1) or graphical. The use of verbal responses is intuitively appealing, user-friendly and more common in our everyday lives than numbers. It may also allow some ambiguity in non-trivial comparisons. This ambiguity in the English language has also been criticised (Donegan, Dodd, & McMaster, 1992). Due to its pair-wise comparisons, AHP needs ratio scales. Barzilai (2005) claims that preferences cannot be represented with ratio scales because, in his opinion, an absolute zero does not exist, as with temperature or electric potential. Saaty (1994) states that ratio scales are the only possible measurement if we want to be able to aggregate measurements, as in a weighted sum. Dodd and Donegan (1995) have criticised the absence of a zero in the preference scale. To derive priorities, the verbal comparisons must be converted into numerical ones. In Saaty's AHP the verbal statements are converted into integers from one to nine. Theoretically, there is no reason to be restricted to these numbers and this verbal gradation. Although the verbal gradation has been little investigated, several other numerical scales have been proposed (table 2). Harker and Vargas (1987) have evaluated a quadratic and a root square scale on only one simple example and argued in favour of Saaty's 1 to 9 scale. However, one example does not seem enough to establish the superiority of the 1-9 linear scale. Lootsma (1989) argued that the geometric scale is preferable to the 1-9 linear scale. Salo and Hämäläinen (1997) point out that the integers from one to nine yield local weights that are unevenly dispersed, so that there is a lack of sensitivity when comparing elements that are preferentially close to each other.
Based on this observation, they propose a balanced scale where the local weights are evenly dispersed over the weight range [0.1, 0.9]. Earlier, Ma and Zheng (1991) had calculated a scale in which it is the inverse elements 1/x that are linear, instead of the x values as in the Saaty scale. Donegan, Dodd and McMaster (1992) have proposed an asymptotic scale avoiding the boundary problem: e.g. if the decision-maker enters a_ij = 3 and a_jk = 4, s/he is forced into an intransitive relation (2) because the upper limit of the scale is 9 and s/he cannot enter a_ik = 12. Ji and Jiang (2003) propose a mixture of verbal and geometric scales. The possibility of integrating negative values into the scale has also been explored (Millet & Schoner, 2005; T. Saaty & Ozdemir, 2003).

Intensity of importance   Definition
1                         Equal importance
2                         Weak
3                         Moderate importance
4                         Moderate plus
5                         Strong importance
6                         Strong plus
7                         Very strong or demonstrated importance
8                         Very, very strong
9                         Extreme importance
Table 1: The 1 to 9 fundamental scale

Scale type, definition and parameters:
Linear (T. Saaty, 1977): c = a · x, with a > 0 and x = {1, 2, ..., 9}
Power (Harker & Vargas, 1987): c = x^a, with a > 1 and x = {1, 2, ..., 9}
Geometric (Lootsma, 1989): c = a^(x-1), with a > 1 and x = {1, 2, ..., 9} or x = {1, 1.5, ..., 4} or another step
Logarithmic (Ishizaka, Balkenborg, & Kaplan, 2010): c = log_a(x + (a - 1)), with a > 1 and x = {1, 2, ..., 9}
Root square (Harker & Vargas, 1987): c = x^(1/a), with a > 1 and x = {1, 2, ..., 9}
Asymptotical (Dodd & Donegan, 1995): c = tanh^(-1)(√3 (x - 1) / 14), with x = {1, 2, ..., 9}
Inverse linear (Ma & Zheng, 1991): c = 9 / (10 - x), with x = {1, 2, ..., 9}
Balanced (Salo & Hamalainen, 1997): c = w / (1 - w), with w = {0.5, 0.55, 0.6, ..., 0.9}
Table 2: Different scales for comparing two alternatives (for the comparison of A and B, c = 1 indicates A = B; c > 1 indicates A > B; when A < B, the reciprocal values 1/c are used)

Among all the proposed scales, the linear scale with the integers one to nine and their reciprocals has been used by far the most often in applications. Saaty (1980; 1991) advocates it as the best scale to represent weight ratios. However, the cited examples deal with objectively measurable alternatives such as the areas of figures, whereas AHP mainly treats subjective decision problems. We understand the difficulty of verifying the effectiveness of scales on subjective issues. Salo and Hämäläinen (1997) demonstrate the superiority of the balanced scale when comparing two elements. The choice of the "best" scale is a very heated debate. Some scientists argue that the choice depends on the person and the decision problem (Harker & Vargas, 1987; Pöyhönen, et al., 1997).

2.4 Priorities derivation

The goal is to find a set of priorities p_1, ..., p_n such that the ratios p_i/p_j match the comparisons a_ij in a consistent matrix; when slight inconsistencies are introduced, the priorities should vary only slightly. Different methods have been developed to derive priorities. Psychologists using pair-wise matrices before Saaty used the mean of the rows. This old method is based on three steps (see example 1 and the sketch after this list):
1. Sum the elements of each column j: s_j = Σ_{i=1}^{n} a_ij, for all j
2. Divide each value by its column sum: a'_ij = a_ij / Σ_{i=1}^{n} a_ij, for all i, j
3. Take the mean of each row i: p_i = (Σ_{j=1}^{n} a'_ij) / n
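A minimal Python sketch of these three steps (ours; it assumes numpy and uses the comparison matrix of example 1 below) could look as follows:

```python
import numpy as np

def mean_of_rows(A):
    """Derive priorities by normalising each column and averaging the rows."""
    A = np.asarray(A, dtype=float)
    col_normalised = A / A.sum(axis=0)   # steps 1-2: divide each entry by its column sum
    return col_normalised.mean(axis=1)   # step 3: mean of each row

# comparison matrix of example 1 (elements a, b, c)
A = [[1, 4, 2],
     [1/4, 1, 1/2],
     [1/2, 2, 1]]
print(mean_of_rows(A))   # approximately [0.57, 0.14, 0.29]
```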
Example 1: Consider the following comparison matrix:

     a     b     c
a    1     4     2
b    1/4   1     1/2
c    1/2   2     1

The method "mean of rows" derives the priorities as follows:
1. Add the elements of the columns: (1.75; 7; 3.5)
2. Normalise the columns:

     a     b     c
a    0.57  0.57  0.57
b    0.14  0.14  0.14
c    0.29  0.29  0.29

3. Calculate the mean of the rows: (a = 0.57; b = 0.14; c = 0.29).

The result can be verified simply:
0.57 ≈ 4 · 0.14, which is equivalent to the entered comparison a = 4 · b
0.57 ≈ 2 · 0.29, which is equivalent to a = 2 · c
0.14 ≈ 0.5 · 0.29, which is equivalent to b = 0.5 · c

If a small inconsistency is introduced, we can reasonably expect that it induces only a small distortion. Based on this idea, Saaty (1977) uses perturbation theory to justify the use of the principal eigenvector p as the desired priority vector (3). He argues that slight variations in a consistent matrix imply slight variations of the eigenvector and the eigenvalue.

A · p = λ · p (3)

where A is the comparison matrix, p is the priority vector and λ is the maximal eigenvalue.

Only two years after the publication of the original AHP, Johnson et al. (1979) showed a rank reversal problem under scale inversion with the eigenvalue method. The solution of the eigenequation (3) gives the right eigenvector p, which is not necessarily the same as the left eigenvector p', the solution of p'^T · A = λ · p'^T ⇔ A^T · p' = λ · p'. The solution therefore depends on the formulation of the problem. This right and left inconsistency (or asymmetry) arises only for inconsistent matrices with a dimension higher than three (T. Saaty & Vargas, 1984a). In order to avoid this problem, Crawford and Williams (1985) have adopted another approach, minimising the multiplicative error (4):

a_ij = (p_i / p_j) · e_ij (4)

where a_ij is the comparison between objects i and j, p_i is the priority of object i and e_ij is the error.

The multiplicative error is commonly assumed to be log-normally distributed (similarly, an additive error would be assumed to be normally distributed). The geometric mean (5) minimises the sum of these errors (6).

p_i = (Π_{j=1}^{n} a_ij)^(1/n) (5)

min Σ_{i=1}^{n} Σ_{j=1}^{n} (ln a_ij − ln(p_i/p_j))² (6)

The geometric mean (also sometimes known as the Logarithmic Least Squares Method) can be easily calculated by hand (see example 2) and has been supported by a large segment of the AHP community (Aguarón & Moreno-Jiménez, 2000, 2003; Barzilai, 1997; Barzilai & Lootsma, 1997; Budescu, 1984; M. T. Escobar & Moreno-Jiménez, 2000; Fichtner, 1986; Leskinen & Kangas, 2005; Lootsma, 1993, 1996). Its main advantage is the absence of rank reversals due to the right and left inconsistency: the geometric means of the rows and of the columns provide the same ranking (which is not necessarily the case with the eigenvalue method).
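The two derivation methods discussed above can be sketched in a few lines of Python (ours; numpy is assumed); on the consistent matrix of example 1 both return the same priorities:

```python
import numpy as np

def eigenvector_priorities(A):
    """Principal right eigenvector of the comparison matrix, normalised to sum to 1."""
    eigenvalues, eigenvectors = np.linalg.eig(np.asarray(A, dtype=float))
    k = np.argmax(eigenvalues.real)                 # index of the maximal eigenvalue
    p = np.abs(eigenvectors[:, k].real)
    return p / p.sum()

def geometric_mean_priorities(A):
    """Row geometric means (logarithmic least squares), normalised to sum to 1."""
    A = np.asarray(A, dtype=float)
    p = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return p / p.sum()

A = [[1, 4, 2],
     [1/4, 1, 1/2],
     [1/2, 2, 1]]
print(eigenvector_priorities(A))      # approximately [0.57, 0.14, 0.29]
print(geometric_mean_priorities(A))   # approximately [0.57, 0.14, 0.29]
```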
Example 2: The priorities derived with the geometric mean (5) from the matrix of example 1 are:

p_1 = (1 · 4 · 2)^(1/3) = 2, p_2 = (1/4 · 1 · 1/2)^(1/3) = 0.5, p_3 = (1/2 · 2 · 1)^(1/3) = 1

Normalising, we obtain: p = (0.57; 0.14; 0.29)

While the mathematical evidence clearly favours the geometric mean over the eigenvalue method, there are no clear differences between these two methods when simulations are applied (Budescu, Zwick, & Rapoport, 1986; Cho & Wedley, 2004; Golany & Kress, 1993; Herman & Koczkodaj, 1996; Ishizaka & Lusti, 2006; Jones & Mardle, 2004; Mikhailov & Singh, 1999), apart from special cases (Bajwa, Choo, & Wedley, 2008). Perhaps in the light of this lack of practical evidence, Saaty's group has always supported the eigenvalue method (Harker & Vargas, 1987; T. Saaty, 2003; T. Saaty & Hu, 1998; T. Saaty & Vargas, 1984a, 1984b). Other methods have been proposed, each based either on the idea of distance minimisation (like the geometric mean) or on the idea that small perturbations induce small errors (like the eigenvalue method or the arithmetic mean of the rows). Cho and Wedley (2004) have enumerated 18 different methods, which are effectively 15 because three are equivalent to others (C. Lin, 2007).

2.5 Consistency

As priorities make sense only if derived from consistent or near-consistent matrices, a consistency check must be applied. Saaty (1977) has proposed a consistency index (CI), which is related to the eigenvalue method:

CI = (λmax − n) / (n − 1), (7)

where n is the dimension of the matrix and λmax is the maximal eigenvalue.

The consistency ratio, the ratio of CI and RI, is given by:

CR = CI/RI, (8)

where RI is the random index (the average CI of 500 randomly filled matrices).

If CR is less than 10%, then the matrix can be considered as having an acceptable consistency. Saaty (1977) calculated the random indices shown in table 3:

n    3     4     5     6     7     8     9     10
RI   0.58  0.9   1.12  1.24  1.32  1.41  1.45  1.49
Table 3: Random indices from (T. Saaty, 1977)

Other researchers have run simulations with different numbers of matrices (Aguarón & Moreno-Jiménez, 2003; Alonso & Lamata, 2006; Lane & Verdini, 1989; Tummala & Wan, 1994) or with incomplete matrices (E. Forman, 1990). Their random indices are different but close to Saaty's. This consistency index has been criticised because it allows contradictory judgements in matrices (Bana e Costa & Vansnick, 2008; Kwiesielewicz & van Uden, 2004) or rejects reasonable matrices (Karapetrovic & Rosenbloom, 1999). Techniques based on the transitivity rule (2) have been developed in order to discover contradictory judgements and correct them (Ishizaka & Lusti, 2004; Y. Wang, Chin, & Luo, 2009). Several other methods have been proposed to measure consistency. Peláez and Lamata (2003) describe a method based on the determinant of the matrix. Crawford and Williams (1985) prefer to sum, in the Geometric Consistency Index (GCI), the squared differences between the logarithms of the given comparisons and of the ratios of the calculated priorities:

GCI = 2 · Σ_{i<j} (log a_ij − log(p_i/p_j))² / ((n − 1)(n − 2)) (9)

Aguarón and Moreno-Jiménez (2003) determine thresholds that provide an interpretation of inconsistency analogous to CR = 10%: GCI = 0.3147 for n = 3, GCI = 0.3526 for n = 4 and GCI = 0.370 for n > 4.
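As an illustration of the basic check in (7) and (8), the following Python sketch (ours; the dictionary holds the random indices of table 3) computes the consistency ratio of a slightly inconsistent matrix:

```python
import numpy as np

# random indices of table 3, indexed by the matrix dimension n
RI = {3: 0.58, 4: 0.9, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1), as in (7) and (8)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    lambda_max = max(np.linalg.eigvals(A).real)
    CI = (lambda_max - n) / (n - 1)
    return CI / RI[n]

A = [[1, 4, 9],
     [1/4, 1, 5],
     [1/9, 1/5, 1]]
print(consistency_ratio(A) < 0.1)   # True: the matrix is acceptably consistent
```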
The transitivity rule (2) has been used by Salo and Hamalainen (1997) and later by Ji and Jiang (2003) in another formulation:

Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} log(a_ij · p_j/p_i) / (n(n − 1)/2) (10)

Alonso and Lamata (2006) have computed a regression of the random indices and propose the formulation:

λmax < n + 0.1(1.7699n − 4.3513) (11)

Stein and Mizzi (2007) use the normalised columns of the comparison matrix. For all consistency checking, some questions remain: what is the cut-off rule for declaring my matrix inconsistent? Should this rule depend on the size of the matrix? How should I adapt my consistency definition when I use another judgement scale?

2.6 Incomplete pair-wise matrix

The number of pair-wise comparisons requested can be very high: (n² − n)/2 for n alternatives/criteria. For example, 8 alternatives and 6 criteria require 183 entries (28 comparisons of the alternatives under each of the 6 criteria, plus 15 comparisons of the criteria). This high number of questions can quickly become overwhelming, and comparisons may be entered with little reflection time in order to speed up the process. Therefore, it has been proposed to enter fewer, well-evaluated comparisons rather than the full set of comparisons, which may then only be evaluated approximately. Another reason for an incomplete comparison matrix is that the decision-maker may not have formed a strong opinion on a particular judgement; rather than forcing him/her to give what is often a wild guess, or slowing down the entire process because of one comparison, one can simply skip this question. In a Monte-Carlo simulation study, where comparisons are deleted from large matrices (rank 10, 15 and 20), it has been found that one can randomly delete as much as 50% of the comparisons without significantly degrading the results (Carmone, Kara, & Zanakis, 1997).

The minimal number of comparisons required is n − 1, one for each row or column of the pairwise comparison matrix. The other comparisons are redundant and only necessary to check consistency and possibly improve accuracy. They can be calculated by the transitivity rule (2). This transitivity rule can be extended:

a_ij = a_ik · a_km · ... · a_vj (12)

The literature related to incomplete comparisons falls into three categories: calculation of missing comparisons, starting rules and stopping rules.

2.6.1 Calculation of missing comparisons

A natural way to fill in a missing matrix element is to take the geometric average of all the indirectly calculated missing comparisons obtained with the extended transitivity rule (12) (P. Harker, 1987). The drawback of this method is that the number of indirect comparisons grows with the number of alternatives n in such a way that the calculation requires a long processing time. In order to overcome this problem, the eigenvector can be derived directly without estimating the unknown comparisons (P. T. Harker, 1987b). If we consider a matrix A with missing comparisons, the method to calculate the priorities has two steps (a computational sketch follows):
i. A new matrix B is created from the incomplete matrix A: b_ij = a_ij if a_ij is a real number > 0, and b_ij = 0 otherwise; b_ii = the number of unanswered questions in row i.
ii. The eigenvalue method is applied to the matrix B + I, where I is the identity matrix.
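A minimal Python sketch of this two-step procedure (ours; missing comparisons are marked with np.nan and numpy is assumed) could read:

```python
import numpy as np

def harker_priorities(A):
    """Priorities from an incomplete comparison matrix (missing entries as np.nan)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    B = np.where(np.isnan(A), 0.0, A)              # step i: unanswered comparisons set to 0
    np.fill_diagonal(B, np.isnan(A).sum(axis=1))   # diagonal: number of missing entries per row
    C = B + np.eye(n)                              # step ii: eigenvalue method applied to B + I
    eigenvalues, eigenvectors = np.linalg.eig(C)
    p = np.abs(eigenvectors[:, np.argmax(eigenvalues.real)].real)
    return p / p.sum()

# example: the comparison between the 1st and 3rd elements has been skipped
A = [[1,      4, np.nan],
     [1/4,    1, 1/2],
     [np.nan, 2, 1]]
print(harker_priorities(A))   # approximately [0.57, 0.14, 0.29]
```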
Other methods have been proposed to decrease the number of comparisons:
- Use clusters and pivots (Ishizaka, 2008; Shen, Hoerl, & McConnell, 1992). Objects are divided into several clusters such that all clusters have one common object: the pivot. Then, pair-wise comparisons are performed for each cluster and priorities are calculated. Finally, the global priority is derived by using the pivot and the priorities of each cluster.
- Make comparisons for a node that has an overall high impact on the final priorities and freeze nodes with a very low global weight (Millet & Harker, 1990).

2.6.2 Starting rule

As only n − 1 comparisons are required, the question is which ones should be asked. Harker (1987a) used random selection. Ishizaka and Lusti (2004) prefer the first upper diagonal of the comparison matrix, as all items are then compared exactly the same number of times: twice. They reject a common-row or common-column approach because they felt that it compromised the psychological independence of the comparisons. Wedley et al. (1993) investigated starting rules for the selection of the first n − 1 comparisons in an experiment with 144 business students on a problem with a known answer. The students had to estimate the proportions of five distinct colours in a rectangular area. Six different referents were considered for the n − 1 initial comparisons: first column, bottom row, upper diagonal, lowest ranked item, median ranked item and highest ranked item. The lowest ranked item for the first n − 1 comparisons proved to be statistically more accurate than any of the other starting methods. However, instead of imposing a starting rule, it may be preferable to leave the choice to the decision-maker, who can select the comparisons (s)he is most comfortable evaluating.

2.6.3 Stopping rule

Harker (1987b) uses a gradient procedure to select the next comparison that will have the greatest impact on the priorities. He suggested three different stopping criteria:
- Subjective satisfaction of the user with the priorities.
- A tolerance on the percentage change in absolute attribute weights from one question to another.
- The ordinal rank will not be reversed whatever further comparison is entered. This stopping criterion is not effective if two alternatives have almost equivalent priorities.

Wedley (1993) has simulated several matrices with different degrees of inconsistency and then used them to develop regression equations that predict the consistency ratio at each step beyond n comparisons. This method is satisfactory only if the decision-maker does not enter an overly inconsistent comparison. Alternatively, the consistency index can be calculated based only on the entered comparisons (P. T. Harker, 1987b): the priority vector (p_i, i = 1, 2, ..., n) is calculated (see section 2.6.1), which in turn is used to estimate the missing values (p_i/p_j, i = 1, 2, ..., n; j = 1, 2, ..., n). Since these estimates are based only on the known comparisons, the consistency index tends to underestimate the true degree of inconsistency that would occur if all redundant comparisons were entered. In order to correct this bias, new random indices were calculated (E. H. Forman, 1990). This method is used in Expert Choice, the leading supporting software of AHP, to estimate the consistency ratio for incomplete matrices.
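One way to mimic such an estimate, in a sketch of our own reading of Harker's idea rather than of Expert Choice's actual implementation (which additionally uses the corrected random indices of Forman, 1990), is to derive the priorities from the incomplete matrix, fill the missing entries with the ratios p_i/p_j and compute the usual consistency ratio:

```python
import numpy as np

def estimated_cr(A, RI):
    """CR of an incomplete matrix: missing entries are replaced by the ratios p_i / p_j."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # priorities from the incomplete matrix (the B + I construction of section 2.6.1)
    B = np.where(np.isnan(A), 0.0, A)
    np.fill_diagonal(B, np.isnan(A).sum(axis=1))
    eigenvalues, eigenvectors = np.linalg.eig(B + np.eye(n))
    p = np.abs(eigenvectors[:, np.argmax(eigenvalues.real)].real)
    # fill the unanswered comparisons with the estimates p_i / p_j and compute CI / RI
    filled = np.where(np.isnan(A), np.outer(p, 1.0 / p), A)
    lambda_max = max(np.linalg.eigvals(filled).real)
    return ((lambda_max - n) / (n - 1)) / RI[n]

A = [[1,      4, np.nan],
     [1/4,    1, 1/2],
     [np.nan, 2, 1]]
print(estimated_cr(A, {3: 0.58}))   # near 0: the entered judgements are mutually consistent
```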
Lim and Swenseth (1993) have proposed to stop the process when one alternative becomes dominant to such a degree that, regardless of the effects of the remaining comparisons to evaluate, it cannot be overtaken as the preferred choice. In fact, in a scenario with n criteria, if the difference between the cumulative scores of the two best alternatives, after considering r criteria, is greater than the total of the eigenscores remaining for the n − r criteria not yet considered, the best alternative is identified. The drawback of this method is that only the best choice is identified and the decision-maker must use a top-down sequence for establishing priorities.

2.7 Aggregation

The last step is to synthesise the local priorities across all criteria in order to determine the global priority. The historical AHP approach (later called the distributive mode) adopts an additive aggregation with normalisation of the sum of the local priorities to unity:

p_i = Σ_j w_j · l_ij (13)

where p_i is the global priority of alternative i, l_ij is the local priority of alternative i with respect to criterion j, and w_j is the weight of criterion j.

The distributive mode is subject to rank reversal, a phenomenon that has been extensively discussed in the literature. In particular, a memorable debate appeared first in Omega and then in Management Science and in the Journal of the Operational Research Society. The saga in Omega has four episodes. The first article (V. Belton & Gear, 1983) came as a bombshell. It describes an example where the introduction of a copy of an alternative changes the ranking. The rank reversal is due to the modification of the relative values of the local priorities (a cause different from, and independent of, the rank reversal due to the right and left inconsistency described in section 2.4). As the local priorities are normalised so that their sum equals unity, the introduction of a new alternative modifies the normalised local priorities and therefore the global priorities may be reversed. The rank reversal phenomenon is therefore independent of the consistency of the matrix and of the derivation method of the priorities. As this phenomenon is not unique to AHP but common to all additive models (Triantaphyllou, 2001; Y. Wang & Luo, 2009), Belton and Gear, inspired by the weighted sum model, suggested normalising by dividing the score of each alternative only by the score of the best alternative under each criterion. This normalisation would later be called in the literature the B-G normalisation or ideal mode. In the second article of the saga, Saaty and Vargas (1984c) provided a counter-example to show that the ideal mode is also subject to rank reversal. In this case, they introduce an alternative which is a copy on only two of three criteria. On the last criterion, the new alternative has the largest value, which implies different normalisations and priorities. In the third episode, Belton and Gear (1985) responded that if a new alternative is introduced, then the criteria weights should also be modified. This contradicts the AHP philosophy of independence of weights and alternatives. In the fourth episode, Vargas (1985) claims that a method must not be applied only because it gives the results we want (i.e. preserves ranks). He argued that the process used when alternatives are added to or deleted from an existing set should be the same as the process used when starting from scratch.
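To make the debated phenomenon concrete, the following Python sketch (our own illustrative numbers and names, not Belton and Gear's original example; numpy is assumed) shows a rank reversal in the distributive mode when a copy of an alternative is added; with the ideal (B-G) normalisation the original ranking is, in this particular example, untouched:

```python
import numpy as np

weights = np.array([0.5, 0.5])               # two equally weighted criteria
scores = {'X': [9.0, 1.0], 'Y': [2.0, 9.0]}  # raw local scores of each alternative per criterion

def global_priorities(scores, weights, mode='distributive'):
    """Additive aggregation (13) with distributive or ideal (B-G) normalisation."""
    M = np.array(list(scores.values()))
    norm = M.sum(axis=0) if mode == 'distributive' else M.max(axis=0)
    return dict(zip(scores, (M / norm) @ weights))

print(global_priorities(scores, weights))                 # Y ranked above X
scores['Y copy'] = scores['Y']                            # add a copy of Y
print(global_priorities(scores, weights))                 # X now ranked above Y: rank reversal
print(global_priorities(scores, weights, mode='ideal'))   # ideal mode: Y still above X here
```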
The same remark by Vargas applies to the work of Wang and Elhag (2006), who propose a normalisation by the sum of all priorities with the exception of the newly added one. Later, Schoner and Wedley (1989) showed, with a tangible example (i.e. all criteria are measurable in monetary terms), that a weight change is required to preserve ranks. In a subsequent paper, Schoner et al. (1993) proposed the linking pin AHP. The local priorities of a specific alternative are normalised to unity. The rank is preserved because the normalisation is always the same. However, the final solution depends on which alternative is selected to link across criteria. This phenomenon, which also appears for a near-identical copy (Dyer, 1990b) or when a copy is removed (Troutt, 1988), has since been debated in Management Science, in the Journal of the Operational Research Society and in the European Journal of Operational Research, where one side has criticised the rank reversal phenomenon (Dyer, 1990a, 1990b; Holder, 1990, 1991; Stam & Duarte Silva, 2003) and the other has legitimised it (Harker & Vargas, 1987, 1990; Pérez, 1995; T. Saaty, 1986; T. Saaty, 1990; T. Saaty, 1991, 1994; T. Saaty, 2006). Millet and Saaty (2000) gave some guidance on which normalisation to use. However, due to the absence of a demonstrated causal effect, we believe that the occasional rank reversals are more side-effects of the aggregation procedure than credible results of the modelling.

The multiplicative aggregation (14) has been proposed to prevent the rank reversal phenomenon (Barzilai & Lootsma, 1997; Lootsma, 1993).

p_i = Π_j l_ij^(w_j) (14)

The multiplicative aggregation has non-linearity properties that allow a superior compromise to be selected, which is not the case with the additive aggregation (Ishizaka, et al., 2010; Stam & Duarte Silva, 2003). However, Vargas (1997) showed that additive aggregation is the only way to retrieve the exact weights of known objects.

2.8 Sensitivity analysis

The last step of the decision process is the sensitivity analysis, where the input data are slightly modified in order to observe the impact on the results. As complex decision models may be inherently unstable, it allows the generation of different scenarios, which may result in other rankings, and further discussion may be needed to reach a consensus. If the ranking does not change, the results are said to be robust; otherwise, they are sensitive. In AHP, the sensitivity analysis can be done on three levels: weights, local priorities and comparisons. The sensitivity analysis in Expert Choice allows only the weights of the criteria to be varied as input data. Its interactive graphical interface allows a better visualisation of the impact of the changes (figure 2). A sensitivity analysis to single and multiple changes of the local priorities has also been studied (H. Chen & Kocaoglu, 2008; Huang, 2002; Masuda, 1990) but has not yet been implemented in software. Armacost and Hosseini (1994), inspired by the dual questioning approach to determinant attributes (DQDA), have described a way to calculate the most determinant criteria. In fact, it is not necessarily the most heavily weighted criterion that is the most critical; the weight must be multiplied by the difference of the local priorities of the alternatives. Triantaphyllou and Sánchez (1997) have defined a sensitivity coefficient of the weights and local priorities.
It is calculated from the minimum change of the current weight/local priority such that the ranking of two alternatives is reversed. The sensitivity coefficient of criterion c_k is given by:

Sensitivity(c_k) = 1/D_k, (15)

where D_k is the smallest percentage amount by which the current value must change for the ranking to be reversed:

D_k = min{σ_k,i,j} for all k, i, j (16)

where k denotes a criterion and i, j denote alternatives, and the perturbation σ_k,i,j is given by:

σ_k,i,j = (p_j − p_i) / (l_jk − l_ik), with σ_k,i,j ≤ w_k (17)

where p_i is the global priority of alternative i and l_ik is the local priority of alternative i with respect to criterion k.

The same paper (Triantaphyllou & Sánchez, 1997) also contains the formulas for the most sensitive weights and criteria for the multiplicative AHP. These parameters are critical and careful attention should be given to them. A sensitivity analysis at a micro level has been developed, for AHP using the geometric mean, to calculate the interval within which a single comparison can vary without changing the rank of the alternatives (Aguarón & Moreno-Jiménez, 2000) and while remaining within an acceptable inconsistency index (Aguarón & Moreno-Jiménez, 2003).

Figure 2: An example of four possible graphical sensitivity analyses in Expert Choice

3. AHP in group decision making

As a decision often affects several persons, the standard AHP has been adapted so that it can be applied to group decisions. Consulting several experts also avoids the bias that may be present when the judgements come from a single expert. There are four ways to combine the preferences into a consensus rating (table 4).

                              Mathematical aggregation: Yes              Mathematical aggregation: No
Aggregation on judgements     geometric mean on judgements               consensus vote on judgements
Aggregation on priorities     weighted arithmetic mean on priorities     consensus vote on priorities
Table 4: Four ways to combine preferences.

The consensus vote is used when we have a synergistic group and not a collection of individuals. In this case, the hierarchy of the problem must be the same for all decision-makers. At the judgement level, this method requires the group to reach an agreement on the value of each entry in a matrix of pair-wise comparisons. A consistent agreement is usually difficult to obtain, and the difficulty increases with the number of comparison matrices and related discussions. In order to bypass this difficulty, the consensus vote can be postponed until after the calculation of the priorities of each participant. O'Leary (1993) recommends this version because an early aggregation could result "in a meaningless average performance measure". An aggregation after the calculation of the priorities allows decision-makers from different boards to be detected and any disagreement to be discussed further. If a consensus is difficult to achieve (e.g. with a large number of persons or with distant persons), a mathematical aggregation can be adopted. Two synthesising methods exist, and they provide the same results in the case of perfect consistency of the pair-wise matrices (T. L. Saaty & Vargas, 2005). In the first method, the geometric means of the individual evaluations are used as elements in the pair-wise matrices and then priorities are computed.
The geometric mean method (GMM) must be adopted instead of the arithmetic mean in order to preserve the reciprocal property (Aczél & Saaty, 1983). For example, if person A enters a comparison 9 and person B enters 1/9, then by intuition the mathematical consensus should be √(9 · 1/9) = 1, which is a geometric mean, and not (9 + 1/9)/2 = 4.56, which is an arithmetic mean. Ramanathan and Ganesh (1994) give an example where Pareto optimality (i.e. if all group members prefer A to B, then the group decision should prefer A) is not satisfied with the GMM. Van den Honert and Lootsma (1997) argue that this violation could be expected because the aggregated pair-wise assessments are a compromise of all the group members' assessments, and therefore a compromise that does not represent the opinion of any single group member. Madu and Kuei (1995), Bryson (1996) and then Saaty and Vargas (2007) introduce a measure of the dispersion of the judgements (or consensus indicator) in order to avoid this problem. If the group is not homogeneous, further discussions are required to reach a consensus.

In the second method, the decision-makers constitute the first level below the goal of the AHP hierarchy. Priorities are computed and then aggregated using the weighted arithmetic mean method (WAMM). Applications can be found in (A. Labib & Shah, 2001; A. Labib, Williams, & O'Connor, 1996). Arbel and Orgler (1990) have introduced a further level, above the stakeholders' level, representing several economic scenarios. This extra level determines the priorities (weights) of the stakeholders. In a compromise method, the individuals' derived priorities can be aggregated at each node. However, according to Forman and Peniwati (1998), this method is "less meaningful and not commonly used". Aggregation methods based on linear programming (Mikhailov, 2004) and on a Bayesian approach (Altuzarra, Moreno-Jiménez, & Salvador, 2007) have been proposed in order to reach a decision even when comparisons are missing, for example when a stakeholder does not feel he or she has the expertise to judge a particular comparison. Uncertainty has also been taken into account by proposing several rankings with an attached probability (M. Escobar & Moreno-Jiménez, 2007; Van den Honert, 1998). A group decision may be skewed by collusion or by distortion of the judgements in order to favour a preferred outcome. As individual identities are lost with an aggregation, we prefer to avoid an early aggregation. Condon, Golden, & Wasil (2003) have developed a programme to visualise the decision of each participant, which facilitates the detection of outliers.

4. Conclusion and future developments

Decisions that need support methods are difficult by definition and therefore complex to model. A trade-off between perfect modelling and usability of the model should be achieved. It is our belief that AHP has reached this compromise and will be useful for many other cases, as it has been in the past. In particular, AHP has broken out of the academic community to be widely used by practitioners. This widespread use is certainly due to its ease of application and to the structure of AHP, which follows the intuitive way in which managers solve problems. The hierarchical modelling of the problem, the possibility to adopt verbal judgements and the verification of the consistency are its major assets.
Expert Choice, the user-friendly supporting software, has certainly largely contributed to the success of the method. It incorporates intuitive graphical user interfaces, automatic calculation of priorities and inconsistencies, and several ways to process a sensitivity analysis (Ishizaka & Labib, 2009). Today, several other supporting software packages have been developed: Decision Lens, HIPRE 3+, RightChoiceDSS, Criterium, EasyMind, Questfox, ChoiceResults, AHPProject, 123AHP..., not to mention that a template in Excel can also easily be generated. Along with its traditional applications, a new trend, as compiled by the work of Ho (2008), is to use AHP in conjunction with other methods: mathematical programming techniques like linear programming, data envelopment analysis (DEA), fuzzy sets, house of quality, genetic algorithms, neural networks, SWOT analysis and so on. There is little doubt that AHP will be more and more frequently adopted.

AHP still suffers from some theoretical disputes. Rank reversal is surely the most debated problem. This phenomenon is still not fully resolved, and perhaps it never will be, because the aggregation of preferences transposed from scales with different units is not easily interpretable and is even questionable according to the French school (Roy, 1996). The assumption of criteria independence (no correlation) may sometimes be a limitation of AHP (and of other MCDM methods). The Analytic Network Process (ANP), a generalisation of AHP with feedback to adjust weights, may be a solution. However, the decision-maker must answer a much larger number of questions, which may be quite complex: e.g. "Given an alternative and a criterion, which of the two alternatives influences the given criterion more and how much more than another alternative" (T. Saaty & Takizawa, 1986). A simplified ANP, while still keeping its properties, would be beneficial for a wider adoption of the method. Another direction of research will probably be on the softer side. The choice of a hierarchy and of a judgement scale is important and difficult. Problem structuring methods could help in the construction of AHP hierarchies, which is its least formalised aspect (Petkov & Mihova-Petkova, 1997; Petkov, Petkova, Andrew, & Nepal, 2007).

5. References

Aczél, J., & Saaty, T. (1983). Procedures for synthesizing ratio judgements. Journal of Mathematical Psychology, 27, 93-102. Aguarón, J., & Moreno-Jiménez, J. (2000). Local Stability Intervals in the Analytic Hierarchy Process. European Journal of Operational Research, 125, 113-132. Aguarón, J., & Moreno-Jiménez, J. (2003). The Geometric Consistency Index: Approximated Thresholds. European Journal of Operational Research, 147, 137-145. Akarte, M., Surendra, N., Ravi, B., & Rangaraj, N. (2001). Web based casting supplier evaluation using analytical hierarchy process. Journal of the Operational Research Society, 52, 511-522. Alonso, J., & Lamata, T. (2006). Consistency in the Analytic Hierarchy Process: a New Approach. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 14, 445-459. Altuzarra, A., Moreno-Jiménez, J., & Salvador, M. (2007). A Bayesian priorization procedure for AHP-group decision making. European Journal of Operational Research, 182, 367-382. Amiri, M., Zandieh, M., Soltani, R., & Vahdani, B. (2009).
A hybrid multi-criteria decision- making model for firms competence evaluation. Expert Systems with Applications, 36, 12314-12322. Amiri, M. P. (2010). Project selection for oil-fields development by using the AHP and fuzzy TOPSIS methods. Expert Systems with Applications, In Press, Uncorrected Proof, doi: 10.1016/j.eswa.2010.1002.1103. Arbel, A., & Orgler, Y. (1990). An application of the AHP to bank strategic planning: The mergers and acquisitions process. European Journal of Operational Research, 48, 27- 37. Armacost, R., & Hosseini, J. (1994). Identification of determinant attributes using the Analytic Hierarchy Process. Journal of the Academy of Marketing Science, 22, 383- 392. Bajwa, G., Choo, E., & Wedley, W. (2008). Effectiveness Analysis of Deriving Priority Vectors from Reciprocal Pairwise Comparison Matrices. Asia-Pacific Journal of Operational Research, 25, 279-299. Bana e Costa, C., & Vansnick, J. (2008). A Critical Analysis of the Eigenvalue Method Used to Derive Priorities in AHP. European Journal of Operational Research, 187, 1422- 1428. Barthélemy, J. (2003). The seven deadly sins of outsourcing. Academy of Management Executive, 17, 87‑100. Barzilai, J. (1997). Deriving Weights from Pairwise Comparisons Matrices. Journal of the Operational Research Society, 48, 1226-1232. Barzilai, J. (2005). Measurement and preference function modelling. International Transactions in Operational Research, 12, 173-183. Barzilai, J., & Lootsma, F. (1997). Power relation and group aggregation in the multiplicative AHP and SMART. Journal of Multi-Criteria Decision Analysis, 6, 155-165. Belton, V., & Gear, A. (1983). On a Shortcoming of Saaty's Method of Analytical Hierarchies. Omega, 11, 228-230. Belton, V., & Gear, A. (1985). The Legitimacy of Rank Reversal—A Comment. Omega, 13, 143-144. Belton, V., & Stewart, T. J. (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Boston: Kluwer Academic Publishers. [Pre-print version], please cite as: Ishizaka A., Labib A. Review of the main developments in the analytic hierarchy process, Expert Systems with Applications, 38(11), 14336-14345, 2011 Brugha, C. (2004). Structure of multi-criteria decision-making. Journal of the Operational Research Society, 55, 1156-1168. Bryson, N. (1996). Group decision-making and the analytic hierarchy process: Exploring the consensus-relevant information content. Computers & Operations Research, 23, 27- 35. Budescu, D. (1984). Scaling binary comparison matrices: A comment on Narasimhan’s proposal and other methods. Fuzzy Sets and Systems, 14, 187-192. Budescu, D., Zwick, R., & Rapoport, A. (1986). A comparison of the eigenvalue method and the geometric mean procedure for ratio scaling. Applied psychological measurement, 10, 69–78. Carmone, F., Kara, A., & Zanakis, S. (1997). A Monte Carlo investigation of incomplete pairwise comparison matrices in AHP. European Journal of Operational Research, 102, 538-553. Cebeci, U. (2009). Fuzzy AHP-based decision support system for selecting ERP systems in textile industry by using balanced scorecard. Expert Systems with Applications, 36, 8900-8909. Celik, M., Kandakoglu, A., & Er, D. (2009). Structuring fuzzy integrated multi-stages evaluation model on academic personnel recruitment in MET institutions. Expert Systems with Applications, 36, 6918-6927. Chamodrakas, I., Batis, D., & Martakos, D. (2010). Supplier selection in electronic marketplaces using satisficing and fuzzy AHP. Expert Systems with Applications, 37, 490-498. Chang, C.-W., Wu, C.-R., & Lin, H.-L. 
(2009). Applying fuzzy hierarchy multiple attributes to construct an expert decision making process. Expert Systems with Applications, 36, 7363-7368. Chen, H., & Kocaoglu, D. (2008). A sensitivity analysis algorithm for hierarchical decision models. European Journal of Operational Research, 185, 266-288. Chen, M. K., & Wang, S.-C. (2010). The critical factors of success for information service industry in developing international market: Using analytic hierarchy process (AHP) approach. Expert Systems with Applications, 37, 694-704. Cho, E., & Wedley, W. (2004). A Common Framework for Deriving Preference Values fro m Pairwise Comparison Matrices. Computers and Operations Research 31, 893-908. Condon, E., Golden, B., & Wasil, E. (2003). Visualizing group decisions in the analytic hierarchy process. Computers & Operations Research, 30, 1435-1445. Crawford G, & C, W. (1985). A Note on the Analysis of Subjective Judgement Matrices. Journal of Mathematical Psychology, 29, 387-405. Dagdeviren, M., Yavuz, S., & KilInç, N. (2009). Weapon selection using the AHP and TOPSIS methods under fuzzy environment. Expert Systems with Applications, 36, 8143-8151. Dodd, F., & Donegan, H. (1995). Comparison of priotization techniques using interhierarchy mappings. Journal of the Operational Research Society, 46, 492-498. Donegan, H., Dodd, F., & McMaster, T. (1992). A new approach to AHP decision-making. The Statician 41, 295-302. Dyer, J. (1990a). A clarification of ―Remarks on the Analytic Hierarchy Process‖. Management Science, 36, 274-275. Dyer, J. (1990b). Remarks on the Analytic Hierarchy Process. Management Science, 36, 249- 258. Escobar, M., & Moreno-Jiménez, J. (2000). Reciprocal distributions in the analytic hierarchy process. European Journal of Operational Research, 123, 154-174. [Pre-print version], please cite as: Ishizaka A., Labib A. Review of the main developments in the analytic hierarchy process, Expert Systems with Applications, 38(11), 14336-14345, 2011 Escobar, M., & Moreno-jiménez, J. (2007). Aggregation of Individual Preference Structures in Ahp-Group Decision Making. Group Decision and Negotiation, 16, 287-301. Fechner , G. (1860). Elemente der Psychophysik (Vol. 2): Breitkopf und Härtel. Fichtner, J. (1986). On deriving priority vectors from matrices of pairwise comparisons. Socio-Economic Planning Sciences, 20, 341-345. Forman, E. (1990). Random Indices for Incomplete Pairwise Comparison Matrices. European Journal of Operational Research, 48, 153-155. Forman, E., & Gass, S. (2001). The Analytic Hierarchy Process – An Exposition. Operations Research, 49, 469-486. Forman, E., & Peniwati, K. (1998). Aggregating individual judgments and priorities with the analytic hierarchy process. European Journal of Operational Research, 108, 165-169. Forman, E. H. (1990). Random indices for incomplete pairwise comparison matrices. European Journal of Operational Research, 48, 153-155. Golany, B., & Kress, M. (1993). A multicriteria evaluation of the methods for obtaining weights from ratio-scale matrices. European Journal of Operational Research, 69, 210–220. Golden, B., Wasil, E., & Harker, P. (1989). The Analytic Hierarchy Process: Applications and Studies. Heidelberg: Springer-Verlag. Haghighi, M., Divandari, A., & Keimasi, M. (2010). The impact of 3D e-readiness on e- banking development in Iran: A fuzzy AHP analysis. Expert Systems with Applications, 37, 4084-4093. Harker, P. (1987). Incomplete Pairwise comparisons in the Analytic Hierarchy Process. Mathematical and Computer Modelling. 
Harker, P., & Vargas, L. (1987). The Theory of Ratio Scale Estimation: Saaty's Analytic Hierarchy Process. Management Science, 33, 1383-1403. Harker, P., & Vargas, L. (1990). Reply to ―Remarks on the Analytic Hierarchy Process‖. Management Science, 36, 269-273. Harker, P. T. (1987a). Alternative modes of questioning in the analytic hierarchy process. Mathematical Modelling, 9, 353-360. Harker, P. T. (1987b). Incomplete pairwise comparisons in the analytic hierarchy process. Mathematical Modelling, 9, 837-848. Herman, M., & Koczkodaj, W. (1996). A Monte Carlo Study of Pairwise Comparison. Information Processing Letters 57, 25-29. Ho, W. (2008). Integrated analytic hierarchy process and its applications - A literature review. European Journal of Operational Research, 186, 211-228. Ho, W., & Emrouznejad, A. (2009). Multi-criteria logistics distribution network design using SAS/OR. Expert Systems with Applications, 36, 7288-7298. Holder, R. (1990). Some Comment on the Analytic Hierarchy Process. Journal of the Operational Research Society, 41, 1073-1076. Holder, R. (1991). Response to Holder's Comments on the Analytic Hierarchy Process: Response to the Response. Journal of the Operational Research Society, 42, 914-918. Hovanov, N., Kolari, J., & Sokolov, M. (2008). Deriving weights from general pairwise comparisons matrices. Mathematical Social Sciences 55, 205-220. Hsu, S. H., Kao, C.-H., & Wu, M.-C. (2009). Design facial appearance for roles in video games. Expert Systems with Applications, 36, 4929-4934. Hsu, Y.-L., Lee, C.-H., & Kreng, V. B. (2010). The application of Fuzzy Delphi Method and Fuzzy AHP in lubricant regenerative technology selection. Expert Systems with Applications, 37, 419-425. Huang, Y.-F. (2002). Enhancement on sensitivity analysis of priority in analytic hierarchy process. International Journal of General Systems, 31, 531 - 542. [Pre-print version], please cite as: Ishizaka A., Labib A. Review of the main developments in the analytic hierarchy process, Expert Systems with Applications, 38(11), 14336-14345, 2011 Iç, Y. T., & Yurdakul, M. (2009). Development of a decision support system for machining center selection. Expert Systems with Applications, 36, 3505-3513. Ishizaka, A. (2004a). The Advantages of Clusters in AHP. In 15th Mini-Euro Conference MUDSM. Coimbra. Ishizaka, A. (2004b). Développement d’un Système Tutorial Intelligent pour Dériver des Priorités dans l’AHP. Berlin: http://www.dissertation.de. Ishizaka, A. (2008). A Multicriteria Approach with AHP and Clusters for Supplier Selection. In 15th International Annual EurOMA Conference. Groningen. Ishizaka, A., Balkenborg, D., & Kaplan, T. (2010). Influence of aggregation and measurement scale on ranking a compromise alternative in AHP. Journal of the Operational Research Society, 62, 700-710. Ishizaka, A., & Labib, A. (2009). Analytic Hierarchy Process and Expert Choice: benefits and limitations. OR Insight, 22, 201–220. Ishizaka, A., & Lusti, M. (2004). An Expert Module to Improve the Consistency of AHP Matrices. International Transactions in Operational Research, 11, 97-105. Ishizaka, A., & Lusti, M. (2006). How to derive priorities in AHP: a comparative study. Central European Journal of Operations Research 14, 387-400. Ji, P., & Jiang, R. (2003). Scale transitivity in the AHP. Journal of the Operational Research Society, 54, 896-905. Johnson, C., Beine, W., & Wang, T. (1979). Right-Left Asymmetry in an Eigenvector Ranking Procedure. Journal of Mathematical Psychology, 19, 61-64. Jones, D., & Mardle, S. 
(2004). A Distance-Metric Methodology for the Derivation of Weights from a Pairwise Comparison Matrix. Journal of the Operational Research Society, 55, 869-875. Kahraman, C., & Kaya, I. (2010). A fuzzy multicriteria methodology for selection among energy alternatives. Expert Systems with Applications, In Press, Corrected Proof, DOI: 10.1016/j.eswa.2010.1002.1095. Kainulainen, T., Leskinen, P., Korhonen, P., Haara, A., & Hujala, T. (2009). A statistical approach to assessing interval scale preferences in discrete choice problems. Journal of the Operational Research Society, 60, 252-258. Karapetrovic, S., & Rosenbloom, E. (1999). A Quality Control Approach to Consistency Paradoxes in AHP. European Journal of Operational Research, 119, 704-718. Khosla, R., Goonesekera, T., & Chu, M.-T. (2009). Separating the wheat from the chaff: An intelligent sales recruitment and benchmarking system. Expert Systems with Applications, 36, 3017-3027. Kumar, S., & Vaidya, O. (2006). Analytic hierarchy process: An overview of applications. European Journal of Operational Research, 169, 1-29. Kwiesielewicz, M., & van Uden, E. (2004). Inconsistent and Contradictory Judgements in Pairwise Comparison Method in AHP. Computers and Operations Research 31, 713- 719. Labib, A., & Shah, J. (2001). Management decisions for a continuous improvement process in industry using the Analytical Hierarchy Process. Journal of Work Study, 50, 189- 193. Labib, A., Williams, G., & O’Connor, R. (1996). Formulation of an appropriate productive maintenance strategy using multiple criteria decision making. Maintenance Journal, 11, 66-75. Labib, A. W. (2011). A supplier selection model: a comparison of fuzzy logic and the analytic hierarchy process. International Journal of Production Research. Lai, W.-H., & Tsai, C.-T. (2009). Fuzzy rule-based analysis of firm's technology transfer in Taiwan's machinery industry. Expert Systems with Applications, 36, 12012-12022. http://www.dissertation.de/ [Pre-print version], please cite as: Ishizaka A., Labib A. Review of the main developments in the analytic hierarchy process, Expert Systems with Applications, 38(11), 14336-14345, 2011 Lane, E., & Verdini, W. (1989). A Consistency Test for AHP Decision Makers. Decision Sciences, 20, 575-590. Lee, S.-H. (2010). Using fuzzy AHP to develop intellectual capital evaluation model for assessing their performance contribution in a university. Expert Systems with Applications, In Press, Corrected Proof, DOI: 10.1016/j.eswa.2009.1012.1020. Leskinen, P., & Kangas, J. (2005). Rank reversal in multi-criteria decision analysis with statistical modelling of ratio-scale pairwise comparisons. Journal of the Operational Research Society, 56, 855-861. Li, S., & Li, J. Z. (2009). Hybridising human judgment, AHP, simulation and a fuzzy expert system for strategy formulation under uncertainty. Expert Systems with Applications, 36, 5557-5564. Li, T.-S., & Huang, H.-H. (2009). Applying TRIZ and Fuzzy AHP to develop innovative design for automated manufacturing systems. Expert Systems with Applications, 36, 8302-8312. Li, Y., Tang, J., & Luo, X. (2010). An ECI-based methodology for determining the final importance ratings of customer requirements in MP product improvement. Expert Systems with Applications, In Press, Uncorrected Proof, doi: 10.1016/j.eswa.2010.1002.1100. Liberatore, M., & Nydick, R. (2008). The analytic hierarchy process in medical and health care decision making: A literature review. European Journal of Operational Research, 189, 194-207. 
Lim, K., & Swenseth, S. (1993). An iterative procedure for reducing problem size in large scale AHP problems. European Journal of Operational Research, 67, 64-74.
Limam Mansar, S., Reijers, H., & Ounnar, F. (2009). Development of a decision-making strategy to improve the efficiency of BPR. Expert Systems with Applications, 36, 3248-3262.
Lin, C.-L., Chen, C.-W., & Tzeng, G.-H. (2010). Planning the development strategy for the mobile communication package based on consumers' choice preferences. Expert Systems with Applications, In Press, Corrected Proof, DOI: 10.1016/j.eswa.2009.1011.1009.
Lin, C. (2007). A Revised Framework for Deriving Preference Values from Pairwise Comparison Matrices. European Journal of Operational Research, 176, 1145-1150.
Liu, C.-C., & Chen, S.-Y. (2009). Prioritization of digital capital measures in recruiting website for the national armed forces. Expert Systems with Applications, 36, 9415-9421.
Lootsma, F. (1989). Conflict Resolution via Pairwise Comparison of Concessions. European Journal of Operational Research, 40, 109-116.
Lootsma, F. (1993). Scale sensitivity in the multiplicative AHP and SMART. Journal of Multi-Criteria Decision Analysis, 2, 87-110.
Lootsma, F. (1996). A model for the relative importance of the criteria in the Multiplicative AHP and SMART. European Journal of Operational Research, 94, 467-476.
Ma, D., & Zheng, X. (1991). 9/9-9/1 Scale Method of AHP. In 2nd Int. Symposium on AHP (Vol. 1, pp. 197-202). Pittsburgh.
Madu, C., & Kuei, C.-H. (1995). Stability analyses of group decision making. Computers & Industrial Engineering, 28, 881-892.
Masuda, T. (1990). Hierarchical sensitivity analysis of priority used in analytic hierarchy process. International Journal of Systems Science, 21, 415-427.
Mikhailov, L. (2004). Group prioritization in the AHP by fuzzy preference programming method. Computers & Operations Research, 31, 293-301.
Mikhailov, L., & Singh, M. G. (1999). Comparison analysis of methods for deriving priorities in the analytic hierarchy process. In IEEE International Conference on Systems, Man, and Cybernetics (Vol. 1, pp. 1037-1042). Tokyo.
Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Miller, J. (1966). The assessment of Worth: a systematic procedure and its experimental validation. Doctoral dissertation, MIT.
Miller, J. (1969). Assessing alternative transportation systems. In Memorandum RM-5865-DOR: The RAND Corporation.
Miller, J. (1970). Professional decision-making: a procedure for evaluating complex alternatives. New York: Praeger Publishers.
Millet, I., & Harker, P. (1990). Globally effective questioning in the Analytic Hierarchy Process. European Journal of Operational Research, 48, 88-97.
Millet, I., & Saaty, T. (2000). On the Relativity of Relative Measures - Accommodating both Rank Preservation and Rank Reversals in the AHP. European Journal of Operational Research, 121, 205-212.
Millet, I., & Schoner, B. (2005). Incorporating negative values into the Analytic Hierarchy Process. Computers and Operations Research, 32, 3163-3173.
Naghadehi, M. Z., Mikaeil, R., & Ataei, M. (2009). The application of fuzzy analytic hierarchy process (FAHP) approach to selection of optimum underground mining method for Jajarm Bauxite Mine, Iran. Expert Systems with Applications, 36, 8218-8226.
Niaraki, A. S., & Kim, K. (2009). Ontology based personalized route planning system using a multi-criteria decision making approach. Expert Systems with Applications, 36, 2250-2259.
O'Leary, D. (1993). Determining Differences in Expert Judgment: Implications for Knowledge Acquisition and Validation. Decision Sciences, 24, 395-408.
Omkarprasad, V., & Sushil, K. (2006). Analytic hierarchy process: an overview of applications. European Journal of Operational Research, 169, 1-29.
Önüt, S., Efendigil, T., & Soner Kara, S. (2009). A combined fuzzy MCDM approach for selecting shopping center site: An example from Istanbul, Turkey. Expert Systems with Applications, In Press, Uncorrected Proof, DOI: 10.1016/j.eswa.2009.1006.1080.
Pan, N. (2009). Selecting an appropriate excavation construction method based on qualitative assessments. Expert Systems with Applications, 36, 5481-5490.
Peláez, P., & Lamata, M. (2003). A New Measure of Consistency for Positive Reciprocal Matrices. Computers & Mathematics with Applications, 46, 1839-1845.
Pérez, J. (1995). Some comments on Saaty's AHP. Management Science, 41, 1091-1095.
Petkov, D., & Mihova-Petkova, O. (1997). The Analytic Hierarchy Process and Systems Thinking. In 13th International MCDM Conference (Vol. 465, pp. 243-252). Cape Town: Springer.
Petkov, D., Petkova, O., Andrew, T., & Nepal, T. (2007). Mixing Multiple Criteria Decision Making with soft systems thinking techniques for decision support in complex situations. Decision Support Systems, 43, 1615-1629.
Pöyhönen, M., Hamalainen, R., & Salo, A. (1997). An Experiment on the Numerical Modelling of Verbal Ratio Statements. Journal of Multi-Criteria Decision Analysis, 6, 1-10.
Ramanathan, R., & Ganesh, L. (1994). Group preference aggregation methods employed in AHP: An evaluation and an intrinsic process for deriving members' weightages. European Journal of Operational Research, 79, 249-265.
Roy, B. (1996). Multicriteria Methodology for Decision Analysis. Dordrecht: Kluwer Academic Publishers.
Saaty, T. (1972). An eigenvalue allocation model for prioritization and planning. Working paper, Energy Management and Policy Center, University of Pennsylvania.
Saaty, T. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15, 234-281.
Saaty, T. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.
Saaty, T. (1986). Axiomatic Foundation of the Analytic Hierarchy Process. Management Science, 32, 841-855.
Saaty, T. (1990). An Exposition of the AHP in Reply to the Paper "Remarks on the Analytic Hierarchy Process". Management Science, 36, 259-268.
Saaty, T. (1991). Response to Holder's Comments on the Analytic Hierarchy Process. Journal of the Operational Research Society, 42, 909-929.
Saaty, T. (1994). Highlights and critical points in the theory and application of the Analytic Hierarchy Process. European Journal of Operational Research, 74, 426-447.
Saaty, T. (2003). Decision-making with the AHP: Why is the Principal Eigenvector necessary? European Journal of Operational Research, 145, 85-91.
Saaty, T. (2006). Rank from Comparisons and from Ratings in the Analytic Hierarchy/Network Processes. European Journal of Operational Research, 168, 557-570.
Saaty, T., & Forman, E. (1992). The Hierarchon: A Dictionary of Hierarchies (Vol. V). Pittsburgh: RWS Publications.
Saaty, T., & Hu, G. (1998). Ranking by Eigenvector Versus other Methods in the Analytic Hierarchy Process. Applied Mathematics Letters, 11, 121-125.
Saaty, T., & Ozdemir, M. (2003). Negative Priorities in the Analytic Hierarchy Process. Mathematical and Computer Modelling, 37, 1063-1075.
Saaty, T., & Takizawa, M. (1986). Dependence and Independence: from Linear Hierarchies to Nonlinear Networks. European Journal of Operational Research, 26, 229-237.
Saaty, T., & Vargas, L. (1984a). Comparison of Eigenvalue, Logarithmic Least Squares and Least Squares Methods in Estimating Ratios. Mathematical Modelling, 5, 309-324.
Saaty, T., & Vargas, L. (1984b). Inconsistency and Rank Preservation. Journal of Mathematical Psychology, 28, 205-214.
Saaty, T., & Vargas, L. (1984c). The legitimacy of rank reversal. Omega, 12, 513-516.
Saaty, T., & Vargas, L. (2007). Dispersion of group judgments. Mathematical and Computer Modelling, 46, 918-925.
Saaty, T. L., & Vargas, L. G. (2005). The possibility of group welfare functions. International Journal of Information Technology & Decision Making, 4, 167-176.
Salo, A., & Hamalainen, R. (1997). On the Measurement of Preference in the Analytic Hierarchy Process. Journal of Multi-Criteria Decision Analysis, 6, 309-319.
Schoner, B., & Wedley, W. (1989). Ambiguous Criteria Weights in AHP: Consequences and Solutions. Decision Sciences, 20, 462-475.
Schoner, B., Wedley, W., & Choo, E. (1993). A Unified Approach to AHP with Linking Pins. European Journal of Operational Research, 64, 384-392.
Seçme, N. Y., Bayrakdaroglu, A., & Kahraman, C. (2009). Fuzzy performance evaluation in Turkish Banking Sector using Analytic Hierarchy Process and TOPSIS. Expert Systems with Applications, 36, 11699-11709.
Sen, C. G., & Çınar, G. (2010). Evaluation and pre-allocation of operators with multiple skills: A combined fuzzy AHP and max-min approach. Expert Systems with Applications, 37, 2043-2053.
Shen, Y., Hoerl, A., & McConnell, W. (1992). An incomplete design in the analytic hierarchy process. Mathematical and Computer Modelling, 16, 121-129.
Shim, J. (1989). Bibliographical research on the analytic hierarchy process (AHP). Socio-Economic Planning Sciences, 23, 161-167.
Sipahi, S., & Timor, M. (2010). The analytic hierarchy process and analytic network process: an overview of applications. Management Decision, 48, 775-808.
Stam, A., & Duarte Silva, P. (2003). On Multiplicative Priority Rating Methods for AHP. European Journal of Operational Research, 145, 92-108.
Stein, W., & Mizzi, P. (2007). The Harmonic Consistency Index for the Analytic Hierarchy Process. European Journal of Operational Research, 177, 488-497.
Stevens, S. (1957). On the psychophysical law. Psychological Review, 64, 153-181.
Stillwell, W., von Winterfeldt, D., & John, R. (1987). Comparing hierarchical and non-hierarchical weighting methods for eliciting multiattribute value models. Management Science, 33, 442-450.
Su, S., Yu, J., & Zhang, J. (2010). Measurements study on sustainability of China's mining cities. Expert Systems with Applications, In Press, DOI: 10.1016/j.eswa.2010.1002.1140.
Thurstone, L. (1927). A law of comparative judgment. Psychological Review, 34, 273-286.
Triantaphyllou, E. (2001). Two new cases of rank reversals when the AHP and some of its additive variants are used that do not occur with the Multiplicative AHP. Journal of Multi-Criteria Decision Analysis, 10, 11-25.
Triantaphyllou, E., & Sánchez, A. (1997). A sensitivity analysis approach for some deterministic multi-criteria decision-making methods. Decision Sciences, 28, 151-194.
Troutt, M. (1988). Rank Reversal and the Dependence of Priorities on the Underlying MAV Function. Omega, 16, 365-367.
Tseng, Y.-F., & Lee, T.-Z. (2009). Comparing appropriate decision support of human resource practices on organizational performance with DEA/AHP model. Expert Systems with Applications, 36, 6548-6558.
Tummala, V., & Wan, Y. (1994). On the Mean Random Inconsistency Index of the Analytic Hierarchy Process (AHP). Computers & Industrial Engineering, 27, 401-404.
Van den Honert, R. (1998). Stochastic group preference modelling in the multiplicative AHP: A model of group consensus. European Journal of Operational Research, 110, 99-111.
Van den Honert, R., & Lootsma, F. (1997). Group preference aggregation in the multiplicative AHP: The model of the group decision process and Pareto optimality. European Journal of Operational Research, 96, 363-370.
Vargas, L. (1985). A Rejoinder. Omega, 13, 249.
Vargas, L. (1990). An overview of the analytic hierarchy process and its applications. European Journal of Operational Research, 48, 2-8.
Vargas, L. (1997). Comments on Barzilai and Lootsma: Why the Multiplicative AHP is Invalid: A Practical Counterexample. Journal of Multi-Criteria Decision Analysis, 6, 169-170.
Vidal, L.-A., Sahin, E., Martelli, N., Berhoune, M., & Bonan, B. (2010). Applying AHP to select drugs to be produced by anticipation in a chemotherapy compounding unit. Expert Systems with Applications, 37, 1528-1534.
Wang, H. S., Che, Z. H., & Wu, C. (2010). Using analytic hierarchy process and particle swarm optimization algorithm for evaluating product plans. Expert Systems with Applications, 37, 1023-1034.
Wang, T.-Y., & Yang, Y.-H. (2009). A fuzzy model for supplier selection in quantity discount environments. Expert Systems with Applications, 36, 12179-12187.
Wang, Y., Chin, K.-S., & Luo, Y. (2009). Aggregation of direct and indirect judgements in pairwise comparison matrices with a re-examination of the criticisms by Bana e Costa and Vansnick. Information Sciences, 179, 329-337.
Wang, Y., & Elhag, T. (2006). An Approach to Avoiding Rank Reversal in AHP. Decision Support Systems, 42, 1474-1480.
Wang, Y., & Luo, Y. (2009). On Rank Reversal in Decision Analysis. Mathematical and Computer Modelling, 49, 1221-1229.
Webber, S., Apostolou, B., & Hassell, J. (1996). The sensitivity of the analytic hierarchy process to alternative scale and cue presentations. European Journal of Operational Research, 96, 351-362.
Weber, M., Eisenführ, F., & von Winterfeldt, D. (1988). The Effects of Splitting Attributes on Weights in Multiattribute Utility Measurement. Management Science, 34, 431-445.
Wedley, W. (1993). Consistency prediction for incomplete AHP matrices. Mathematical and Computer Modelling, 17, 151-161.
Wedley, W., Schoner, B., & Tang, T. (1993). Starting rules for incomplete comparisons in the analytic hierarchy process. Mathematical and Computer Modelling, 17, 93-100.
Wu, C.-R., Lin, C.-T., & Lin, Y.-F. (2009). Selecting the preferable bancassurance alliance strategic by using expert group decision technique. Expert Systems with Applications, 36, 3623-3629.
Yang, C.-L., Chuang, S.-P., & Huang, R.-H. (2009). Manufacturing evaluation system based on AHP/ANP approach for wafer fabricating industry. Expert Systems with Applications, 36, 11369-11377.
Yokoyama, M. (1921). The nature of the affective judgment in the method of paired comparison. The American Journal of Psychology, 32, 357-369.
Zahedi, F. (1986). The analytic hierarchy process: a survey of the method and its applications. Interfaces, 16, 96-108.