This is the accepted version of: Hodgett, R. E. and Siraj, S. (2019). SURE: A method for decision-making under uncertainty. Expert Systems with Applications, 115, pp. 684-694. https://doi.org/10.1016/j.eswa.2018.08.048
© 2018 Elsevier Ltd. This manuscript version is made available under the CC BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

For consideration in Expert Systems with Applications.

SURE: a method for decision-making under uncertainty

Richard Edgar Hodgett
Leeds University Business School, The University of Leeds, LS2 9JT, United Kingdom - r.e.hodgett@leeds.ac.uk

Sajid Siraj
Leeds University Business School, The University of Leeds, LS2 9JT, United Kingdom - s.siraj@leeds.ac.uk

Managerial decision-making often involves the consideration of multiple criteria with high levels of uncertainty. Multi-attribute utility theory, a primary method proposed for decision-making under uncertainty, has been repeatedly shown to be difficult to use in practice. This paper presents a novel approach termed Simulated Uncertainty Range Evaluations (SURE) to aid decision-makers in the presence of high levels of uncertainty. SURE has evolved from an existing method that has been applied extensively in the pharmaceutical and speciality chemical sectors for uncertain decisions in whole process design. The new method utilises simulations based upon triangular distributions to create a plot which visualises the preferences and overlapping uncertainties of decision alternatives. It enables decision-makers to visualise the not-so-obvious uncertainties of decision alternatives. In a real-world case study for a large pharmaceutical company, SURE was compared to other widely-used methods for decision-making and was the only method that correctly identified the alternative eventually chosen by the company. The case study demonstrates that SURE can perform better than other existing methods for decision-making involving multiple criteria and uncertainty.

Key words: Simulated Uncertainty Range Evaluations; MCDM; Uncertainty; Simulations; AHP; ELECTRE III.

History: This paper was first submitted on 13th December 2017. Revisions were submitted on 9th May 2018 and on 2nd August 2018. The paper was accepted on 28th August 2018.

1. Introduction

It is often the case in managerial decision-making that alternatives are assessed in terms of several criteria.
These assessments are not so straightforward due to the uncertainty present in real-life situations. Most multi-criteria decision-making (MCDM) methods have been developed or adapted in one way or another to handle uncertainty, often focusing on the uncertainty of the criteria weights. Many of these methods are founded on multi-attribute utility theory (MAUT) (Keeney & Raiffa, 1976), which is primarily designed to handle trade-offs among multiple criteria for a given situation. MAUT is one of the most well-known MCDM methods that was explicitly developed to deal with uncertain information (Belton & Stewart, 2002). It requires the selection of utility functions which represent the risk attitude of the decision-maker for each criterion in a decision problem. It has been extensively discussed in the decision-making literature and is generally valued for its axiomatic foundations. However, MAUT is also known to be difficult to use in practice (Polatidis, et al., 2006; Kumar, et al., 2017) as it specifies uncertain outcomes by means of probability distributions which are not typically known (Schaetter, 2016). Excessive time and a high cognitive load are required to derive an accurate representation of an individual's utility function (Lumby & Jones, 2003; Cinelli, et al., 2014). Perhaps as a result, there are few real-world examples of MAUT being used in the literature in comparison to its theoretical development (Durbach & Stewart, 2012b).

In this context, Multi-Attribute Range Evaluations (MARE) (Hodgett, et al., 2014) is recommended for handling uncertain decisions. Although MARE was primarily proposed for decision-making in whole process design in the manufacturing industry, the technique is applicable to any decision problem involving multiple criteria and uncertainty. As a result, MARE has been further developed into a number of proprietary software tools as well as open-source libraries like the MCDA package for R (Bigaret, et al., 2017). MARE requires the decision-maker to provide a range in the form of a minimum, most likely and maximum value for each alternative with respect to each criterion. Using a range to capture preferences has become more common in medical applications (Peleg, et al., 2012), survey design (Schwarz, 1999; Bruine de Bruin, et al., 2012) and software development (Wagner, et al., 2017). Peleg et al. (2012) identified that some factors are difficult to represent with a single value and that ranges can be relatively easy for experts to agree upon. This indicates that asking for ranges is beneficial in both individual and group decision-making environments. It is therefore important to investigate and incorporate the use of ranges in MCDM techniques.

In this paper, we propose a new MCDM methodology, termed Simulated Uncertainty Range Evaluations (SURE), which allows decision-makers to provide their preferences as ranges and utilises triangular distributions to account for uncertain information. SURE offers a more theoretically sound methodology and an improved output for visualising the uncertainty associated with each decision alternative. The value of the proposed method is assessed using a real-life case study from a large pharmaceutical company, where it is compared against other widely-used methods for decision-making. In the next section, we give a detailed overview of MARE and the issues associated with it in order to make the case for SURE, which is discussed in the following section.
2. Overview and limitations of Multi-Attribute Range Evaluations

MARE was initially proposed as a methodology for handling uncertain decisions in whole process design. Whole process design considers the optimisation of the entire product development process, from raw materials to end product, rather than focusing on each individual unit operation. The complexity involved with the implementation of whole process design requires decision-making, often with limited or uncertain information. A survey sent to management in the speciality chemical and pharmaceutical industries who utilise whole process design identified that the majority (69%) of respondents would spare an hour (or even less) for decision-making tasks, and many respondents (89%) preferred a decision-making system that guides the user in the right direction quickly, as opposed to one producing exact results but with a very long entry procedure (Hodgett, et al., 2014). These findings meant that a decision-making methodology was needed that could handle uncertain information but could also be used quickly.

Most literature that discusses uncertainty in MCDM focuses on probabilities, decision weights, explicit risk measures, fuzzy numbers and scenarios (Durbach & Stewart, 2012). The search for a balance between theoretically rich and complex methods that can handle uncertainty and those that are transparent and easily understood (yet may not conform to the prescriptive principles of rationality) has polarized much of MCDM research. This led to the development of MARE.

MARE is based on the weighted sum of judgments and preferences, one of the most widely used and simplest MCDM approaches. The weighted sum method calculates a score for each alternative A_i by summing the products of each decision variable with its corresponding criterion weight. The decision-maker can provide values (v_j) for the importance of each criterion or directly provide criteria weights (w_j) that sum to one. If values are provided, the summation ratio normalisation method (sometimes referred to as additive normalisation or distributed normalisation) is typically used to convert the values into weights:

w_j = \frac{v_j}{\sum_{j=1}^{n} v_j}    (1)

where a decision problem has n fixed criteria, v_j denotes the criterion value of the jth criterion and w_j denotes the criterion weight of the jth criterion. As the weighted sum method assumes every criterion is maximised, any minimising criterion should have its corresponding decision variables inverted. In the likely event that the decision criteria use different measurement units, the summation ratio normalisation method (Eq. 1) is often utilised before calculating the alternative scores with:

A_i = \sum_{j=1}^{n} w_j a_{ij}, \quad \text{for } i = 1, 2, \ldots, m    (2)

where a decision problem has m fixed alternatives and n fixed criteria, w_j denotes the weight of each criterion and a_{ij} is the score for the ith alternative with respect to the jth criterion.

The primary difference between the weighted sum method and MARE is that the decision-maker can assign up to three scores for each alternative (A_i) in terms of each criterion (C_j). These scores represent the most likely values (a_{ij}), the lowest possible values (a_{ij}^{min}) and the highest possible values (a_{ij}^{max}). If the most likely decision variable is certain, then all three values converge and no values are required for a_{ij}^{min} or a_{ij}^{max}, as a_{ij} is sufficient to represent all three. In any case, the most likely value should always remain between the lowest and highest values, and the lowest value must remain less than or equal to the highest value, that is, a_{ij}^{min} <= a_{ij} <= a_{ij}^{max}.
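To make Eqs. (1) and (2) concrete, the following is a minimal sketch in R (the language of the supplementary material); the criterion values happen to match the weights later reported in Table 2, while the decision table a is hypothetical:

# Sketch of Eqs. (1)-(2): convert criterion values into weights, then score
# each alternative with the weighted sum method.
v <- c(71, 26, 96, 61, 50)                     # criterion values
w <- v / sum(v)                                # Eq. (1): summation ratio normalisation

a <- matrix(runif(5 * 5), nrow = 5, ncol = 5)  # hypothetical normalised decision
                                               # table: rows = alternatives
A <- as.vector(a %*% w)                        # Eq. (2): one score per alternative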
There are two major limitations of the MARE methodology. The first stems from an issue with using three values for each alternative. Using three values means that the summation ratio normalisation method (Eq. 1), which divides by the sum of all the decision variables, cannot be applied, because there has to be an equal scale length for the minimum, most likely and maximum decision variables. This issue with maintaining an equal scale length also applies to vector normalisation:

a_{ij}^{*} = \frac{a_{ij}}{\sqrt{\sum_{i=1}^{m} a_{ij}^{2}}}    (3)

where a_{ij}^{*} is the normalised decision variable for the ith alternative with respect to the jth criterion and a_{ij} is the decision variable for the ith alternative with respect to the jth criterion. Consequently, the max scale normalisation procedure, which utilises the largest decision variable (a_j^{max}) for normalisation, is proposed:

a_{ij}^{*} = \frac{a_{ij}}{a_{j}^{max}}    (4)

where a_j^{max} is the largest decision variable with respect to the jth criterion. Max scale normalisation was chosen based on the results of a simulated experiment by Chakraborty & Yeh (2007), who identified that vector normalisation and max scale normalisation are more suitable for use with the weighted sum method when criteria measurement units are diverse in range and there are a small number of alternatives to be assessed. There are of course other normalisation methods, such as the max-min method, which could also be used:

a_{ij}^{*} = \frac{a_{ij} - a_{j}^{min}}{a_{j}^{max} - a_{j}^{min}}    (5)

where a_j^{min} is the smallest decision variable with respect to the jth criterion. Once normalisation is performed, the decision variables are represented by a value between 0 and 1 and are used to calculate the alternative scores with respect to the minimum, most likely and maximum values, with scores closer to 1 being better. The choice of normalisation procedure has been shown to affect the outcome of an MCDM method (Çelen, 2014; Gardziejczyk & Zabicki, 2017), and limiting the number of potential normalisation procedures available is a weakness of the MARE methodology, especially considering that summation ratio normalisation, the most widely applied approach, is not compatible with MARE (Hodgett, et al., 2014).

The second major issue with MARE is the level of preference understood between the lowest/highest and most likely values. Figure 1 shows the output of MARE for an equipment selection decision made by Fujifilm Imaging Colorants Ltd (Hodgett, 2016). The circles indicate the most likely values for each alternative and the error bars represent the minimum and maximum outputs for each alternative. The problem with this output is that the strength of preference between the minimum/maximum and the most likely values cannot be interpreted. This makes it difficult to select an alternative based on the output, particularly if there are many overlapping ranges. Both of the issues with MARE described above are overcome in the SURE methodology, which is presented in the next section.

Figure 1 Output of MARE for an equipment selection decision (Hodgett, 2016)
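For reference, the three normalisation procedures in Eqs. (3)-(5) are straightforward to express in R; a minimal sketch, where x holds one criterion's decision variables for five hypothetical alternatives:

# Sketches of Eqs. (3)-(5) applied to one criterion's decision variables.
x <- c(61, 88, 40, 40, 50)                        # hypothetical scores

vector_norm  <- x / sqrt(sum(x^2))                # Eq. (3): vector normalisation
max_norm     <- x / max(x)                        # Eq. (4): max scale normalisation
max_min_norm <- (x - min(x)) / (max(x) - min(x))  # Eq. (5): max-min normalisation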
3. The Simulated Uncertainty Range Evaluations Methodology

The SURE methodology is also based on the weighted sum method and requires the same information from the decision-maker as MARE, i.e. criteria weights and the minimum, most likely and maximum values for each alternative with respect to each criterion. However, instead of independently calculating single values for the minimum, most likely and maximum for each alternative, random deviates are generated based upon triangular distributions. The methodology for SURE can be summarised in the following five steps (a compact sketch of these steps in R is given at the end of this section):

1. Set the number of decision tables to be simulated (s).
2. Generate s simulated decision tables using the minimum, most likely and maximum values as the input parameters to the triangular distributions.
3. Normalise the decision tables using summation ratio normalisation.
4. Calculate the results of the s decision tables using the weighted sum method.
5. Plot the results using a kernel density plot.

It may be possible to implement SURE without using simulations, using convolution-based formulas for mathematically deriving the densities, but this would be challenging for a decision with many criteria. Using simulations with SURE offers a quick and simple solution on a modern computer.

There are a number of possible distributions that could be used to generate simulated decision tables; however, we suggest the use of triangular distributions. A triangular distribution is a continuous probability distribution that has three parameters: a lower limit, an upper limit and a mode. With SURE, a_{ij}^{min} is used as the lower limit, a_{ij}^{max} as the upper limit and a_{ij} as the mode. Although other distributions are proposed in theory to account for uncertainty, in practice the triangular distribution is used more frequently as it is easier to turn decision-makers' viewpoints into the parameter estimates needed for triangular distributions (Stein & Keblis, 2009). Triangular distributions are often used in areas such as reliability analysis (Ormon, et al., 2002), project scheduling (Vanhoucke, 2016) and corporate finance (Nersesian, 2004), where there is a high amount of uncertainty present. Figure 2 shows various possible triangular distributions with varying levels of skewness. The case on the far left is a left triangular density while the one on the far right is a right triangular density. The case in the centre is a symmetric triangular density, as a_{ij} = (a_{ij}^{min} + a_{ij}^{max})/2.

Figure 2 Possible cases of triangular distributions

Random deviates are generated based upon triangular distributions, which form s simulated decision tables. These decision tables contain various possible scenarios based on the uncertainty in the minimum/maximum ranges given by the decision-maker. The higher the number chosen for s, the more random scenarios are generated, but also the more computational time is required to generate the random numbers. Any minimising criterion should have its corresponding decision variables inverted in all the simulated decision tables. As the simulated decision tables only have one value for each alternative with respect to each criterion, any normalisation method can be used. This overcomes the first major issue with MARE, regarding the limited choice of applicable normalisation methods, as summation ratio normalisation can be used. The weighted sum method is then used to calculate a score for each alternative for the s simulated decision tables. The output can be shown using a kernel density plot, which visualises the overlapping distribution of possible outcomes and the uncertainty present. This overcomes the second major limitation of MARE. The next section presents an illustrated example of the SURE methodology with a real-world case study built in collaboration with a large pharmaceutical company.
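The following is the promised compact sketch of the five steps, assuming the triangle package and a small hypothetical problem with two criteria and two alternatives; the full case-study implementation is given in the supplementary material:

library(triangle)                           # provides rtriangle(n, a, b, c)

s  <- 100000                                # Step 1: number of simulated tables
w  <- c(0.6, 0.4)                           # criteria weights, summing to one
# hypothetical inputs: rows = criteria, columns = alternatives
lo <- matrix(c(40, 20, 50, 60), nrow = 2)   # minimum values
ml <- matrix(c(55, 35, 60, 70), nrow = 2)   # most likely values
hi <- matrix(c(70, 80, 75, 80), nrow = 2)   # maximum values

tables <- array(NA, c(2, 2, s))             # Step 2: simulate s decision tables
for (j in 1:2) for (i in 1:2)
  tables[j, i, ] <- rtriangle(s, a = lo[j, i], b = hi[j, i], c = ml[j, i])

for (k in 1:s)                              # Step 3: summation ratio normalisation
  tables[, , k] <- tables[, , k] / rowSums(tables[, , k])

scores <- apply(tables, 3, function(tab) colSums(tab * w))  # Step 4: weighted sums

plot(density(scores[1, ]), main = "SURE output")            # Step 5: density plot
lines(density(scores[2, ]), lty = 2)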
4. A case study in the pharmaceutical industry

The case study was developed with a process engineering manager at a large pharmaceutical company who had over 20 years' experience in industrial process engineering, an honours degree in chemical engineering and is a fellow of the Institution of Chemical Engineers. The decision was to select an appropriate degasification technology for a new chemical development process. Details of the product and the process are withheld for reasons of confidentiality. The decision-maker identified five criteria (shown in Table 1) on which to base the decision. The underlying philosophy for the company was to select a technology that was inexpensive, available and straightforward to implement. For all the criteria except Technically Possible (c3), the decision-maker chose to use a 0-100 scale, which meant the decision-maker provided all inputs using a slider bar where numeric values were not visible but rather textual descriptions were provided, from "Extremely poor" to "Excellent". For the criterion Technically Possible (c3), only two values (0 or 1) were possible because the technique was either possible at the time or not. The decision-maker selected the term "Available Now" for c4, which can essentially be considered as the level of availability at that time.

Table 1 Criteria for the decision problem (rationale quoted from the decision-maker)

c1  Minimises Hold Up     0-100   "Supports the economics of the process and ease of operation."
c2  Simple to Build       0-100   "Simplicity in build will speed up development. Must increase robustness of the solution and make the equipment easier to clean. This will contribute to a lower cost."
c3  Technically Possible  0 or 1  "The solution has to be capable of removing the gas from the solution to a low enough level."
c4  Available Now         0-100   "Need to test and place orders now, solutions not off the shelf need to be excluded."
c5  Low Cost              0-100   "Lower the cost, the better the project payback."

The decision-maker also identified five alternatives: Packed Column (a1), Membrane (a2), Duty Standby CSTR - Vacuum (a3), Duty Standby CSTR with Sparge (a4) and Ultrasonic (a5). Of the five alternatives, four were declared technically viable, as Ultrasonic (a5) was not capable of removing enough gas from the solution. However, this alternative was included in the analyses as it could become viable in the future, for example if advances are made in the technology. The least expensive alternatives were Packed Column (a1) and Membrane (a2); however, these options were not readily available to implement quickly within the company. The best options in terms of availability were the two Duty Standby CSTR alternatives (a3 and a4).

The data with a 0-100 scale were collected through a user interface with slider bars that had one thumb for certain input (where a_{ij}^{min} = a_{ij} = a_{ij}^{max}) or three thumbs for uncertain input on a scale of 0-100. As the scores represent the decision-maker's preference for each alternative (the higher the better), the data collected were all maximising. The 0-1 scale data were collected through a user interface which accepted the input of numerical values. The weights were also collected through slider bars with one thumb. The data collected are shown in Table 2.
When using simulations, it is important to consider correlations between the criteria so as not to misinform the decision-maker. As the decision-maker mentioned that Simple to Build (c2) "will contribute to a lower cost" (see Table 1), it is possible that this criterion is highly correlated with Low Cost (c5). Therefore, we performed two versions of the analysis. In the first analysis, we calculated the results using independent simulations assuming no correlations between the criteria, and in the second analysis we calculated the results assuming that the criteria Simple to Build (c2) and Low Cost (c5) are highly correlated.

Table 2 Data collected for the case study (columns: Packed Column (a1), Membrane (a2), Duty Standby CSTR - Vacuum (a3), Duty Standby CSTR with Sparge (a4), Ultrasonic (a5))

Minimises Hold Up (c1), weight 71
  Minimum        49   56   25    6   45
  Most Likely    61   88   40   40   50
  Maximum        75   97   48   48   60
Simple to Build (c2), weight 26
  Minimum        58   58   29   29    4
  Most Likely    62   70   35   36   50
  Maximum        66   75   51   52   93
Technically Possible (c3), weight 96
  Most Likely     1    1    1    1    0
Available Now (c4), weight 61
  Minimum        87   25   74   74    0
  Most Likely    91   76   85   85   17
  Maximum       100   83   97   97   39
Low Cost (c5), weight 50
  Minimum        69   25   34   28    3
  Most Likely    80   80   50   50   50
  Maximum        91   91   59   59   75

There are different simulation methods based on triangular distributions, which have been compared on factors such as speed, algorithmic code length, applicability and simplicity (Stein & Keblis, 2009). Josselin & Maux (2017) and Nguyen & McLachlan (2016) both suggest the use of the rtriangle function in the triangle package for R (Carnell, 2017) for the generation of triangular random variates. Initially, this function was used to generate 500,000 decision tables using the data in Table 2. This was then repeated with 1 million decision tables to assess whether there was any significant change in the output. No significant change or improvement was found as we increased the number of simulations further, so we stopped and calculated the final results using 1 million simulations.

As all the data are maximising, there is no need to invert any of the data. Summation ratio normalisation is used on all the simulated decision tables and then the 1 million results for each alternative are calculated using the weighted sum method. The results can then be visualised using a kernel density plot. The alternatives can be plotted together, as shown in Figure 3, or separately, as shown in Figure 4. The vertical lines shown in Figure 4 represent the mean result of each alternative. The performance of alternatives can be judged by the horizontal position of their distribution, with alternatives to the right performing better. The width and height of the distributions illustrate the uncertainty present, with wider and shorter distributions indicating greater uncertainty.

Figures 3 and 4 show the results of independent simulations assuming no correlations between the criteria. However, we also want to evaluate what happens when Simple to Build (c2) and Low Cost (c5) are highly correlated. We implemented this in R using the datasynthR package (Knowles, 2018). Using this package, we can simulate two vectors of 1 million uniform random numbers that are highly correlated, which can then be used as inputs to a modified version of the rtriangle function (given in the supplementary material) to simulate results where c2 and c5 are considered highly correlated. The results for this analysis are shown in Figure 5, where the primary difference from the previous analysis (see Figure 3) is that the overlap between the distributions for Packed Column (a1) and Membrane (a2) has reduced and the distribution for Ultrasonic (a5) has narrowed.
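For illustration, one simple way to induce such a correlation is a Gaussian copula mapped through the triangular quantile function; note this is not the datasynthR-based implementation used in the supplementary material, and rho is an assumed target correlation:

# Sketch: correlated triangular variates for c2 and c5 via a Gaussian copula.
library(MASS)      # mvrnorm
library(triangle)  # qtriangle

s   <- 100000
rho <- 0.9                                 # assumed target correlation
z   <- mvrnorm(s, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2, 2))
u   <- pnorm(z)                            # correlated uniforms on (0, 1)

# map through each criterion's triangular quantile function, e.g. for
# Packed Column (a1): c2 on [58, 66] and c5 on [69, 91] (see Table 2)
c2_a1 <- qtriangle(u[, 1], a = 58, b = 66, c = 62)
c5_a1 <- qtriangle(u[, 2], a = 69, b = 91, c = 80)
cor(c2_a1, c5_a1)                          # close to the target correlation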
Although the results from the two analyses are similar, we feel it is important to investigate whether correlations between the criteria have any impact on the results, so as not to misinform the decision-maker.

Figure 3 Case study results for SURE on the same plot, where simulations are independent
Figure 4 Case study results for SURE where simulations are independent, on separate plots with means shown
Figure 5 Case study results for SURE on the same plot, where c2 and c5 are highly correlated

The data in Table 2 can also be used with MARE to create the plot shown in Figure 6. Unlike the SURE outputs shown in Figures 3, 4 and 5, in MARE the performance is judged by the vertical position, with the dots representing the most likely values and the error bars representing the maximum and minimum values.

Figure 6 Case study results for MARE

To evaluate SURE against more traditional MCDM approaches, data for this decision were also collected in the form of pairwise comparisons for AHP (Saaty, 1980), and the values for indifference, preference and veto with respect to each criterion were also collected so that ELECTRE III (Roy, 1968; Roy, 1978) could be evaluated. AHP was chosen as it is arguably the most widely used MCDM method, while ELECTRE III was included as it has been identified as a superior method for its ability to directly deal with uncertainty (Sayyadi & Makui, 2012; Salminen, et al., 1998).

AHP was proposed as a method to solve decision problems using a hierarchical structure of criteria and alternatives. It uses pairwise comparisons as input on a scale of 1-9, where 1 infers equal importance, 3 moderate importance, 5 strong importance, 7 very strong importance and 9 extreme importance; the values 2, 4, 6 and 8 are compromises between these definitions. The pairwise comparisons given by the decision-maker are placed into reciprocal matrices and priorities are identified as the principal eigenvectors of the matrices. These priorities form a decision table to which the weighted sum method can then be applied to calculate a score for each alternative.

ELECTRE III is an outranking approach which uses concordance and discordance indices that are calculated for every possible pair of alternatives. A concordance index expresses how many criteria are in favour of each alternative and a discordance index expresses how many criteria are not in favour of each alternative. Using threshold values provided by the decision-maker, it is possible to determine whether each alternative pair is preferred, indifferent or incomparable. ELECTRE III uses pseudo criteria to derive the concordance and discordance indices. Pseudo criteria are a fuzzy representation of each criterion; thus the method is capable of dealing with uncertain and limited information. Pseudo criteria are incorporated through the use of indifference, preference and veto thresholds. The indifference threshold is a value below which the decision-maker is indifferent between two alternatives, whilst the preference threshold is a value above which the decision-maker prefers one alternative to another. Finally, the veto threshold is the value at which the decision-maker ultimately prefers one alternative over another and wishes to select that alternative with total certainty. ELECTRE III results are calculated through two distillation procedures, one in descending order (finding the best to worst alternatives) and the other in ascending order (finding the worst to best alternatives). The final ranking order is produced by taking an intersection of the descending and ascending orders.
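As an illustration of the per-criterion (partial) concordance just described, a minimal sketch using the standard ELECTRE III linear interpolation between the indifference (q) and preference (p) thresholds; the example values are taken from Tables 2 and 3 below:

# Per-criterion concordance that alternative a is "at least as good as" b,
# falling linearly from 1 to 0 between the q and p thresholds.
concordance_j <- function(g_a, g_b, q, p) {
  d <- g_b - g_a                       # how far b outscores a on this criterion
  if (d <= q) 1 else if (d >= p) 0 else (p - d) / (p - q)
}
concordance_j(76, 91, q = 5, p = 20)   # a2 vs a1 on Available Now (c4): 1/3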
The result for AHP is shown in Figure 7 using a bar chart; alternatives with higher scores are better. The pairwise comparisons given for AHP can be found in the supplementary material, along with the R code for creating Figures 3 to 7. All pairwise matrices given were consistent according to the consistency ratio rule proposed by Saaty (1980) (i.e. CR < 0.1). The most likely values in Table 2, along with the threshold values in Table 3, are used to calculate the results for ELECTRE III shown in Table 4. Unlike the other methods, ELECTRE III results are given in the form of an ordinal ranking. The ascending distillation placed Membrane (a2) and Packed Column (a1) as joint best alternatives, while the descending distillation placed Membrane (a2) as the single best alternative. Consequently, the final order classification placed Membrane (a2) as the best alternative in the final rank. The credibility matrix in Table 5 shows that Packed Column (a1) outranked Membrane (a2) by 0.75, while Membrane (a2) outranked Packed Column (a1) by 0.87, resulting in Membrane (a2) achieving a better rank in the descending distillation and, subsequently, the final rank. Table 5 also shows that the two Duty Standby CSTR options (a3 and a4) are not comparable, as their outranking relationships are both 1, which is why they have been ranked together in the descending, ascending and final rankings in Table 4.

Figure 7 Case study results for AHP

Table 3 Thresholds for ELECTRE III

      Indifference Threshold   Preference Threshold   Veto Threshold
c1    5                        20                     80
c2    5                        20                     80
c3    0.1                      0.9                    1
c4    5                        20                     80
c5    5                        20                     80

Table 4 Case study results for ELECTRE III

Rank 1 - Descending: Membrane. Ascending: Membrane; Packed Column. Final: Membrane.
Rank 2 - Descending: Packed Column. Ascending: Duty Standby CSTR with Sparge; Duty Standby CSTR - Vacuum. Final: Packed Column.
Rank 3 - Descending: Duty Standby CSTR - Vacuum; Duty Standby CSTR with Sparge. Ascending: Ultrasonic. Final: Duty Standby CSTR - Vacuum; Duty Standby CSTR with Sparge.
Rank 4 - Descending: Ultrasonic. Final: Ultrasonic.

Table 5 Case study credibility matrix for ELECTRE III (entry = degree to which the row alternative outranks the column alternative; columns in the order a1-a5)

                                    a1     a2     a3     a4     a5
Packed Column (a1)                  -      0.75   1      1      1
Membrane (a2)                       0.87   -      0.95   0.95   1
Duty Standby CSTR - Vacuum (a3)     0.5    0.52   -      1      0.87
Duty Standby CSTR with Sparge (a4)  0.5    0.52   1      -      0.87
Ultrasonic (a5)                     0      0      0      0      -

After conducting the analyses, the decision-maker made time to review his experiences and to discuss the results. The decision-maker preferred the methods that allowed the user to "spread their answers" as "it was much more useful in terms of seeing the uncertainty behind the membrane option". He explained that Membrane (a2) would have been the favoured alternative internally within the company if it had been possible to reduce the uncertainty associated with it. However, post analysis, he favoured Packed Column (a1), as that alternative was more certain to perform well. In terms of data entry, the decision-maker found the AHP consistency check "somewhat disconcerting" and stated that straight data entry was faster than pairwise comparisons. Nevertheless, when asked about the differences in the criteria weights, the decision-maker said the weights produced by AHP were more representative. His reasoning was that c3 ('technically possible') was a "veto type attribute" and AHP weighted this criterion much higher. Considering the analysis output, the decision-maker disliked ELECTRE III. He explained that the credibility index (Table 5) was "confusing" and that he disliked output in the form of an ordinal ranking, as the differences between the alternatives were not clear.
The next section provides a detailed discussion of the case study results, followed by conclusions, further work and limitations.

5. Discussion and Conclusions

This paper presented a new multi-criteria decision-making method termed Simulated Uncertainty Range Evaluations (SURE), which improves upon Multi-Attribute Range Evaluations (MARE), a method that has been found to be very effective in supporting uncertain decisions in whole process design. We have shown how SURE is superior in allowing for any form of normalisation procedure and in its ability to visualise the strength of preference and uncertainty associated with each alternative. The practicality of the new approach was illustrated in a real-world case study for a large pharmaceutical company. SURE was compared against MARE, AHP and ELECTRE III and was the only method to identify the alternative chosen by the company as the best alternative. As ELECTRE III provides results in the form of an ordinal ranking, the outputs of the four analyses were not comparable on a numerical scale.

The company selected Packed Column (a1) as the best alternative due to the large uncertainty associated with the Membrane (a2) option. AHP and ELECTRE III failed to identify Packed Column (a1) as the best alternative. The uncertainty associated with the Membrane (a2) option was only identified by the MARE and SURE analyses. One could argue, however, that MARE did not explicitly identify Membrane (a2) as the best option: the output provided by MARE requires the decision-maker to make a choice in terms of which alternative to select, given the overlapping uncertainty of the alternatives. In fact, MARE identified Membrane (a2) as the best option in terms of the most likely value, but the large uncertainty associated with Membrane (a2) and the smaller uncertainty associated with Packed Column (a1) are also visible (in Figure 6), which informed the decision-maker's selection of Packed Column (a1). This uncertainty is also clearly visible in the output for SURE through the overlapping distributions (Figures 3-5). The SURE output clearly shows Packed Column (a1) as the furthest distribution to the right, signifying it as the best alternative. Furthermore, in the separated plots output for SURE (Figure 4), the mean result for Packed Column (a1) is higher than for all other alternatives. Therefore, SURE outperformed the other methods with regard to identifying the correct result. The dissimilar results for MARE and SURE, which are both based upon the weighted sum method, will be a consequence of the different normalisation procedures used: MARE is unable to utilise summation ratio normalisation, the most common approach to normalisation, which is utilised in SURE.

With respect to criteria weights, the decision-maker favoured the weights calculated by AHP, as it gave a higher weighting to c3 ('technically possible'). AHP has been known to exaggerate weights in comparison to direct weighting methods (Hodgett, 2016).
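This difference can be reproduced directly; a minimal sketch comparing the normalised direct weights from Table 2 with the AHP weights derived, as the principal eigenvector, from the criteria pairwise comparison matrix given in the supplementary material:

# Direct weights (Table 2) versus AHP weights (principal eigenvector of the
# criteria pairwise comparison matrix from the supplementary material).
v      <- c(71, 26, 96, 61, 50)
direct <- v / sum(v)
M <- t(matrix(c(1,   4, 1/9, 1,   3,
                1/4, 1, 1/9, 1/3, 1,
                9,   9, 1,   9,   9,
                1,   3, 1/9, 1,   2,
                1/3, 1, 1/9, 1/2, 1), nrow = 5, ncol = 5))
ahp <- Re(eigen(M)$vectors[, 1])
ahp <- ahp / sum(ahp)
round(rbind(direct, ahp), 3)   # c3 receives a far larger weight under AHP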
Therefore, for certain applications it might be worth using AHP to calculate the criteria weights for SURE rather than normalising direct weightings. Figure 9 shows the output for SURE if the normalised criteria weights for AHP are used rather than the weights given in Table 2. As expected, there is a much greater divide between Ultrasonic (a5) and the other alternatives as the weighting has increased for c3 ゅ╅technically possible╆ょ where Ultrasonic (a5) performs much worse than the other alternatives. This has also reduced the difference between the other alternatives as with Ultrasonic (a5) getting a lower overall score the other alternatives will achieve a higher overall score. Nevertheless, Packed Column (a1) remains as the highest performing alternative which confirms the result that SURE is the best method to use for this particular case study. Another way to assess the weight given to cぬ ゅ╅technically possible╆ょ is to conduct a sensitivity analysis to investigate how sensitive the results are with respect to a change in an input parameter. 0 0.2 0.4 0.6 0.8 Minimises Hold Up Simple to BuildTechnically PossibleAvailable Now Low Cost AHP Other Methods 23 Figure 9 Case study results for SURE on separate plots using AHP weights SURE handles the uncertainty in evaluations using triangular distributions and therefore minor changes of evaluations scores should not significantly alter the results as random deviates will be between the minimum and maximum values. However, we propose the use of a one-at-a-time (OAT) sensitivity analysis where one evaluation can be modified at a time to see its impact on the results. The uncertainty in criteria weights can also be analysed using the same approach. To illustrate this. we used OAT changing the weight for cぬ ゅ╅technically possible╆ょ from 96 (see Figure 10a) to 1 (see Figure 10b) to see how this impacted the distributions. We consider the evaluation of other possible ways to perform sensitivity analysis for SURE to be an important area of future work. 24 (a) cぬ ゅ╅technically possible╆ょ weight = 96 (b) cぬ ゅ╅technically possible╆ょ weight = 1 Figure 10 illustrative example of OAT sensitivity analysis 25 Further work is also needed to evaluate SURE with other decisions, particularly decisions faced in other sectors outside of the pharmaceutical industry. There are also other well- known MCDM methods such as MAUT, TOPSIS (Hwang & Yoon, 1981) and PROMETHEE (Brans, 1982) which can be compared and assessed against SURE. The procedure for assigning criteria weights also needs further investigation. It is unclear from the case study whether AHP or a direct weighting approach is best for assigning the criteria weights for SURE. There are, therefore, several opportunities for further research and practice. To assist with further work in this area and to make this work as open, transparent and replicable as possible, the R code used to create Figures 3-7 in this paper is included as supplementary material. Acknowledgments The authors would like to thank the large (anonymous) Pharmaceutical company for providing the case study and Britest Limited (http://www.britest.co.uk) for their support. We would also like to thank Prof. Alan Pearman and Prof. Wändi Bruine de Bruin in the Centre for Decision Research for their helpful comments and suggestions. References Belton, V. & Stewart, T. J., 2002. Multiple Criteria Decision Analysis: an Integrated Approach. Kluwer Academic Publisher. Bigaret, S., Hodgett, R.E., Meyer, P. et al. , 2017. 
Further work is also needed to evaluate SURE with other decisions, particularly decisions faced in sectors outside of the pharmaceutical industry. There are also other well-known MCDM methods, such as MAUT, TOPSIS (Hwang & Yoon, 1981) and PROMETHEE (Brans, 1982), which could be compared and assessed against SURE. The procedure for assigning criteria weights also needs further investigation, as it is unclear from the case study whether AHP or a direct weighting approach is best for assigning the criteria weights for SURE. There are, therefore, several opportunities for further research and practice. To assist with further work in this area and to make this work as open, transparent and replicable as possible, the R code used to create Figures 3-7 in this paper is included as supplementary material.

Acknowledgments

The authors would like to thank the large (anonymous) pharmaceutical company for providing the case study and Britest Limited (http://www.britest.co.uk) for their support. We would also like to thank Prof. Alan Pearman and Prof. Wändi Bruine de Bruin in the Centre for Decision Research for their helpful comments and suggestions.

References

Belton, V. & Stewart, T. J., 2002. Multiple Criteria Decision Analysis: An Integrated Approach. Kluwer Academic Publishers.
Bigaret, S., Hodgett, R. E., Meyer, P. et al., 2017. Supporting the Multi-Criteria Decision Aiding process: R and the MCDA package. EURO Journal on Decision Processes, 5(1-4), pp. 169-194.
Brans, J. P., 1982. L'ingénierie de la décision: élaboration d'instruments d'aide à la décision. La méthode PROMETHEE. Presses de l'Université Laval.
Bruine de Bruin, W. et al., 2012. The effect of question wording on consumers' reported inflation expectations. Journal of Economic Psychology, 33, pp. 749-757.
Carnell, R., 2017. The triangle Package. [Online] Available at: https://cran.r-project.org/web/packages/triangle/triangle.pdf
Çelen, A., 2014. Comparative Analysis of Normalization Procedures in TOPSIS Method: With an Application to Turkish Deposit Banking Market. Informatica, 24(2), pp. 185-208.
Chakraborty, S. & Yeh, C.-H., 2007. A Simulation Based Comparative Study of Normalization Procedures in Multiattribute Decision Making. Proceedings of the 6th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering and Data Bases, Corfu Island, Greece.
Cinelli, M., Coles, S. R. & Kirwan, K., 2014. Analysis of the potentials of multi criteria decision analysis methods to conduct sustainability assessment. Ecological Indicators, 46, pp. 138-148.
Durbach, I. N. & Stewart, T. J., 2012b. A comparison of simplified value function approaches for treating uncertainty in multi-criteria decision analysis. Omega, 40, pp. 456-464.
Durbach, I. N. & Stewart, T. J., 2012. Modelling uncertainty in multi-criteria decision analysis. European Journal of Operational Research, 223, pp. 1-14.
Gardziejczyk, W. & Zabicki, P., 2017. Normalization and variant assessment methods in selection of road alignment variants - case study. Journal of Civil Engineering and Management, 23(4), pp. 510-523.
Gass, S. I., 2005. Model World: The Great Debate: MAUT versus AHP. Interfaces, 35(4), pp. 308-312.
Hodgett, R. E., 2016. Comparison of multi-criteria decision-making methods for equipment selection. The International Journal of Advanced Manufacturing Technology, 85(5-8), pp. 1145-1157.
Hodgett, R. E., Martin, E. B., Montague, G. & Talford, M., 2014. Handling uncertain decisions in whole process design. Production Planning & Control, 25(12), pp. 1028-1038.
Hwang, C. L. & Yoon, K., 1981. Multiple Attribute Decision Making: Methods and Applications. A State-of-the-Art Survey. Springer-Verlag.
Josselin, J.-M. & Maux, B. L., 2017. Statistical Tools for Program Evaluation: Methods and Applications to Economic Policy, Public Health, and Education. Springer.
Keeney, R. L. & Raiffa, H., 1976. Decisions with Multiple Objectives: Preferences and Value Trade-offs. New York: Wiley.
Knowles, J., 2018. The datasynthR Package. [Online] Available at: https://github.com/jknowles/datasynthR
Kumar, A. et al., 2017. A review of multi criteria decision making (MCDM) towards sustainable renewable energy development. Renewable and Sustainable Energy Reviews, 69, pp. 596-609.
Lumby, S. & Jones, C., 2003. Corporate Finance: Theory & Practice. Cengage Learning EMEA.
Nersesian, R. L., 2004. Corporate Financial Risk Management: A Computer-based Guide for Nonspecialists. Greenwood Publishing Group.
Nguyen, H. D. & McLachlan, G. J., 2016. Linear mixed models with marginally symmetric nonparametric random effects. Computational Statistics and Data Analysis, 103, pp. 151-169.
Ormon, S. W., Cassady, C. R. & Greenwood, A. G., 2002. Reliability prediction models to support conceptual design. IEEE Transactions on Reliability, 51(2), pp. 151-157.
Peleg, M., Normand, M. D. & Corradini, M. G., 2012. A method to estimate a person or group health risks and benefits from additive and multiplicative factors. Trends in Food Science & Technology, 28, pp. 44-51.
Polatidis, H., Haralambopoulos, D. A., Munda, G. & Vreeker, R., 2006. Selecting an Appropriate Multi-Criteria Decision Analysis Technique for Renewable Energy Planning. Energy Sources, Part B: Economics, Planning, and Policy, 1(2), pp. 181-193.
Roy, B., 1968. Classement et choix en présence de points de vue multiples (la méthode ELECTRE). La Revue d'Informatique et de Recherche Opérationnelle, 8, pp. 57-75.
Roy, B., 1978. ELECTRE III: Un algorithme de classements fondé sur une représentation floue des préférences en présence de critères multiples. Cahiers du Centre d'Etudes de Recherche Opérationnelle, 20, pp. 3-24.
Saaty, T. L., 1980. The Analytic Hierarchy Process. New York: McGraw-Hill.
Salminen, P., Hokkanen, J. & Lahdelma, R., 1998. Comparing multicriteria methods in the context of environmental problems. European Journal of Operational Research, 104(3), pp. 485-496.
Sayyadi, M. K. & Makui, A., 2012. A new view to uncertainty in Electre III method by introducing interval numbers. Decision Science Letters, 1, pp. 33-38.
Schaetter, F., 2016. Decision support system for a reactive management of disaster-caused supply chain disturbances. KIT Scientific Publishing.
Schwarz, N., 1999. Self-reports: How the questions shape the answers. American Psychologist, 54(2).
Stein, W. E. & Keblis, M. F., 2009. A new method to simulate the triangular distribution. Mathematical and Computer Modelling, 49(5-6), pp. 1143-1147.
Stewart, T. J., 2005. Dealing with Uncertainties in MCDA. In: Multiple Criteria Decision Analysis: State of the Art Surveys. New York: Springer, pp. 445-466.
Vanhoucke, M., 2016. Integrated Project Management Sourcebook: A Technical Guide to Project Scheduling, Risk and Control. Springer International Publishing.
Wagner, M., Rind, A., Thür, N. & Aigner, W., 2017. A knowledge-assisted visual malware analysis system: Design, validation, and reflection of KAMAS. Computers & Security, 67, pp. 1-15.
Supplementary Material

# Install and add all packages that are needed
install.packages("triangle")
library("triangle")
install.packages("plyr")
library(plyr)
install.packages("ggplot2", dependencies = T)
library("ggplot2")
install.packages("MCDA")
library(MCDA)
install.packages("devtools")
library(devtools)
install_github("jknowles/datasynthR")
library("datasynthR")

# Enter MARE / SURE data
weights <- c(71, 26, 96, 61, 50)
c1min <- c(49, 56, 25, 6, 45)
c1    <- c(61, 88, 40, 40, 50)
c1max <- c(75, 97, 48, 48, 60)
c2min <- c(58, 58, 29, 29, 4)
c2    <- c(62, 70, 35, 36, 50)
c2max <- c(66, 75, 51, 52, 93)
c3    <- c(1, 1, 1, 1, 0)
c4min <- c(87, 25, 74, 74, 0)
c4    <- c(91, 76, 85, 85, 17)
c4max <- c(100, 83, 97, 97, 39)
c5min <- c(69, 25, 34, 28, 3)
c5    <- c(80, 80, 50, 50, 50)
c5max <- c(91, 91, 59, 59, 75)

### Use SURE - assuming independence

# Set number of simulations
s <- 1000000

### Normalise weights using summation ratio normalisation (Eq. 1)
total <- sum(weights)
for (i in 1:length(weights)) {
  weights[i] <- weights[i] / total
}
names(weights) <- c("Minimises Hold Up", "Simple to Build", "Technically Possible",
                    "Available Now", "Low Cost")

# Create performance table to save simulations (criteria x alternatives x s)
performanceTable <- array(NA, c(5, 5, s))

# Simulations based on triangular distributions
performanceTable[1,1,] <- rtriangle(s, a=c1min[1], b=c1max[1], c=c1[1])
performanceTable[1,2,] <- rtriangle(s, a=c1min[2], b=c1max[2], c=c1[2])
performanceTable[1,3,] <- rtriangle(s, a=c1min[3], b=c1max[3], c=c1[3])
performanceTable[1,4,] <- rtriangle(s, a=c1min[4], b=c1max[4], c=c1[4])
performanceTable[1,5,] <- rtriangle(s, a=c1min[5], b=c1max[5], c=c1[5])
performanceTable[2,1,] <- rtriangle(s, a=c2min[1], b=c2max[1], c=c2[1])
performanceTable[2,2,] <- rtriangle(s, a=c2min[2], b=c2max[2], c=c2[2])
performanceTable[2,3,] <- rtriangle(s, a=c2min[3], b=c2max[3], c=c2[3])
performanceTable[2,4,] <- rtriangle(s, a=c2min[4], b=c2max[4], c=c2[4])
performanceTable[2,5,] <- rtriangle(s, a=c2min[5], b=c2max[5], c=c2[5])
performanceTable[3,1,] <- c3[1]   # c3 is certain (0 or 1), so no simulation
performanceTable[3,2,] <- c3[2]
performanceTable[3,3,] <- c3[3]
performanceTable[3,4,] <- c3[4]
performanceTable[3,5,] <- c3[5]
performanceTable[4,1,] <- rtriangle(s, a=c4min[1], b=c4max[1], c=c4[1])
performanceTable[4,2,] <- rtriangle(s, a=c4min[2], b=c4max[2], c=c4[2])
performanceTable[4,3,] <- rtriangle(s, a=c4min[3], b=c4max[3], c=c4[3])
performanceTable[4,4,] <- rtriangle(s, a=c4min[4], b=c4max[4], c=c4[4])
performanceTable[4,5,] <- rtriangle(s, a=c4min[5], b=c4max[5], c=c4[5])
performanceTable[5,1,] <- rtriangle(s, a=c5min[1], b=c5max[1], c=c5[1])
performanceTable[5,2,] <- rtriangle(s, a=c5min[2], b=c5max[2], c=c5[2])
performanceTable[5,3,] <- rtriangle(s, a=c5min[3], b=c5max[3], c=c5[3])
performanceTable[5,4,] <- rtriangle(s, a=c5min[4], b=c5max[4], c=c5[4])
performanceTable[5,5,] <- rtriangle(s, a=c5min[5], b=c5max[5], c=c5[5])

### Normalise decision tables using summation ratio normalisation
for (i in 1:s) {
  sumj <- c(1:5)
  for (j in 1:5) {
    sumj[j] <- sum(performanceTable[j,,i])
  }
  for (j in 1:5) {
    performanceTable[j,,i] <- performanceTable[j,,i] / sumj[j]
  }
}

### Calculate the results (weighted sum, Eq. 2)
results <- array(NA, c(1, 5, s))
dimnames(results)[[2]] <- c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                            "Duty Standby CSTR with Sparge", "Ultrasonic")
for (k in 1:s) {
  for (i in 1:5) {
    result <- 0
    for (j in 1:5) {
      result <- result + (performanceTable[j,i,k] * weights[j])
    }
    results[1,i,k] <- result
  }
}
# Figure 3 - Greyscale
plot <- data.frame(data = c(results[1,1,], results[1,2,], results[1,3,], results[1,4,], results[1,5,]),
                   lines = rep(c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                                 "Duty Standby CSTR with Sparge", "Ultrasonic"), each = s))
ggplot(plot, aes(x = data, fill = lines)) + geom_density(alpha = 0.5) + scale_fill_grey()

# Figure 3 - Colour
plot <- data.frame(data = c(results[1,1,], results[1,2,], results[1,3,], results[1,4,], results[1,5,]),
                   lines = rep(c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                                 "Duty Standby CSTR with Sparge", "Ultrasonic"), each = s))
ggplot(plot, aes(x = data, fill = lines)) + geom_density(alpha = 0.5)

# Figure 4 - Greyscale
plot <- data.frame(data = c(results[1,1,], results[1,2,], results[1,3,], results[1,4,], results[1,5,]),
                   lines = rep(c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                                 "Duty Standby CSTR with Sparge", "Ultrasonic"), each = s))
mu <- ddply(plot, "lines", summarise, grp.mean = mean(data))
ggplot(plot, aes(x = data, fill = lines)) + geom_density(alpha = 0.5, fill = "grey") +
  facet_grid(lines ~ .) + theme(legend.position = "none") +
  geom_vline(data = mu, aes(xintercept = grp.mean), color = "black", linetype = "solid", size = 1)

# Figure 4 - Colour
plot <- data.frame(data = c(results[1,1,], results[1,2,], results[1,3,], results[1,4,], results[1,5,]),
                   lines = rep(c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                                 "Duty Standby CSTR with Sparge", "Ultrasonic"), each = s))
mu <- ddply(plot, "lines", summarise, grp.mean = mean(data))
ggplot(plot, aes(x = data, fill = lines)) + geom_density(alpha = 0.5) +
  facet_grid(lines ~ ., labeller = labeller(lines = label_wrap_gen(10))) +
  theme(legend.position = "none") +
  geom_vline(data = mu, aes(xintercept = grp.mean), color = "black", linetype = "solid", size = 1)

### Use SURE - where c2 and c5 are correlated

# Generate s uniform random numbers for c2 and c5 that are correlated
cormatrix <- cor(data.frame(c2, c5), method = "pearson")
rc2 <- runif(s)
rc5 <- runifcor.cor(rc2, cormatrix[2])
cor(rc2, rc5)

# A modified rtriangle function that accepts custom random numbers in crunif
# (adapted from the triangle package's log-space sampler; parameters are
# interpreted on a log10 scale by default)
mrtriangle <- function (n = 1, a = 1, b = 100, c = 10^((log10(a) + log10(b))/2),
                        logbase = 10, crunif = NULL) {
  stopifnot(length(n) == 1)
  if (n < 1 | is.na(n)) stop(paste("invalid argument: n =", n))
  n <- floor(n)
  if (any(is.na(c(a, b, c)))) return(rep(NaN, times = n))
  if (any(a > c | b < c)) return(rep(NaN, times = n))
  if (any(is.infinite(c(a, b, c)))) return(rep(NaN, times = n))
  if (any(c(a, b, c) == 0)) return(rep(-Inf, times = n))
  if (any(c(a, b, c) < 0)) return(rep(NaN, times = n))
  if (is.null(crunif)) {
    lp <- runif(n)
  } else {
    stopifnot(length(crunif) >= n)  # fixed: was nrow(crunif) >= length(n)
    lp <- crunif[seq_len(n)]        # fixed: crunif[n] selected a single element
  }
  stopifnot(length(logbase) == 1)
  if (logbase == 10) {
    la <- log10(a)
    lb <- log10(b)
    lc <- log10(c)
  } else {
    la <- log(a)/log(logbase)
    lb <- log(b)/log(logbase)
    lc <- log(c)/log(logbase)
  }
  if (a != c) {
    i <- which((la + sqrt(lp * (lb - la) * (lc - la))) <= lc)
    j <- which((lb - sqrt((1 - lp) * (lb - la) * (lb - lc))) > lc)
  } else {
    i <- which((la + sqrt(lp * (lb - la) * (lc - la))) < lc)
    j <- which((lb - sqrt((1 - lp) * (lb - la) * (lb - lc))) >= lc)
  }
  if (length(i) != 0) lp[i] <- la + sqrt(lp[i] * (lb - la) * (lc - la))
  if (length(j) != 0) lp[j] <- lb - sqrt((1 - lp[j]) * (lb - la) * (lb - lc))
  p <- logbase^lp
  return(p)
}

performanceTable <- array(NA, c(5, 5, s))

# Simulations based on triangular distributions - c2 and c5 correlated
performanceTable[1,1,] <- rtriangle(s, a=c1min[1], b=c1max[1], c=c1[1])
performanceTable[1,2,] <- rtriangle(s, a=c1min[2], b=c1max[2], c=c1[2])
performanceTable[1,3,] <- rtriangle(s, a=c1min[3], b=c1max[3], c=c1[3])
performanceTable[1,4,] <- rtriangle(s, a=c1min[4], b=c1max[4], c=c1[4])
performanceTable[1,5,] <- rtriangle(s, a=c1min[5], b=c1max[5], c=c1[5])
performanceTable[2,1,] <- mrtriangle(s, a=c2min[1], b=c2max[1], c=c2[1], crunif=rc2)
performanceTable[2,2,] <- mrtriangle(s, a=c2min[2], b=c2max[2], c=c2[2], crunif=rc2)
performanceTable[2,3,] <- mrtriangle(s, a=c2min[3], b=c2max[3], c=c2[3], crunif=rc2)
performanceTable[2,4,] <- mrtriangle(s, a=c2min[4], b=c2max[4], c=c2[4], crunif=rc2)
performanceTable[2,5,] <- mrtriangle(s, a=c2min[5], b=c2max[5], c=c2[5], crunif=rc2)
performanceTable[3,1,] <- c3[1]
performanceTable[3,2,] <- c3[2]
performanceTable[3,3,] <- c3[3]
performanceTable[3,4,] <- c3[4]
performanceTable[3,5,] <- c3[5]
performanceTable[4,1,] <- rtriangle(s, a=c4min[1], b=c4max[1], c=c4[1])
performanceTable[4,2,] <- rtriangle(s, a=c4min[2], b=c4max[2], c=c4[2])
performanceTable[4,3,] <- rtriangle(s, a=c4min[3], b=c4max[3], c=c4[3])
performanceTable[4,4,] <- rtriangle(s, a=c4min[4], b=c4max[4], c=c4[4])
performanceTable[4,5,] <- rtriangle(s, a=c4min[5], b=c4max[5], c=c4[5])
performanceTable[5,1,] <- mrtriangle(s, a=c5min[1], b=c5max[1], c=c5[1], crunif=rc5)
performanceTable[5,2,] <- mrtriangle(s, a=c5min[2], b=c5max[2], c=c5[2], crunif=rc5)
performanceTable[5,3,] <- mrtriangle(s, a=c5min[3], b=c5max[3], c=c5[3], crunif=rc5)
performanceTable[5,4,] <- mrtriangle(s, a=c5min[4], b=c5max[4], c=c5[4], crunif=rc5)
performanceTable[5,5,] <- mrtriangle(s, a=c5min[5], b=c5max[5], c=c5[5], crunif=rc5)

### Normalise decision tables using summation ratio normalisation
for (i in 1:s) {
  sumj <- c(1:5)
  for (j in 1:5) {
    sumj[j] <- sum(performanceTable[j,,i])
  }
  for (j in 1:5) {
    performanceTable[j,,i] <- performanceTable[j,,i] / sumj[j]
  }
}

### Calculate the results (weighted sum, Eq. 2)
results <- array(NA, c(1, 5, s))
dimnames(results)[[2]] <- c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                            "Duty Standby CSTR with Sparge", "Ultrasonic")
for (k in 1:s) {
  for (i in 1:5) {
    result <- 0
    for (j in 1:5) {
      result <- result + (performanceTable[j,i,k] * weights[j])
    }
    results[1,i,k] <- result
  }
}

# Figure 5 - Greyscale
plot <- data.frame(data = c(results[1,1,], results[1,2,], results[1,3,], results[1,4,], results[1,5,]),
                   lines = rep(c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                                 "Duty Standby CSTR with Sparge", "Ultrasonic"), each = s))
ggplot(plot, aes(x = data, fill = lines)) + geom_density(alpha = 0.5) + scale_fill_grey()

# Figure 5 - Colour
plot <- data.frame(data = c(results[1,1,], results[1,2,], results[1,3,], results[1,4,], results[1,5,]),
                   lines = rep(c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                                 "Duty Standby CSTR with Sparge", "Ultrasonic"), each = s))
ggplot(plot, aes(x = data, fill = lines)) + geom_density(alpha = 0.5)

### Use MARE
performanceTableMin <- matrix(c(c1min, c2min, c3, c4min, c5min), nrow = 5, ncol = 5, byrow = TRUE)
performanceTable    <- matrix(c(c1, c2, c3, c4, c5), nrow = 5, ncol = 5, byrow = TRUE)
performanceTableMax <- matrix(c(c1max, c2max, c3, c4max, c5max), nrow = 5, ncol = 5, byrow = TRUE)
row.names(performanceTable) <- names(weights)
colnames(performanceTable)  <- c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                                 "Duty Standby CSTR with Sparge", "Ultrasonic")
row.names(performanceTableMin) <- names(weights)
colnames(performanceTableMin)  <- colnames(performanceTable)
row.names(performanceTableMax) <- names(weights)
colnames(performanceTableMax)  <- colnames(performanceTable)
criteriaMinMax <- c("max", "max", "max", "max", "max")
MAREResults <- MARE(performanceTableMin, performanceTable, performanceTableMax,
                    weights, criteriaMinMax)

# Figure 6
plotMARE(MAREResults)

### Use AHP
criteriaWeightsPairwiseComparisons <- t(matrix(c(
  1,   4,   1/9, 1,   3,
  1/4, 1,   1/9, 1/3, 1,
  9,   9,   1,   9,   9,
  1,   3,   1/9, 1,   2,
  1/3, 1,   1/9, 1/2, 1), nrow = 5, ncol = 5))
colnames(criteriaWeightsPairwiseComparisons) <- names(weights)
rownames(criteriaWeightsPairwiseComparisons) <- names(weights)

ac1 <- t(matrix(c(
  1,   1/4, 3, 4, 1/2,
  4,   1,   6, 4, 2,
  1/3, 1/6, 1, 1, 1/2,
  1/4, 1/4, 1, 1, 1/2,
  2,   1/2, 2, 2, 1), nrow = 5, ncol = 5))
colnames(ac1) <- c("Packed Column", "Membrane", "Duty Standby CSTR - Vacuum",
                   "Duty Standby CSTR with Sparge", "Ultrasonic")
rownames(ac1) <- colnames(ac1)

ac2 <- t(matrix(c(
  1,   1/3, 3, 3, 1/3,
  3,   1,   6, 6, 2,
  1/3, 1/6, 1, 1, 1/3,
  1/3, 1/6, 1, 1, 1/3,
  3,   1/2, 3, 3, 1), nrow = 5, ncol = 5))
colnames(ac2) <- colnames(ac1)
rownames(ac2) <- colnames(ac1)

ac3 <- t(matrix(c(
  1,   1,   1,   1,   9,
  1,   1,   1,   1,   9,
  1,   1,   1,   1,   9,
  1,   1,   1,   1,   9,
  1/9, 1/9, 1/9, 1/9, 1), nrow = 5, ncol = 5))
colnames(ac3) <- colnames(ac1)
rownames(ac3) <- colnames(ac1)

ac4 <- t(matrix(c(
  1,   5,   2,   2,   5,
  1/5, 1,   1/4, 1/4, 3,
  1/2, 4,   1,   1/2, 4,
  1/2, 4,   2,   1,   4,
  1/5, 1/3, 1/4, 1/4, 1), nrow = 5, ncol = 5))
colnames(ac4) <- colnames(ac1)
rownames(ac4) <- colnames(ac1)

ac5 <- t(matrix(c(
  1,   1, 3, 3, 5,
  1,   1, 1, 1, 1,
  1/3, 1, 1, 1, 1,
  1/3, 1, 1, 1, 1,
  1/5, 1, 1, 1, 1), nrow = 5, ncol = 5))
colnames(ac5) <- colnames(ac1)
rownames(ac5) <- colnames(ac1)

alternativesPairwiseComparisonsList <- list(c1 = ac1, c2 = ac2, c3 = ac3, c4 = ac4, c5 = ac5)

# Check consistency measures
pairwiseConsistencyMeasures(criteriaWeightsPairwiseComparisons)
pairwiseConsistencyMeasures(ac1)
pairwiseConsistencyMeasures(ac2)
pairwiseConsistencyMeasures(ac3)
pairwiseConsistencyMeasures(ac4)
pairwiseConsistencyMeasures(ac5)

AHPResults <- AHP(criteriaWeightsPairwiseComparisons, alternativesPairwiseComparisonsList)

# Figure 7
barplot(AHPResults)