title: Advantage matrix: two novel multi-attribute decision-making methods and their applications
authors: Yu, Bin; Xu, Zeshui
date: 2022-01-17
journal: Artif Intell Rev
DOI: 10.1007/s10462-021-10126-9

By comparing the attributes of objects in an information system, the advantage matrix on the object set is established in this paper. The contributions can be identified as follows: (1) The advantage degree is proposed through the accumulation of the advantage matrix. (2) Based on the advantage matrix, the advantage (disadvantage) neighborhood approximation operator and the advantage (disadvantage) correlation approximation operator are defined and studied. Based on these two new operators, the neighborhood degree and the correlation degree are presented. The relationships between them are also investigated to demonstrate the value of the proposed method. (3) Finally, based on the above three degrees, new algorithms are designed, and the effectiveness and robustness of the algorithms are analyzed through practical examples.

Many decision-making situations in real life need to consider several criteria (Liang et al. 2015; Lin et al. 2013; Pearman 2014; Qian et al. 2014). Multiple-attribute decision making (MADM) is an important branch of decision making. MADM methods attempt to select the best alternative(s) from a set of alternatives with respect to several criteria. Several classic MADM methods have been proposed, such as the ELECTRE method (Figueira et al. 2009) and the PROMETHEE method (Brans and Vincke 1986), which are based on the outranking relationship; the AHP method, constructed on the concept of a hierarchy (Belton 1986; Tian et al. 2018); the MAVT method, based on an attribute value function (Ferretti et al. 2014); the TOPSIS method (Hwang and Yoon 1981; Fan et al. 2013); and the EDAS (Evaluation Based on Distance from Average Solution) method (Ghorabaee et al. 2015; Canciglieri et al. 2015). These methods, with clear concepts and strong practicability, are known as effective ranking approaches and have been widely used in research on various MADM problems. For example, Pamučar and Ecer (2020) proposed a novel subjective weighting method, the Fuzzy Full Consistency Method (FUCOM-F), for determining weights as accurately as possible under fuzziness, and applied it to the green supplier evaluation problem. To meet the challenge of the COVID-19 (COronaVIrus Disease 2019) pandemic, health systems must adjust to new circumstances and establish separate hospitals exclusively for patients infected with the SARS-CoV-2 virus; Žižović et al. (2021) put forward a multiple-criteria model for the evaluation and selection of nurses for COVID-19 hospitals.

Rough set theory (RST) (Pawlak 1982) is a mathematical approach to imprecision, vagueness and uncertainty in data analysis. In RST, lower and upper approximations are defined to characterize a concept, namely, a subset of the universe. The main advantage of RST in data analysis is that it does not need any preliminary or additional information about the data. RST has been widely applied in many fields, e.g., forecasting (Yu et al. 2020), medical diagnosis (Pattaraintakorn and Cercone 2008), machine learning (Hong et al. 2002, 2008), decision making (Salamó and López-Sánchez 2011; Son et al. 2012; Swiniarski and Skowron 2003; Tian et al. 2011; Xiong et al. 2012; Zeleny 1976; Ye et al. 2021; Wang et al. 2021), pattern recognition (Wang and Wang 2009), case-based reasoning (Huang and Tseng 2004), and data mining (Lingras and Yao 1998; Yamaguchi 2009).
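Since the lower and upper approximations are the core constructs of RST, a minimal Python sketch may help fix the idea. The toy table, object names and values below are hypothetical; the snippet only illustrates the classical Pawlak definitions (an indiscernibility class is kept in the lower approximation when it is contained in the target concept, and in the upper approximation when it intersects it).

```python
from itertools import groupby

# Toy information system: object -> attribute values (hypothetical data).
table = {
    "x1": (1, 0), "x2": (1, 0), "x3": (0, 1),
    "x4": (0, 1), "x5": (1, 1),
}

def indiscernibility_classes(table):
    """Group objects that share identical attribute values (Pawlak's equivalence classes)."""
    items = sorted(table.items(), key=lambda kv: kv[1])
    return [set(obj for obj, _ in grp) for _, grp in groupby(items, key=lambda kv: kv[1])]

def lower_upper(table, target):
    """Classical lower/upper approximations of a target concept (a subset of the objects)."""
    lower, upper = set(), set()
    for cls in indiscernibility_classes(table):
        if cls <= target:   # class entirely inside the concept
            lower |= cls
        if cls & target:    # class overlapping the concept
            upper |= cls
    return lower, upper

print(lower_upper(table, {"x1", "x2", "x3"}))
# lower = {x1, x2}, upper = {x1, x2, x3, x4} (set printing order may vary)
```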
RST (Pawlak 1997; Pawlak and Slowinski 1994) is a valid mathematical approach to decision analysis and has attracted much attention. In the past few years, many rough set-based MADM methods have been presented. Based on covering (I, T)-fuzzy rough sets, four kinds of fuzzy covering models have been defined and a fuzzy rough TOPSIS method has been designed. Using triangular norms and fuzzy implication operators, Jiang et al. (2018) defined four types of fuzzy covering variable precision (I, T)-fuzzy rough set models and designed another kind of fuzzy rough TOPSIS method. In the intuitionistic fuzzy environment, a covering (I, T)-intuitionistic fuzzy rough set model has been defined based on the D'eer fuzzy domain operator, and an intuitionistic fuzzy rough TOPSIS method has been designed. At the same time, from the perspective of the classical indiscernibility relation and big data, an attribute-oriented indiscernibility relation has been given, the corresponding rough set model constructed, and this model integrated with the TOPSIS method to establish a kind of rough TOPSIS method, which has been applied to student performance evaluation. The same indiscernibility relation has also been used to construct a rough fuzzy set model from the perspective of granular computing; this model was integrated with PROMETHEE to establish a class of rough fuzzy PROMETHEE methods, applied to the analysis of enterprise development. Based on a parameterized rough fuzzy set model, Yu et al. (2020) designed a multi-attribute decision-making method (a prediction method based on the minimum deviation) and discussed its application.

Based on the review conducted above, some genuine challenges are identified, presented in a nutshell below:

1. In traditional methods, dimension (normalization) and attribute weights are used as a priori information, and different processing choices lead to great differences in the decision-making results. The TOPSIS method serves as an example. For the data in Table 1, TOPSIS is run with 3 attribute-weighting methods (the entropy method (Zou et al. 2006), the coefficient of variation method (Faber and Korn 1991), and the cumulation method) and 4 normalization methods (the max method, the sum method, the max-min method, and the vector method); the decision results are shown in Fig. 1a, b. In Fig. 1a, the influence of the weights on the sorting results is analyzed, using the Euclidean distance and vector normalization. In Fig. 1b, the influence of normalization on the sorting results is analyzed, using the Euclidean distance and equal attribute weights (a small sketch illustrating this sensitivity follows the list).

2. In the process of decision-method design, a distance (Euclidean distance, city block metric, Chebyshev distance, Mahalanobis distance, etc.) is usually used to measure the quality of objects. Choosing different distances also affects the decision results. Again taking TOPSIS as an example, for the data in Table 1 TOPSIS is run with these 4 distances, and the decision results are shown in Fig. 1c. In terms of decision semantics, although weights are used to adjust the importance of attributes within the distance, this does not account for interactions between attributes, such as incompatibility and non-operability.

3. In the existing methods, the advantage relations between objects are established from various angles, but these relations are often nonlinear, which makes it difficult to meet the needs of optimal ranking.
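To make the first challenge concrete, the following sketch runs a plain TOPSIS (Euclidean distance, all attributes treated as benefit criteria) under the four normalization schemes mentioned above. The decision matrix, the equal weights and the helper name `topsis` are hypothetical; the point is only that the induced ranking depends on these preprocessing choices.

```python
import numpy as np

def topsis(X, weights, norm="vector"):
    """Plain TOPSIS with Euclidean distance; all attributes treated as benefit criteria."""
    X = np.asarray(X, dtype=float)
    if norm == "vector":
        R = X / np.linalg.norm(X, axis=0)
    elif norm == "max":
        R = X / X.max(axis=0)
    elif norm == "sum":
        R = X / X.sum(axis=0)
    elif norm == "max-min":
        R = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    else:
        raise ValueError(norm)
    V = R * np.asarray(weights)          # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return closeness.argsort()[::-1]     # object indices, best first

# Hypothetical 5x3 decision matrix with equal weights; comparing the printed
# orders across normalizations illustrates the sensitivity discussed above.
X = [[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8], [8, 8, 9]]
w = [1 / 3] * 3
for norm in ("vector", "max", "sum", "max-min"):
    print(norm, topsis(X, w, norm))
```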
To address these challenges, the main work of this paper is as follows:

1. Through the comparison of attributes one by one, the advantage matrix is established to intuitively judge the advantage attributes among objects. From the perspective of decision semantics, it only compares the same attributes across different objects, without considering the mutual impact of attributes. The decision-making method designed on the advantage matrix avoids dimensional processing, so as to weaken the influence of prior information on the decision-making results. 2. The approximation operator is the core of the rough set model, and the rough set model does not need any prior information other than the data set required by the problem when describing or processing it, so the description or processing of the problem is more objective. The decision-making method designed on approximation operators avoids the determination of attribute weights, so as to weaken the influence of a priori information on the decision-making results. 3. A non-linear advantage relation between objects is constructed based on the advantage matrix, the approximation operators of the advantage relation are established, and the neighborhood degree and the correlation degree are proposed for a global analysis and a further analysis of the object ordering, so as to achieve the optimal ranking of objects.

The rest of this paper is organized as follows. In Sect. 2, based on the advantage matrix of objects, the neighborhood approximation operators and correlation approximation operators are constructed, together with an investigation of the properties of the four types of approximations. In Sect. 3, two new MADM methods are designed according to the advantage matrix of objects; the experiments demonstrate that the overall results are effective and robust.

An information system (or knowledge representation system) is a finite table: the rows are labeled by objects, the columns are labeled by attributes, and the entries of the table are the attribute values. An information system is a 4-tuple S = {U, Q, f, V}, where (1) U is a non-empty finite set of objects; (2) Q is a non-empty finite set of attributes; (3) V = ∪_{q∈Q} V_q, where V_q is the set of values of the attribute q; (4) f: U × Q → V is an information function such that f(x, q) ∈ V_q for every x ∈ U and q ∈ Q. For our purpose, throughout this study, let V = [0, 1]. The cardinality of a finite set X is written |X|.

Definition 2.1 (Advantage matrix) Given an information system S, its advantage matrix is the |U| × |U| matrix D = [D(x_i, x_j)], where D(x_i, x_j) = {q ∈ Q : f(x_i, q) > f(x_j, q)}. The physical meaning of the matrix element D(x_i, x_j) is that the object x_i is superior to x_j with respect to any attribute in D(x_i, x_j). An advantage matrix D is nonsymmetric, i.e., D(x_i, x_j) ≠ D(x_j, x_i) for some x_i, x_j ∈ U; therefore, both the lower and the upper triangle of the matrix have to be considered.

[Propositions 2.6 and 2.7 state basic properties of the advantage matrix; their proofs follow directly from Definitions 2.1, 2.3 and 2.5.]

Definition 2.8 (Advantage degree) The advantage degree (AD) of an object x is defined by accumulating the advantage sets D(x, x_i) of x over all other objects x_i ∈ U and normalizing the result. Note 1. In an information system, the advantage degree describes one object through its advantage attributes over the other objects. By comparing the advantage degrees of the objects, their relative strengths and weaknesses are analyzed. This description not only compares each pair of objects (through [D(x, x_i)]), but also compares all objects comprehensively (through the summation ∑).
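A compact sketch of Definition 2.1, together with one plausible reading of Definition 2.8, is given below with the information system stored as a NumPy array. The strict comparison `>` follows the superiority semantics above, but the normalization by |Q|(|U| - 1) is an assumption made for illustration, not necessarily the paper's exact formula.

```python
import numpy as np

def advantage_matrix(X):
    """D[i][j] = set of attribute indices on which object i strictly beats object j (Definition 2.1)."""
    X = np.asarray(X, dtype=float)
    m = len(X)
    return [[{q for q in range(X.shape[1]) if X[i, q] > X[j, q]} if i != j else set()
              for j in range(m)] for i in range(m)]

def advantage_degree(X):
    """One plausible reading of Definition 2.8: accumulate |D(x_i, x_j)| over all other
    objects and normalize by |Q| * (|U| - 1); the exact normalization is an assumption."""
    X = np.asarray(X, dtype=float)
    D = advantage_matrix(X)
    m, n = X.shape
    return [sum(len(D[i][j]) for j in range(m) if j != i) / (n * (m - 1)) for i in range(m)]

# Hypothetical information system with values in [0, 1]: rank the objects by AD, best first.
X = [[0.2, 0.4, 0.3], [0.9, 0.1, 0.8], [0.5, 0.6, 0.7], [0.4, 0.8, 0.2]]
AD = advantage_degree(X)
print(sorted(range(len(X)), key=lambda i: AD[i], reverse=True), AD)
```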
In particular, for the information system in Table 2, ranking by the total utility value allows us to conclude that the objects x_2, x_3, and x_4 are better than x_1, but that x_2, x_3 and x_4 are not comparable with each other. To achieve a more efficient sorting result for Table 2, the advantage degrees based on the advantage matrix are computed as follows: AD(x_1) = 0.13, AD(x_2) = 0.63, AD(x_3) = 0.50, AD(x_4) = 0.56, and AD(x_5) = 0.69. Immediately, x_5 is superior to x_2; x_2 is superior to x_4; x_4 is superior to x_3; x_3 is superior to x_1. For the above two methods, incomparable situations can still appear for pairs of objects. In order to address this kind of problem, two kinds of approximation operators based on the advantage matrix are proposed.

Definition 2.9 Let S = {U, Q, f, V} be an information system and x, y ∈ U. 1. The advantage and disadvantage neighborhood approximation operators of x are defined from the advantage matrix (Eqs. (1) and (2)). 2. The advantage and disadvantage correlation approximation operators ↑x and ↓x are defined analogously.

Note: 1. For an information system, the advantages and disadvantages of the objects are analyzed by combining the advantage and disadvantage neighborhoods; they reflect the advantages and disadvantages of objects from different perspectives. Assume, for instance, that we have six objects x_1, x_2, x_3, x_4, x_5 and x_6, in which x_2 is superior to x_5, x_5 is superior to x_4, and x_6 is superior to x_5. The fewer objects in the advantage neighborhood of an object x, the higher the priority x achieves; the more objects in the disadvantage neighborhood of x, the higher the priority x achieves. 2. In an information system, ↑x represents the set of objects that have an advantage attribute with respect to the object x, and ↓x represents the set of objects that have a disadvantage attribute with respect to x. They explain the correlation between objects from different views. The semantic expression is as follows: the fewer objects in the advantage correlation of x, the higher the priority x achieves; the more objects in the disadvantage correlation of x, the higher the priority x achieves.

According to the semantic expression of the neighborhood and correlation approximation operators, we propose the following two new degrees (Definition 2.11): 1. the neighborhood degree (ND) of an object x, defined from the advantage and disadvantage neighborhoods of x; 2. the correlation degree (CD) of an object x, defined analogously from ↑x and ↓x. According to Theorem 2.13 (2), |↑x| + |↓x| = |U| − 1; the remaining properties are proved similarly.

In the sequel, we consider the general problem of selecting the best of m objects x_i (i = 1, 2, ..., m), evaluated and compared on the basis of n attributes a_j (j = 1, 2, ..., n) whose values are known to us. Given a decision information system, the two ranking algorithms are as follows.

Algorithm 3.1
Input: An information system S = {U, Q, f, V}.
Output: A ranking result of all objects.
Step 1: Compute the advantage matrix D, \\ according to Definition 2.1;
Step 2: Compute the advantage degree AD(x_i), x_i ∈ U, \\ according to Definition 2.8;
Step 3: Rank the objects according to the AD of each object: the bigger AD(x_i) is, the better the object x_i will be, i.e., x_i ≺ x_j ⟺ AD(x_i) < AD(x_j), i, j = 1, 2, ..., |U|;
Step 4: End.

Algorithm 3.2
Input: An information system S = {U, Q, f, V}.
Output: A ranking result of all objects.
Step 1: Compute the advantage matrix D, \\ according to Definition 2.1;
Step 2: Compute the neighborhood degree ND(x_i), x_i ∈ U, \\ according to Definition 2.11 (1);
Step 3: Compute the correlation degree CD(x_i), x_i ∈ U, \\ according to Definition 2.11 (2); if ND already discriminates all objects, Steps 3 and 4 may be skipped;
Step 4: Mix ND and CD to establish NDCD (when Step 3 is skipped, NDCD(x_j) = ND(x_j));
Step 5: Rank the objects according to the NDCD of each object: the bigger NDCD(x_i) is, the better the object x_i will be, i.e., x_i ≺ x_j ⟺ NDCD(x_i) < NDCD(x_j), i, j = 1, 2, ..., |U|;
Step 6: End.
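Because the formulas in Definitions 2.9 and 2.11 did not survive extraction, the sketch below encodes one reading that is consistent with the semantics above and with the worked example that follows (ND values of 0 and ∞ arise exactly when one of the neighborhoods is empty). The cardinality comparison, the ratio form of ND, and the simplification NDCD = ND are all assumptions; the helper names are ours.

```python
import math

def advantage_matrix(X):
    """D[i][j]: attribute indices on which object i strictly beats object j (as in the earlier sketch)."""
    n = len(X[0])
    return [[{q for q in range(n) if X[i][q] > X[j][q]} if i != j else set()
             for j in range(len(X))] for i in range(len(X))]

def neighborhoods(D):
    """Assumed reading of Definition 2.9: y lies in the advantage neighborhood of x when
    y beats x on more attributes than x beats y, and conversely for the disadvantage
    neighborhood. The paper's exact operators may differ."""
    m = len(D)
    adv = [set() for _ in range(m)]   # objects that dominate x
    dis = [set() for _ in range(m)]   # objects dominated by x
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            if len(D[j][i]) > len(D[i][j]):
                adv[i].add(j)
            elif len(D[i][j]) > len(D[j][i]):
                dis[i].add(j)
    return adv, dis

def neighborhood_degree(adv, dis):
    """ND(x) as the ratio |disadvantage neighborhood| / |advantage neighborhood|, with
    math.inf when the advantage neighborhood is empty; an assumption about the lost formula."""
    return [len(d) / len(a) if a else math.inf for a, d in zip(adv, dis)]

def rank_by_ndcd(X):
    """Algorithm 3.2 in its simplest form (NDCD = ND): bigger is better."""
    adv, dis = neighborhoods(advantage_matrix(X))
    nd = neighborhood_degree(adv, dis)
    return sorted(range(len(X)), key=lambda i: nd[i], reverse=True), nd

print(rank_by_ndcd([[0.2, 0.4, 0.3], [0.9, 0.1, 0.8], [0.5, 0.6, 0.7], [0.4, 0.8, 0.2]]))
```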
To understand Algorithms 3.1 and 3.2 better, they are described by the flowchart in Fig. 2. Next, an example is introduced to demonstrate the two algorithms.

Example. Let S = {U, Q, f, V} be the information system given in Table 3. (1) Based on Algorithm 3.1: Step 1: compute the advantage matrix D. Step 2: compute the advantage degrees, AD(x_1) = 0.5000, AD(x_2) = 0.3400, AD(x_3) = 0.3200, AD(x_4) = 0.6400, AD(x_5) = 0.7400, and AD(x_6) = 0.4600. Step 3: rank the objects according to the AD of each object. Step 4: End. (2) Based on Algorithm 3.2: Step 1: compute the advantage matrix D; it is the same as in (1). Step 2: compute the neighborhood degrees, ND(x_1) = 1.5000, ND(x_2) = 0.2500, ND(x_3) = 0, ND(x_4) = 4.0000, ND(x_5) = ∞, and ND(x_6) = 0.6667. Step 3: according to Step 2, skip. Step 4: skip. Step 5: rank the objects according to the NDCD (here NDCD = ND). Step 6: End.

For the sake of proving the effectiveness and robustness of the new algorithms, the Student Performance Data Set (2014) is selected from the UC Irvine Machine Learning Repository (UCI). This data set addresses student achievement in secondary education at two Portuguese schools. The data attributes include student grades and demographic, social and school-related features, and the data were collected using school reports and questionnaires, as shown in Tables 4 and 5. Two data sets are provided regarding the performance in two distinct subjects: Mathematics (Mat) and Portuguese language (Por). In Cortez and Silva (2008), the two data sets were modeled under 5-level classification and regression tasks. Important note: the target attribute d_3 has a strong correlation with the attributes d_2 and d_1. It is generally believed that a_4, a_7, a_8, a_13, a_14, a_16, a_17, a_18, a_20, a_21, a_24, a_25, a_27, a_28, a_29, and a_30 are related to students' grades. Based on the Student Performance Data Set, 4 new information systems are set up, as shown in Tables 6, 7, 8 and 9. [The accompanying advantage-matrix table lists, for pairs of objects, the sets of advantage attributes, such as {a_2, a_3, a_4, a_5, a_6, a_9} and {a_1, a_7, a_8, a_10}.]

In Tables 7 and 9, d_1, d_2 and d_3 represent the grades of the 3 stages and the actual results:
d_1: first period grade (numeric: 5 - very good (16-20), 4 - good (14-15), 3 - satisfactory (12-13), 2 - sufficient (10-11), 1 - fail (0-9));
d_2: second period grade (same 5-level scale);
d_3: final grade (same 5-level scale).
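The 5-level recoding of the raw 0-20 grades is simple enough to state as a helper. The function name is ours, but the band boundaries are exactly those listed above.

```python
def grade_level(g):
    """Map a 0-20 period grade to the 5-level scale used for d_1, d_2 and d_3:
    16-20 -> 5 (very good), 14-15 -> 4 (good), 12-13 -> 3 (satisfactory),
    10-11 -> 2 (sufficient), 0-9 -> 1 (fail)."""
    if not 0 <= g <= 20:
        raise ValueError("grades in this data set range from 0 to 20")
    if g >= 16:
        return 5
    if g >= 14:
        return 4
    if g >= 12:
        return 3
    if g >= 10:
        return 2
    return 1

print([grade_level(g) for g in (19, 15, 12, 10, 4)])  # -> [5, 4, 3, 2, 1]
```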
In Fig. 4a, Algorithms 3.1 and 3.2 are used to sort y_426, y_62, y_201, y_366, y_625, and the sorting results are compared with Table 9 (d_1). In Fig. 4b, Algorithms 3.1 and 3.2 are used to sort y_598, y_401, y_193, y_103, y_411, and the sorting results are compared with Table 9 (d_2). Fig. 4c gives the corresponding comparison for d_3. To better describe the consistency between the sorting results and the actual results, the Spearman rank correlation coefficient (Spearman for short) (Myers et al. 2013) is used to analyze the results in Figs. 3 and 4 (Fig. 4 shows the sorting based on the Portuguese data set); the resulting coefficients are shown in Tables 10, 11, 12, 13, 14 and 15. If the Spearman coefficient between the sorting results of two methods lies in (0, 1], then the two methods are positively correlated; moreover, as a commonly accepted notion, if the Spearman coefficient between the sorting results of two methods is greater than 0.6, the correlation between the two methods is high. In Figs. 3 and 4, the broken-line trends of the subgraphs are basically consistent, which shows that the results obtained by Algorithms 3.1 and 3.2 are basically consistent with the actual sorting results. In Tables 10-15, the Spearman coefficients among Algorithms 3.1, 3.2 and the real sorting results are greater than or equal to 0.6; in particular, most of them are greater than or equal to 0.8. In summary, there is a high correlation among Algorithms 3.1, 3.2 and the real results, and Algorithms 3.1 and 3.2 are effective.

The number of objects in the two data sets (Mat and Por) is relatively large, and the real ranking has only 5 levels, so it is unreasonable to sort all objects at once. This paper therefore uses random sampling to verify the accuracy of Algorithms 3.1 and 3.2. For Fig. 5a, the specific process is as follows: (1) two objects are randomly selected from the 395 objects in the Mathematics data set; (2) the objects are sorted by comparing their NDCD values and by Mat (d_1); if the two sorting results are consistent, the result of Algorithm 3.2 is counted as accurate; (3) steps (1) and (2) are repeated 1000 times, and the proportion of consistent sorting results is taken as the accuracy of Algorithm 3.2; (4) steps (1)-(3) are repeated 1000 times to obtain 1000 accuracy values, which are plotted. The other subgraphs of Fig. 5 are processed similarly. Overall, the accuracies of Algorithms 3.2 and 3.1 are distributed between 0.60 and 0.72, and the whole random process draws 1000 * 1000 = 10^6 samples (selecting 2 of the 395 objects, C_{395}^{2} = 77815 < 10^6). For Fig. 6, the process of Fig. 5 is repeated with the Mathematics data set replaced by the Portuguese data set; the accuracies of Algorithms 3.2 and 3.1 are distributed between 0.62 and 0.74, and the whole random process again draws 1000 * 1000 = 10^6 samples (selecting 2 of the 649 objects, C_{649}^{2} = 210276 < 10^6).
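A sketch of the two consistency checks used above: the Spearman rank correlation between a method's scores and the true levels, and the random pairwise-sampling accuracy. The scores, labels, sample size and the tie-handling convention below are hypothetical, and `scipy.stats.spearmanr` simply stands in for whatever implementation the authors used.

```python
import random
from scipy.stats import spearmanr

def pairwise_accuracy(scores, labels, n_pairs=1000, seed=0):
    """Random-sampling check described above: draw pairs of objects and count how often
    the order induced by a method's scores agrees with the order of the true 5-level
    labels. Pairs with tied labels are skipped here, which is one possible convention."""
    rng = random.Random(seed)
    hits = trials = 0
    idx = range(len(scores))
    while trials < n_pairs:
        i, j = rng.sample(idx, 2)
        if labels[i] == labels[j]:
            continue                      # no ground-truth order for this pair
        trials += 1
        hits += (scores[i] > scores[j]) == (labels[i] > labels[j])
    return hits / n_pairs

# Hypothetical method scores (e.g., AD or NDCD values) against true grade levels.
scores = [0.71, 0.35, 0.52, 0.18, 0.64, 0.47]
labels = [5, 2, 3, 1, 4, 3]
print("Spearman:", spearmanr(scores, labels).correlation)
print("pairwise accuracy:", pairwise_accuracy(scores, labels))
```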
In order to compare the accuracy of Algorithms 3.2, 3.1, PROMETHEE, TOPSIS, and SAW, a detailed test is carried out. For Fig. 7a, the pairwise sampling process described above is applied to each method on Mat (d_1), and the other broken lines in Fig. 7a are subject to the same test process. The accuracy analysis based on the Portuguese data set is shown in Fig. 7b. Overall, the accuracy of Algorithms 3.2 and 3.1 is higher than that of PROMETHEE, TOPSIS, and SAW on the Mathematics data set; on the Portuguese data set, Algorithm 3.2 performs exactly the same as PROMETHEE, TOPSIS, and SAW, while Algorithm 3.1 performs below them.

For the two data sets (Mat and Por), the sensitivity of the 5 algorithms (Algorithms 3.2, 3.1, PROMETHEE, TOPSIS, SAW) is analyzed by deleting attributes. For Fig. 8, by deleting the attributes one by one, the influence of attribute changes on the accuracy of the 5 algorithms is verified. Combining Figs. 7 and 8, the accuracy change is shown in Fig. 9. For Fig. 9a-c, PROMETHEE and TOPSIS have the worst stability, followed by SAW, while Algorithms 3.1 and 3.2 have the best stability. For Fig. 9d-f, PROMETHEE has the worst stability, and the other methods have similar stability. Overall, SAW is linear and more stable, but its scope of application is limited; PROMETHEE and TOPSIS are nonlinear and less stable, but their scope of application is extensive. Algorithms 3.1 and 3.2 are nonlinear, and their stability is similar to that of SAW, or even better.

In summary: (1) From the experimental results: Algorithms 3.1 and 3.2 are used for student ranking based on the two data sets (Mat and Por). The sorting results of Algorithms 3.1 and 3.2 are compared with the real sorting results, and Algorithms 3.1 and 3.2 are highly correlated with the actual results. By comparing Algorithms 3.1 and 3.2 with PROMETHEE, TOPSIS, and SAW, the accuracy of Algorithms 3.1 and 3.2 is shown to be higher than or equal to that of the other methods. Through the random sampling process and the deletion of attributes, Algorithms 3.1 and 3.2 are shown to be robust and better than the other methods. (2) For management implications: in the process of decision analysis, Algorithms 3.1 and 3.2 not only analyze the relationship between objects, but also analyze the impact of all objects on each object. In the process of designing Algorithm 3.2, a new rough set model is introduced rather than simply reusing the classical rough set.

The present study designs two multi-attribute decision-making methods that rely on less subjective experience and prior information. In this study, two new methods based on the advantage matrix are proposed to handle MADM problems. Compared with the real decision results and with other methods, the accuracy and robustness of the two new multi-attribute decision-making methods are analyzed. The two methods rely on all objects, so their accuracy is high; at the same time, the change of a few attributes does not affect the decision result, that is, they show strong robustness. A number of limitations of our study should be noted, including that diversity among the elements of the advantage matrix is needed. In other words, in Algorithm 3.1, the more diverse the cardinalities of the elements of the advantage matrix, the better the decision-making effect; in Algorithm 3.2, the disadvantage and advantage neighborhood (correlation) approximations of each object should differ. On the contrary, if a large number of empty sets or full sets (equal to Q) appear in the advantage matrix, Algorithms 3.1 and 3.2 fail. Our work clearly has some limitations; despite this, we believe it offers a new perspective for decision analysis. For specific applications, the two methods are non-linear algorithms with a wide scope of application: as long as an information system is established (see the information system description above) and the attribute information is comparable, no other subjective experience or prior information is required, and the decision result is obtained directly.

As for directions of future research, we will investigate the following complementary issues. 1. According to the characteristics of the advantage matrix, we will extend the advantage matrix to the intuitionistic fuzzy environment, the linguistic environment, and the hesitant fuzzy environment, and generalize our methods in such environments.
2. According to the characteristics of the advantage matrix, objects are currently analyzed only from the perspective of the advantages of the attributes, and the disadvantages of the attributes are missing. We will establish the advantage-disadvantage matrix and define a new degree based on it for decision analysis. 3. For an ordered information system (Greco et al. 2002), the advantage matrix and a directed graph between objects will be established, and Graph Neural Networks (Wu et al. 2020) will be used to sort and cluster the objects. 4. According to Definition 2.9, the neighborhood operator is aimed at individual objects and is used for individual decision analysis. We will define the neighborhood operator of an object group and use it for group decision analysis, such as comparison among multiple sales teams, asset-oriented portfolio investment, etc.

If there exist x, y ∈ U such that D(x, y) ∪ D(y, x) ≠ Q, then S is called a non-complementary information system (NCIS); otherwise S is called a complementary information system (CIS). If S is a CIS, then x ≫ y ⇔ |D(x, y)| > |Q|/2. S is an NCIS ⇔ there exist x, y ∈ U and a ∈ Q∖(D(x, y) ∪ D(y, x)) such that f(x, a) = f(y, a).

References:
A comparison of the analytic hierarchy process and a simple multi-attribute value function.
How to select and how to rank projects: the PROMETHEE method.
Multi-attribute method for prioritization of sustainable prototyping technologies.
Using data mining to predict secondary school student performance.
Applicability of the coefficient of variation method for analyzing synaptic plasticity.
A method for stochastic multiple attribute decision making based on concepts of ideal and anti-ideal points.
Decision making and cultural heritage: an application of the multi-attribute value theory for the reuse of historical buildings.
ELECTRE methods with interaction between criteria: an extension of the concordance index.
Multi-criteria inventory classification using a new method of evaluation based on distance from average solution (EDAS).
Rough approximation by dominance relations.
Learning rules from incomplete training examples by rough sets.
Learning cross-level certain and possible rules by rough sets.
Student performance data set (2014). UCI Machine Learning Repository (Student+Performance).
Rough set approach to case-based reasoning application.
Multiple attribute decision making.
Covering-based variable precision (I, T)-fuzzy rough sets with applications to multi-attribute decision-making.
Three-way decisions based on decision-theoretic rough sets under linguistic assessment with the aid of group decision making. Physica.
Lingras PJ, Yao Y (1998) Data mining using extensions of the rough set model. Routledge, Milton Park.
Pamučar D, Ecer F (2020) Prioritizing the weights of the evaluation criteria under fuzziness: the fuzzy full consistency method (FUCOM-F).
Integrating rough set theory and medical applications.
Rough set.
Rough set approach to knowledge-based decision support.
Decision analysis using rough sets.
Models and methods in multiple criteria decision making.
Rough set based approaches to feature selection for case-based reasoning classifiers.
Decision-making model for early diagnosis of congestive heart failure using rough set and decision tree approaches.
Multigranulation fuzzy rough set over two universes and its application to decision making.
Three-way group decision making based on multigranulation fuzzy decision-theoretic rough set over two universes.
Rough set methods in feature selection and recognition.
Core-generating approximate minimum entropy discretization for rough set feature selection in pattern classification.
Green decoration materials selection under interior environment characteristics: a grey-correlation based hybrid MCDM method.
Discovering patterns of missing data in survey databases: an application of rough sets.
Three-way decisions based multi-attribute decision making with probabilistic dominance relations.
A comprehensive survey on graph neural networks.
The group decision-making rules based on rough sets on large scale engineering emergency.
Attribute dependency functions considering data efficiency.
A novel fuzzy rough set model with fuzzy neighborhood operators.
A -rough set model and its applications with TOPSIS method to decision making.
A characterization of novel rough fuzzy sets of information systems and their application in decision making.
A novel approach to predictive analysis using attribute-oriented rough fuzzy sets.
The theory of the displaced ideal. In: Multiple criteria decision making, Kyoto 1975.
Fuzzy -covering based (I, T)-fuzzy rough set models and applications to multi-attribute decision-making.
Covering-based generalized IF rough sets with applications to multi-attribute decision-making.
Multiple-criteria evaluation model for medical professionals assigned to temporary SARS-CoV-2 hospitals.
Entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment.

The authors declare that they have no conflict of interest regarding this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.