key: cord-0077242-wp0ujzrr authors: Awan, Mazhar Javed; Mohd Rahim, Mohd Shafry; Salim, Naomie; Rehman, Amjad; Nobanee, Haitham title: Machine Learning-Based Performance Comparison to Diagnose Anterior Cruciate Ligament Tears date: 2022-04-11 journal: J Healthc Eng DOI: 10.1155/2022/2550120 sha: 40ed2aee2053ecc47a3c7cfb18ed39c74b02b27d doc_id: 77242 cord_uid: wp0ujzrr

In recent times, knee joint pains have become severe enough to make daily tasks difficult. Knee osteoarthritis is a type of arthritis and a leading cause of disability worldwide. The middle of the knee contains a vital structure, the anterior cruciate ligament (ACL). It is necessary to diagnose ACL ruptures early to avoid surgery. The study aimed to perform a comparative analysis of machine learning models to identify the condition of three ACL tear classes. In contrast to previous studies, this study also considers imbalanced data distributions, a problem that machine learning techniques struggle to deal with. The paper applied and analyzed four machine learning classification models, namely, random forest (RF), categorical boosting (CatBoost), light gradient boosting machines (LGBM), and the extremely randomized trees classifier (ETC), on a balanced, structured ACL dataset. After oversampling and hyperparameter adjustment, these four models achieved accuracies of 95.72%, 94.98%, 94.98%, and 98.26%, respectively. There are 2070 observations and eight features in the collection of the three ACL diagnosis classes after oversampling. The area under the curve value was approximately 0.998. Experiments were performed using twelve machine learning algorithms with imbalanced and balanced datasets. However, the accuracy on the imbalanced dataset remained under 76% for all twelve models. With oversampling, the proposed model may contribute to the investigation of ACL tears on magnetic resonance imaging and of other knee ligaments efficiently and automatically without involving radiologists.

Knee bone and joint diseases are ubiquitous across almost all groups of age and sex. These include anterior cruciate ligament (ACL) injuries, osteoarthritis (OA), and osteoporosis (OP) [1-3]. The knee joint comprises the femur, tibia, patella, and the synovial membrane, which contains synovial fluid. The end of the femur is covered by articular cartilage and moves against the articular cartilage of the tibia. These thin layers of firm, slippery tissue act as a protective cushion that allows the bones to move more freely [4, 5]. The knee ligaments are strong, durable bands of fibrous tissue that connect one bone to another, limit movement, and stabilize the joint. The four main ligaments shown in Figure 1 are the anterior cruciate ligament (ACL), the posterior cruciate ligament (PCL), the medial collateral ligament (MCL), and the lateral collateral ligament (LCL) [6-8]. The ACL is a strong band of tissue in the center of the knee and an essential part of it [9]. Unlike muscle, the ACL cannot regenerate; around 100,000 to 200,000 individuals tear it each year, and 500 million dollars are spent on ACL treatment annually [10]. An ACL tear often causes osteoarthritis, the wearing down of the bone and cartilage in the knee [11]. The mechanism of injury to the ACL is usually a noncontact, pivoting injury. The muscles are attached to tendons, which in turn attach to bones. Osteoarthritis appears when the cartilage begins to thin or roughen, which happens naturally as part of aging.
New bits of bone known as osteophytes may start to grow within the joint, and fluid can build up inside [12]. This reduces the space within the joint, which means that the joint does not move as smoothly as it used to and may feel stiff and painful (see Figure 2) [13, 14].

ML-based classification models are strongly affected by imbalanced data, especially in the medical field. Class imbalance is one of the common problems that affect prediction accuracy and can bias the results. The data must be balanced either by increasing the minority class (oversampling) or decreasing the majority class (undersampling). The distribution can vary from a slight bias to a severe imbalance [15-18].

The paper aims to apply extensive machine learning models to predict ACL tears at an early stage and thereby help avoid ACL injury efficiently. In this paper, we compare and analyze the results of the class imbalance problem on structured, multiclass data using an oversampling technique. To our knowledge, there is no study that identifies the three classes of ACL tears on structured data. Therefore, this paper presents class-imbalanced ACL data and evaluates the performance of twelve machine learning classifiers with and without oversampling. The significant contributions of the paper are the following:
(i) Enhanced the distributions of the partial and ruptured ACL classes through oversampling to balance all three categories.
(ii) Applied extensive data visualization for both the imbalanced and the balanced datasets.
(iii) To our knowledge, there is no comparable study; we applied and compared twelve machine learning classifier models on an imbalanced and a balanced dataset.
(iv) After adjusting hyperparameters and oversampling class balancing, the four machine learning models achieved approximately 95% or higher accuracy, precision, recall, and F1-score.
(v) The extra tree classifier accuracy is 98.26%, the highest among all machine learning models.

The paper is organized as follows: Section 2 reviews work related to machine learning prediction of knee and other diseases. Section 3 covers the material and methodology, data exploration, and the machine learning models, including the random forest, extra tree, and CatBoost classifiers used in our study. Section 4 compares the classification results with accuracy, confusion matrix, and other metrics. Conclusions are given in Section 5.

Medical data are usually extensive and very hard for humans to analyze and interpret quickly. For this purpose, machine learning-based models have shown promising results in all medical fields for diagnosing and predicting various diseases efficiently [19-25]. The early detection of knee OA and OP disease progression is complex and challenging as a classification problem [26, 27]. Machine learning models can better quantify anterior cruciate injury risk for sports player injuries, the synovial fluid of human OA knees, and joint angle prediction [28-32]. Machine learning is widely used in sports injury prediction because many models have performed well. Jauhiainen and Kauppi [33] used motion analysis and physical datasets of severe knee injuries from 318 cases. The random forest and logistic regression models achieved areas under the receiver operating characteristic (ROC) curve of only 0.63 and 0.65. These injuries were highly prevalent among athletes, and injury follow-up lasted for 12 months.
Kotti and Duffell's [34] study used a locomotion dataset of 47 osteoarthritic and 47 healthy knees and applied a random forest model with nine features, three per axis, achieving an accuracy of only 74.4% for the discriminative features. The study did not handle temporal information well, and the parameters were strictly quantitative. Tiulpin and Klein's [35] analysis used a machine learning-based approach for predicting structural knee OA development from data collected during a single clinical visit. The most important conclusion of that study is that patients with KL-0 and KL-1 at baseline were predicted to progress. Du et al. [36] discussed the Cartilage Damage Index (CDI) as a tool for determining how far osteoarthritis has progressed in the knee. Stajduhar et al.'s [37] study is related to our knee ACL dataset. Recently, comparative analyses of classifying imbalanced and balanced datasets have become widespread in the literature. The study by Vijayvargiya et al. [38] used various machine learning models on normal and abnormal knee subjects from electromyography (EMG) data. The extra tree classifier obtained the best accuracy after oversampling, at 93.3%; there was no improvement in the performance metrics from the various class balancing techniques. The literature suggests that ensembles of classifiers and boosting are known to increase accuracy when solving the class imbalance problem. Our study uses machine learning classification models on structured data for three classes and differs from most other studies examined in the related work. Some of these studies applied machine learning to structured data; still, our approach differs because we compare the performance of machine learning models before and after class balancing. Overall, in most existing state-of-the-art work, traditional machine learning models are applied chiefly to unstructured data such as MRI and X-rays to predict anterior cruciate ligament injury and osteoarthritis. Moreover, several researchers have developed diagnosis methods for other diseases through machine learning. However, there is no study that detects the three ACL classes through a comparative machine learning analysis. These issues are addressed in this research article to diagnose ACL rupture tears early.

This section presents the methods and materials used in this study. Section 3.1 is the dataset description. Section 3.2 is the proposed framework of the study. Section 3.3 covers the handling of the oversampling technique. Section 3.4 is the data exploration analysis of the balanced dataset. The proposed machine learning models are explained in Section 3.5.

We used the anterior cruciate ligament metadata file for our experiments. The 917 samples containing the three ACL classes, healthy, partially injured, and fully ruptured, were acquired from the Clinical Hospital Centre Rijeka. These amount to 75.2% healthy and 18.8% and 6% partial and ruptured tears, respectively. The three classes' volumes are 690, 172, and 55, respectively, as shown in Figure 3. The feature names, with the unique and mean values of each feature, are described in Table 1.
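As a minimal illustration of this step, the sketch below loads the structured metadata file and inspects the class distribution. The file name metadata.csv, the column name aclDiagnosis, and the 0/1/2 label coding are assumptions about the released kneeMRI metadata and may need to be adapted.

```python
import pandas as pd

# Hypothetical file and column names; adapt to the released kneeMRI metadata.
df = pd.read_csv("metadata.csv")

# Assumed coding: 0 = healthy, 1 = partially injured, 2 = completely ruptured
counts = df["aclDiagnosis"].value_counts().sort_index()
print(counts)                              # expected roughly 690 / 172 / 55
print((counts / len(df) * 100).round(1))   # approx. 75.2 / 18.8 / 6.0 percent
```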
This section of the article discusses the proposed anterior cruciate ligament injury prediction system, which consists of several steps that are linked to each other to obtain the desired results.
Step I. The dataset is considered only in structured form and is imbalanced in nature; its details have already been discussed in the dataset description.
Step II. The dataset was prepared, which included checking for unique values, NULL values, and string values, and converting the imbalanced data into balanced data by the oversampling technique described in Section 3.3.
Step III. For better understanding, exploratory data analysis (EDA) was visualized through libraries such as Matplotlib and Seaborn, which were used to plot a correlation heatmap, distribution plots, and count plots.
Step IV. After this, the data were split into training and testing sets at 75% and 25%.
Step V. The training data were used to fit twelve supervised machine learning models, and four of these models trained well after adjustment of their hyperparameters.
Step VI. With the help of the test data, all models were evaluated through the confusion matrix, mean accuracy, precision, recall, and F1-score. The receiver operating characteristic (ROC) was considered only for the best four models.
Step VII. At the last stage, the predictions of the three classes were compared without class balancing and with oversampling class balancing for all twelve machine learning models.
Figure 4 shows the overall proposed framework for the process and its steps.

Class imbalance is a big problem in machine learning and image-related datasets [39]. It can be handled efficiently with undersampling [40], oversampling [41], and hybrid sampling techniques [42]. Our current dataset is imbalanced in nature, as shown in Figure 3. We applied the Scikit-learn library and imported resample [43]. Here, we use oversampling on the partial and ruptured tear classes. After oversampling, the ratios of the three categories are equal, as shown in Figure 5. The data then have equal proportions of 690 samples and a 33.3% share per class, as shown in Figure 6.

Data exploration and visualization are critical for evaluating machine learning models and were carried out through the Python libraries Matplotlib [44] and Seaborn [45]. The following plots were produced after oversampling balanced the dataset. The correlation matrix indicates that the roiWidth and roiHeight features have the highest correlation for predicting a diagnosis of ACL tears. Figure 7 shows the pairwise correlation of the features after oversampling class balancing, where the correlation of two features Y1 and Y2 is Corr(Y1, Y2) = Covar(Y1, Y2) / (sigma(Y1) * sigma(Y2)), with Covar the covariance measure computed for every feature pair as in equations (1) and (2). Figure 8 shows the distribution plots of all features; ROI height and ROI width are roughly normally distributed in both cases. Figure 9 shows the histogram counts of each feature after oversampling. Figure 10 shows the distribution of the three classes for every feature; the Series 5 feature contains a much greater share of healthy and partial tears.

We applied twelve machine learning models. Eight of the classifier models, logistic regression [46], support vector machine [47], decision tree [48], k-nearest neighbour [49], Gaussian Naïve Bayes [50], AdaBoost [51], gradient boosting [52], and extreme gradient boosting [53], were used for experimental results only. The four proposed models are discussed in Section 3.5.1 (random forest [54]), Section 3.5.2 (extra tree classifier [55]), Section 3.5.3 (categorical boosting [56]), and Section 3.5.4 (LGBM classifier). We explain these in more detail because they produced better results on our dataset.
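Before describing the four models individually, the following compact sketch illustrates the pipeline outlined above: oversampling with scikit-learn's resample, a 75/25 split, and fitting two of the tree ensembles. It continues from the loading sketch in the dataset description; the label column name, class coding, and random seeds are assumptions rather than the paper's exact settings.

```python
import pandas as pd
from sklearn.utils import resample
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.metrics import accuracy_score

target = "aclDiagnosis"                      # assumed label column, as above
majority = df[df[target] == 0]               # healthy class (690 samples)

# Oversample each minority class (partial = 1, ruptured = 2) up to 690 rows.
upsampled = [resample(df[df[target] == c], replace=True,
                      n_samples=len(majority), random_state=42)
             for c in (1, 2)]
balanced = pd.concat([majority, *upsampled])  # 2070 rows, three equal classes

X = balanced.drop(columns=[target]).select_dtypes("number")  # numeric features only
y = balanced[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)

for model in (RandomForestClassifier(random_state=42),
              ExtraTreesClassifier(random_state=42)):
    model.fit(X_train, y_train)
    print(type(model).__name__,
          round(accuracy_score(y_test, model.predict(X_test)), 4))
```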
There are M features and N rows. In a random forest, multiple trees are grown such that each tree uses the square root of the total number of features. In our case, we have M features, so each tree trains on a square-root-of-M subset of the features; additionally, it uses bootstrap samples, that is, sampling with replacement. Figure 11 shows the structure of a random forest tree [57]. The random forest algorithm is shown in Table 2. The final prediction (finalPred) is obtained by taking the majority vote of the decision trees built from the m features; generally, it is written as finalPred = mode{DT1(m), DT2(m), ..., DTn(m)}.

An extremely randomized trees or extra tree classifier (ETC) is an ensemble algorithm that fits many unpruned decision trees on the training dataset [55]. The ETC algorithm is described in Table 3. The extra tree is also a bootstrapping and bagging algorithm. Still, the big difference between ETC and RF is that a random forest behaves like a greedy algorithm that chooses the best available split at each node based on Gini or entropy, whereas the split selection in ETC is random rather than greedy. The extra tree also uses all the records of the samples [58]. Let O be the training samples with n possible classes (O = O1, O2, ..., On). The entropy (En) is obtained by the following formula: En(O) = -sum_{i=1..n} p_i log2(p_i), where p_i is the proportion of samples belonging to class i. The entropy after the samples O are partitioned into subsets O_j by some feature M is En_M(O) = sum_j (|O_j| / |O|) * En(O_j). The information gain (IG) is then defined as IG(M) = En(O) - En_M(O). The Gini impurity is Gini = 1 - sum_k p_k^2, where p_k is the proportion of samples of class k among the total number of samples.

The extra tree classifier is much faster than the random forest. There are three differences. (i) The extra tree classifier selects the samples for every decision tree without replacement, so all trees are unique. (ii) The total number of features selected remains the same, that is, the square root of the total number of features in the case of a classification task. (iii) The main difference between a random forest and an extra tree classifier is that instead of computing the locally optimal split for a feature, a random value is selected for the split in the extra tree. These are not the best splits for the features. The whole idea is, rather than spending time finding the best splitting point, to randomly pick a point and split on it; this leads to more diversified trees and fewer splits to evaluate when training an extremely randomized forest. On readily available datasets with noisy features, the extra tree classifier was observed during testing to outperform the random forest.

A categorical boosting (CatBoost) method focuses on processing categorical features and boosting trees with an ordering principle, without incurring conversion error. A target leakage problem occurs in gradient boosting and in the standard way of converting categorical features to numbers. The ordering principle can be applied to target encoding, categorical features, and boosting trees [59]. (1) Mean Target Encoding. An efficient way to deal with categorical variables is to substitute them with numerical values; mean target encoding replaces each categorical value with the mean target value for that category. Figure 12 explains mean target encoding with a simple example. There is a colour feature with the unique categories red, blue, and green, and the target is either zero or one. Then, for each category, red, blue, and green, the target mean is calculated. The new feature column, named encoded-color, replaces each category with its target mean value. The advantage of target encoding over one-hot encoding is that it avoids the explosion of the feature space, adding just one extra column at the end.
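A minimal pandas sketch of this idea, using the toy colour example rather than the ACL features:

```python
import pandas as pd

# Toy version of the colour example: each category is replaced by the mean
# of the binary target observed for that category.
toy = pd.DataFrame({"color":  ["red", "red", "blue", "green", "green", "green"],
                    "target": [1,     0,     1,      0,       1,       1]})
toy["encoded_color"] = toy.groupby("color")["target"].transform("mean")
print(toy)   # red -> 0.5, blue -> 1.0, green -> 0.667
```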
Target encoding can also smooth the calculation with a prior term, as shown in the following formula: mean target = (count_inclass + prior) / (total_count + 1), where count_inclass is the number of objects whose label value equals 1 for the given categorical feature value, the prior value is determined from the starting parameters, and total_count is the total number of objects with that categorical feature value.

(2) Ordered Boosting. The ordered target encoding technique helps prevent overfitting due to target leakage. The encoded value estimates the expected target value for each feature category, roughly Est = E(y | category). CatBoost implements an efficient modification of ordered boosting on top of basic decision trees. It works well for small datasets, supports training with pairs, gives good quality with default parameters, offers extensive support for model formats, and is stable with a model analysis tool. Classical boosting uses multiple trees and the whole dataset to compute the residuals, which causes overfitting. Ordered boosting does not use the whole dataset to calculate residuals. Assuming model M_i was trained on the first i data points, the residual at each point i is calculated using model M_(i-1), that is, r_i = y_i - M_(i-1)(x_i), as in equation (1). The idea is that the tree has not seen that data point before, so it cannot overfit on it. Figure 13 shows the separate models maintained up to data point M_4 [56]; the model M_4 was trained on four data points. Maintaining N separate trees is not feasible, so models are kept only at positions 2^j, where j = 1, 2, ..., log2(n).

LightGBM is a gradient boosting framework designed by Microsoft Research Asia that uses a decision-tree-based learning algorithm and is fast, distributed, and memory-efficient [60]. (1) Gradient-Based One-Side Sampling (GOSS). This method focuses more on the under-trained part of the dataset, which it tries to learn more aggressively. A small gradient means the errors are minor, that is, the data points have been learned well; a large gradient implies significant errors, meaning the data points have not been learned well. The algorithm retains the large-gradient instances because they are the most essential. The GOSS algorithm in Table 4 first sorts the data points according to their absolute gradient value. Then, the top large-gradient-data (LGD) sampling ratio x 100% of instances is kept. Then, it randomly samples the small-gradient-data (SGD) ratio x 100% of instances from the rest of the data points. In the end, GOSS amplifies the sampled small-gradient data by multiplying them by (1 - LGD)/SGD when calculating the information gain. In this way we focus more on the under-trained instances without changing the original data distribution by much. Figure 14 illustrates LightGBM's leaf-wise tree splitting. (2) Exclusive Feature Bundling (EFB). EFB efficiently represents sparse features, such as one-hot encoded features, by bundling them to reduce the total number of features. LightGBM is designed to be a distributed, high-performance gradient boosting framework based on a decision tree algorithm, with lower memory usage and the ability to handle large-scale data [61].
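As a hedged illustration of how these two boosting models are typically used, the sketch below fits both with default-like settings, reusing the balanced split from the earlier pipeline sketch; the tuned hyperparameters reported in Table 5 are not reproduced here.

```python
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.metrics import accuracy_score

# Reuses X_train / X_test / y_train / y_test from the split sketch above.
models = [CatBoostClassifier(verbose=0, random_seed=42),
          LGBMClassifier(random_state=42)]

for model in models:
    model.fit(X_train, y_train)
    print(type(model).__name__,
          round(accuracy_score(y_test, model.predict(X_test)), 4))
```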
Hyperparameter Adjustments. The experiments were performed on Google Colab using the Python 3.8 language. The original dataset split is 687 training samples and 230 test samples at a 75:25 ratio without oversampling. After resampling, the division of the dataset was 1552 and 518, respectively. The three classes, healthy, partial, and ruptured, were represented in the test set by 170, 170, and 178 samples, respectively. All machine learning models used the Scikit-learn machine learning library, version 1.0.1 [62]. Furthermore, we trained all twelve machine learning models on default parameters, with and without oversampling class balancing. After a few adjustments of the parameter values of four models, random forest (RF), extra tree classifier (ETC), categorical boosting, and LightGBM, the results during training were very good. Table 5 lists the adjusted hyperparameter values of these four models.

The final results and discussion are explained in this section for our best machine learning models, compared between the class-imbalanced and class-balanced cases. The performance of the proposed technique is evaluated through the confusion matrix, accuracy, precision, recall, F1-score, the area under the curve (AUC), and the receiver operating characteristic (ROC). The details of these evaluation metrics are as follows. The confusion matrix allows visualization of the performance of the models. It is a K x K matrix of the predicted categories or classes that were correctly and incorrectly predicted, and it gives a direct comparison of values such as true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Figure 15 shows the confusion matrices of the four models before and after class balancing. The accuracy is the sum of the correct classifications divided by the total number of samples of the three ACL classes, as in equation (2): accuracy = (sum of correct classifications) / (total number of samples of the three ACL classes). The precision is the ratio between the true positives and all positive predictions, precision = TP / (TP + FP), as in equation (3); it is a valuable metric when false positives are more important than false negatives. The recall is the ratio between the true positives and all actual positive samples, recall = TP / (TP + FN), as in equation (4). 5.5. F1-Score. The F1-score is defined as the harmonic mean of precision and recall, F1 = 2 * precision * recall / (precision + recall), as in equation (5).

Table 6 describes the results for the three-class mean accuracy, precision, recall, F1-score, and AUC on the imbalanced and balanced datasets for our four machine learning models. The precision, recall, and F1-score results were lower than 40% without balanced classes. However, with the oversampled approach, the accuracy, recall, and F1-score were 94% to 98%. Figure 16 shows the accuracy comparison of the twelve models on the imbalanced dataset. Logistic regression, support vector machine, random forest classifier, gradient boosting classifier, and extra tree classifier achieved 75% accuracy. The XGB classifier, Naïve Bayes, k-nearest neighbours, AdaBoost classifier, CatBoost classifier, and LGBM classifier remained between 74% and 70%. The lowest accuracy, 63%, was that of the decision tree classifier.
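The metrics defined above can be computed directly with scikit-learn. The following hedged sketch assumes a classifier fitted on the balanced split from the earlier sketches; it is not the exact evaluation code used in the paper.

```python
from sklearn.metrics import (confusion_matrix, classification_report,
                             roc_auc_score)

y_pred = model.predict(X_test)              # any classifier fitted above
print(confusion_matrix(y_test, y_pred))     # 3 x 3 matrix for the ACL classes
print(classification_report(y_test, y_pred, digits=4))   # precision/recall/F1

# Multiclass AUC via one-vs-rest on the predicted class probabilities
proba = model.predict_proba(X_test)
print("ROC-AUC:", round(roc_auc_score(y_test, proba, multi_class="ovr"), 3))
```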
This study aims to achieve optimal performance through machine learning classifiers. For this, we evaluated the twelve machine learning models after balancing the classes through oversampling. Figure 17 shows the accuracy comparison of the twelve models on the balanced dataset. The accuracies of the models were: extra tree classifier 98.26%, random forest classifier 95.75%, CatBoost classifier 94.98%, LGBM classifier 94.98%, gradient boosting classifier 82.04%, decision tree classifier 77.79%, XGB classifier 75.48%, k-nearest neighbours 75.09%, AdaBoost classifier 54.44%, Naïve Bayes 42.08%, logistic regression 32.81%, and support vector machine 31.85%. The accuracy was above 94% for the extra tree, random forest, CatBoost, and LGBM classifiers. The worst accuracy was 31.85%, in the case of the support vector machine. Figure 18 shows the receiver operating characteristic (ROC) plots and the AUC comparison of the best four models, extra tree classifier, random forest classifier, CatBoost classifier, and LGBM classifier, without class balancing. Finally, Figure 19 shows the ROC plots and the AUC comparison of the same four models with oversampling class balancing. The AUCs of these four models were 0.997, 0.997, 0.996, and 0.995, respectively, after the oversampling technique, whereas Figure 18 shows the corresponding results without class balancing.

Previous studies on this knee dataset were performed on the MR images (unstructured) only. To our knowledge, there was no study available that diagnoses ACL tears from structured data while resolving the imbalance problem. Table 7 compares the proposed machine learning method with oversampling against other benchmark machine learning and deep learning approaches. The extra tree classifier achieved an accuracy of 98.26% and an AUC of 0.997, the best among all the studies on structured and unstructured data. Our study has several limitations. First, only four of the machine learning models were tuned. Second, only one class balancing technique, oversampling, was applied. Third, the study is not evaluated through cross-validation and does not compute the processing time for the classification of ACL tear diagnosis. In the future, we can validate our models through big data approaches inspired by recent studies [66-72] after comparing all class balancing techniques.

The anterior cruciate ligament is essential for evaluating osteoarthritis and osteoporosis. It is necessary to diagnose ACL rupture tears in the early stages to avoid the surgical procedure. The study fairly compared and evaluated four out of twelve machine learning classification models, namely, random forest (RF), extra tree classifier (ETC), categorical boosting (CatBoost), and light gradient boosting machines (LGBM). All models' performance remained under 76% without class balancing. After adjusting hyperparameters and class balancing, the accuracies of the four models, RF, ETC, CatBoost, and LGBM, reached 95.75%, 98.26%, 94.98%, and 94.98%, respectively. Moreover, the ROC-AUC score of the four models is approximately 0.997. In the future, we can apply machine learning models to MR images.

Data Availability: The datasets generated and/or analysed during the current study are available at http://www.riteh.uniri.hr/istajduh/projects/kneeMRI/ and 10.1016/j.cmpb.2016.12.006. The authors declare that they have no conflicts of interest.
References
[1] Deep learning-based magnetic resonance imaging image features for diagnosis of anterior cruciate ligament injury
[2] Cost-effectiveness analysis based on intelligent electronic medical arthroscopy for the treatment of varus knee osteoarthritis
[3] Acceleration of knee MRI cancellous bone classification on Google Colaboratory using convolutional neural network
[4] Endoscopic anatomy of the knee
[5] Cell-based therapy in articular cartilage lesions of the knee
[6] High tibial osteotomy: review of techniques and biomechanics
[7] Biomechanics of knee ligaments
[8] Anatomy of the anterolateral ligament of the knee
[9] Automated knee MR images segmentation of anterior cruciate ligament tears
[10] Self-reported fear predicts functional performance and second ACL injury after ACL reconstruction and return to sport: a pilot study
[11] An automatic knee osteoarthritis diagnosis method based on deep learning: data from the osteoarthritis initiative
[12] The relationship between anterior cruciate ligament injury and osteoarthritis of the knee
[13] Surgical treatment of combined injury to anterior cruciate ligament, posterior cruciate ligament, and medial structures
[14] Effect of freshly isolated bone marrow mononuclear cells and cultured bone marrow stromal cells in graft cell repopulation and tendon-bone healing after allograft anterior cruciate ligament reconstruction
[15] Learning from imbalanced data: open challenges and future directions
[16] Data imbalance in classification: experimental evaluation
[17] Data sampling methods to deal with the big data multi-class imbalance problem
[18] The use of Hellinger distance undersampling model to improve the classification of disease class in imbalanced medical datasets
[19] Automated breast cancer diagnosis based on machine learning algorithms
[20] Efficient automated disease diagnosis using machine learning models
[21] Use of machine learning to determine the information value of a BMI screening program
[22] Detection of schistosomiasis factors using association rule mining
[23] Machine learning for the preliminary diagnosis of dementia
[24] AI-enabled COVID-9 outbreak analysis and prediction: Indian states vs. union territories
[25] Soft clustering for enhancing the diagnosis of chronic diseases over machine learning algorithms
[26] A review of an early detection and quantification of osteoarthritis severity in knee using machine learning techniques
[27] Osteoporosis prediction for trabecular bone using machine learning: a review
[28] A machine-learning approach to measure the anterior cruciate ligament injury risk in female basketball players
[29] EMG and joint angle-based machine learning to predict future joint angles at the knee
[30] Machine-learning-based patient-specific prediction models for knee osteoarthritis
[31] Characteristics of MSCs in synovial fluid and mode of action of intra-articular injections of synovial MSCs in knee osteoarthritis
[32] Machine learning can reliably identify patients at risk of overnight hospital admission following anterior cruciate ligament reconstruction
[33] New machine learning approach for detection of injury risk factors in young team sport athletes
[34] Detecting knee osteoarthritis and its discriminating parameters using random forests
[35] Multimodal machine learning-based knee osteoarthritis progression prediction from plain radiographs and clinical data
[36] A novel method to predict knee osteoarthritis progression on MRI using machine learning methods
[37] Semiautomated detection of anterior cruciate ligament injury from MRI
[38] Human knee abnormality detection from imbalanced sEMG data
[39] Imbalanced-learn: a Python toolbox to tackle the curse of imbalanced datasets in machine learning
[40] Under-sampling approaches for improving prediction of the minority class in an imbalanced dataset
[41] Generative oversampling for mining imbalanced datasets
[42] Efficient detection of knee anterior cruciate ligament from magnetic resonance imaging using deep learning approach
[43] Scikit-learn: machine learning in Python
[44] Matplotlib: a 2D graphics environment
[45] Building machine learning and deep learning models on Google Cloud Platform
[46] Logistic regression
[47] Support-vector networks
[48] Automatic construction of decision trees from data: a multi-disciplinary survey
[49] An introduction to kernel and nearest-neighbor nonparametric regression
[50] Exploring conditions for the optimality of Naïve Bayes
[51] The return of AdaBoost.MH: multi-class Hamming trees
[52] Boosting and additive trees
[53] XGBoost: a scalable tree boosting system
[54] Random forests
[55] Extremely randomized trees
[56] CatBoost: unbiased boosting with categorical features
[57] Random forest: a classification and regression tool for compound classification and QSAR modeling
[58] Comparative analysis of machine learning techniques for the classification of knee abnormality
[59] CatBoost: gradient boosting with categorical features support
[60] LightGBM: a highly efficient gradient boosting decision tree
[61] An intelligent approach to credit card fraud detection using an optimized light gradient boosting machine
[62] Scikit-learn: machine learning in Python
[63] Improving MRI-based knee disorder diagnosis with pyramidal feature details
[64] Detection of anterior cruciate ligament tear using deep learning and machine learning techniques
[65] Improved deep convolutional neural network to classify osteoarthritis from anterior cruciate ligament tear using magnetic resonance imaging
[66] Social media and stock market prediction: a big data approach
[67] Real-time DDoS attack detection system using big data approach
[68] Fake news data exploration and analytics
[69] Cricket match analytics using the big data approach
[70] A recommendation engine for predicting movie ratings using a big data approach
[71] A big data approach to Black Friday sales
[72] Big data COVID-19 systematic literature review: pandemic crisis