key: cord-1044226-8f1yjbpc
title: Novel DLSNNC and SBS based framework for improving QoS in healthcare-IoT applications
authors: Jyotsna; Nand, Parma
date: 2022-04-20
journal: Int J Inf Technol
DOI: 10.1007/s41870-022-00922-z
sha: 1787b26b4c3fa25da826ccb7a8de4083105cebf4
doc_id: 1044226
cord_uid: 8f1yjbpc

A health care system is intended to enhance one's health and, as a result, one's quality of life. In order to fulfil its social commitment, health care must focus on producing social profit to sustain itself. Due to the ever-increasing demand on the healthcare sector, there is also a drastic rise in the amount of patient data that is produced and must be stored for long durations for clinical reference, and the burden of storing such data falls on the cloud. The risk of patient data being lost due to a data centre failure can be minimized by including a fog layer in the cloud computing architecture. In order to increase service quality, we introduce fog computing based on deep learning sigmoid-based neural network clustering (DLSNNC) and score-based scheduling (SBS). Fog computing begins by collecting healthcare data through sensors and storing it on the cloud layer. Deep learning sigmoid-based neural network clustering and score-based scheduling approaches are used to determine the entropy of each fog node in the fog layer. Sensors collect data and send it to the fog layer, while the cloud computing tier is responsible for monitoring the healthcare system. The exploratory findings show promising results in terms of end-to-end latency and network utilization. The proposed system also outperforms existing techniques in terms of average delay.

Internet of Things principles can improve patients' health and welfare by increasing the availability and quality of care, as well as significantly reducing treatment expenses and frequent travel [1].
The Internet of Medical Things (IoMT) is a digital healthcare system that connects patients to medical resources and services [2]. Wireless sensor networks are becoming a more pervasive and easy-to-use enabling technology for structural health monitoring than current wired systems [3]. Patients can use smart wearable devices with sensors, paired with smartphones, to gather data about their health status such as heart rate, glucose level and blood pressure [4]. The analysis and processing of this data is done by cloud servers. Moreover, cloud computing is the most practical approach for connecting IoT with healthcare [5]. Patient data may be used not only to monitor a patient's present health, but also to forecast future medical concerns using cloud big data storage and machine learning techniques [6]. But a patient's physical condition changes over time, demanding quick action to monitor remote patients, and the cloud mechanism cannot handle such real-time applications or meet quality-of-service (QoS) requirements. There is a need for a system that can continually and quickly monitor and report on the patient's condition [7]. Fog computing is introduced in healthcare applications to bridge the gap between IoT devices and analytics [8]. Fog computing is a distributed computing platform for managing applications and services at the network edge [9]. The probability of a mistake and the delay increase as the volume of data transmitted over the network grows. Data packet loss and transmission latency are directly proportional to the amount of data transported by IoT devices to the cloud. The edge or fog paradigm overcomes problems like latency by placing small servers, known as edge servers, in close proximity to end-user devices [10]. A fog-based IoT system comprises three layers: device, fog, and cloud.
Fog computing has been hailed as a promising paradigm for lowering networking infrastructure and processor energy consumption while offering cloud-like health monitoring services [11]. The number of fog-based applications is expanding and is thought to outweigh IoT apps in the near future [12]. IoT technology in healthcare can enhance the quality as well as the affordability of medical treatment by automating formerly manual activities [13]. Fog makes storage and processing capabilities more accessible to end users, and it can capture, analyse, and store massive amounts of data in real time [14]. Because medical sensors collect data on a frequent basis, real-time analysis performance can be enhanced, enabling intelligent data analysis and decision-making based on end-user rules and network resources [15, 16]. The following is the main contribution of this work: fog computing uses deep learning sigmoid-based neural network clustering and score-based scheduling to calculate entropy values for each fog node and thus improve the quality of service in a fog-based architecture. The manuscript is organized as follows: Sect. 2 examines the existing literature on the proposed strategy, Sect. 3 provides a brief overview of the proposed system, Sect. 4 explores the exploratory findings, and Sect. 5 concludes the article. The quality of service is determined by resource allocation and load balancing in cloud/fog computing. Fog-based architectures have been proposed by many researchers for a variety of applications. Table 1 presents an overview of existing fog literature surveys relevant to our work. Support for real-time applications is a major reason for the emergence of the fog computing architecture. Several QoS metrics must be considered for the successful development of a fog-based system, including latency, bandwidth, energy consumption reduction, and cost minimization.
The three tiers of computing are cloud computing, fog computing, and sensors, which all communicate with one another. The primary purpose of the proposed technique is to present a three-tier architecture for context- and latency-sensitive monitoring systems. In this paper, we propose that fog computing can be utilized to assist in the monitoring of patients' healthcare data, ensuring that data is gathered and evaluated efficiently. Sensors are first used to collect data from patients. Both external and internal data are recorded by these sensors, whose role is to gather and transmit all data to the fog computing layer. Fog computing then uses deep learning sigmoid-based neural network clustering and score-based scheduling to obtain the entropy value for each fog node. This layer analyses the data and information collected by the edge devices and functions similarly to a server. In addition, the cloud computing tier constantly checks the health monitoring system, as shown in Fig. 1. To resolve jobs in a more qualified manner, or to implement a range of strategies in order to reach a better result, the neural network must continually learn. When it receives new information from the system, it learns how to respond to a new circumstance. A deep neural network is a sort of machine learning in which the system uses numerous layers of nodes to extract high-level features from input data, converting numerical data into progressively more abstract representations. Convolution, sigmoid-based normalisation, pooling, and a fully connected layer are among the suggested DLSNN layers that address CNN problems. Figure 2 depicts the deep learning sigmoid neural network clustering topology. A sigmoid function is a mathematical function with a distinctive ''S''-shaped curve, sometimes known as a sigmoid curve. Equation (1) represents the sigmoid function, where sig is the input and f is the output:

f(sig) = 1 / (1 + e^(-sig))    (1)

The output of the sigmoid function is used in DLSNN normalisation.
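The logistic sigmoid of Eq. (1) can be written as a minimal sketch in Python (the paper's implementation language):

```python
import math

def sigmoid(sig):
    """Logistic sigmoid of Eq. (1): f(sig) = 1 / (1 + e^(-sig))."""
    return 1.0 / (1.0 + math.exp(-sig))

# The output is bounded in (0, 1) and equals 0.5 at the origin,
# which produces the characteristic S-shaped curve.
print(sigmoid(0.0))   # 0.5
```

The bounded output is what makes the function usable as a squashing step before the normalisation layer.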
Entropy (E) is the measure of randomness used to describe the texture of the input fog node data. The entropy E_i of the ith data is calculated by condition (2):

E_i = - Σ_{u=1}^{m} Σ_{v=1}^{m} P(u, v) log2 P(u, v)    (2)

where u and v are the coefficients of the co-occurrence matrix of the enhanced node, P(u, v) is the component of the co-occurrence matrix at coordinates (u, v), and m is the dimension of the co-occurrence matrix. The DLSNNC classifier uses the weights and biases of the preceding layers in the structure design to reach a conclusion. The model is then improved with conditions (3) and (4) for each layer independently:

W_n^(t+1) = W_n^t + m·ΔW_n^t − x·k·W_n^t − (x / N_t)·(∂C / ∂W_n)    (3)

B_n^(t+1) = B_n^t + m·ΔB_n^t − (x / N_t)·(∂C / ∂B_n)    (4)

where W_n means the weight, B_n means the bias, n means the layer number, k means the regularization parameter, x means the learning rate, N_t means the total number of sensor data sets, m means the momentum, t means the updating phase, and C means the cost function. The DLSNN cluster contains the following kinds of layers.

Step 1: Convolutional layer: Using condition (5), this layer completes the convolution of the input data with the kernel:

C_k = Σ_{j=1}^{M} y_n(j) · ĥ(k − j)    (5)

where y_n represents the reproduced segmented data, ĥ represents the filter, M represents the number of components in y, and C_k is the output vector.

Step 2: Sigmoid-based normalization layer: The technique of linearly modifying data to fit it within a given range is known as normalisation. The Z-score normalisation method is used to standardise data by transforming it linearly. The formula for Z-score normalisation is shown in Eq. (6):

Z_norm = (f − l) / r    (6)

Here, Z_norm is the normalized output, f is the sigmoid function value, l is the mean value of the convolutional layer output data, and r is the standard deviation of the values in the convolutional layer output. The convolutional layer output is normalized with the sigmoid function using Eq. (6). The sigmoid-based normalised output from this layer is sent into the pooling layer.
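A minimal sketch of the sigmoid-based Z-score normalisation of Eq. (6): each convolution output value is passed through the sigmoid, then shifted by the mean and scaled by the standard deviation of the layer output. The input vector here is hypothetical illustrative data, not the paper's dataset:

```python
import math
from statistics import mean, pstdev

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_z_normalize(conv_out):
    """Eq. (6): Z_norm = (f - l) / r, where f is the sigmoid of each
    convolution output value, and l, r are the mean and standard
    deviation of the convolutional layer output."""
    l, r = mean(conv_out), pstdev(conv_out)
    return [(sigmoid(v) - l) / r for v in conv_out]

# Hypothetical convolution layer output values.
print(sigmoid_z_normalize([0.5, 1.0, 1.5, 2.0]))
```

Because the sigmoid is monotonic, the normalised values preserve the ordering of the inputs while being bounded before pooling.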
This layer is expected to support the pooling layer by providing value-based normalised data.

Step 3: Pooling layer: The down-sampling layer is another name for this layer. To save computing effort and minimise overfitting, the pooling stage reduces the size of the output neurons from the convolution layer. The max-pooling algorithm selects only the highest value in each data map, resulting in fewer output neurons. Pooling layers are typically used after convolution layers to help simplify the information in the convolution layer's output.

Step 4: Fully connected layer: The activation function computes a probability distribution over the classes. Thus, the output layer uses the softmax function to find the preceding-layer outcome that best fits the clustered data:

y_j = e^(z_j) / Σ_k e^(z_k)    (7)

where z denotes the fully connected layer's outputs and y represents the resultant cluster. Here, the DLSNNC is adapted with the sigmoid-function-based normalization to counter over-fitting in the layers, resulting in the important clustering of sensor data to the fog-cloud computing layers. Our major purpose is to design workflow tasks that contain patients' healthcare data. Initially, the task request is produced and separated into numerous task requests so that execution durations may be reduced at a reasonable cost while staying within the user-specified deadline. The score-based workflow task scheduling algorithm selects only those task requests that match the minimal threshold of workflow tasks for scheduling. An existing scheduling algorithm is described in [27]. The flow chart in Fig. 3 describes our proposed score-based workflow task scheduling system. The steps of the score-based scheduling algorithm are described below:

Step 1: Submit the workflow task list, which includes patient healthcare information (T = T1, T2, T3, …, Tn).

Step 2: Contact the data centre to learn about the available virtual resources (VM = VM1, VM2, VM3, …, VMn).
Step 3: Assign a user-defined deadline constraint D, in the form of sub-deadlines for the various task requests, to the whole workflow application.

Step 4: Using the components' minimum sub-scores, determine the VMs' score value (SV):

SV = (X − l) / r

where X is the observed value, l is the mean of the sample tasks, and r is the standard deviation of the tasks.

Step 5: Repeat steps 6, 7, and 8 while the task list contains tasks to schedule; otherwise, return the task mapping.

Step 6: Select the lowest-scoring VM from the VM list that meets the task's threshold. The task threshold (p) is determined by the length of the instructions.

Step 7: The job is assigned to the selected VM if it can finish the work within the specified deadline; otherwise, the assignment is sent to the next lowest-scoring VM from the list of resources.

Step 8: Choose the next task from the list. Once all jobs have been scheduled, their mapping to VMs is complete.

The implementation of our proposed DLSNN clustering and score-based scheduling for cloud IoT applications is done in Python, using an online cloud healthcare dataset. Different performance estimates, such as latency and network usage, are measured to explore the performance of the proposed work. Finally, the average delay is estimated and compared with the existing FCFS [28], SJF [28], and BMO [29] techniques to prove the relevance of the proposed approach. In our fog computing solution for health informatics, data flows between the various tiers. In many circumstances, the amount of information, and thus the amount of time required, will differ; as a result, the latency varies. Latency is the difference between the time of completion of service and the time of commencement, as shown in Eq. (8):

L = (ST + PT + TQT) − IT    (8)

Here, L denotes latency, ST denotes the requested task's start time, PT denotes the requested task's processing time, TQT denotes the transmission and queuing time prior to the requested task, and IT is the requested job's initiation time.
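The scheduling steps above (Steps 1–8) can be sketched as a short greedy loop. This is an illustrative interpretation, not the paper's code: the task and VM fields (instruction length, MIPS capacity, threshold, sub-deadline) are hypothetical stand-ins, and "meets the threshold" is read here as the VM capacity covering the task's instruction-length threshold:

```python
from statistics import mean, pstdev

def z_scores(values):
    # Step 4: score value SV = (X - mu) / sigma for each observed value X.
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

def schedule(tasks, vms):
    """Greedy sketch of Steps 5-8: map each task to the lowest-scoring VM
    that meets the task's threshold and can finish before its sub-deadline."""
    scores = z_scores([vm["mips"] for vm in vms])
    # Step 6: try VMs in ascending score order.
    ranked = sorted(zip(scores, vms), key=lambda sv: sv[0])
    mapping = {}
    for task in tasks:
        for _, vm in ranked:
            runtime = task["length"] / vm["mips"]
            # Step 7: assign only if the threshold and sub-deadline are met.
            if vm["mips"] >= task["threshold"] and runtime <= task["deadline"]:
                mapping[task["id"]] = vm["id"]
                break
    return mapping

# Hypothetical workflow tasks and VMs for illustration.
tasks = [{"id": "T1", "length": 400, "threshold": 100, "deadline": 5.0},
         {"id": "T2", "length": 900, "threshold": 200, "deadline": 4.0}]
vms = [{"id": "VM1", "mips": 100}, {"id": "VM2", "mips": 300}]
print(schedule(tasks, vms))  # → {'T1': 'VM1', 'T2': 'VM2'}
```

Here T1 fits on the lowest-scoring VM1, while T2's threshold and deadline push it to VM2, mirroring the fallback of Step 7.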
Table 2 shows the latency comparison between the cloud-only and the combined cloud and fog computing layers for data sizes of 500, 1000, 1500, 2000, 2500, and 3000. In addition, Fig. 4 depicts the corresponding latency comparison graph. The second evaluation metric is network usage (N_usage). As the number of devices on the network grows, so does network usage, resulting in network congestion; as a result of the congestion, applications running on the cloud network perform poorly. By dispersing the load across intermediary fog devices, fog computing helps reduce network congestion. Network utilization is calculated using Eq. (9):

N_usage = Σ_{i=1}^{N} L_i × S_i    (9)

where N is the total number of tasks, L_i is the latency, and S_i is the network size of the ith task. Table 3 states the network usage (in GB) in the cloud-only and the combined cloud and fog computing layers for data sizes 500–3000, and Fig. 5 shows the corresponding network usage comparison graph. The average delay is based on the difference between the starting execution time ST and the ending execution time ET of the requested tasks, as noted in Eq. (10):

Average delay = (1 / N) Σ_{i=1}^{N} (ET_i − ST_i)    (10)

Table 4 states the average delay of FCFS, SJF, and BMO compared with the proposed technique for data sizes 500–3000. Figure 6 shows the average delay comparison graph: as the average waiting time increases, the average delay of FCFS, SJF, and BMO also increases, while the proposed technique decreases the average delay. We propose a fog-cloud computing technique for health monitoring systems in this paper. The purpose of the study reported in this paper is to improve service quality. In this work, DLSNN clustering and score-based scheduling are used to improve prediction. According to the simulation results, the proposed solution improves quality of service in the cloud/fog computing environment in terms of latency and network consumption.
Additionally, the proposed technique outperforms the existing approaches in terms of average delay. Different encryption techniques can be incorporated into the implementation of the proposed architecture to improve the security of the system.

Fig. 6: Average delay of proposed approach compared to existing techniques

References

[1] Real-time signal quality-aware ECG telemetry system for IoT-based healthcare monitoring
[2] Internet of Health Things: Toward intelligent vital signs monitoring in hospital wards
[3] Maximizing lifetime in wireless sensor network for structural health monitoring with and without energy harvesting
[4] Implementation of a battery-free wireless sensor for cyber-physical systems dedicated to structural health monitoring applications
[5] Towards fog-driven IoT eHealth: promises and challenges of IoT in medicine and healthcare
[6] Medical warning system based on Internet of Things using fog computing
[7] An E-health system for monitoring elderly health based on Internet of Things and Fog computing
[8] IoT application modules placement in heterogeneous fog-cloud infrastructure
[9] Advantages of using fog in IoT applications
[10] Energy efficient fog-based healthcare monitoring infrastructure
[11] Fog computing in healthcare: a review and discussion
[12] Fog assisted-IoT enabled patient health monitoring in smart homes
[13] Exploiting smart e-Health gateways at the edge of healthcare internet-of-things: a fog computing approach
[14] Integrating IoT and fog computing for healthcare service delivery. In: Components and services for IoT platforms
[15] Enabling technologies for fog computing in healthcare IoT systems
[16] An IoT patient monitoring based on fog computing and data mining: cardiac arrhythmia use case
[17] Efficient task offloading for IoT-based applications in fog computing using ant colony optimization
[18] BADEP: bandwidth and delay efficient application placement in fog-based IoT systems
[19] COVID-SAFE: an IoT-based system for automated health monitoring and surveillance in post-pandemic life
[20] Remote health monitoring system of elderly based on Fog to Cloud (F2C) computing
[21] Methods of resource scheduling based on optimized fuzzy clustering in fog computing
[22] Scheduling internet of things requests to minimize latency in hybrid fog-cloud computing
[23] A novel fog computing approach for minimization of latency in healthcare using machine learning
[24] Improving quality-of-service in cloud/fog computing through efficient resource allocation
[25] Latency-aware application module management for fog computing environments
[26] Energy and delay efficient fog computing using caching mechanism
[27] Score based deadline constrained workflow scheduling algorithm for cloud systems
[28] A job scheduling algorithm for delay and performance optimization in fog computing
[29] Blockchain-based mobility-aware offloading mechanism for fog computing services