key: cord-0028070-vfj0hjx2
authors: Pareek, Piyush Kumar; Sridhar, Chethana; Kalidoss, R.; Aslam, Muhammad; Maheshwari, Manish; Shukla, Prashant Kumar; Nuagah, Stephen Jeswinde
title: IntOPMICM: Intelligent Medical Image Size Reduction Model
date: 2022-02-25
journal: J Healthc Eng
DOI: 10.1155/2022/5171016
sha: 3782d48bf7267b2c78baf42847469073ba474422
doc_id: 28070
cord_uid: vfj0hjx2

Due to the increasing number of medical images being used for the diagnosis and treatment of diseases, lossy or improper image compression has become more prevalent in recent years. The compression ratio and image quality, commonly quantified by PSNR values, are used to evaluate the performance of a lossy compression algorithm. This article introduces the IntOPMICM technique, a new image compression scheme that combines GenPSO and VQ. A combination of particle swarm and genetic algorithms was used to create the codebook. PSNR, MSE, SSIM, NMSE, SNR, and CR indicators were used to test the suggested technique on real-time medical images. According to the experimental data, the suggested IntOPMICM approach produces higher PSNR and SSIM values for a given compression ratio than existing methods. Furthermore, for a given compression ratio, the suggested IntOPMICM approach produces lower MSE, RMSE, and SNR values than existing methods.

Through telemedicine, patients can receive medical care even if they are in a remote location. This technology allows remote patients to communicate with local hospitals and receive treatment without leaving their homes [1]. However, storing and transferring large numbers of images in a hospital consumes a large amount of bandwidth [2, 3]. To remedy the difficulty described above, compression is recommended. Images are compressed using a lossless compression approach in medical applications such as X-ray imaging, because every piece of information is vital [4]. In contrast, lossy compression techniques are used to compress DC or video images. The wavelet transform is a highly efficient technique for encoding and decoding; techniques such as the cosine transform deliver superior results but handle data in a serial way, requiring more processing time. Lossy compression based on artificial neural networks is recommended to overcome these flaws [5]. A "bottleneck" network is one that has a big input layer feeding into a small hidden layer, which is then merged into a large output layer [6]. The bottleneck artificial neural network is a new image coder that processes data continuously, using less time and thus being more efficient than other methods. Figure 1 shows the bottleneck artificial neural network's topology. Through telemedicine, patients can receive medical care even if they are in a remote location. This technology allows remote patients to communicate with local hospitals and receive treatment without leaving their homes [7]. However, storing and transferring large numbers of images in a hospital consumes a large amount of bandwidth [8]. Through telemedicine, patients can receive medical attention even if they are in a remote location. However, unlike in traditional hospitals, medical images are often large and require a large amount of bandwidth to store and transmit [9]. The output of the hidden layer, on the other hand, is a decimal value between -1 and 1. This indicates that if we combine the output of the hidden layer with another compression technique, we can compress the data even further.
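To make the bottleneck topology above concrete, the following NumPy sketch pushes one 8 × 8 image block through a wide-narrow-wide network and returns both the narrow hidden code and the reconstruction. The 64-16-64 layer sizes, the tanh activation, and the function names are illustrative assumptions for this sketch rather than the exact configuration of the paper's network.

```python
import numpy as np

def init_bottleneck(n_in=64, n_hidden=16, rng=None):
    """Randomly initialise a bottleneck network (layer sizes are assumptions)."""
    rng = np.random.default_rng(rng)
    return {
        "v": rng.normal(0.0, 0.1, (n_in, n_hidden)),   # input -> hidden weights
        "v0": np.zeros(n_hidden),                      # hidden biases
        "w": rng.normal(0.0, 0.1, (n_hidden, n_in)),   # hidden -> output weights
        "w0": np.zeros(n_in),                          # output biases
    }

def forward(net, x):
    """Forward pass: the narrow hidden layer is the compressed representation."""
    z = np.tanh(x @ net["v"] + net["v0"])   # hidden activations, bounded in (-1, 1)
    y = np.tanh(z @ net["w"] + net["w0"])   # reconstruction of the input block
    return z, y

# Example: push one 8x8 block (64 pixels scaled to [-1, 1]) through the network.
net = init_bottleneck()
block = np.random.default_rng(0).uniform(-1, 1, 64)
hidden, recon = forward(net, block)
print(hidden.shape, recon.shape)   # (16,) (64,)
```

The 16 bounded hidden activations are exactly the quantity that the rest of the paper compresses further with vector quantization.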
As a result, the codebook in this study was developed using the new GenPSO hybrid approach [10]. The codebook is then utilized to condense the hidden neurons as a vector volume. Because of its capacity to reduce input patterns to simpler models with fewer components, this artificial neural network is better suited for image compression [11]. It offers the benefits of a simple structure, stability, and easy hardware implementation, but its network training involves large-scale optimization, which affects the algorithm's effectiveness over time owing to its drawbacks: long training duration and sensitivity to poor initial values. To address this disadvantage, an optimization strategy is required. As a result, the network is optimized using a GenPSO hybrid approach in this article [12]. Figure 2 depicts our proposed bottleneck neural network design for image compression. The following are the primary contributions of this study. To compress medical images, a bottleneck-type artificial neural network was designed.
(i) GenPSO, a novel hybridized approach for optimizing neural network parameters, is developed.
(ii) The hidden layer of a bottleneck-type artificial neural network is compressed more effectively with GenPSO, resulting in a higher compression ratio.
(iii) A high compression ratio and superior performance over existing methods are achieved.
(iv) For a real-time medical data set, the proposed technique produces good results.
This manuscript is arranged as follows: the second section reviews recent literature that is pertinent; the suggested architecture is described in depth in Section 3; the experimental results are presented in Section 4, which compares the performance of GenPSO with other previously reported compression methods; and Section 5 contains the conclusion and recommendations for future work. Various traditional approaches such as JPEG [9], JPEG 2000 [10], SPIHT [11], EBCOT [12], and the lifting scheme [13] have been developed for medical image compression, as well as current approaches such as artificial intelligence-based approaches, including neural network-based methods. Artificial intelligence-based technologies are found to be superior to traditional approaches when dealing with incomplete or noisy data. These methods are not only computationally efficient, but they are also near-perfect in terms of decision-making accuracy. The authors of [14] proposed a neural network-based technique based on edge preservation. In this network, quantization levels were used to represent compressed patterns, and the compression ratio was determined by computing the average mean square value. The authors of [15] proposed a neural network with vector quantization that created a set of codes using the self-organizing feature map algorithm. As a result, all of the blocks created by each code vector were then modelled using a cubic surface to provide a reconstructed image with significant perceptual fidelity. The authors presented a multi-resolution-based neural network compression technique in [16], which was well suited to low bit rate coding with coarsely quantized digital coefficients. With the use of reference coefficients, a filter bank was also used to accurately synthesize the signal. This method was superior to conventional methods only for low bit rate compression; however, at high bit rates, its performance deteriorated dramatically. The authors of [17] presented a DCT-based neural network compression approach.
To associate the grey image intensity, i.e., the pixel value, during compression, DCT coefficients were employed in conjunction with a neural network based on supervised learning, which was generally trained with a single ideal compression ratio. The authors reported a lossless compression solution for medical images using a neural network and an enhanced back-propagation algorithm in [18], which demonstrated significant compression performance but only with low PSNR. The authors of [19] proposed a medical compression algorithm that included a neural network and a Haar wavelet with nine compression ratios. The main drawback of this technique was that the compressed image quality was poor, making it unsuitable for medical image processing. The authors of [20] proposed an image compression technique for medical images that used wavelet and radial basis function neural network (RBFNN) features, as well as vector quantization. The medical images were decomposed at different resolutions with distinct frequency bands and then compressed using wavelet filters to create a set of sub-bands. Then, based on their statistical features, several quantization and coding procedures were used. The RBFNN was used to compress the high-frequency coefficients, whereas differential pulse code modulation (DPCM) was used to compress the low-frequency band coefficients. In addition, the RBFNN hidden layer coefficients were vector quantized to increase the compression ratio without damaging the reconstructed image. The authors of [21] proposed a compression method based on artificial neural networks. The image content was included in the compression process, resulting in dramatically enhanced compression quality. An upstream module grouped the appearing image contents and identified the compressor using an unsupervised trained artificial neural network. The compressors utilized in this approach were then set up as artificial neural networks that had been trained using supervised learning and evolved into specialists for certain image contents during their training. A dataset is a collection of all the sub-images of a single image, which is normally partitioned into a number of such sub-images. Our dataset also contains similarity among specific image contents, which is thoroughly exploited by the proposed approach. As a result, this compression technique provided superior compression performance and image quality when reconstructed. The dual-level DPCM, which consisted of a linear DPCM followed by a nonlinear DPCM known as the context-adaptive switching neural network predictor (CAS-NNP), was presented as a lossless compression method for medical images in [22]. Based on the context texture information of the predicted pixel present in the image, this CAS-NNP switched among three neural network predictors. As a result, this strategy resulted in a smaller prediction error. For compression, the authors suggested a hybrid predictive wavelet coding with a neural network in [23]. Interpixel redundancy was reduced by applying predictive coding to preprocess the image data values, and then a discrete wavelet transform was applied to the difference between the predicted and original values, known as the error. The properties of both predictive and discrete wavelet coding were combined in this way. The nonlinear neural network predictor performed the predictive coding procedure in this case. As a result, compared to JPEG 2000, this compression approach produced higher-quality compressed images at high decomposition levels.
The authors of [24] suggested an image compression technique based on artificial neural networks, in which the original image was coded in terms of both pixel coordinates and pixel values. To do so, the image was first read as a matrix with dimensions m × n, and then a pair of pixel values and counts was created by searching for co-similar pixel coefficients. For the obtained co-similar pairs, the suggested procedure is quite similar to the run-length encoding technique. These pairs were then fed into a neural network, which was used to compress the image losslessly. The authors of [25] proposed an image compression approach in which the images were digitized first and then wavelet transforms were applied. The result, i.e., the transformation coefficients, was then vector quantized to the nearest integer values. Following quantization, the coefficients were encoded using Huffman coding, which compressed the tablet images and the tablet strip images based on the exact symbol frequencies. As a result, a variable-length code table was created, with source symbols encoded, stored, and then sent via the channel for decoding. Unsupervised neural networks were used to improve image quality and remove block artefacts in this case. The authors of [26] suggested a medical image compression method based on the radial basis function of a neural network. The neural network had a two-layer feed-forward structure, with hidden nodes implemented using a series of radial basis functions and output nodes implemented using linear summation functions, similar to a multilayer perceptron (MLP). The entire network training was broken into two stages. The weights from the input layer to the hidden layer were determined in the first stage, and the weights from the hidden layer to the output layer were determined in the second stage. Additionally, neurons were added to the network's hidden layer until the MSE target was met. The network created here was excellent at interpolation; approximating functions can also be done with the radial basis network. The authors of [27] presented a compression method based on the back-propagation of discrete wavelet transform-fed neural networks. The images were initially compressed using the DWT technique, and then the images from the DWT step were fed into a back-propagation neural network for additional compression. The PSNR value and compression ratio of the final compressed image were both improved. The authors introduced a deep neural network (DNN) compression approach based on rectified linear units (ReLUs) in [28]. The capabilities of DNNs were used in this technique to find a substantial estimate of the link between compression and decompression. The ReLUs sped up the encoding and decoding processes, increased the DNN's ability to generalize, established efficient gradient propagation, induced sparsity, and made the method computationally efficient. As a result, ReLUs improved the ability of these neural networks to conduct real-time compression. The authors of [28] proposed a compression strategy based on a feed-forward neural network. Image quality was preserved via predictive image coding. The medical image was first partitioned into diagnostically important regions (DIRs) and non-DIRs using a graph-based segmentation strategy in this method. The prediction step was then carried out using identical feed-forward neural networks (FFNNs) at both the compression and decompression stages. The gravitational search method and particle swarm techniques were used to train the FFNNs.
Then, the performance of these two FFNNs was assessed using lossless and near-lossless methods. The prediction error sequence, i.e., the difference between the actual and predicted pixel values, was further reduced using Markov model-based arithmetic coding.

The following describes the image compression technique that utilizes our GenPSO approach. In our work, we use vector quantization for image compression. The image pixels are encoded against a codebook by vector quantization. Creating a codebook can be thought of as a search problem, with the goal of finding the best solution, since a representative codebook can be used to compress images properly. To implement VQ, we must meet two requirements: (1) a method for creating the best codebook (the algorithm for this task is listed below) and (2) a codebook that is as representative as possible. As a result, the new GenPSO approach is employed to conduct the search for the optimal codebook. The grey-scale images serve as the source of the training set. That is, the image is chopped into identically sized blocks that form the training set. The training set is used to pick the most representative vectors, which are then classified and placed in the codebook according to the classification and selection procedure. The following procedure is used to build a codebook with GenPSO: the GenPSO structure uses real-number encoding and treats each chromosome as a single candidate solution. It generates the initial population from the training vectors of the image blocks. During GenPSO evolution, every member of the population represents a feasible solution. In this study, we employ the selection and crossover operators and the exchange operation to produce the best codebook, using the PSNR as the fitness value and iterating until the change falls below a measurable threshold.

The process for optimizing the network parameters using our GenPSO method is as follows. In our hybrid GA-PSO approach, the GA solution determines the starting population of the PSO. The number of iterations is divided evenly between GA and PSO: the GA governs the first half of the iterations, and its solution is supplied as the initial PSO swarm; PSO owns the remainder of the iterations. As a result, the hybrid can solve the problem of sluggish network convergence and reduce the number of BP algorithm iterations. The suggested strategy therefore combines both global and local search. After that, the optimal position Pg is constructed for each particle, and these positions are subsequently assigned to the BP algorithm to refine the search. The first step in the process is thus to determine the starting population of the PSO in the hybrid approach to maximize the efficiency of the parameter network; the iterations are then divided into two groups, namely the GA stage and the PSO stage.

The image compression problem is solved using the neural network architecture depicted in Figure 3. The input from a high number of input neurons is sent to a smaller number of neurons in the hidden layer, whose outputs are then provided to a larger number of neurons in the network's output layer. This type of network is a feed-forward neural network. The network's architecture consists of 64 input neurons and 16 hidden neurons. The encoded information collected by the hidden layer's neurons is sent to the decoding stage, which reconstructs the image; the compressed representation is thus composed of the 16 hidden nodes.
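Returning to the GA-to-PSO handoff described above, here is a minimal NumPy sketch of the hybrid: a GA stage runs the first half of the iterations, its final population seeds the PSO swarm, and the PSO stage finishes the search with pbest/gbest updates. The population size, mutation rate, inertia and acceleration coefficients, and the quadratic test fitness are illustrative assumptions, not the paper's settings; in the actual method the fitness would come from the reconstruction error or PSNR of the network weights or codebook being optimized.

```python
import numpy as np

def genpso(fitness, dim, pop=30, iters=100, rng=None):
    """Hybrid search: the GA runs the first half of the iterations and its final
    population seeds the PSO swarm that runs the second half."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(-1.0, 1.0, (pop, dim))

    # --- GA stage (first half): tournament selection, arithmetic crossover, mutation ---
    for _ in range(iters // 2):
        f = np.array([fitness(x) for x in X])
        children = []
        for _ in range(pop):
            i, j = rng.integers(0, pop, 2)
            p1, p2 = (X[i], X[j]) if f[i] < f[j] else (X[j], X[i])
            a = rng.random()
            child = a * p1 + (1.0 - a) * p2                                    # crossover
            child = child + rng.normal(0.0, 0.05, dim) * (rng.random(dim) < 0.1)  # mutation
            children.append(child)
        X = np.array(children)

    # --- PSO stage (second half), seeded with the GA population ---
    V = np.zeros_like(X)
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters - iters // 2):
        r1, r2 = rng.random((2, pop, dim))
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
        X = X + V
        f = np.array([fitness(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# Example: minimise a simple quadratic as a stand-in for the network/codebook error.
print(genpso(lambda x: float(np.sum(x ** 2)), dim=8, rng=0))
```

Because the PSO swarm starts from the GA's evolved population rather than from random positions, the local search begins near promising regions of the search space, which is the convergence benefit the text attributes to the hybrid.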
To control the output of the hidden layer, we need to combine that output with other compression techniques. This method is proposed as a hybrid approach that combines neural networks and GenPSO. The weights generated by the neural network are updated using the GenPSO algorithm in this stage. A chromosome in the initial population is made up of evenly distributed random numbers, and each chromosome represents the weights of the neural network. The crossover used here is a crossover on a node: two offspring are created from the parents, and an active node, either from the hidden layer or one of the outputs, is randomly chosen with equal probability. This node is referred to as the crossover node. The values of all input weights for that particular node are exchanged with the other parent. This update can alternatively be thought of as a node change, in which all incoming weights of the randomly chosen active node are swapped. Finally, the PSO method is used. The GA solution is used to establish the initial PSO population; as a result, the hybrid can solve the BP network's sluggish convergence problem. To obtain the best weight values, the following procedure is applied: if the difference between the best chromosomes over the last N generations is less than the given threshold, the latest generation's best chromosome is taken as the optimum weight; otherwise, the procedure continues with the following steps.
(x) The genetic algorithm determines the initial particle population, as well as the particles' positions and velocities.
(xi) The fitness value of every particle is estimated based on the fitness function and the aim of the optimization problem.
(xii) The fitness value of each particle is compared with its pbest. If the current value is better than pbest, the current fitness value is set as the new pbest; otherwise, the pbest value remains unchanged.
(xiii) The population's current best fitness value is determined from all fitness values. This value is then compared to gbest, and if it is better than gbest, the current value is set as the new gbest; otherwise, gbest is left unchanged.
(xiv) Each particle's position and velocity are updated.
(xv) If a solution is identified that meets the predetermined minimum error, or the maximum number of iterations is reached, the procedure stops; otherwise, steps (x) through (xv) are repeated.

Training of the Neural Network. The back-propagation algorithm for the neural network follows the stages below to achieve high-quality, real-time image compression.
Step 1: first and foremost, the weights are initialized with small random numbers. Each input neuron receives the input signal and passes it on to the hidden units.
Step 2: each hidden unit sums its weighted input signals, $z_{\text{in},j} = v_{0j} + \sum_{i=1}^{n} x_i v_{ij}$. On application of the activation function, the output $z_j = f(z_{\text{in},j})$ is obtained, and this signal is sent to all neurons of the output layer.
Step 3: now, each output neuron ($y_k$, $k = 1, \ldots, m$) adds its weighted inputs, $y_{\text{in},k} = w_{0k} + \sum_{j=1}^{p} z_j w_{jk}$, and applies its activation function to calculate the output $y_k = f(y_{\text{in},k})$.
Step 4: each output unit compares its calculated output with the corresponding target, and the error term is calculated as $\sigma_k = (t_k - y_k) f'(y_{\text{in},k})$.
Step 5: the error from the output is distributed back to the hidden layer; each hidden unit ($z_j$, $j = 1, \ldots, p$) sums its delta inputs from the units in the output layer, $\sigma_{\text{in},j} = \sum_{k=1}^{m} \sigma_k w_{jk}$, and its error term is calculated as $\sigma_j = \sigma_{\text{in},j} f'(z_{\text{in},j})$.
Step 6: each output neuron ($y_k$) updates its bias and weights ($j = 1, \ldots, p$); the weights are corrected by $\Delta w_{jk} = \alpha \sigma_k z_j$, and similarly, the bias is corrected by the term $\Delta w_{0k} = \alpha \sigma_k$.
Step 7: the newly updated weights are given by $w_{jk}(\text{new}) = w_{jk}(\text{old}) + \Delta w_{jk}$ and $w_{0k}(\text{new}) = w_{0k}(\text{old}) + \Delta w_{0k}$.
Step 8: the weight correction term for the hidden neurons is $\Delta v_{ij} = \alpha \sigma_j x_i$, and the bias correction term for the hidden neurons is $\Delta v_{0j} = \alpha \sigma_j$.
Step 9: the newly updated weights are $v_{ij}(\text{new}) = v_{ij}(\text{old}) + \Delta v_{ij}$ and $v_{0j}(\text{new}) = v_{0j}(\text{old}) + \Delta v_{0j}$.

Together with the GenPSO algorithm, the feed-forward back-propagation algorithm is used to train the neural network. Vector quantization techniques are used to increase the compression ratio. The construction of a codebook that maximizes PSNR while minimizing bit rate and computing time is a critical challenge in VQ image compression. We examine the factors that influence the VQ algorithm in this part. A vector quantizer of dimension k and size N is a mapping Q from a vector in the k-dimensional Euclidean space $R^k$ into a finite set C containing N outputs called code words: $Q: R^k \to C$, where $C = \{y_1, y_2, \ldots, y_N\}$ and $y_i \in R^k$ for each $i \in J \equiv \{1, 2, \ldots, N\}$. The set C is the codebook. The quantizer's resolution is $r = (\log_2 N)/k$, which measures the number of bits per vector component needed to represent the input vector and indicates how well the codebook is designed. A 2 × 2 block is used to generate code words in a straightforward way. To achieve a lower bit rate, we must increase k, that is, the block size. A large block size, on the other hand, will result in a reduced PSNR because the coding data are generated from the block's representative pixel intensities. In contrast, reducing the block size improves the PSNR because small-block coding better captures the variation of intensity values within the block, although a high bit rate is the price paid in this instance. In this study, we propose a VQ image encoding that boosts PSNR while maintaining the same bit rate. To achieve these conditions, the initial codebook must be created using an optimization technique: as before, the image pixels are encoded against the codebook by vector quantization, creating a representative codebook is treated as a search problem, and the new GenPSO approach is employed to conduct the search for the optimal codebook.

The image is divided into 4 × 4 pixel blocks during codebook generation. These blocks are transformed into k-dimensional vectors, which are referred to as training vectors, and the collection of training vectors is referred to as a training set of size N [8]. N is determined by the equation $N = (m \times m)/k$, where k is the vector size and m × m is the number of rows and columns of pixels in the input image. The step p at which code words are selected from the training set is calculated as $p = N/M$, where N is the size of the training set and M is the desired size of the codebook; the training vectors at every p-th location of the training set are selected to form the codebook. In the coding step, the nearest code word for each training vector is determined by calculating the distance $d(Y, X_i) = \sqrt{\sum_{j=1}^{k} (Y_j - X_{ij})^2}$, which must be a minimum over $i$, where $i = 1$ to M. Each block of the input image is then replaced with the index of the code word that minimizes this distance, and the index map is the collection of such indices.
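To illustrate the codebook-and-index-map encoding just described, the short sketch below blocks a grayscale image into 4 × 4 patches (k = 16), assigns each patch to its nearest code word by Euclidean distance, and reports the bit rate r = log2(N)/k. The codebook here is built with a plain k-means routine purely as a stand-in for the GenPSO-optimised codebook, and all function names, block sizes, and codebook sizes are illustrative assumptions.

```python
import numpy as np

def image_to_vectors(img, block=4):
    """Cut an m x m image into non-overlapping block x block patches (training vectors)."""
    h, w = img.shape
    patches = img[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block).swapaxes(1, 2)
    return patches.reshape(-1, block * block).astype(float)

def vq_encode(vectors, codebook):
    """Replace every vector by the index of its nearest code word (Euclidean distance)."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def kmeans_codebook(vectors, n_codes=64, iters=20, rng=0):
    """Simple k-means codebook, used here only as a stand-in for GenPSO optimisation."""
    rng = np.random.default_rng(rng)
    codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)]
    for _ in range(iters):
        idx = vq_encode(vectors, codebook)
        for c in range(n_codes):
            if np.any(idx == c):
                codebook[c] = vectors[idx == c].mean(axis=0)
    return codebook

img = np.random.default_rng(1).integers(0, 256, (64, 64))
vecs = image_to_vectors(img)                 # k = 16 values per training vector
codebook = kmeans_codebook(vecs, n_codes=64)
indices = vq_encode(vecs, codebook)          # the index map that is stored/transmitted
rate = np.log2(len(codebook)) / vecs.shape[1]
print(f"bit rate r = {rate:.3f} bits per pixel")
```

Storing the index map plus the codebook instead of the raw pixels is what yields the compression; swapping the k-means stand-in for a GenPSO search over codebooks is the change the paper proposes.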
Now, the compressed image is stored or transmitted as an index map and a codebook. At the decoder, each index in the index map is replaced by the corresponding code word to regenerate the image from the compressed representation. The initial codebook in the proposed GenPSOVQ technique is generated by the GenPSO algorithm. The procedure for determining the best codebook is as follows:
(i) A population of solutions $X_i$ in the region $R^n$, $i = 1, \ldots, M$, is initialized.
(ii) Using the parent population distribution, two parents are chosen at random, and two offspring are generated using a crossover operator. The initial population and the newly generated one are merged, and this mixture is shuffled at random to ensure that parents and children are suitably mingled.
(vii) Each solution $X_m$, $m = 1, \ldots, 2M$, is compared with 10% of the other solutions randomly selected from the mixed population.
(viii) The solutions with the most wins are retained as the next generation of parents.
(ix) If the difference between the best chromosomes over the last N generations is less than the given threshold, the latest generation's best chromosome is taken as the optimum; otherwise, the procedure continues with the next step.
(x) The genetic algorithm determines the initial particle population, as well as the particles' positions and velocities.
(xi) The fitness value of every particle is estimated based on the fitness function and the aim of the optimization problem.
(xii) The fitness value of each particle is compared with its pbest. If the current value is better than pbest, the current fitness value is set as the new pbest; otherwise, the pbest value remains unchanged.
(xiii) The population's current best fitness value is determined from all fitness values. This value is then compared to gbest, and if it is better than gbest, the current value is set as the new gbest; otherwise, gbest is left unchanged.
(xiv) The velocity and position of each particle are updated.
(xv) If a solution is identified that meets the predetermined minimum error, or the maximum number of iterations is reached, the procedure stops; otherwise, steps (x) through (xv) are repeated.

Vector Quantization Neural Network Approach. The following steps explain the suggested technique for vector quantization using the back-propagation neural network; a minimal end-to-end sketch of these steps is shown below.
Step 1: the image is cut into 8 × 8 blocks. These blocks are square, and a square matrix is simple to work with.
Step 2: the pixel values of the matrix (0 to 255) are read.
Step 3: these values are applied to the feed-forward neural network. Because each block is 8 by 8, this network must contain 64 input nodes. The training algorithm described in Section 3 is used to train the neural network.
Step 4: the GenPSO-optimized weights and biases feed a hidden layer with 2, 4, 8, 16, 32, or 64 hidden nodes. As described in Section 3.4, these hidden-layer values are encoded using GenPSOVQ.
Step 5: after completing the training, the 8 × 8 blocks are selected in the correct order.
Step 6: the digital bits are converted back into real numbers.
Step 7: finally, at the network's output layer, the recall phase produces the decompressed (output) images. During the compression and decompression process, a pixel is converted to a real value and a real value is converted back to a pixel.

PSNR, MSE, SSIM, NMSE, SNR, and CR [28] values are used to assess the performance of the proposed IntOPMICM approach on real-time medical images. These findings are compared with those of known algorithms such as FFNN, VQFFNN, optimized FFNN, and optimized VQFFNN.
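As referenced above, here is a minimal end-to-end sketch that chains the pieces together: 8 × 8 blocks go through the bottleneck network, the hidden activations are vector-quantized to an index map, the decoder looks the code words back up, and the result is scored with PSNR. It reuses the illustrative helpers from the earlier sketches (init_bottleneck, forward, image_to_vectors, vq_encode, kmeans_codebook), so it is an assumed composition under those same assumptions, not the authors' implementation; with the untrained network used here the PSNR will be low, and in the actual method the weights and codebook would first be optimized with GenPSO and back-propagation.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between the original and reconstructed images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compress_decompress(img, net, codebook):
    """Illustrative pipeline: 8x8 blocks -> bottleneck hidden layer -> VQ -> decode."""
    blocks = image_to_vectors(img, block=8) / 127.5 - 1.0       # pixels scaled to [-1, 1]
    hidden = np.array([forward(net, b)[0] for b in blocks])     # compressed representation
    indices = vq_encode(hidden, codebook)                       # what would be stored/sent
    decoded = codebook[indices]                                 # decoder: look up code words
    recon = np.array([np.tanh(h @ net["w"] + net["w0"]) for h in decoded])
    recon_img = (recon + 1.0) * 127.5
    side = img.shape[0] // 8                                    # assumes a square image
    return recon_img.reshape(side, side, 8, 8).swapaxes(1, 2).reshape(img.shape)

# Example run on a random stand-in for a medical image.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64)).astype(float)
net = init_bottleneck(n_in=64, n_hidden=16)
train_hidden = np.array([forward(net, b)[0]
                         for b in image_to_vectors(img, block=8) / 127.5 - 1.0])
codebook = kmeans_codebook(train_hidden, n_codes=32)
print("PSNR:", psnr(img, compress_decompress(img, net, codebook)))
```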
Figure 4 shows a selection of images from a real-time medical database [29]. For the assessment of the compression approaches, this study employs six metrics, namely PSNR, MSE, SSIM, NMSE, SNR, and CR. To compare the quality of the compressed and original images, the peak signal-to-noise ratio (PSNR) is utilized. The formula for PSNR is $\mathrm{PSNR} = 10 \log_{10}\!\left( 255^2 \Big/ \tfrac{1}{HW} \sum_{x=1}^{H} \sum_{y=1}^{W} \left( f(x, y) - g(x, y) \right)^2 \right)$, where H and W are the image's height and width, respectively, and f(x, y) and g(x, y) are the grey levels in the original and compressed images, respectively, at coordinates (x, y). The structural similarity index is a means of determining how similar the compressed and original images are: $\mathrm{SSIM}(y, \hat{y}) = \frac{(2\mu_y \mu_{\hat{y}} + c_1)(2\sigma_{y\hat{y}} + c_2)}{(\mu_y^2 + \mu_{\hat{y}}^2 + c_1)(\sigma_y^2 + \sigma_{\hat{y}}^2 + c_2)}$, where $\hat{y}$ is the compressed image, y is the original image, μ is the mean, σ² is the variance, and $\sigma_{y\hat{y}}$ is the covariance. The difference between a compressed image [30] and the original image is measured using the mean square error (MSE), calculated as $\mathrm{MSE} = \frac{1}{HW} \sum_{x=1}^{H} \sum_{y=1}^{W} \left( Y(x, y) - \hat{Y}(x, y) \right)^2$, where $\hat{Y}$ is the compressed image and Y is the original image. The root-mean-square error (RMSE) is a frequently used measure of the difference between compressed image values and original image values, $\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$, where $\hat{Y}$ is the compressed image and Y is the original image.

Particles of GenPSO with a population size of 50. A 2-2-1 feed-forward architecture was designed, and the error value evolves according to the GenPSO method set out above. The population size is 50, and the termination rule is that the best chromosome's error over 50 successive generations changes by less than 0.00001, as shown in Figure 6. The particle distribution of the best chromosome for the 2-2-1 architecture, developed with the same GenPSO method, population size, and termination rule, is shown in Figure 7. The evolution of the error value for the 2-2-1 feed-forward architecture under the same settings is shown in Figure 8.

Image-based processing is becoming increasingly used in a variety of applications. There are numerous approaches that give satisfying results to some extent. However, with evolving technology, there is still a lot of room for growth in this field. The IntOPMICM approach, which combines GenPSO and VQ, has been introduced as a novel image compression scheme. A combination of particle swarm and genetic algorithms was used to create the codebook. PSNR, MSE, SSIM, RMSE, SNR, and CR indicators were used to test the suggested technique using real-time medical imaging. According to experimental data, the suggested IntOPMICM approach produces higher PSNR and SSIM values for a given compression ratio than existing methods. Furthermore, for a given compression ratio, the suggested IntOPMICM approach produces lower MSE, RMSE, and SNR values than existing methods. Various image-based processing techniques are becoming increasingly popular in various applications. IntOPMICM, which combines VQ and GenPSO, is a novel algorithm that combines genetic and particle swarm search. It was tested using real-time medical images.

Data Availability: the data that support the findings of this study are available on request from the corresponding author. The authors of this manuscript declare that they do not have any conflicts of interest.
Convergence Analysis of codebook generation techniques for vector quantization using kmeans clustering techniques
The JPEG still picture compression standard
Gaussian elimination-based novel canonical correlation Analysis method for EEG motion artifact removal
A tutorial on modern lossy wavelet image compression: foundations of JPEG 2000
Efficient FPGA implementation of DWT and modified SPIHT for lossless image compression
High performance scalable image compression with EBCOT
Design and development of image security technique by using cryptography and steganography: a combine approach
Neural network-based image compression with lifting scheme and RLC
Design of vector quantizer for image compression using self-organizing feature map and surface fitting
Multiobjective genetic algorithm and convolutional neural network based COVID-19 identification in chest X-ray images
Image compression with a multiresolution neural network
Neural networks arbitration for optimum DCT image compression
Stock price prediction using technical indicators: a predictive model using optimal deep learning
Lossless compression of medical images using Hilbert space-filling curves
Medical radiographs compression using neural networks and haar wavelet
Neuro-wavelet based efficient image compression using vector quantization
Artificial neural network-based crop yield prediction using NDVI, SPI, VCI feature vectors
ANNIE-artificial neural network-based image encoder
Lossless compression of medical images using a dual level dpcm with context adaptive switching neural network predictor
Enhancement of patient facial recognition through deep learning algorithm: ConvNet
Hybrid neural network predictive-wavelet image compression system
Neural network based image compression approach to improving the quality of biomedical image for telemedicine
Neural network based image compression for memory consumption in a cloud environment
Efficient image compression techniques for compressing multimodal medical images using neural network radial basis function approach
Stock prediction based on technical indicators using deep learning model
Efficient deep neural network for digital image compression employing rectified linear neurons
Feedforward neural network-based predictive image coding for medical image compression
Cryptosystem for securing image encryption using structured phase masks in fresnel wavelet transform domain
Novel machine learning applications on fly ash based concrete: an overview
A hybrid discrete wavelet transform with neural network backpropagation approach for efficient medical image compression