key: cord-0077418-uvzjx84b authors: Sun, Ying; Zhao, Zichen; Jiang, Du; Tong, Xiliang; Tao, Bo; Jiang, Guozhang; Kong, Jianyi; Yun, Juntong; Liu, Ying; Liu, Xin; Zhao, Guojun; Fang, Zifan title: Low-Illumination Image Enhancement Algorithm Based on Improved Multi-Scale Retinex and ABC Algorithm Optimization date: 2022-04-11 journal: Front Bioeng Biotechnol DOI: 10.3389/fbioe.2022.865820 sha: d8b7a17016d08a8177605582876aa6ee7179a5c4 doc_id: 77418 cord_uid: uvzjx84b In order to solve the problems of poor image quality, loss of detail, and excessive brightness amplification that arise when enhancing images captured in low-light environments, we propose a low-light image enhancement algorithm based on improved multi-scale Retinex and Artificial Bee Colony (ABC) optimization. First, the algorithm makes two copies of the original image. For the first copy, the irradiation component of the original image is obtained by applying structure extraction from texture via relative total variation, and is combined with the multi-scale Retinex algorithm to obtain the reflection component, which is then enhanced using histogram equalization, bilateral gamma function correction, and bilateral filtering. Next, the second copy is enhanced by histogram equalization with edge-preserving Weighted Guided Image Filtering (WGIF). Finally, weight-optimized image fusion is performed by the ABC algorithm. The mean values of Information Entropy (IE), Average Gradient (AG), and Standard Deviation (SD) of the enhanced images are 7.7878, 7.5560, and 67.0154, respectively, improvements of 2.4916, 5.8599, and 52.7553 over the original images.
The experimental results show that the algorithm proposed in this paper mitigates the light-loss problem in the image enhancement process, improves image sharpness, highlights image details, restores image color, and reduces image noise with good edge preservation, enabling better visual perception of the image.

The vast majority of information acquired by humans comes from vision. Images, as the main carrier of visual information, play an important role in three-dimensional reconstruction, medical detection, automatic driving, target detection and recognition, and other aspects of perception (Wang et al., 2019; Yu et al., 2019; Huang et al., 2021; Liu et al., 2022a; Tao et al., 2022a; Yun et al., 2022a; Bai et al., 2022). With the rapid development of optical and computer technology, image acquisition equipment is constantly being updated, and images often contain a wealth of valuable information waiting to be discovered and accessed by humans (Huang et al., 2020; Hao et al., 2021a; Cheng and Li, 2021). However, due to the influence of light, weather, and imaging equipment, captured images in real life are often dark, noisy, poorly contrasted, and partially obliterated in detail (Sun et al., 2020a; Tan et al., 2020; Wang et al., 2020). Such images make areas of interest difficult to identify, reducing image quality and the visual experience for human eyes (Jiang et al., 2019b; Hu et al., 2019); they also cause great inconvenience for the extraction and analysis of image information, making it considerably harder for computers and other vision devices to carry out normal target detection and recognition (Su and Jung, 2018; Sun et al., 2020b; Cheng et al., 2020; Luo et al., 2020; Hao et al., 2021b).
Therefore, it is necessary to enhance low-light images through image enhancement technology (Sun et al., 2020c) so as to highlight the detailed features of the original images, improve contrast, reduce noise, make blurred and poorly recognizable images clear, improve the recognizability and interpretability of images, and satisfy the requirements of specific applications (Tao et al., 2017; Ma et al., 2020; Jiang et al., 2021a; Tao et al., 2021; Liu et al., 2022b). Metaheuristic algorithms have great advantages for multi-objective problem solving and parameter optimization (Yu et al., 2020; Chen et al., 2021a; Liu X. et al., 2021; Wu et al., 2022; Xu et al., 2022; Zhang et al., 2022; Zhao et al., 2022). Methods of multiple-subject clustering and subject extraction, as well as K-means clustering, steady-state analysis, numerical simulation, quantification, and regression methods, are also widely used in data processing (Sun et al., 2020d; Chen et al., 2022). Artificial Bee Colony (ABC) is an optimization method that imitates the honey-harvesting behavior of a bee colony and is a specific application of the swarm intelligence idea. Its main feature is that ABC requires no special information about the problem; it only needs to compare the relative quality of candidate solutions (Li C. et al., 2019; He et al., 2019; Duan et al., 2021). Through the local optimum-seeking behavior of each worker bee, the global optimum eventually emerges in the population, and convergence is fast (Chen et al., 2021b; Yun et al., 2022b). In view of the above problems and this advantage of ABC, this paper proposes a low-illumination image enhancement algorithm based on improved multi-scale Retinex and ABC optimization.
Based on Retinex theory and image layering, this algorithm improves the multi-scale Retinex algorithm with structure extraction from texture via relative total variation, and replicates the original image to obtain a main feature layer and a compensation layer. In the image fusion process, the ABC algorithm is used to optimize the fusion weight factors of each layer and select the optimal solution, realizing enhancement of low-illumination images. Finally, the effectiveness of the proposed algorithm is verified by experiments on the LOL dataset. The rest of this paper is organized as follows: Related Work gives an overview of low-illumination image enhancement methods and Artificial Bee Colony algorithms; Basic Theory describes the basic theory of Retinex; The Algorithm Proposed in This Paper presents the low-illumination image enhancement algorithm based on improved multi-scale Retinex and ABC optimization; Experiments and Results Analysis reports verification experiments comparing the proposed method with traditional Retinex-based algorithms, with results analyzed by the Friedman test and the Wilcoxon signed-rank test; and the conclusions are summarized in Conclusion.

Image enhancement algorithms fall into two main categories: spatial-domain and frequency-domain algorithms (Vijayalakshmi et al., 2020). Spatial-domain enhancement methods mainly include histogram equalization (Tan and Isa, 2019) and the Retinex algorithm, among others.
Histogram Equalization (HE) enhances image contrast by adjusting the pixel gray levels of the original image and mapping them to more gray levels so that they are evenly distributed, but the noise in an HE-processed image is often amplified as well, and details are lost (Nithyananda et al., 2016). The Retinex image enhancement method proposed by Land (Land, 1964) combines well with the visual properties of the human eye, especially in low-illumination enhancement, and performs well overall compared with other conventional methods. Based on Retinex theory, Jobson et al. (Jobson et al., 1997) proposed the Single-Scale Retinex (SSR) algorithm, which obtains better contrast and detail by estimating the illumination map, but can cause detail loss during enhancement. Researchers subsequently proposed Multi-Scale Retinex (MSR); images enhanced by this algorithm still exhibit color bias, locally unbalanced enhancement, and the "halo" phenomenon. Therefore, Rahman et al. (Rahman et al., 2004) proposed Multi-Scale Retinex with Color Restoration (MSRCR), which alleviates the "halo" and color problems. The application of convolutional neural networks in deep learning has improved enhancement and recognition, but the difficulty of constructing the network and collecting training datasets makes such methods hard to implement (Sun et al., 2021; Weng et al., 2021; Yang et al., 2021; Tao et al., 2022b; Liu et al., 2022c). Based on the Retinex algorithm, Wang et al. (Wang et al., 2017) used Fast Guided Image Filtering (FGIF) to estimate the irradiation component of the original image, combined with bilateral gamma correction to adjust and optimize the image; this preserved the details and colors of the image to some extent, but the overall visual brightness was not high. Zhai et al.
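The HE mapping described above can be sketched in a few lines of NumPy. This is a minimal illustration of the classic technique, not the implementation used in the paper; the synthetic dark test image is hypothetical.

```python
import numpy as np

def histogram_equalization(img):
    """Equalize an 8-bit grayscale image by remapping each gray level
    through the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied gray level
    # Classic HE mapping: spread the occupied gray levels over [0, 255].
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A synthetic dark image whose values are crowded into [0, 60].
dark = (np.arange(64, dtype=np.uint8).reshape(8, 8) * 60 // 63)
out = histogram_equalization(dark)
print(out.min(), out.max())  # the dynamic range is stretched to [0, 255]
```

As the text notes, the same stretching also amplifies any noise present, which is why the proposed pipeline follows HE with edge-preserving filtering.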
(Zhai et al., 2021) proposed an improved Retinex algorithm with multi-image fusion that processes and fuses three copies of the image separately; the images processed by this algorithm achieve some improvement in brightness and contrast, but still contain noise and lose some details. Frequency-domain enhancement methods mainly include the Fourier transform, wavelet transform, Kalman filtering, and image pyramids (Li et al., 2019d; Huang et al., 2019; Chang et al., 2020; Tian et al., 2020; Liu et al., 2021c). These algorithms can effectively enhance the structural features of the image, but target details in the enhanced images remain blurred. The image-layering enhancement method proposed in recent years has been applied more and more widely to improved low-light image enhancement (Liao et al., 2020; Long and He, 2020). Layered enhancement decomposes the input image into base-layer and detail-layer components, enhances the two layers separately, and finally selects appropriate weighting factors for image fusion. Commonly used edge-preserving filters include bilateral filtering, Guided Image Filtering (GIF), and Fast Guided Image Filtering (Singh and Kumar, 2018). Since GIF uses the same linear model and weight factors for every region of the image, it is difficult for it to adapt to differences in texture features between regions. To resolve this problem, Li et al. (Li et al., 2014) proposed Weighted Guided Image Filtering (WGIF) based on local variance, which constructs an adaptive weighting factor on top of traditional guided filtering; this not only improves edge-preserving ability but also reduces the "halo artifacts" caused by image enhancement.
Inspired by the honey-harvesting behavior of bee colonies, Karaboga (Karaboga, 2005) proposed Artificial Bee Colony (ABC), a novel global optimization algorithm based on swarm intelligence, in 2005. Since its introduction, the ABC algorithm has attracted the attention of many scholars and has been analyzed comparatively. Karaboga et al. (Karaboga and Basturk, 2008) analyzed the performance of ABC against other intelligent algorithms on multidimensional and multimodal numerical problems, together with the effect of the ABC control parameter settings. Karaboga et al. (Karaboga and Akay, 2009) were the first to perform a detailed and comprehensive performance analysis of ABC, testing it on 50 numerical benchmark functions and comparing it with other well-known evolutionary algorithms such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO), the Differential Evolution algorithm (DE), and Ant Colony Optimization (ACO). Akay et al. analyzed the effect of parameter variation on ABC performance. Singh (Singh, 2009) proposed an artificial bee colony algorithm for the minimum spanning tree problem and verified its superiority for solving such problems. Ozturk et al. (Ozturk and Karaboga, 2011) proposed a hybrid of the artificial bee colony algorithm and the Levenberg-Marquardt algorithm for training neural networks. Karaboga et al. (Karaboga and Gorkemli, 2014) modified the nectar-search formula to find the best nectar source within a certain radius of the currently exploited source, improving the local optimum-seeking ability of the swarm algorithm.

Retinex is a common image enhancement method grounded in scientific experiments and analysis, proposed by Edwin H. Land in 1963 (Land and McCann, 1971).
In this theory, two factors determine the color of an observed object, as shown in Figure 1: the reflective properties of the object and the intensity of the light around it. According to the theory of color constancy, the inherent properties of an object are not affected by light; the object's ability to reflect different light waves determines its color to a large extent (Zhang et al., 2018). This theory shows that the color of a substance is consistent and depends on its ability to reflect wavelengths, independent of the absolute intensity of the reflected light and unaffected by non-uniform illumination; Retinex is therefore based on color constancy. While traditional linear and nonlinear methods enhance only one type of feature of the object, this theory allows adjustment of dynamic range compression, edge enhancement, and color invariance, enabling adaptive image enhancement. The Retinex method assumes that the original image is the product of a reflection image and an illumination image, which can be expressed as

I(x, y) = R(x, y) · L(x, y)  (1)

In Eq. 1, I(x, y) is the original image, R(x, y) is the reflection component carrying the image details of the target object, and L(x, y) is the irradiation component carrying the intensity information of the surrounding light. To reduce the computational complexity of traditional Retinex theory, the algorithm is usually simplified by taking logarithms on both sides:

log I(x, y) = log R(x, y) + log L(x, y)

The SSR method uses a Gaussian kernel as the center-surround function; the illumination component is obtained by convolving it with the original image, and the reflection component is then obtained by subtraction in the logarithmic domain. The specific expressions are as follows:

r(x, y) = log I(x, y) − log[G(x, y) * I(x, y)]  (4)

G(x, y) = λ exp(−(x² + y²)/σ²)  (5)

In Eqs. 4 and 5, G(x, y) denotes the center-surround function (a Gaussian kernel, with λ chosen so that G integrates to 1), and L(x, y) is obtained by convolving G(x, y) with I(x, y).
σ is the Gaussian surround scale parameter and the only adjustable parameter in SSR. When σ is small, image details are retained well but colors are easily distorted; when σ is large, colors are preserved better but image details are easily lost (Parihar and Singh, 2018; Jiang et al., 2021b). To maintain high image fidelity while compressing the dynamic range of the image, researchers proposed the Multi-Scale Retinex (MSR) method on the basis of SSR (Peiyu et al., 2020). The MSR algorithm performs a weighted summation over multiple Gaussian surround scales. The specific expression is as follows:

R_MSR(x, y) = Σ_{k=1}^{K} ω_k { log I(x, y) − log[G_k(x, y) * I(x, y)] }  (7)

In Eq. 7, K is the number of Gaussian center-surround functions; when K = 1, MSR degenerates to SSR. ω_k is the weighting factor at each Gaussian surround scale; to retain the advantages of SSR at high, medium, and low scales simultaneously, K is usually taken as 3 and ω_1 = ω_2 = ω_3 = 1/3. Considering the color-bias problem of SSR and MSR, researchers developed MSRCR (Weifeng and Dongxue, 2020), which adds a color recovery factor to MSR to adjust the color ratio among the channels. The specific expression is as follows:

R_MSRCR_i(x, y) = C_i(x, y) R_MSR_i(x, y),  C_i(x, y) = β log[ α I_i(x, y) / Σ_{j=1}^{N} I_j(x, y) ]  (9)

In Eq. 9, β is the gain constant, α is the nonlinear intensity control parameter, I_i(x, y) denotes the image of the ith channel, and Σ_{j=1}^{N} I_j(x, y) denotes the sum of the pixel values over the N channels at (x, y). After MSRCR processing, negative pixel values usually appear, so color balance is achieved by linear mapping with overflow clipping to obtain the desired effect.

The low-illumination image enhancement algorithm based on improved multi-scale Retinex and ABC optimization proposed in this paper divides the image equivalently into a main feature layer and a compensation layer. For the main feature layer, HE is first used for image enhancement, and WGIF is selected for edge-preserving noise reduction.
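The SSR/MSR formulation above can be sketched as follows. This is an illustrative NumPy/SciPy version with commonly used (assumed) scale values of 15, 80, and 250; the paper's exact parameters are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma):
    """SSR: reflectance = log(I) - log(G_sigma * I).
    A +1 offset avoids log(0) on completely dark pixels."""
    img = img.astype(np.float64) + 1.0
    return np.log(img) - np.log(gaussian_filter(img, sigma))

def multi_scale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """MSR: weighted sum of SSR outputs; equal weights 1/K by default,
    matching omega_1 = omega_2 = omega_3 = 1/3 for K = 3."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    return sum(w * single_scale_retinex(img, s) for w, s in zip(weights, sigmas))

rng = np.random.default_rng(0)
low_light = (rng.random((32, 32)) * 40).astype(np.uint8)  # a dim synthetic image
r = multi_scale_retinex(low_light)
print(r.shape)
```

With K = 1 and a unit weight, the MSR routine reduces exactly to SSR, mirroring the degeneration noted in the text.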
For the compensation layer, the irradiation component of the original image is first obtained by using structure extraction from texture via relative total variation; the original image is then processed with the MSRCR algorithm to obtain the reflection component for color recovery, and histogram equalization, bilateral gamma function correction, and edge-preserving filtering are applied to it. Finally, the main feature layer and the compensation layer are fused with optimal parameters, which are obtained by adaptive correction with the ABC algorithm, achieving image enhancement under low illumination. The flow chart of the algorithm is shown in Figure 2.

The Guided Image Filter is a filtering method proposed by He et al. (He et al., 2012); it is an image-smoothing filter based on a local linear model. The basic idea of the guided image filter is to assume that the output image is linearly related to the guide image within a local window ω_k; the guide image is used to generate weights that define a linear model for each pixel, through which the input image is processed. The mathematical model is as follows:

O_i = a_k G_i + b_k,  ∀ i ∈ ω_k  (10)

To find the linear coefficients in Eq. 10, the following cost function is introduced:

E(a_k, b_k) = Σ_{i ∈ ω_k} [ (a_k G_i + b_k − I_i)² + ε a_k² ]  (11)

Using least squares to minimize the cost function E(a_k, b_k), the linear coefficients are obtained as:

a_k = [ (1/|ω|) Σ_{i ∈ ω_k} G_i I_i − μ_k Ī_k ] / ( σ_k² + ε ),  b_k = Ī_k − a_k μ_k  (12)

In Eqs. 10, 11, and 12, O is the output image, G is the guide image, and I is the input image; a_k and b_k are the linear coefficients of the local window ω_k; ε is the regularization coefficient that prevents the linear coefficient a_k from becoming too large, and the larger ε is, the more obvious the smoothing effect when the input image itself is used as the guide image.
μ_k denotes the mean value of G within ω_k, σ_k denotes the standard deviation of G within ω_k, |ω| is the total number of pixels within the local window ω_k, and Ī_k is the mean value of the input image within the window ω_k. Since a pixel of the output image can be derived from the linear coefficients of the different windows covering it, the following expression is obtained:

O_i = ā_i G_i + b̄_i  (13)

where ā_i and b̄_i are the averages of a_k and b_k over all windows containing pixel i. GIF uses a uniform regularization factor ε for every region of the image, and larger regularization factors produce a "halo" phenomenon in the edge regions of the image. To address this problem, WGIF adaptively adjusts the regularization coefficient by introducing a weighting factor Γ_G, which in turn adapts the linear coefficients to each region of the image and improves the filtering effect. The weighting factor Γ_G and the new linear coefficient a_k are as follows:

Γ_G(I′) = (1/N) Σ_{p=1}^{N} [ σ²_{G,1}(I′) + γ ] / [ σ²_{G,1}(p) + γ ]  (15)

a_k = [ (1/|ω|) Σ_{i ∈ ω_k} G_i I_i − μ_k Ī_k ] / ( σ_k² + ε / Γ_G(k) )  (16)

In Eqs. 15 and 16, σ²_{G,1}(I′) is the variance of the guide image over Ω_1(I′), where Ω_1(I′) denotes a 3 × 3 window centered at I′ (r = 1); γ is the regularization factor, taken as (0.001 × L)², where L is the dynamic range of the image (Li et al., 2014). A comparison of the results processed by WGIF and FGIF is shown in Figure 3. As can be seen in Figure 3, the FGIF-processed images still contain some noise, while the WGIF results are clearly improved in this respect.

Obtaining the main feature layer: HE is used for image enhancement and WGIF is selected for edge-preserving noise reduction. The results obtained from each step are shown in Figure 4. As this figure shows, the image obtained by HE is improved compared with the original image, but the noise in the image is also extracted and amplified in the process.
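The box-filter form of the guided filter, plus an edge-aware variant in the spirit of WGIF, can be sketched as follows. This is a simplified illustration: the weighting here normalizes local 3 × 3 variances by their mean rather than reproducing Eq. 15 exactly, and the default γ assumes a [0, 1] dynamic range.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(G, I, r=4, eps=0.01):
    """Plain GIF (He et al.): O_i = mean(a)*G_i + mean(b), with
    a_k = cov(G, I) / (var(G) + eps) and b_k = mean(I) - a_k * mean(G)."""
    mean = lambda x: uniform_filter(x, 2 * r + 1)
    mu_G, mu_I = mean(G), mean(I)
    cov_GI = mean(G * I) - mu_G * mu_I
    var_G = mean(G * G) - mu_G * mu_G
    a = cov_GI / (var_G + eps)
    b = mu_I - a * mu_G
    return mean(a) * G + mean(b)

def weighted_guided_filter(G, I, r=4, eps=0.01, gamma=(0.001 * 1.0) ** 2):
    """WGIF-style variant: eps is divided by an edge-aware weight built from
    3x3 local variances, so high-variance (edge) regions are smoothed less."""
    mean = lambda x: uniform_filter(x, 2 * r + 1)
    var3 = np.maximum(uniform_filter(G * G, 3) - uniform_filter(G, 3) ** 2, 0.0)
    weight = (var3 + gamma) / np.mean(var3 + gamma)  # mean-normalized Gamma_G
    mu_G, mu_I = mean(G), mean(I)
    cov_GI = mean(G * I) - mu_G * mu_I
    var_G = mean(G * G) - mu_G * mu_G
    a = cov_GI / (var_G + eps / weight)
    b = mu_I - a * mu_G
    return mean(a) * G + mean(b)

rng = np.random.default_rng(1)
img = np.clip(rng.normal(0.3, 0.05, (32, 32)), 0, 1)  # noisy flat region
smoothed = guided_filter(img, img)  # self-guided smoothing
print(float(img.std()), float(smoothed.std()))
```

On a noisy flat region, the self-guided filter shrinks a toward 0, so the output leans on the local mean and the noise standard deviation drops.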
Some of the details and noise in the image are filtered out by WGIF, and the "halo" phenomenon and the "artifacts" caused by gradient inversion are avoided, because WGIF takes into account the texture differences between regions of the image.

As discussed above, the traditional Retinex algorithm convolves a Gaussian filter kernel with the original image and, after removing the filtered irradiation component, takes the reflection component as the enhancement result; however, the Gaussian filter's estimate is prone to bias at image edges, producing the "halo" phenomenon, which undoubtedly leads to unnatural enhancement results under insufficient illumination. To address this problem, this paper obtains the irradiation component of the compensation layer with structure extraction from texture via relative total variation, proposed by Xu et al. (Xu et al., 2012) in 2012, which better preserves the main edge information of the image and thus reduces the "halo" phenomenon in edge-rich regions. The model of the method is as follows:

arg min_S Σ_p { (S_p − I_p)² + λ [ D_x(p)/(L_x(p) + ε) + D_y(p)/(L_y(p) + ε) ] }  (17)

D_x(p) = Σ_{q ∈ R(p)} g_{p,q} |(∂_x S)_q|  (18)

D_y(p) = Σ_{q ∈ R(p)} g_{p,q} |(∂_y S)_q|  (19)

L_x(p) = | Σ_{q ∈ R(p)} g_{p,q} (∂_x S)_q |  (20)

L_y(p) = | Σ_{q ∈ R(p)} g_{p,q} (∂_y S)_q |  (21)

where g_{p,q} ∝ exp(−[(x_p − x_q)² + (y_p − y_q)²]/(2σ²)) is a Gaussian spatial weight. In Eqs. 17-21, S is the output image, p is the pixel index, and λ is the weighting factor adjusting the degree of smoothing: the larger λ is, the smoother the image. ε is a small positive number used to prevent the denominator from being zero; D_x(p) and D_y(p) are the variation functions of pixel p in the x and y directions, R(p) is the window centered on p, and L_x(p) and L_y(p) are the corresponding intrinsic functions within the window. The parameter σ is the texture suppression factor: the larger σ is, the stronger the texture suppression effect.
To demonstrate the advantages of this method in practice, images from the LOL dataset were processed to obtain the irradiation component both by structure extraction from texture via relative total variation and by convolution with the Gaussian kernel function of the traditional Retinex algorithm; the results of the two methods are shown in Figure 5. Meanwhile, Information Entropy and Standard Deviation were used to assess their quality, with results shown in Tables 1 and 2. From Figure 5, it can be seen that the structure extraction from texture via relative total variation method preserves the irradiation component better; at the same time, the evaluation metrics show that the IE and SD of this method exceed those of the Gaussian-kernel convolution of the traditional Retinex algorithm, proving that the relative-total-variation method preserves more image information when acquiring the irradiation component.

For the compensation layer, the original image is duplicated to obtain the image to be processed, structure extraction from texture via relative total variation is selected to obtain the irradiation component, and the reflection component is obtained by combining the Retinex principle with color recovery; histogram equalization, bilateral gamma correction, and bilateral filtering are then performed. The results obtained from each step are shown in Figure 6. As the figure shows, the image content is basically recovered by MSRCR processing, but the image saturation is not sufficient to restore the real scene.
After HE, the color is recovered to some extent, but the light-dark transition areas of the resulting image remain unsatisfactory. Therefore, this paper uses an improved bilateral gamma function for processing. The mathematical expression of the traditional gamma function is as follows:

O = I^γ  (23)

In Eq. 23, I is the input image, O is the output image, and γ is a constant in (0, 1) that controls the enhancement strength of the image. O_d is the convex correction function for dark regions and O_b is the convex correction function for bright regions. Since the traditional bilateral gamma function can only apply a fixed, mechanical enhancement, scholars improved it by taking the distribution characteristics of the illumination function into account, so that the correction adapts to the local illumination.

FIGURE 5 | The first row is the irradiation component obtained by the Gaussian kernel function; the second row is the irradiation component obtained by the structure extraction from texture via relative total variation method.

Hence, an improved bilateral gamma function is used for adaptive correction of the luminance transition region, and bilateral filtering is finally applied for edge-preserving noise reduction to obtain the final compensation layer. Through the above processing flow, the main feature layer and the compensation layer are obtained and fused at the end of the proposed method. For the fusion, an image evaluation system is established with three evaluation indexes: Information Entropy, Standard Deviation, and Average Gradient. The Standard Deviation (SD) reflects the dispersion of the image pixels: the larger the standard deviation, the greater the dynamic range of the image and the more gradation levels. The formula to calculate SD is as follows:

SD = sqrt( (1/(W × H)) Σ_{i=1}^{W} Σ_{j=1}^{H} (I(i, j) − μ)² )  (25)

In Eq. 25, W is the width of the input image, H is its height, and μ is the mean pixel value.
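The idea of a bilateral gamma correction — a brightening curve for dark regions blended with a restraining curve for bright regions — can be sketched as follows. The paper's exact improved formula is not reproduced here; the curves O_d = I^γ and O_b = 1 − (1 − I)^γ and the illumination-weighted blend are illustrative assumptions.

```python
import numpy as np

def bilateral_gamma(I, L, gamma=0.4):
    """Illustrative bilateral gamma correction on a [0, 1] image I with
    normalized illumination L in [0, 1] (hypothetical blend, not the
    paper's exact improved formula)."""
    O_d = np.power(I, gamma)            # convex curve lifting dark pixels
    O_b = 1.0 - np.power(1.0 - I, gamma)  # convex curve restraining bright pixels
    w = L                                # bright areas lean on O_b, dark on O_d
    return (1.0 - w) * O_d + w * O_b

I = np.array([0.05, 0.2, 0.5, 0.8, 0.95])
out = bilateral_gamma(I, I)  # self-illumination for a 1-D demo
print(out)
```

Dark inputs are pushed up while bright inputs are pulled down, compressing the luminance transition region, which matches the behavior the text attributes to the bilateral correction.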
The Average Gradient (AG) represents the variation of small details in the multidimensional directions of the image: the larger the AG, the sharper the image detail and the greater the sense of hierarchy. The formula to calculate AG is as follows:

AG = (1/(W × H)) Σ_{i=1}^{W} Σ_{j=1}^{H} sqrt( [ (∂I/∂x)² + (∂I/∂y)² ] / 2 )  (26)

The Information Entropy (IE) of an image is a metric measuring the amount of information it contains: the greater the IE, the more informative and detailed the image and the higher its quality. The formula to calculate IE is as follows:

IE = − Σ_{x=0}^{R} P(x) log₂ P(x)  (27)

In Eq. 27, R is the maximum image gray level, usually R = 2⁸ − 1, and P(x) is the probability that gray value x appears at a point in the image. Based on the concept of multi-objective optimization (Li et al., 2019e; Liao et al., 2021; Xiao et al., 2021; Liu et al., 2022d; Yun et al., 2022c), IE, AG, and SD are weighted together in equal proportion, reflecting that IE, AG, and SD are equally important in image evaluation. The fitness function obtained is

fitness = IE + AG + SD  (28)

Applying different weights to the main feature layer and the compensation layer for weighted image fusion yields the fitness values shown in Figure 7. It is clear from this figure that the value of the fitness function varies with the weights and that the maximum should arise for ω_1 ∈ [0.1, 1], ω_2 ∈ [0.2, 1]. To determine the optimal weights, an adaptive optimization evaluation system needs to be constructed. Traditional nonlinear optimization algorithms update the objective solution by derivative-based rules, such as Gradient Descent, Newton's method, and quasi-Newton methods.
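The three evaluation metrics and the equal-weight fitness can be computed directly. This is a minimal NumPy sketch; the forward-difference gradient used here is one common discretization of AG.

```python
import numpy as np

def standard_deviation(img):
    """SD: dispersion of the pixel values over the W x H image."""
    return float(img.std())

def average_gradient(img):
    """AG: mean of sqrt((dx^2 + dy^2) / 2) over the overlapping region,
    using forward differences for dx and dy."""
    img = img.astype(np.float64)
    dx = np.diff(img, axis=1)[:-1, :]
    dy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def information_entropy(img):
    """IE: Shannon entropy (bits) of the 8-bit gray-level histogram."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fitness(img):
    """Equal-proportion fitness for the weight search: IE + AG + SD."""
    return information_entropy(img) + average_gradient(img) + standard_deviation(img)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(information_entropy(img))  # near 8 bits for a roughly uniform histogram
```

A constant image scores zero on all three metrics, while a high-contrast, detail-rich image pushes all three up, which is exactly the behavior the fusion search rewards.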
When solving multi-objective nonlinear optimization problems, such methods struggle to meet the requirements because of the computational complexity of following their prescribed steps. The convergence of Gradient Descent slows as it approaches a minimum and requires many iterations; Newton's method has second-order convergence and is fast, but each step requires solving the inverse of the Hessian matrix of the objective function, which is computationally expensive. Metaheuristic algorithms model the optimization problem on the laws of biological activity and natural physical phenomena. Following the laws of natural evolution, evolution-based metaheuristics use the population's previous experience in solving the problem and retain the approaches that have worked well, so that the target individuals improve over the iterations and finally arrive at the best solution. Considering the computational complexity of the objective function and this property of metaheuristics, the artificial bee colony algorithm is chosen for the objective optimization. Inspired by the honey-harvesting behavior of bee colonies, Karaboga (2005) proposed Artificial Bee Colony (ABC), a novel global optimization algorithm based on swarm intelligence. Its bionic principle is that bees perform different activities during nectar collection according to their division of labor, sharing and exchanging colony information to find the best nectar source. In ABC, the entire population is divided into three types of bees: employed bees, scout bees, and follower bees.
When an employed bee finds a honey source, it shares it with follower bees with a certain probability; a scout bee does not follow any other bee and searches for honey sources alone, becoming an employed bee that recruits followers once it finds one; when a follower bee is recruited by multiple employed bees, it chooses one of them to follow until the honey source is exhausted. The initial location of each nectar source is determined by:

x_{id} = L_d + rand(0, 1)(U_d − L_d)  (29)

In Eq. 29, rand(0, 1) is a random number following a uniform distribution over the interval (0, 1); L_d and U_d denote the lower and upper bounds of dimension d. An employed bee searches for a new nectar source by:

v_{id} = x_{id} + α λ (x_{id} − x_{kd})  (30)

In Eq. 30, λ is a random number uniformly distributed over [−1, 1] that determines the degree of perturbation, x_{kd} is a randomly chosen neighboring source (k ≠ i), and α is the acceleration coefficient, usually taken as 1. The probability of a follower bee selecting employed bee i is:

P_i = fit_i / Σ_{j=1}^{SN} fit_j  (31)

where fit_i is the fitness of source i and SN is the number of sources. Scout bees search for new nectar sources: during the search, if a source has not been improved after the number of stagnant iterations n reaches the threshold T, the source is abandoned, and the scout bee finds a new nectar source again by Eq. 29. The flow chart of the artificial bee colony algorithm is shown in Figure 8. With the fitness function above, the weights are iteratively optimized by the artificial bee colony algorithm; the parameters are set as follows: the number of variables is 2, max-iter is 100, n-pop is 45, and the maximum number of honey-source exploitations is 90. The convergence curve of the optimal weight parameters is shown in Figure 9. Since the optimization algorithm seeks a minimum, the results are inverted; as can be seen from Figure 9, the maximum value under this fitness function is 42.0534, and convergence is completed after 14 iterations.
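The employed/follower/scout phases above can be condensed into a short implementation. This is a minimal sketch (smaller population and a toy quadratic surrogate for the fusion-weight search, with an assumed peak at w = (0.6, 0.4)), not the paper's parameter settings.

```python
import numpy as np

def abc_optimize(f, lb, ub, n_pop=20, max_iter=100, limit=30, seed=0):
    """Minimal artificial bee colony maximizing f over the box [lb, ub].
    Employed and follower phases perturb one dimension of a source toward a
    random neighbor; a source stagnant for `limit` trials is abandoned."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = lb + rng.random((n_pop, dim)) * (ub - lb)  # Eq. 29-style initialization
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_pop, dtype=int)

    def try_move(i):
        k = rng.choice([j for j in range(n_pop) if j != i])
        d = rng.integers(dim)
        v = X[i].copy()
        v[d] += rng.uniform(-1, 1) * (X[i][d] - X[k][d])  # Eq. 30-style move
        v = np.clip(v, lb, ub)
        fv = f(v)
        if fv > fit[i]:
            X[i], fit[i], trials[i] = v, fv, 0   # greedy selection
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_pop):                   # employed-bee phase
            try_move(i)
        p = fit - fit.min() + 1e-12              # follower bees pick by share
        p = p / p.sum()
        for i in rng.choice(n_pop, n_pop, p=p):  # follower-bee phase
            try_move(i)
        for i in np.where(trials > limit)[0]:    # scout phase: abandon stale
            X[i] = lb + rng.random(dim) * (ub - lb)
            fit[i] = f(X[i])
            trials[i] = 0
    best = int(np.argmax(fit))
    return X[best], fit[best]

# Toy surrogate fitness with a single peak at w = (0.6, 0.4).
g = lambda w: -((w[0] - 0.6) ** 2 + (w[1] - 0.4) ** 2)
w_best, g_best = abc_optimize(g, [0.0, 0.0], [1.0, 1.0])
print(w_best, g_best)
```

Only comparisons of candidate fitness are needed, never gradients, which is the property that motivates choosing ABC for the fusion-weight search.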
The experiments were run on a 64-bit Windows 10 system with an Intel(R) Core(TM) i5-6300HQ CPU at 2.30 GHz, an NVIDIA 960M GPU with 2 GB of memory, and 8 GB of RAM. All algorithms in this paper were run in MATLAB 2021b and Python 3.7 on the PyCharm platform, and the results were statistically analyzed with IBM SPSS Statistics 26. The images used in the experiments are all from the LOL dataset: 200 low-illumination images were randomly selected and tested one by one, and representative images were selected for comparison of processing effects. The algorithm proposed in this paper is compared with the SSR, MSR, and MSRCR algorithms and with the methods of (Zhai et al., 2021) and (Wang et al., 2017); the latter two were reimplemented according to the content of the respective papers, restoring the algorithms as faithfully as possible. The image enhancement results of the different methods are analyzed by subjective and objective evaluation, and the processing results of each method are shown in Figure 10. Figure 10 shows that the brightness of the images processed by the SSR and MSR algorithms is improved compared with the original image, but color retention is poor: the images are whitish and color loss is serious. MSRCR delivers a brightness improvement over the former methods and restores color to some extent, but color reproduction is limited and details are lost.
The processing results of the literature (Zhai et al., 2021) show better color reproduction, but the brightness enhancement is only moderate, and part of the detail information that is not effectively enhanced remains buried in the dark areas of the image, notably at the end of the bed. In overall comparison, the enhanced image obtained by the algorithm in this paper has higher color fidelity, more prominent details, better structural information, and is more consistent with the visual perception of the human eye. Subjective evaluation is susceptible to interference from other factors and varies from person to person. In order to compare the image quality of the enhancement results under the different methods more rigorously and to ensure the reliability of the experiments, Standard Deviation, Information Entropy and Average Gradient are used as evaluation metrics in this paper. The Standard Deviation reflects the dispersion of the image pixels: the greater the Standard Deviation, the greater the dynamic range of the image. Information Entropy measures the amount of information in an image: the higher the Information Entropy, the more information the image contains. The Average Gradient represents the variation of small details in the multidimensional directions of the image: the larger the Average Gradient, the stronger the image hierarchy. The evaluation results of low-illumination image enhancement with the different algorithms are shown in Tables 3-5. Statistical analysis is applied to the data in Tables 3-5: the Friedman test is used to analyze the variability of the experimental results, and the Wilcoxon signed-rank test is used to analyze the advantages of the method proposed in this paper over the other methods. The Friedman test is a non-parametric test for differences among multiple related samples, proposed by M. Friedman in 1937. The Friedman test requires the following conditions to be met: 1.
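The three metrics can be computed as sketched below; exact definitions of the Average Gradient vary slightly across papers, and this sketch uses one common form (RMS of horizontal and vertical differences), so it is an assumption rather than the paper's exact formula.

```python
import numpy as np

def quality_metrics(gray):
    """SD, IE and AG for a grayscale image with values in [0, 255]."""
    g = np.asarray(gray, float)
    sd = g.std()                                   # Standard Deviation
    hist, _ = np.histogram(g, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    ie = -(p * np.log2(p)).sum()                   # Information Entropy (bits)
    dx = np.diff(g, axis=1)[:-1, :]                # horizontal differences
    dy = np.diff(g, axis=0)[:, :-1]                # vertical differences
    ag = np.mean(np.sqrt((dx**2 + dy**2) / 2.0))   # Average Gradient
    return sd, ie, ag
```

A constant image scores zero on all three metrics, while a well-enhanced image should score higher on each of them than the dark original.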
ordinal-level data; 2. three or more groups; 3. related groups; and 4. a random sample from the population. The data in Tables 3-5 clearly satisfy these requirements. For the Friedman test, the following hypotheses are set: H 0 : there is no difference among the six compared methods. H 1 : there are differences among the six compared methods. The data are imported into SPSS software for analysis, and the results are shown in Tables 6 and 7. The Wilcoxon signed-rank test was proposed by F. Wilcoxon in 1945. It takes the ranks of the absolute values of the differences between the observations and the central position of the null hypothesis and sums them separately according to their signs as its test statistic. Since Tables 3-5 show that the method of this paper is numerically greater than the other algorithms, the following hypotheses are set for the Wilcoxon signed-rank test: H 0 : the images enhanced by our method do not differ from those of the other methods. H 1 : the images enhanced by our method differ from those of the other methods. The data are imported into SPSS software for analysis, and the results are shown in Tables 8 and 9. From the data in Tables 3-5, it can be seen that the algorithm in this paper achieves large improvements in SD, IE and AG and is significantly better than the other five algorithms. Meanwhile, after the Friedman test, Tables 6 and 7 show that the asymptotic significance is less than 0.001 for all three evaluation metrics, so the null hypothesis is rejected and the differences are statistically highly significant. After the Wilcoxon signed-rank test, Tables 8 and 9 show that the two-sided asymptotic significance is less than 0.01 for all three evaluation metrics, so the null hypothesis is rejected: the method of this paper is effective and differs from the other methods.
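The two tests can be sketched with plain numpy as below. This is a simplified illustration, assuming no rank ties and no zero differences (SPSS applies tie corrections); the synthetic score table standing in for Tables 3-5 is hypothetical.

```python
import numpy as np

def friedman_stat(scores):
    """Friedman chi-square statistic for an (n blocks x k treatments) table.

    Ranks each row, sums ranks per column, and applies the standard formula
    chi2 = 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1). No tie correction.
    """
    n, k = scores.shape
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1  # within-row ranks
    R = ranks.sum(axis=0)                               # rank sum per method
    return 12.0 / (n * k * (k + 1)) * np.sum(R**2) - 3.0 * n * (k + 1)

def wilcoxon_z(a, b):
    """Normal-approximation z for the Wilcoxon signed-rank test."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]
    n = len(d)
    r = np.abs(d).argsort().argsort() + 1      # ranks of |d|
    w_plus = r[d > 0].sum()                    # positive-rank sum
    mu = n * (n + 1) / 4.0
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return (w_plus - mu) / sigma
```

With six methods the Friedman statistic has 5 degrees of freedom, so values above the chi-square critical point 20.52 correspond to p < 0.001; a Wilcoxon z above 2.58 corresponds to a two-sided p < 0.01.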
This shows that the images enhanced by the algorithm in this paper have increased brightness, richer details, less distortion and better overall quality, thus verifying the effectiveness of the proposed algorithm. To address the problems of poor image quality and loss of detail information in low-illumination image enhancement, this paper proposes a low-illumination image enhancement algorithm based on improved multi-scale Retinex and ABC optimization. The original image is duplicated into two layers: the main feature layer is processed by HE and WGIF to achieve brightness enhancement, color restoration and noise elimination while avoiding gradient-inversion artifacts; structure extraction from texture via relative total variation is performed on the compensation layer to estimate the irradiation component, combined with bilateral gamma correction and other methods to avoid the halo phenomenon; finally, the Artificial Bee Colony algorithm is used to optimize the parameters for weighted fusion. The experimental results verify the rationality of the algorithm in this paper, which achieves better results in both subjective and objective evaluations compared with the other five methods. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
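The final fusion step optimized by the ABC algorithm can be wired up as in the sketch below. The linear weighting form and the fitness (negative entropy plus average gradient of the fused result, so that minimization maximizes quality) are assumptions made for illustration; the paper's exact fitness function is not reproduced here.

```python
import numpy as np

def fused_fitness(w, img_a, img_b):
    """Objective for the weight search: negative (IE + AG) of the fused image.

    w      -- pair of fusion weights, e.g. found by an ABC-style optimizer
    img_a  -- first enhanced layer (HE + WGIF path), values in [0, 255]
    img_b  -- second enhanced layer (Retinex path), values in [0, 255]
    """
    f = np.clip(w[0] * np.asarray(img_a, float) + w[1] * np.asarray(img_b, float), 0, 255)
    hist, _ = np.histogram(f, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    ie = -(p * np.log2(p)).sum()                     # Information Entropy
    dx = np.diff(f, axis=1)[:-1, :]
    dy = np.diff(f, axis=0)[:, :-1]
    ag = np.mean(np.sqrt((dx**2 + dy**2) / 2.0))     # Average Gradient
    return -(ie + ag)                                # minimize => maximize IE + AG
```

An optimizer over the weight box [0, 1] x [0, 1] would then call `fused_fitness(w, img_a, img_b)` as its objective; inverting the sign of the converged minimum recovers a quality score of the kind reported in Figure 9.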
References (cited works, titles as extracted):
Parameter Tuning for the Artificial Bee Colony Algorithm
Improved Single Shot Multibox Detector Target Detection Method Based on Deep Feature Fusion
Gaussian Pyramid Transform Retinex Image Enhancement Algorithm Based on Bilateral Filtering
Evolutionary Game of Multi-Subjects in Live Streaming and Governance Strategies Based on Social Preference Theory during the COVID-19 Pandemic
Analysis of Effects on the Dual Circulation Promotion Policy for Cross-Border E-Commerce B2B Export Trade Based on System Dynamics during COVID-19
Modeling Multi-Dimensional Public Opinion Process Based on Complex Network Dynamics Model in the Context of Derived Topics
Gesture Recognition Based on Surface Electromyography-Feature Image
Visualization of Activated Muscle Area Based on sEMG
Gesture Recognition Based on Multi-modal Feature Weight
Towards the Steel Plate Defect Detection: Multidimensional Feature Information Extraction and Fusion
Intelligent Detection of Steel Defects Based on Improved Split Attention Networks
Guided Image Filtering
Gesture Recognition Based on an Improved Local Sparse Representation Classification Algorithm
Probability Analysis for Grasp Planning Facing the Field of Medical Robotics
Detection Algorithm of Safety Helmet Wearing Based on Deep Learning
Improvement of Maximum Variance Weight Partitioning Particle Filter in Urban Computing and Intelligence
Jointly Network Image Processing: Multi-task Image Semantic Segmentation of Indoor Scene Based on CNN
Manipulator Grabbing Position Detection with Information Fusion of Color Image and Depth Image Using Deep Learning
Grip Strength Forecast and Rehabilitative Guidance Based on Adaptive Neural Fuzzy Inference System Using sEMG
Gesture Recognition Based on Skeletonization Algorithm and CNN with ASL Database
Semantic Segmentation for Multiscale Target Based on Object Recognition Using the Improved Faster-RCNN Model
Gesture Recognition Based on Binocular Vision
Properties and Performance of a center/surround Retinex
A Comparative Study of Artificial Bee colony Algorithm
An Idea Based on Honey Bee Swarm for Numerical Optimization
On the Performance of Artificial Bee colony (ABC) Algorithm
A Quick Artificial Bee colony (qABC) Algorithm and its Performance on Optimization Problems
The Retinex
Lightness and Retinex Theory
Gesture Recognition Based on Modified Adaptive Orthogonal Matching Pursuit Algorithm
Surface EMG Data Aggregation Processing for Intelligent Prosthetic Action Recognition
Global and Adaptive Contrast Enhancement for Low Illumination gray Images
Trajectory Tracking of 4-DOF Assembly Robot Based on Quantification Factor and Proportionality Factor Self-Tuning Fuzzy PID Control
Human Lesion Detection Method Based on Image Information and Brain Signal
A Novel Feature Extraction Method for Machine Learning Based on Surface Electromyography from Healthy Brain
Towards the SEMG Hand: Internet of Things Sensors and Haptic Feedback Application
Occlusion Gesture Recognition Based on Improved SSD
Multi-object Intergroup Gesture Recognition Combined with Fusion Feature and KNN Algorithm
Genetic Algorithm-Based Trajectory Optimization for Digital Twin Robots
Dynamic Gesture Recognition Algorithm Based on 3D Convolutional Neural Network
Grasping Posture of Humanoid Manipulator Based on Target Shape Analysis and Force Closure
Self-tuning Control of Manipulator Positioning Based on Fuzzy PID and PSO Algorithm
Wrist Angle Prediction under Different Loads Based on GA-ELM Neural Network and Surface Electromyography
Manipulator Trajectory Planning Based on Work Subspace Division
Target Localization in Local Dense Mapping Using RGBD SLAM and Object Detection
Image Enhancement Method Based on Multi-Layer Fusion and Detail Recovery
Decomposition Algorithm for Depth Image of Human Health Posture Based on Brain Health
Grasping Force Prediction Based on sEMG Signals
Review on Histogram Equalization Based Image Enhancement Techniques
Hybrid Artificial Bee Colony Algorithm for Neural Network Training
A Study on Retinex Based Method for Image Enhancement
Underwater Image Enhancement Algorithm Based on Fusion of High and Low Frequency Components
Retinex Processing for Automatic Image Enhancement
An Artificial Bee colony Algorithm for the Leaf-Constrained Minimum Spanning Tree Problem
Dehazing of Outdoor Images Using Notch Based Integral Guided Filter
Perceptual Enhancement of Low Light Images Based on Two-step Noise Suppression
Gear Reducer Optimal Design Based on Computer Multimedia Simulation
Numerical Simulation of thermal Insulation and Longevity Performance in New Lightweight Ladle
Gesture Recognition Algorithm Based on Multi-scale Feature Fusion in RGB-D Images
Intelligent Human Computer Interaction Based on Non Redundant EMG Signal
Multiscale Generative Adversarial Network for Real-world Super-resolution
Research on Gesture Recognition of Smart Data Fusion Features in the IoT
Exposure Based Multi-Histogram Equalization Contrast Enhancement for Non-uniform Illumination Images
Photoelastic Stress Field Recovery Using Deep Convolutional Neural Network
A Time Sequence Images Matching Method Based on the Siamese Network
3D Reconstruction Based on Photoelastic Fringes
Enhanced Image Algorithm at Night of Improved Retinex Based on HIS Space
Gesture Recognition Based on Multilevel Multimodal Feature Fusion
A Comprehensive Survey on Image Contrast Enhancement Techniques in Spatial Domain
An Adaptive Correction Algorithm for Non-uniform Illumination Panoramic Images Based on the Improved Bilateral Gamma Function
Low-light Image Joint Enhancement Optimization Algorithm Based on Frame Accumulation and Multi-Scale Retinex
An experiment-based Review of Low-Light Image Enhancement Methods
Low-light Image Enhancement via the Absorption Light Scattering Model
Low-Illumination-Based Enhancement Algorithm of Color Images with Fog
Enhancement of Real-Time Grasp Detection by Cascaded Deep Convolutional Neural Networks
Attitude Stabilization Control of Autonomous Underwater Vehicle Based on Decoupling Algorithm and PSO-ADRC
An Effective and Unified Method to Derive the Inverse Kinematics Formulas of General Six-DOF Manipulator with Simple Geometry. Mechanism Machine Theor
Structure Extraction from Texture via Relative Total Variation
Genetic-Based Optimization of 3D Burch-Schneider Cage with Functionally Graded Lattice Material
Dynamic Gesture Recognition Using Surface EMG Signals Based on Multi-Stream Residual Network
Hand Medical Monitoring System Based on Machine Learning and Optimal EMG Feature Set
Application of PSO-RBF Neural Network in Gesture Recognition of Continuous Surface EMG Signals
Real-time Target Detection Method Based on Lightweight Convolutional Neural Network
Grab Pose Detection Based on Convolutional Neural Network for Loose Stacked Object
Self-adjusting Force/Bit Blending Control Based on Quantitative Factor-Scale Factor Fuzzy-PID Bit Control
Improved Retinex and Multi-Image Fusion Algorithm for Low Illumination Image Enhancement
Low-light Image Enhancement Based on Directional Total Variation Retinex
Time Optimal Trajectory Planing Based on Improved Sparrow Search Algorithm
A Tandem Robotic Arm Inverse Kinematic Solution Based on an Improved Particle Swarm Algorithm
Weighted Guided Image Filtering

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.