Research on distance education image correction based on digital image processing technology

Ling Ma

Abstract
Distance education is generally delivered through live broadcast or video playback. Because of the influence of various factors in the process of distance education, the pixel characteristics of the original educational resource image can change. On this basis, this study corrects distance education images using digital image processing technology. In addition, the study uses a layered processing model to decompose each color channel and then process the image brightness channel, which effectively reduces computational overhead while ensuring the fusion effect. Finally, the performance of the algorithm is analyzed through comparative experiments. The results show that the proposed algorithm performs well in image correction and can provide a theoretical reference for subsequent related research.

Keywords: Digital image, Image processing, Remote, Network education, Image correction

1 Introduction
In distance education, image resources are an important part of distance learning resources. Images are an important form of conveying teaching information and carrying educational content. In the teaching process, media information that integrates text, graphics/images, audio, and even video can stimulate learners' interest and improve the quality of teaching. In distance education, the greatest advantage of images is their figurativeness. Many teaching contents appear "dry" and boring when expressed only in words, and the expression becomes "rich and colorful" with images.
As the saying goes, a picture is worth a thousand words: image-based teaching helps to fully activate the classroom learning atmosphere and inject vitality into teaching [1]. In network distance education, various factors constrain image transmission under different specifications, introducing a large amount of noise into the image. This noise changes the pixel features of the original educational resource image, produces visual color errors in the distance education resource image, and leads to image distortion. Such problems scatter students' attention, reduce their interest in learning, and undermine positive learning motivation [2]. If images cannot be set reasonably and image noise cannot be reduced, the image information transmitted in distance education will be inaccurate and the images will lose their demonstrative value, so that students cannot grasp the deeper meaning the image information describes, reducing teaching effect and quality [3]. Only by filtering the noise signal in the image, so that the valuable information stored in the image is displayed accurately, can images play an important role in network distance learning. Therefore, this study uses image processing technology to achieve distance education image correction. In many applications, such as remote sensing, medical imaging, security, computer vision, multi-camera video, and panoramic imaging, correcting the nonlinear distortion caused by optical lenses has always been a hot research topic [3]. Image distortion correction can generally be approached from two directions: optical design correction and digital image processing correction.
Correspondence: cqmaling2009@163.com. School of Information Engineering, Chongqing Industry Polytechnic College, Chongqing 401120, China. EURASIP Journal on Image and Video Processing (2019) 2019:18, https://doi.org/10.1186/s13640-019-0416-9. © The Author(s) 2019. Open Access: this article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

The use of optical and mechanical structures for lens correction has many limitations in terms of design difficulty, manufacturing, lens size, and cost. Therefore, correction using digital image processing has become the mainstream approach. Using digital image processing technology to achieve camera image correction is usually divided into two steps: camera calibration and image correction [4]. Establishing the geometric model of camera imaging and determining the parameters of the camera model are the main content of camera calibration in computer vision, and calibration is also the primary task of image correction. The internal and external parameters of the camera determine the mapping relationship between the 3D scene and the corresponding 2D image. The process of recovering and determining the parameters inside and outside the camera is called camera calibration. The internal optical and geometric characteristics of the camera determine the camera's internal parameters, such as focal length, distortion factor, and image center.
However, the camera’s external parameters represent the three-dimensional position and orientation information of the world coordinate system and camera coordinate system [5]. Since the end of the nineteenth century, camera calibration and lens correction have been the re- search hotspots in academia. The development of this research hotspot has formed a very perfect theoretical basis, and many algorithms have been proposed to im- prove the calibration results from accuracy and speed [6]. In 1966, B. Hallert first used the least squares method to process the observation data obtained in camera cali- bration, thus obtaining high-precision results in field measurements [7]. Abdal. Aziz and Karara proposed the direct linear transformation (DLT) camera calibration method in the early 1970s. From the perspective of photogrammetry, the relationship between camera im- ages and 3D world was deeply studied, and a linear model was established. The linear model is easy to cal- culate and fast, but it is difficult to fully represent the nonlinear distortion characteristics of the camera [8]. In the mid-1980s, R.Tsai proposed a RAC-based calibration method and established a classic Tsai camera model. The core of the method is to solve the linear parameters by using the linear model and then iteratively solve the nonlinear parameters according to the radial uniform constraints. The distortion model based on RAC method reduces the complexity of parameter solving, and the calibration process is fast and accurate [9]. Later, J. Weng improved Tsai’s distortion model, which made it compatible with distorted lenses [10]. ZhengYou Zhang proposed a flexible plane calibration method to simplify the process of camera calibration while obtaining high-precision calibration results. 
The method takes three or more images of a black-and-white checkerboard template at different angles and postures, extracts the corner coordinates of the checkerboard in each image, and substitutes them into the established camera model equations to obtain the internal and external parameters and distortion coefficients of the camera. Later, camera calibration methods based on circular templates, hexagonal lattice patterns, and similar targets were developed [11]. With the development of computer automation technology since the twentieth century, active vision camera calibration methods and camera self-calibration methods have emerged. The active vision calibration method based on two orthogonal camera motions proposed by Hu Zhanyi et al. is easier to implement and can solve all five internal parameters of the camera, compared with the three orthogonal motion method proposed by Ma Weide. Traditional calibration methods and active vision calibration are inseparable from special scenes or camera-related motion information. To meet the needs of camera calibration under unknown motion in arbitrary scenes, camera self-calibration was proposed. It can be roughly divided into self-calibration methods based on the Kruppa equations and self-calibration methods based on the absolute quadric and the plane at infinity [12]. The above analysis shows that current digital image technology is rarely applied to network education image correction. Therefore, this study analyzes the image problems in remote network education, applies digital image processing technology to remote network image correction, and on this basis derives an effective strategy.

2 Research methods
When the color in the image is rich and evenly distributed, the gray world method can be used to obtain an ideal correction result.
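As a reference point, the classic gray world method can be sketched in a few lines. This is a minimal illustration, not the paper's code; the H x W x 3 array layout and the [0, 255] value range are assumptions.

```python
import numpy as np

def gray_world(img):
    """Classic gray-world white balance.

    img: H x W x 3 float array with R, G, B channels in [0, 255].
    Assumes the average scene color is gray: each channel is scaled by a
    Von Kries diagonal gain so its mean matches the overall gray level,
    then the result is clipped to the displayable range.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gray = means.mean()                       # target gray level
    gains = gray / means                      # diagonal gains
    return np.clip(img * gains, 0, 255)
```

The single-color failure case discussed next follows directly from this sketch: a large uniform patch drags the channel means toward its own color, so the gains over-correct the rest of the image.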
However, for some specific scenes, such as an image containing a large area of a single color, the premise of the gray world assumption is no longer satisfied, and the corrected image will show a significant color deviation. The richer the image color, the more accurate the color reproduction during correction. According to the definition of entropy, image entropy can be used to measure the richness of image color: the richer the image color, the larger the image entropy; conversely, the less color in the image, the smaller the image entropy. Applying the physical meaning of image entropy to the algorithm, the image is first evenly segmented; the entropy values of different sub-blocks differ, and weights are assigned according to the sub-block entropy values, thereby reducing the influence of a single-color block. Based on image entropy theory, the gray world algorithm is improved and its application range expanded. First, histogram equalization is performed on the R, G, and B channels of the image to improve the overall contrast. After equalization, the color cast of a color-cast image is weakened and the image contrast is deepened [13]. When the image is partitioned, the choice of block size is important. Considering the distribution of pixel levels, the image is divided into m × n sub-blocks T_ij (1 ≤ i ≤ m, 1 ≤ j ≤ n), each of size 16 × 16; the block mode is shown in Fig. 1. The three-channel image entropies of each image sub-block over R, G, and B are calculated, denoted H^R_ij, H^G_ij, and H^B_ij, and their mean is taken as the image entropy of the block [14]:

E_ij = (H^R_ij + H^G_ij + H^B_ij) / 3    (1)

The larger the entropy of an image block and the richer its color, the more accurate the color reduction during correction, and the image obtained when the gray world method is applied is closer to the expected image. Therefore, to reduce the influence of a single-color block on image correction, a color-rich image block is given a higher weight and a single-color image block a lower weight. Image entropy is used to measure the color richness of sub-blocks, and weights are assigned according to its size: the larger the entropy of an image block, the larger its weight; conversely, the smaller the entropy, the smaller the weight. After calculating the image entropy of all sub-blocks, the sub-block weights are obtained by normalizing the image entropy [15]:

W_ij = E_ij / (Σ_{i=1..m} Σ_{j=1..n} E_ij)    (2)

Here, W_ij is the weight corresponding to image block T_ij (1 ≤ i ≤ m, 1 ≤ j ≤ n), and E_ij is the image entropy of the sub-block. The average values of the three channels R, G, and B of each image block are computed and recorded as R_ij, G_ij, and B_ij.
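The block-entropy weighting of Eqs. (1) and (2) can be sketched as follows. This is a minimal illustration assuming 8-bit channels, a non-uniform image, and image dimensions divisible by the block size; Shannon entropy is computed from each channel's normalized histogram.

```python
import numpy as np

def channel_entropy(channel):
    """Shannon entropy (bits) of one 8-bit channel, from its normalized histogram."""
    hist = np.bincount(channel.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

def block_entropy_weights(img, block=16):
    """Per-block image entropy (Eq. 1) and normalized weights (Eq. 2).

    img: H x W x 3 uint8 array, H and W multiples of `block`.
    E[i, j] is the mean of the R, G, B channel entropies of block (i, j);
    W is E normalized so the weights sum to 1.
    """
    m, n = img.shape[0] // block, img.shape[1] // block
    E = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            sub = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            E[i, j] = np.mean([channel_entropy(sub[..., c]) for c in range(3)])
    return E, E / E.sum()
```

A uniform single-color block has near-zero entropy and so receives almost no weight, which is exactly the mechanism the text uses to suppress large single-color regions.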
According to the sub-block entropy weights, the weighted mean R, G, and B channel values of the whole image are calculated [16]:

EW_R = Σ_{i=1..m} Σ_{j=1..n} W_ij · R_ij
EW_G = Σ_{i=1..m} Σ_{j=1..n} W_ij · G_ij    (3)
EW_B = Σ_{i=1..m} Σ_{j=1..n} W_ij · B_ij

After obtaining the entropy-weighted means of the R, G, and B channels, their average is taken as the average gray value of the image, namely:

Gray = (EW_R + EW_G + EW_B) / 3    (4)

The Von Kries diagonal model is applied to update the R, G, and B components of each pixel, correcting the entire image and clamping the corrected pixels to the displayable range [0, 255]:

R' = R · R_Gain if R · R_Gain < 255, otherwise 255
G' = G · G_Gain if G · G_Gain < 255, otherwise 255    (5)
B' = B · B_Gain if B · B_Gain < 255, otherwise 255

For imaging devices such as cameras, there are large differences between the colors of images acquired under different lighting conditions. In a normal color temperature environment, the acquired image has no color cast and is basically consistent with human visual perception. However, when the color temperature is high, the image is blue overall, and when the color temperature is low, the image is yellowish overall. Figure 2 shows the same scene imaged at different color temperatures; there is a significant difference between the two. When white balance correction is performed on images acquired under different lighting conditions, correcting every image with a uniform coefficient cannot produce good correction results under all conditions. Therefore, before correcting the image, it is necessary to first detect whether the image has a color cast. If it is a color cast image, its type must be determined. After the specific color cast of the image is known, the image can be processed with different coefficients in a targeted manner.

Fig. 1 Image block mode

According to this idea, an automatic white balance algorithm based on image color cast detection is proposed. When measuring the degree of deviation between two colors in the commonly used RGB color space, the calculated difference does not correctly represent the difference people actually perceive. According to color space theory, the CIE Lab color space is close to the perceptual characteristics of the human visual system, and color distances calculated in this space agree with actual human perception. Therefore, the CIE Lab color space is used to examine the color cast of the image. The color cast of an image is related to the chromaticity distribution characteristics and the chromaticity mean of the image. When there is a color cast, the two-dimensional histogram distribution on the ab chromaticity plane is concentrated, basically forming a single peak; at the same time, the larger the chromaticity mean, the larger the color deviation. In contrast, if the two-dimensional histogram distribution is more scattered and shows obvious multiple peaks, the color cast is weaker. Therefore, the concept of an equivalent circle is introduced, and the ratio of the image's mean chromaticity to its chromaticity center distance is used to measure the degree of color shift. The color cast factor K is defined as the ratio of the average chromaticity D of the image to the chromaticity center distance C. The calculation is shown in Eq. (6); the average chromaticity D and the chromaticity center distance C are given in Eqs. (7) and (8), respectively.
K = D / C    (6)

D = sqrt(d_a^2 + d_b^2), where d_a = (Σ_{i=1..M} Σ_{j=1..N} a) / (MN), d_b = (Σ_{i=1..M} Σ_{j=1..N} b) / (MN)    (7)

C = sqrt(C_a^2 + C_b^2), where C_a = (Σ_{a=min_a..max_a} |a − d_a| P(a)) / (MN), C_b = (Σ_{b=min_b..max_b} |b − d_b| P(b)) / (MN)    (8)

Here, M and N are the width and height of the image in pixels, and P(a) and P(b) are the histograms of the a and b components, respectively. On the ab chromaticity plane, the center coordinate of the equivalent circle is (d_a, d_b), its radius is C, and the distance from the center of the equivalent circle to the origin of the ab chromaticity plane (a = 0, b = 0) is D. Within a certain range of the equivalent circle, the image is considered to have no color shift. Beyond that range, it is considered to have a color shift, and the farther the deviation, the larger the color cast. The threshold of the color cast factor is K_fold. If K > K_fold, the image is considered to have a color cast; otherwise, it is considered cast-free. Generally, K_fold is taken as 1. When the picture has a color cast, its type is judged by the specific position of the equivalent circle on the ab chromaticity plane. The color cast grades are reddish, bluish, greenish, and yellowish. When d_a ≥ 0 and −d_a ≤ d_b ≤ k1·d_a, the picture is reddish. When d_a < 0 and d_a ≤ d_b ≤ k2·d_a, the picture is greenish. When d_b < 0 and |d_a| < |d_b|, the picture is bluish. In other cases, the picture is yellowish. The schematic diagram of the color cast is shown in Fig. 3. By evaluating and analyzing multiple images, k1 is taken as 2 and k2 as −2. Through color cast detection, the color cast of the image can be determined. Therefore, in white balance correction, different correction coefficients can be used to correct the image according to the specific color cast.
In the correction, the image is first histogram equalized. In the RGB color space, the R, G, and B channels are each histogram equalized to obtain the equalized image, initially weakening the color cast. The image is then converted from the RGB color space to the YCbCr color space, yielding the equalized data (Y_Hist, Cr_Hist, Cb_Hist). All pixels satisfying Eq. (9) are found:

Y_Hist ≥ 210, −3 ≤ Cr_Hist ≤ +3, −3 ≤ Cb_Hist ≤ +3    (9)

Among the pixels satisfying Eq. (9), the highest-luminance pixel (Y^bright_Hist, Cr^bright_Hist, Cb^bright_Hist) is found as the one with the maximum Y_Hist and the Cr_Hist, Cb_Hist values closest to zero, and the average (Y^avg_Hist, Cr^avg_Hist, Cb^avg_Hist) of all pixels satisfying Eq. (9) is calculated. Then, all pixels satisfying Eq. (10) are found from the histogram-equalized image data:

Y_L ≤ Y_Hist ≤ Y_H
Cr_L ≤ Cr_Hist ≤ Cr_H    (10)
Cb_L ≤ Cb_Hist ≤ Cb_H

Here, Y_L and Y_H are the minimum and maximum values selected between Y^bright_Hist and Y^avg_Hist; Cr_L and Cr_H are the minimum and maximum values selected between Cr^bright_Hist and Cr^avg_Hist; and Cb_L and Cb_H are defined likewise. For a pure white point (255, 255, 255), the corresponding YCbCr value is (255, 0, 0). Therefore, the larger the Y value and the closer the Cb and Cr values are to 0, the closer the pixel is to a white point. The white points in the image are initially selected by the condition of Eq. (9). Equation (10) further tightens the conditions for white points and removes some pixels from the initial selection; the remaining pixels are the white points that satisfy the condition.

Fig. 2 a, b Color cast image at different color temperatures
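The two-stage white-point selection of Eqs. (9) and (10) can be sketched as follows. This is a minimal illustration; it assumes Cr and Cb are centered on zero (a common convention subtracts 128 from 8-bit chroma) and takes the bounds of Eq. (10) between the brightest candidate's values and the candidate means, as described above.

```python
import numpy as np

def candidate_white_points(Y, Cr, Cb):
    """Initial white-point selection (Eq. 9): very bright pixels whose
    chroma lies within +/-3 of zero."""
    return (Y >= 210) & (np.abs(Cr) <= 3) & (np.abs(Cb) <= 3)

def refine_white_points(Y, Cr, Cb, mask):
    """Tightened selection (Eq. 10): keep candidates whose Y, Cr, Cb each lie
    between the brightest candidate's value and the candidate average."""
    if not mask.any():
        return mask
    planes = [Y[mask], Cr[mask], Cb[mask]]
    k = int(np.argmax(planes[0]))              # index of brightest candidate
    keep = np.ones(planes[0].shape, dtype=bool)
    for plane in planes:
        lo, hi = sorted((float(plane[k]), float(plane.mean())))
        keep &= (plane >= lo) & (plane <= hi)
    out = np.zeros(mask.shape, dtype=bool)
    out[mask] = keep
    return out
```

The mean RGB of the surviving pixels then serves as the reference white (R_w, G_w, B_w) used in the correction step that follows.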
From the equalized image data (Y_Hist, Cr_Hist, Cb_Hist), the pixels at the positions satisfying Eq. (10) are selected as the reference white pixels, and the average value (R_w, G_w, B_w) of the reference white pixels is calculated. The color cast of the image is then judged. If it is considered that there is no color shift, no processing is performed on the image; if there is a color shift, the corresponding scale factors are obtained according to the color cast level. The Von Kries transformation is then applied to correct the original image data, updating the R, G, and B components of the image. This completes the automatic white balance algorithm based on image color cast detection.

R' = R · R_Gain if R · R_Gain < 255, otherwise 255
G' = G · G_Gain if G · G_Gain < 255, otherwise 255    (11)
B' = B · B_Gain if B · B_Gain < 255, otherwise 255

This study chooses the median filter method for image denoising. The classical median filter is a commonly used rank-order method that can effectively filter impulse noise and has high application value. However, in system application, it blurs the details of the courseware image, and as the density of impulse noise increases, its filtering performance declines markedly, which seriously hinders the application of multimedia systems in distance education. Therefore, we propose an improved median filtering method that further optimizes the original algorithm to ensure the integrity of courseware image details, resolving the traditional algorithm's conflict between filtering performance and image detail integrity. The improved median filtering algorithm can not only filter the noise in the courseware image but also preserve the integrity of courseware image details, making it the first choice for the system.
The classical median filter applies the same operation to every pixel of the image, which changes the values of true signal points and blurs the image, so it is not suitable for the system in this paper. If true pixel signal points and noise-contaminated points are accurately distinguished, misoperation on true signal points can be avoided and the clarity of the courseware image improved. Using an improved switching median filtering method, each pixel of the courseware image can be accurately judged to avoid misoperation on true signal points. For the pixels of a color image, there is a strong correlation between adjacent points: the brightness value of a point in the image is close to the brightness values of its neighbors, while isolated points and points at sharp departures from their surroundings are generally considered noise points. Therefore, in a courseware image, if the value of a pixel differs greatly from its neighborhood values, the correlation between the pixel and its neighborhood is low, the point is considered a noise point, and filtering is applied to it. If the point's value is close to its neighborhood values, it is a true signal point. The extreme median filtering algorithm evaluates noise in this way and has strong processing power for large impulse noise. However, some thin lines and narrower edge details in a color courseware image also differ significantly from the pixel values of adjacent areas. In this case, the extreme median filtering algorithm misjudges such pixels as noise points, and accurate filtering of image noise points cannot be achieved.

Fig. 3 a–c Image denoising results
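The switching idea described above, filtering only pixels that break the local correlation, can be sketched as follows. This is a minimal illustration; the deviation threshold is an assumed tuning constant, not a value given in the paper.

```python
import numpy as np

def is_noise_point(window, thresh=40.0):
    """Switching test: flag the center pixel of a (2N+1)x(2N+1) brightness
    window as a noise point when it deviates strongly from the window
    median, i.e., when its correlation with the neighborhood is low."""
    c = window.shape[0] // 2
    center = float(window[c, c])
    return abs(center - float(np.median(window))) > thresh
```

Pixels failing this test are passed to the filter; pixels passing it are left untouched, which is what preserves true signal points.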
Since the traditional median filtering algorithm has certain limitations, it needs to be optimized. Therefore, an improved median filtering algorithm based on median filtering theory is proposed. The improved method uses segment-wise summarization: it takes the pixel to be analyzed as the center point of a (2N + 1) × (2N + 1) square range. This range is set as a window, the pixels within the window are arranged by brightness level, and the arranged pixel values are divided into 2N + 1 small segments. The mean of the 2N + 1 pixel values in the middle segment is used as the new pixel brightness at the center point. Unlike the conventional method, this method reduces and eliminates high-frequency components in the Fourier domain, components that differ from the brightness values at image edges. Therefore, the improved filtering algorithm can remove high-frequency components in the image while preserving its flatness. As the window moves across the image, the improved median filtering algorithm maximizes the smoothness of the image and improves the integrity of image details. The specific process is as follows: (1) In the courseware image, a (2N + 1) × (2N + 1) filter window is set so that the center of the filter window coincides with the pixel being processed. (2) The brightness values of the pixels in the filter window are collected. (3) The brightness values are sorted in ascending order and divided into 2N + 1 segments, each containing 2N + 1 pixel brightness values. (4) The 2N + 1 pixel brightness values of the middle segment are collected, and their average value is computed. (5) The obtained mean value is used to update the pixel value at the window center.
(6) Steps (3)-(5) are repeated until all pixels have been processed. The output pixel of the improved median filtering algorithm is strongly correlated with the mean of the 2N + 1 pixel brightness values of the central segment of the neighboring image range. According to the correlation between the data points in the window, damaged data can be corrected and noise points filtered, improving the sharpness of the image and ensuring the integrity of image details.

3 Results
To verify the effectiveness of the image correction algorithm in this study, we compared the image correction effects of Gaussian image processing, neural network image correction, and the proposed algorithm through experiments. In the actual research, a frame captured from a distance education video was taken as the research object; the original image is shown in Fig. 4 (original image). First, the image grayscale processing results are compared; the results are shown in Fig. 5. Among them, Fig. 5a is the processing result of the Gaussian image method, Fig. 5b is the processing result of the neural network image method, and Fig. 5c is the processing result of the method of the present study. Edge recognition is then performed on the image; the results are shown in Fig. 6. Among them, Fig. 6a is the processing result of the Gaussian image method, Fig. 6b is the processing result of the neural network image method, and Fig. 6c is the processing result of the method of the present study. Finally, the correction effects of the images are compared; the results are shown in Fig. 7. Among them, Fig. 7a is the processing result of the Gaussian image method, Fig. 7b is the processing result of the neural network image method, and Fig. 7c is the processing result of the method of the present study.
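Steps (1)-(6) of the improved median filter described in Section 2 can be sketched as follows. This is a minimal single-channel illustration; edge padding is an assumption, since the paper does not specify border handling.

```python
import numpy as np

def improved_median_filter(img, N=1):
    """Improved median filter, steps (1)-(6) of Section 2.

    For each pixel, the (2N+1)x(2N+1) window's brightness values are
    sorted, split into 2N+1 equal segments, and the pixel is replaced by
    the mean of the middle segment (a middle-rank average rather than the
    single median). img: 2-D brightness array.
    """
    k = 2 * N + 1
    pad = np.pad(img.astype(float), N, mode="edge")   # assumed border handling
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            win = np.sort(pad[y:y + k, x:x + k], axis=None)  # step (3)
            mid = win[N * k:(N + 1) * k]                     # middle segment, k values
            out[y, x] = mid.mean()                           # steps (4)-(5)
    return out
```

Averaging the middle-ranked segment, instead of taking a single order statistic, is what gives the smoother output the text attributes to the improved filter while still rejecting extreme impulse values.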
4 Discussion and analysis
The Gaussian image algorithm effectively combines details at different levels through multi-resolution decomposition, so it expresses image contrast and saturation well, and its color performance is closer to the actual scene. However, the algorithm does not handle particularly bright or dim details completely, as shown by the lack of sharpness in Fig. 5a and the limited clarity of Fig. 7a after image correction. At the same time, because of the pyramid decomposition, its computational overhead is relatively large. The neural network algorithm is simple and easy to implement. For the image with small dynamic range change in Fig. 4, its overall processing effect is acceptable. However, when the dynamic range of the image changes greatly, the composite image shows obvious color degradation, and the color transitions are unnatural. For example, there is a defect in the grayscale processing effect shown in Fig. 5b, relatively significant noise interference in the recognition result of Fig. 6b, and residual unclarity in the enhanced image of Fig. 7b. The color degradation of this method's images is obvious, and local boundaries are clearly visible, giving the image a visual feel similar to an oil painting that deviates from the actual perception of the human eye. At the same time, its detail processing is also unsatisfactory, and the overall image feels vague and unclear. The improved algorithm of this research uses a layered processing model to decompose the color channels and then processes the image brightness channel. The method effectively reduces computational overhead while ensuring the fusion effect, and avoids the color migration caused by ignoring the intrinsic relationship between the color channels in RGB space.
Comparing the correction results of the algorithms, the synthesized image is significantly better than the Gaussian algorithm result and the neural network result. Observing the grayscale processing effect of Fig. 5c, the edge recognition result of Fig. 6c, and the image correction of Fig. 7c, the improved algorithm is very clear in the details, and the subtle parts of areas of different brightness are better expressed. At the same time, the resulting images are more in line with the actual perception of the human eye and more similar to the real scene.

Fig. 5 a–c Comparison of image grayscale processing results
Fig. 6 a–c Comparison of image edge recognition results

The results were further examined using an improved image quality assessment method. The image saturation S, the information entropy H, and the standard deviation SD are calculated in the HSI color space, and a comprehensive evaluation factor based on these three is used as the final evaluation index to analyze the result image output by each algorithm. Overall, the information entropy and standard deviation of the Mertens algorithm are relatively large; that is, the image content and dynamic range expansion are excellent, and the details and definition are complete. Gaussian image processing results have relatively large saturation, an advantage in image color processing, but their handling of detail and definition is slightly inferior. The overall performance of the neural network is average, but its advantage is that it is easy to compute. In summary, the improved algorithm yields better output.
First, compared with the Gaussian image processing algorithm, the improved algorithm improves each evaluation factor and effectively draws on that algorithm's low-complexity processing idea. It has advantages in detail extraction and dynamic range expansion, and is clearly superior to the other algorithms. When the comprehensive evaluation factor is used to examine image quality, the research shows that the output of the improved algorithm is significantly better than Gaussian image processing and neural network image processing, and its results are slightly better in many respects. Although the improvement over the neural network is not large, the improved algorithm achieves better fusion results with less computational overhead and effectively expands the dynamic range of the image.

5 Conclusion
This study analyzes the image problems in remote network education, applies digital image processing technology to remote network image correction, and on this basis derives an effective strategy. The study uses image entropy to measure the richness of image color. First, the image is evenly divided; since the entropy values of different sub-blocks differ, weights are assigned according to sub-block entropy, thereby reducing the influence of a single-color block. For imaging devices such as cameras, there are large differences between the colors of images acquired under different lighting conditions. In a normal color temperature environment, the acquired image has no color cast and is basically consistent with human visual perception. The color cast of the image is judged: if there is no color shift, no processing is performed on the image; if there is a color shift, the corresponding scale factors are obtained according to the color cast level.
The research proposes an improved median filtering method, which further optimizes the original algorithm, preserves the integrity of the courseware image details, and resolves the contradiction between traditional filtering and detail integrity. Finally, the effectiveness of the proposed algorithm is verified by experiments: compared with the Gaussian image processing algorithm and the neural network image processing algorithm, the proposed algorithm shows superior performance.

Acknowledgements
The author thanks the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding
Not applicable.

Availability of data and materials
Please contact the author for data requests.

Author's contributions
The author carried out the work described in this paper, and read and approved the final manuscript.

Competing interests
The author declares that she has no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Fig. 7 Comparison of image correction results

Received: 27 October 2018 Accepted: 7 January 2019

References
1. Y. Xu, J.T. Dong, Z.Q. Wang, Fuzzy based distance correction algorithm for digital image interpolation. Appl. Mech. Mater. 513-517, 1549–1554 (2014)
2. G. Cao, C.Y. Zhang, Y. Zhang, The traffic accident scene drawing system based on image trapezoid distortion correction. Appl. Mech. Mater. 713-715, 1996–1999 (2015)
3. Y. Zhu, B. Chen, M. Qin, et al., 2-D micromachined thermal wind sensors—a review. IEEE Internet Things J. 1(3), 216–232 (2017)
4. S.W. Zhang, X.N. Zhang, Z.Y. Wu, et al., Research on asphalt mixture injury digital image based on enhancement and segmentation processing technology. Appl. Mech. Mater. 470, 832–837 (2014)
5. I. Urdapilleta, L. Dany, J. Boussoco, et al., Culinary choices: a sociopsychological perspective based on the concept of distance to the object. Food Qual. Prefer. 48, 50–58 (2016)
6. V.N. Kopenkov, On halting the process of hierarchical regression construction when implementing computational procedures for local image processing. Pattern Recognit. Image Anal. 24(4), 506–510 (2014)
7. M.K. Hoon, F.A. Cannone, J.J. Hoon, Microstructural analysis of asphalt mixtures using digital image processing. Can. J. Civ. Eng. 41(1), 74–86 (2014)
8. W. Li, X. Zhang, Z. Wang, Music content authentication based on beat segmentation and fuzzy classification. EURASIP J. Audio Speech Music Process. 2013(1), 11 (2013)
9. S. Vanonckelen, V. Rompaey, et al., The effect of atmospheric and topographic correction methods on land cover classification accuracy. Int. J. Appl. Earth Obs. Geoinf. 24(1), 9–21 (2013)
10. V. López, A. Fernández, M.J. del Jesus, et al., A hierarchical genetic fuzzy system based on genetic programming for addressing classification with highly imbalanced and borderline data-sets. Knowl.-Based Syst. 38(2), 85–104 (2013)
11. G. Bernardini, E. Quagliarini, M. D'Orazio, Towards creating a combined database for earthquake pedestrians' evacuation models. Saf. Sci. 82, 77–94 (2016)
12. K. Adikaram, M.A. Hussein, M. Effenberger, et al., Outlier detection method in linear regression based on sum of arithmetic progression. Sci. World J. 2014, 821623 (2014)
13. M.L. Firdaus, F. Trinoveldi, I. Rahayu, et al., Determination of chromium and iron using digital image-based colorimetry. Procedia Environ. Sci. 20, 298–304 (2014)
14. Y. Imamverdiyev, A.B.J. Teoh, J. Kim, Biometric cryptosystem based on discretized fingerprint texture descriptors. Expert Syst. Appl. 40(5), 1888–1901 (2013)
15. J. Poignant, L. Besacier, G. Quénot, Unsupervised speaker identification in TV broadcast based on written names. IEEE/ACM Trans. Audio Speech Lang. Process. 23(1), 57–68 (2015)
16. J.B. Liu, X.H. Zhang, H.B. Liu, et al., Correction method for non-landing measuring of vehicle-mounted theodolite based on static datum conversion. Sci. China Technol. Sci. 56(9), 2268–2277 (2013)