key: cord-0169528-uwtqy89a authors: Yang, Liqiao; Liu, Yang; Kou, Kit Ian title: Quaternion Optimized Model with Sparse Regularization for Color Image Recovery date: 2022-04-19 journal: nan DOI: nan sha: 22dedeeff424b881c58a44f8459796292344c097 doc_id: 169528 cord_uid: uwtqy89a This paper addresses the color image completion problem via low-rank quaternion matrix optimization with sparse regularization in a transformed domain. This research was inspired by the fact that different signal types, including audio formats and images, possess structures that are inherently sparse with respect to their respective bases. Since color images can be processed as a whole in the quaternion domain, we depict the sparsity of the color image in the quaternion discrete cosine transform (QDCT) domain. In addition, the representation of the low-rank structure intrinsic to the color image is a vital issue in the quaternion matrix completion problem. To achieve a superior low-rank approximation, the quaternion-based truncated nuclear norm (QTNN) is employed in the proposed model. Moreover, the model is solved by an efficient alternating direction method of multipliers (ADMM) based algorithm. Extensive experimental results demonstrate that the proposed method yields superior completion performance in comparison with state-of-the-art low-rank matrix/quaternion matrix approximation methods tested on color image recovery. In terms of image processing, the purpose of color image completion is to use the known pixels in an image, however limited, to recover missing pixels. To achieve this, many matrix-based completion (MC) approaches have been designed; in general, a color image is processed by separating its RGB channels into three matrices, and prior knowledge about the desired model is then used to achieve inpainting. An effective and widely used source of such prior knowledge is low-rankness.
In [1], the nuclear norm (NN) was proved to be the tightest feasible convex relaxation of the matrix rank function, whose direct minimization is NP-hard. Following on from this, other researchers have shown that when all singular values are treated equally in nuclear norm minimization, the ensuing inpainting results are sub-optimal. Based on this, various works have been developed to improve on NN, such as the weighted nuclear norm [2], the truncated nuclear norm (TNN) [3], and the logarithmic norm [4]. However, although these approaches have improved the inpainting results for color images, especially compared with classic NN methods, such matrix-based approaches involve dimension reduction, which can destroy the structure of the color image. To avoid this, quaternion-based approaches have gradually become more common in image processing, as these allow the values of one pixel in the color image to be placed in the three imaginary parts of one quaternion to form a more faithful quaternion matrix. As an extension of complex numbers, a quaternion provides a representation that exactly matches the structure of a color pixel, which usually contains three values drawn from the RGB channels separately. Thus, quaternion-based approaches are widely used in various types of color image processing, including color image recovery [5]-[7], color image watermarking [8], color face recognition [9], and so on. For color image completion in the quaternion domain, the typical prior knowledge is again low-rankness, analogous to the matrix-based case. As an example, the authors in [5] proposed a low-rank quaternion approximation model based on several modified quaternion nuclear norms (QNN), including Laplace, Geman, and weighted Schatten-γ functions. There are two reasons why this series of methods can improve inpainting performance.
The first is that a single color image can be placed in one quaternion matrix for processing, so that correlation among the RGB channels is preserved; the other is that these improved functions can approximate the rank of the quaternion matrix more precisely than QNN, which, like NN in the matrix-based case, treats all singular values equally. However, calculating the quaternion singular value decomposition (QSVD) is computationally expensive: each iteration requires the singular value decomposition of a corresponding complex matrix twice the size of the original quaternion matrix. To overcome the time-consuming nature of QSVD, researchers in [6] developed a low-rank quaternion matrix factorization approach that factorizes the target quaternion matrix into the product of two smaller quaternion factor matrices. This factorization method is based on three kinds of quaternion bilinear matrix norms: the quaternion double Frobenius norm (Q-DFN), the quaternion double nuclear norm (Q-DNN), and the quaternion Frobenius/nuclear norm (Q-FNN). In this factorization model, the low-rank estimation process retains higher computing efficiency because only two smaller quaternion factor matrices need to be optimized. However, as in the matrix-based case, this factorization method may become trapped in local minima [10], and the QSVD of the factor quaternion matrices must still be computed in each iteration. As described above, the various low-rank completion methods can be divided into two branches: rank estimators supported by various regularization processes, and low-rank factorization. Based on such low-rankness assumptions, the previously developed quaternion-based methods can process color images as a whole to obtain better recovery results; however, they ignore other important properties such as sparsity.
The direct motivation to reconsider this derives from the fact that various kinds of signals, including audio and images, have naturally sparse structures with regard to given bases such as Fourier and wavelet [11]. This fact has inspired new approaches in signal processing in recent years; in particular, for vision tasks based on matrix optimization, the l_1 norm has been used extensively for face recognition [12], [13], image denoising and completion [14]-[17], and similar tasks. In terms of image completion, as highlighted in [17], sparsity is also an important property that can be used in MC. Specifically, in certain transformed domains, piecewise smooth functions have the property of being sparse; based on this, in [14], such sparsity was depicted using the l_1 norm in the discrete cosine transform (DCT) domain, while low-rankness was depicted by TNN. The TNN obtains a more accurate approximation of the rank than classic NN in this circumstance, as the first few largest singular values do not influence the rank. Moreover, the TNN also reduces the computation time for the SVD of the target matrix X ∈ R^{m×n}, since it only needs to minimize the sum of the min(m, n) − r smallest singular values, where r is the truncation number. Utilizing quaternion-based optimization, although multiplication is non-commutative in the quaternion domain, color images may be processed as a whole, and consequently sparsity may be used simultaneously for image processing in the quaternion domain. In [18], a quaternion-based dictionary-learning algorithm was used for the sparse representation (SR) of color images, while in [19], a new robust estimator was used to measure the quaternion residual error for SR.
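As a quick numerical illustration of this sparsity claim (a toy NumPy/SciPy sketch, not taken from the paper): a piecewise-constant signal concentrates almost all of its energy in a handful of DCT coefficients.

```python
import numpy as np
from scipy.fft import dct

# A piecewise-smooth (here piecewise-constant) signal of length 128.
x = np.concatenate([np.full(64, 1.0), np.full(64, 3.0)])
c = dct(x, norm='ortho')          # orthonormal DCT-II, energy-preserving

# Fraction of the total energy captured by the 10 largest coefficients.
top10 = np.sort(np.abs(c))[::-1][:10]
ratio = np.sum(top10**2) / np.sum(c**2)
print(ratio)                      # close to 1: the signal is DCT-sparse
```

The same concentration effect is what the l_1 regularizer in the transformed domain exploits: most coefficients are near zero and can be shrunk away without losing signal energy.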
With regard to color image completion, in [20], a robust quaternion matrix completion algorithm was provided based on decomposing the target quaternion matrix into the sum of a low-rank quaternion matrix and a sparse quaternion matrix. Although a rigorous analysis was provided for this method, the work suggested that it cannot be applied to cases where the missing rate of images is excessively high, and, in addition, the convergence rate of the method is slow. Although the successes of these algorithms are encouraging, most of them operate only in the real domain; where images are optimized in the quaternion domain, the main direction is SR, and, in terms of image completion, the results seen in [20] are not very satisfactory. Hence, to develop more accurate reconstructions, sparsity is considered in this paper as an additional source of information to facilitate color image completion in the quaternion domain. This sparsity is represented via QDCT, with low-rankness depicted using the quaternion truncated nuclear norm (QTNN) proposed in our previous work [21]. The resulting novel model is named the Low-rank Quaternion Recovery with Sparse Regularization (LRQR-SR) model. The other main contributions of this work are as follows. • The work focuses on quaternion matrix completion based on combining low-rankness and sparsity. The underlying concept is that the quaternion-based method can preserve the structure of the color image, allowing the sparsity to be formulated as an l_1 norm regularizer in the transformed domain (QDCT). • The closed-form solution of the model, combining the Frobenius norm and the l_1 norm, is proposed and supported with theoretical analysis. • Extensive experimental results on real color images demonstrate the competitive performance of the proposed method in comparison with several state-of-the-art methods. The outline of the remainder of this article is as follows.
Section II reviews some preliminaries, Section III presents the proposed LRQR-SR model and the corresponding two-step ADMM-based optimization, Section IV presents the experimental results demonstrating the efficiency of the proposed LRQR-SR, and Section V concludes the work. In the real domain R, we denote scalars, vectors, and matrices as a, a, and A, respectively. In the quaternion domain H, we denote them as ȧ, ȧ, and Ȧ, respectively, and we denote the complex space as C. For a quaternion q̇, we denote the real and imaginary parts as R(q̇) and I(q̇). Transpose, conjugate transpose, and inverse are denoted (·)^T, (·)^H, and (·)^{-1}, respectively. We use ‖·‖_F and ‖·‖_* to represent the Frobenius norm and the nuclear norm, and define the inner product of *_1 and *_2 as ⟨*_1, *_2⟩ ≜ tr(*_1^H *_2), where tr(·) is the trace function. Both I_{r×r} and I_r denote the r × r identity matrix. Quaternions were proposed by Hamilton in 1843 [22]. A quaternion number q̇ ∈ H combines a real part and three imaginary parts and can be written in the form $\dot{q} = q_0 + q_1 i + q_2 j + q_3 k$, where q_n ∈ R (n = 0, 1, 2, 3) and i, j, k are three imaginary units satisfying $i^2 = j^2 = k^2 = ijk = -1$. R(q̇) ≜ q_0 is the real part of q̇, and I(q̇) ≜ q_1 i + q_2 j + q_3 k is the imaginary part; hence q̇ = R(q̇) + I(q̇). When the real part q_0 = 0, q̇ is a pure quaternion. The conjugate and the modulus of q̇ are defined as q̇* ≜ q_0 − q_1 i − q_2 j − q_3 k and $|\dot{q}| \triangleq \sqrt{\dot{q}\dot{q}^*} = \sqrt{q_0^2 + q_1^2 + q_2^2 + q_3^2}$. Given two quaternions ṗ and q̇ ∈ H, addition is component-wise, $\dot{p} + \dot{q} = (p_0 + q_0) + (p_1 + q_1)i + (p_2 + q_2)j + (p_3 + q_3)k$, and multiplication follows the Hamilton rules: $\dot{p}\dot{q} = (p_0 q_0 - p_1 q_1 - p_2 q_2 - p_3 q_3) + (p_0 q_1 + p_1 q_0 + p_2 q_3 - p_3 q_2)\,i + (p_0 q_2 - p_1 q_3 + p_2 q_0 + p_3 q_1)\,j + (p_0 q_3 + p_1 q_2 - p_2 q_1 + p_3 q_0)\,k$. It is important to note that multiplication in the quaternion domain is not commutative: in general, ṗq̇ ≠ q̇ṗ. A quaternion matrix Q̇ = (q̇_{ij}) ∈ H^{M×N} can be written as Q̇ = Q_0 + Q_1 i + Q_2 j + Q_3 k, where Q_n ∈ R^{M×N} (n = 0, 1, 2, 3) are real matrices. When Q_0 = 0, Q̇ is a pure quaternion matrix.
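As a concrete illustration of the Hamilton rules above, the following minimal NumPy sketch (the helper names `qmul` and `qconj` are ours, not from the paper) multiplies two quaternions represented as four-component arrays and checks non-commutativity and the modulus identity:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as arrays [q0, q1, q2, q3]."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,   # real part
        p0*q1 + p1*q0 + p2*q3 - p3*q2,   # i component
        p0*q2 - p1*q3 + p2*q0 + p3*q1,   # j component
        p0*q3 + p1*q2 - p2*q1 + p3*q0,   # k component
    ])

def qconj(q):
    """Conjugate: negate the three imaginary parts."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

p = np.array([1.0, 2.0, 0.0, 1.0])
q = np.array([0.0, 1.0, 3.0, 0.0])

# Multiplication is not commutative: pq and qp differ in general.
print(qmul(p, q))
print(qmul(q, p))

# |q|^2 = q q* = q0^2 + q1^2 + q2^2 + q3^2, a real number.
print(qmul(q, qconj(q))[0], np.sum(q**2))
```

Running the two products above produces different results, which is exactly the non-commutativity that later forces separate left- and right-handed forms of the QDCT.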
The Frobenius norm is defined as $\|\dot{\mathbf{Q}}\|_F = \sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N} |\dot{q}_{ij}|^2}$. Definition 1 (Cayley–Dickson form): Any quaternion matrix Q̇ ∈ H^{M×N} can be written as Q̇ = Q_a + Q_b j, where Q_a = Q_0 + Q_1 i and Q_b = Q_2 + Q_3 i are complex matrices in C^{M×N}. The isomorphic complex matrix representation of the quaternion matrix Q̇ can then be denoted as Q_c ∈ C^{2M×2N}: $\mathbf{Q}_c = \begin{pmatrix} \mathbf{Q}_a & \mathbf{Q}_b \\ -\mathbf{Q}_b^* & \mathbf{Q}_a^* \end{pmatrix}$. Definition 2 (The rank of a quaternion matrix [24]): The rank of a quaternion matrix Q̇ = (q̇_{ij}) ∈ H^{M×N} is defined as the maximum number of right (left) linearly independent columns (rows) of Q̇. Theorem 1 (QSVD [24]): Let Q̇ ∈ H^{M×N} be of rank r. There exist two unitary quaternion matrices U̇ ∈ H^{M×M} and V̇ ∈ H^{N×N} such that $\dot{\mathbf{Q}} = \dot{\mathbf{U}} \begin{pmatrix} \Sigma_r & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} \dot{\mathbf{V}}^H$, where Σ_r = diag(σ_1, ···, σ_r) ∈ R^{r×r} and all singular values σ_i > 0, i = 1, ···, r. QSVD and SVD have many properties in common, such as all singular values being nonnegative and ordered decreasingly. The rank of a quaternion matrix is the l_0 norm of the vector of its singular values; however, the l_0 norm is nonconvex, so QNN is considered instead, analogous to the concept of NN in the real domain. Definition 3 (QNN [5], [7]): The nuclear norm of a quaternion matrix Q̇ ∈ H^{M×N} is defined as $\|\dot{\mathbf{Q}}\|_* = \sum_{i=1}^{\min(M,N)} \sigma_i$, where the σ_i are the singular values obtained from the QSVD of Q̇. As observed in [25], larger singular values carry more information about the color image than smaller ones. Moreover, the first few largest singular values do not change the rank. Hence, the quaternion-based truncated nuclear norm (QTNN) is developed as follows. Definition 4 (QTNN [21]): Given a truncation number r, the truncated nuclear norm of Q̇ ∈ H^{M×N} is the sum of its min(M, N) − r smallest singular values: $\|\dot{\mathbf{Q}}\|_r = \sum_{i=r+1}^{\min(M,N)} \sigma_i$. (2) This section is divided into three parts. Subsection III-A gives the formulation of the proposed LRQR-SR, subsection III-B presents the quaternion discrete cosine transform used in this paper, and subsection III-C gives the corresponding ADMM-based optimization algorithm. This subsection introduces the formulation of a low-rank quaternion optimized model for color image recovery. Let Ȯ ∈ H^{M×N} be the partially observed color image. The purpose of the resulting problem is to recover Ẋ ∈ H^{M×N} by improving QNN so that the low-rank property is characterized more accurately.
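The isomorphic complex representation above gives one practical route to the QSVD (and explains the cost remark in Section I): compute an ordinary complex SVD of the 2M × 2N adjoint matrix, whose singular values occur in pairs. A minimal NumPy sketch (the helper `chi` is our name for the adjoint map, not the paper's):

```python
import numpy as np

def chi(Qa, Qb):
    """Complex adjoint of the quaternion matrix Q = Qa + Qb * j,
    where Qa = Q0 + Q1*i and Qb = Q2 + Q3*i are complex M x N matrices."""
    top = np.hstack([Qa, Qb])
    bot = np.hstack([-np.conj(Qb), np.conj(Qa)])
    return np.vstack([top, bot])

rng = np.random.default_rng(0)
M, N = 4, 5
Qa = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
Qb = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

s = np.linalg.svd(chi(Qa, Qb), compute_uv=False)   # 2*min(M, N) values

# Singular values of the complex adjoint appear in equal pairs; taking
# every other one recovers the min(M, N) quaternion singular values.
print(np.allclose(s[0::2], s[1::2]))
sigma = s[0::2]
```

This is why each QSVD costs an SVD of a complex matrix twice the size of the original quaternion matrix, the bottleneck that QTNN and factorization methods try to mitigate.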
The QNN model can be formulated as $\min_{\dot{\mathbf{X}}} \|\dot{\mathbf{X}}\|_* \ \mathrm{s.t.}\ P_\Omega(\dot{\mathbf{X}}) = P_\Omega(\dot{\mathbf{O}})$, (3) where Ω is the index set of the observed data, and the linear operator P_Ω(∗) keeps the elements in Ω unchanged while setting all other elements to zero. As the several largest singular values do not influence the rank, the QTNN model is proposed so that the estimation of low-rankness is more accurate. Based on model (3) and Definition 4, the QTNN model can be formulated as $\min_{\dot{\mathbf{X}}} \|\dot{\mathbf{X}}\|_r \ \mathrm{s.t.}\ P_\Omega(\dot{\mathbf{X}}) = P_\Omega(\dot{\mathbf{O}})$. (4) Being low rank is a necessary but insufficient condition for MC [17]. Motivated by this observation and the success of sparsity as utilized in MC [14]-[16], the LRQR-SR model is developed, which considers both a low-rank constraint and sparsity in the quaternion domain. The low-rank constraint is depicted using QTNN, while sparsity is depicted using the l_1 norm; as in the previously mentioned strategies, the quaternion matrix is assumed to be sparse in a certain transformed domain. Hence, the resulting model can be formulated as $\min_{\dot{\mathbf{X}}, \dot{\mathbf{D}}} \|\dot{\mathbf{X}}\|_r + \lambda \|\dot{\mathbf{D}}\|_1 \ \mathrm{s.t.}\ \dot{\mathbf{D}} = T(\dot{\mathbf{X}}),\ P_\Omega(\dot{\mathbf{X}}) = P_\Omega(\dot{\mathbf{O}})$, (5) where T(·) is the transform operator, Ḋ is the transformed quaternion matrix, and λ is a positive number. However, problem (5) is hard to solve directly because QTNN is nonconvex. To address this, Theorem 2 is applied so that problem (5) can be rewritten as $\min_{\dot{\mathbf{X}}, \dot{\mathbf{D}}} \|\dot{\mathbf{X}}\|_* - \mathfrak{R}\big(\mathrm{tr}(\dot{\mathbf{A}}\dot{\mathbf{X}}\dot{\mathbf{B}}^H)\big) + \lambda \|\dot{\mathbf{D}}\|_1 \ \mathrm{s.t.}\ \dot{\mathbf{D}} = T(\dot{\mathbf{X}}),\ P_\Omega(\dot{\mathbf{X}}) = P_\Omega(\dot{\mathbf{O}})$, (6) where Ȧ = (u̇_1, ···, u̇_r)^H and Ḃ = (v̇_1, ···, v̇_r)^H; {u̇_1, ···, u̇_r} and {v̇_1, ···, v̇_r} are the first r columns of U̇ and V̇, the left and right unitary quaternion matrices calculated by the QSVD of Ẋ. In this way, the whole procedure of the method can be divided into two main steps: in the first step, the truncation matrices are computed by QSVD, and the main goal then becomes optimizing problem (6). The overall procedure is summarized in Algorithm 1. Algorithm 1 Two-step framework of LRQR-SR. Input: the observed quaternion matrix Ȯ ∈ H^{M×N}, the position set of observed elements Ω, and the tolerance ε_0. 1: Initialize the iteration number l = 1, Ẋ_1 = Ȯ. 2: Repeat 3: Step 1. Compute the QSVD of Ẋ_l and form Ȧ_l, Ḃ_l from its first r left and right singular vectors. 4: Step 2. With Ȧ_l and Ḃ_l fixed, solve optimization problem (6) to obtain Ẋ_{l+1}. 5: Until ‖Ẋ_{l+1} − Ẋ_l‖_F ≤ ε_0. Output: the recovered quaternion matrix Ẋ_opt.
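The sampling operator P_Ω that appears in models (3)-(5) can be sketched as simple masking. In this toy NumPy sketch (not the paper's code) the "quaternion matrix" is mocked as an M × N × 3 array holding the three imaginary parts, i.e. the RGB channels, with zero real part:

```python
import numpy as np

def P_Omega(X, mask):
    """Sampling operator: keep entries in the index set Omega (mask True),
    set all other entries to zero. Applied channel-wise."""
    return X * mask

rng = np.random.default_rng(1)
X = rng.random((6, 6, 3))                 # ground-truth color image
mask = rng.random((6, 6)) < 0.3           # ~30% observed pixels (SR = 0.3)
O = P_Omega(X, mask[..., None])           # the observed data

# Feasibility constraint of (3)-(5): P_Omega(Xhat) = P_Omega(O).
Xhat = rng.random((6, 6, 3))              # any candidate recovery
Xhat = P_Omega(Xhat, ~mask[..., None]) + O   # enforce the constraint
print(np.allclose(P_Omega(Xhat, mask[..., None]), O))
```

The last two lines mirror the projection step used later in Algorithm 2, where the observed pixels are re-imposed after every iteration.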
Several key points of utilizing QDCT are outlined below. 1) The reasons for utilizing QDCT: Most importantly, the proposed method operates in the quaternion domain, where each color image is handled by quaternion algebra as a whole. In order to avoid destroying the RGB structure and to improve the accuracy of image recovery, the entire process must thus be operated in the quaternion domain. In addition, the spectral coefficients obtained using QDCT have strong energy concentration and good redundancy-elimination characteristics [26], while QDCT itself is easy to analyze quantitatively. Finally, in the real and complex fields, the energy concentration of the input information after the two-dimensional DCT is higher than after the DFT. Research into QDCT is driven by the existence of successful applications in both the real and complex domains, and for these reasons, QDCT is adopted in the proposed method. Fig. 1 gives an illustration of the proposed sparse representation on the color image "Parrot". The first image is the original image; the second, third, and fourth images respectively display the coefficients after QDCT, QDFT, and DCT (grayscale image) on a logarithmic scale. After transformation, the coefficients of QDCT and DCT are mainly concentrated in the upper left corner, and most of the remaining coefficients are close to zero. However, after the QDFT transformation, the coefficients are mainly concentrated at the four corners, which means that the cosine transform is superior to the Fourier transform for processing images in the quaternion domain. Besides, when comparing QDCT with DCT, especially in the upper left corner, it can be observed that QDCT has higher energy compaction than DCT. 2) Definition of QDCT: As the multiplication of quaternions is non-commutative, there are two forms of QDCT: a left-handed form QDCT^L and a right-handed form QDCT^R.
These can be formulated as the following equations, respectively [27]: $\mathrm{QDCT}^L(\dot{\mathbf{F}})(p,s) = \alpha(p)\,\alpha(s) \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} \dot{u}\, \dot{\mathbf{F}}(m,n)\, C(p,s,m,n)$ and $\mathrm{QDCT}^R(\dot{\mathbf{F}})(p,s) = \alpha(p)\,\alpha(s) \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} \dot{\mathbf{F}}(m,n)\, C(p,s,m,n)\, \dot{u}$, where Ḟ(m, n) ∈ H^{M×N}, m and n are the row and column indices of the quaternion matrix Ḟ, and u̇ is a pure quaternion satisfying u̇² = −1. The values of α(p), α(s), and C(p, s, m, n) are analogous to the DCT in the real domain: $\alpha(p) = \begin{cases} \sqrt{1/M}, & p = 0 \\ \sqrt{2/M}, & p \neq 0 \end{cases}$, $\alpha(s) = \begin{cases} \sqrt{1/N}, & s = 0 \\ \sqrt{2/N}, & s \neq 0 \end{cases}$, and $C(p,s,m,n) = \cos\!\left[\frac{\pi(2m+1)p}{2M}\right] \cos\!\left[\frac{\pi(2n+1)s}{2N}\right]$. Besides, the corresponding inverse transformation of QDCT is the Inverse Quaternion Discrete Cosine Transform (IQDCT); the two are transformation pairs of each other, so applying IQDCT to the QDCT coefficients recovers Ḟ. In the proposed algorithm, QDCT^L is utilized to calculate the QDCT. 3) Calculation of QDCT^L: To simplify the calculation of QDCT, we take full advantage of the Cayley–Dickson form seen in Definition 1, as in [27]. The whole process of the QDCT^L calculation is as follows: (a) Transform the given quaternion matrix Ḟ(m, n) ∈ H^{M×N} to the Cayley–Dickson form Ḟ(m, n) = F_p(m, n) + F_q(m, n) j, where F_p(m, n) and F_q(m, n) ∈ C^{M×N}. (b) Calculate the DCT of the complex matrices F_p(m, n) and F_q(m, n); denote the results DCT_C(F_p(m, n)) and DCT_C(F_q(m, n)), respectively. (c) Use DCT_C(F_p(m, n)) and DCT_C(F_q(m, n)) to form a quaternion matrix Ḟ'(m, n) = DCT_C(F_p(m, n)) + DCT_C(F_q(m, n)) j. (d) Multiply Ḟ'(m, n) by the quaternion factor u̇ to get the final result: QDCT^L(Ḟ(m, n)) = u̇ · Ḟ'(m, n). Following the model discussed in subsection III-A and the transformation introduced in III-B, ADMM is adopted to optimize problem (6). This involves introducing an auxiliary variable Ḣ and reformulating (6) as $\min_{\dot{\mathbf{X}}, \dot{\mathbf{H}}, \dot{\mathbf{D}}} \|\dot{\mathbf{X}}\|_* - \mathfrak{R}\big(\mathrm{tr}(\dot{\mathbf{A}}\dot{\mathbf{H}}\dot{\mathbf{B}}^H)\big) + \lambda \|\dot{\mathbf{D}}\|_1 \ \mathrm{s.t.}\ \dot{\mathbf{X}} = \dot{\mathbf{H}},\ \dot{\mathbf{D}} = T(\dot{\mathbf{X}}),\ P_\Omega(\dot{\mathbf{H}}) = P_\Omega(\dot{\mathbf{O}})$. (9) In analogy with the ADMM framework adopted in the complex domain [28], and noting that multiplication is not commutative in the quaternion domain, the augmented Lagrangian function of (9) can be written as $L(\dot{\mathbf{X}}, \dot{\mathbf{H}}, \dot{\mathbf{D}}, \dot{\mathbf{Y}}, \dot{\mathbf{Z}}) = \|\dot{\mathbf{X}}\|_* - \mathfrak{R}\big(\mathrm{tr}(\dot{\mathbf{A}}\dot{\mathbf{H}}\dot{\mathbf{B}}^H)\big) + \lambda \|\dot{\mathbf{D}}\|_1 + \mathfrak{R}\big(\langle \dot{\mathbf{Y}}, \dot{\mathbf{X}} - \dot{\mathbf{H}} \rangle\big) + \frac{\beta}{2}\|\dot{\mathbf{X}} - \dot{\mathbf{H}}\|_F^2 + \mathfrak{R}\big(\langle \dot{\mathbf{Z}}, \dot{\mathbf{D}} - T(\dot{\mathbf{X}}) \rangle\big) + \frac{\beta}{2}\|\dot{\mathbf{D}} - T(\dot{\mathbf{X}})\|_F^2$, (10) where Ẏ and Ż are the Lagrange multipliers, and β is the positive penalty parameter.
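Steps (a)-(d) of the Cayley–Dickson computation of QDCT^L described above can be sketched as follows. This is an illustrative NumPy/SciPy implementation, not the authors' code; it assumes SciPy's orthonormal DCT-II for DCT_C and, as one admissible choice, the unit pure quaternion u̇ = (i + j + k)/√3:

```python
import numpy as np
from scipy.fft import dct

def dct2(A):
    """2-D DCT-II (orthonormal) of a complex matrix, applied to the real
    and imaginary parts separately."""
    d = lambda X: dct(dct(X, axis=0, norm='ortho'), axis=1, norm='ortho')
    return d(A.real) + 1j * d(A.imag)

def qdct_left(F0, F1, F2, F3, u=(1/np.sqrt(3),) * 3):
    """Left-handed QDCT of F = F0 + F1 i + F2 j + F3 k via the
    Cayley-Dickson form F = Fp + Fq j, Fp = F0 + F1 i, Fq = F2 + F3 i.
    u = (u1, u2, u3) are the components of a pure unit quaternion."""
    Fp, Fq = F0 + 1j * F1, F2 + 1j * F3
    Gp, Gq = dct2(Fp), dct2(Fq)                  # steps (a)-(c)
    # Step (d): left-multiply G = Gp + Gq j by u = u1 i + u2 j + u3 k,
    # expanded with the Hamilton rules on the four real components of G.
    g0, g1, g2, g3 = Gp.real, Gp.imag, Gq.real, Gq.imag
    u1, u2, u3 = u
    r0 = -u1*g1 - u2*g2 - u3*g3
    r1 =  u1*g0 + u2*g3 - u3*g2
    r2 = -u1*g3 + u2*g0 + u3*g1
    r3 =  u1*g2 - u2*g1 + u3*g0
    return r0, r1, r2, r3                        # four real components
```

Because the DCT is orthonormal and |u̇ · ġ| = |ġ| for a unit u̇, this transform preserves total energy, which is precisely the Parseval property invoked in the Ẋ subproblem of the next subsection.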
Under the framework of ADMM, the variables Ẋ, Ḣ, Ḋ, Ẏ, and Ż are updated alternately in the p-th iteration. The Ẋ subproblem is $\dot{\mathbf{X}}^{p+1} = \arg\min_{\dot{\mathbf{X}}} \|\dot{\mathbf{X}}\|_* + \frac{\beta^p}{2}\big\|\dot{\mathbf{X}} - \dot{\mathbf{H}}^p + \frac{\dot{\mathbf{Y}}^p}{\beta^p}\big\|_F^2 + \frac{\beta^p}{2}\big\|\dot{\mathbf{D}}^p - T(\dot{\mathbf{X}}) + \frac{\dot{\mathbf{Z}}^p}{\beta^p}\big\|_F^2$. (12) In the last term of (12), Ẋ cannot be separated directly because of the transformation. However, the Parseval theorem in the quaternion domain indicates that the total energy of a signal computed in the quaternion transform domain equals its total energy computed in the spatial domain [29], [30]; that is, a unitary transformation preserves energy under the Frobenius norm. The last term of (12) can thus be rewritten as $\frac{\beta^p}{2}\big\| T_{\mathrm{IQDCT}^L}\big(\dot{\mathbf{D}}^p + \frac{\dot{\mathbf{Z}}^p}{\beta^p}\big) - \dot{\mathbf{X}} \big\|_F^2$, where T_{IQDCT^L} is the inverse transformation of QDCT^L. For a more concise representation, let T denote T_{QDCT^L} and IT denote T_{IQDCT^L}. Consequently, the closed-form solution of (12) is $\dot{\mathbf{X}}^{p+1} = D_{\frac{1}{2\beta^p}}\Big( \frac{1}{2}\big( \dot{\mathbf{H}}^p - \frac{\dot{\mathbf{Y}}^p}{\beta^p} + IT\big(\dot{\mathbf{D}}^p + \frac{\dot{\mathbf{Z}}^p}{\beta^p}\big)\big)\Big)$, (13) where D_τ(∗) is the quaternion singular value shrinkage operator [5], defined as $D_\tau(\dot{\mathbf{A}}) = \dot{\mathbf{U}}\,\mathrm{diag}\big(\max\{\sigma_i - \tau, 0\}\big)\,\dot{\mathbf{V}}^H$, where U̇, V̇, and the σ_i are obtained by computing the QSVD of the quaternion matrix Ȧ = U̇ΣV̇^H, Σ = diag(σ_1, ···, σ_r, 0, ···, 0) ∈ R^{M×N}. The Ḋ subproblem is $\dot{\mathbf{D}}^{p+1} = \arg\min_{\dot{\mathbf{D}}} \lambda\|\dot{\mathbf{D}}\|_1 + \frac{\beta^p}{2}\big\|\dot{\mathbf{D}} - T(\dot{\mathbf{X}}^{p+1}) + \frac{\dot{\mathbf{Z}}^p}{\beta^p}\big\|_F^2$. (14) To obtain the optimal solution of (14), we have the following theorem. Theorem 3: For any λ > 0, the closed-form solution of the problem $\min_{\dot{\mathbf{X}}} \lambda\|\dot{\mathbf{X}}\|_1 + \frac{1}{2}\|\dot{\mathbf{Y}} - \dot{\mathbf{X}}\|_F^2$ can be given by $\dot{\mathbf{X}}_{opt} = S_{2\lambda}(\dot{\mathbf{Y}})$, (15) where S_τ(·) represents the element-wise soft thresholding operator defined by $S_\tau(\dot{y}) = \frac{\dot{y}}{|\dot{y}|}\max\{|\dot{y}| - \tau, 0\}$. The proof of Theorem 3 is given in the Appendix. Based on Theorem 3, problem (14) has a closed-form solution given by $\dot{\mathbf{D}}^{p+1} = S_{\frac{2\lambda}{\beta^p}}\big( T(\dot{\mathbf{X}}^{p+1}) - \frac{\dot{\mathbf{Z}}^p}{\beta^p} \big)$. The Ḣ subproblem is $\dot{\mathbf{H}}^{p+1} = \arg\min_{\dot{\mathbf{H}}} -\mathfrak{R}\big(\mathrm{tr}(\dot{\mathbf{A}}\dot{\mathbf{H}}\dot{\mathbf{B}}^H)\big) + \frac{\beta^p}{2}\big\|\dot{\mathbf{X}}^{p+1} - \dot{\mathbf{H}} + \frac{\dot{\mathbf{Y}}^p}{\beta^p}\big\|_F^2$. Following the above equation, we obtain $\dot{\mathbf{H}}^{p+1} = \dot{\mathbf{X}}^{p+1} + \frac{\dot{\mathbf{Y}}^p}{\beta^p} + \frac{\dot{\mathbf{A}}^H \dot{\mathbf{B}}}{\beta^p}$. Moreover, the observed data should remain unchanged in each iteration, so $\dot{\mathbf{H}}^{p+1} = P_{\Omega^C}(\dot{\mathbf{H}}^{p+1}) + P_\Omega(\dot{\mathbf{O}})$. The update of the penalty parameter β^p is $\beta^{p+1} = \min\{\rho\beta^p, \beta_{\max}\}$, where β_max is the given maximum value of the penalty parameter, and ρ ≥ 1 is a constant parameter. Problem (9) is the Step 2 problem listed in Algorithm 1, and the whole procedure to solve it is summarized in Algorithm 2. Algorithm 2 ADMM solver for problem (9) in Step 2. Input: Ȯ, Ω, Ȧ_l, Ḃ_l, tolerance ε, and parameters λ, ρ, β_max. 1: Initialize Ẋ^1 = Ȯ, Ḣ^1 = Ḋ^1 = Ẋ^1, and β^1.
Let Ẏ^1 and Ż^1 be random quaternion matrices of the same size as Ẋ^1. 2: Repeat 3: Update Ẋ^{p+1} = D_{1/(2β^p)}( (Ḣ^p − Ẏ^p/β^p + IT(Ḋ^p + Ż^p/β^p)) / 2 ). 4: Update Ḋ^{p+1} = S_{2λ/β^p}( T(Ẋ^{p+1}) − Ż^p/β^p ). 5: Update Ḣ^{p+1} = Ẋ^{p+1} + Ẏ^p/β^p + Ȧ^H Ḃ/β^p, then Ḣ^{p+1} = P_{Ω^C}(Ḣ^{p+1}) + P_Ω(Ȯ). 6: Update Ẏ^{p+1} = Ẏ^p + β^p (Ẋ^{p+1} − Ḣ^{p+1}). 7: Update Ż^{p+1} = Ż^p + β^p (Ḋ^{p+1} − T(Ẋ^{p+1})). 8: Update β^{p+1} = min{ρβ^p, β_max}. 9: Until ‖Ẋ^{p+1} − Ẋ^p‖_F ≤ ε or p reaches the set maximum iteration number. Output: Ẋ^{p+1}. In this section, the effectiveness of the proposed LRQR-SR method is demonstrated through comparison with various relevant state-of-the-art methods. Subsection IV-A provides the experimental settings, Subsection IV-B presents the color image recovery results, and the experimental results are discussed in Subsection IV-C. A. Experimental Settings 1) Comparison Methods: Several relevant existing algorithms were used for comparison: D-N and F-N [31], TNNR [3], TNN-SR [14], Q-DNN and Q-FNN [6], LRQA [5], and QTNN [21]. The first four algorithms are matrix-based, while the last four are quaternion-based. D-N, F-N, Q-DNN, and Q-FNN use factorization to depict low-rankness, LRQA is based on modified QNN, and TNNR, TNN-SR, and QTNN utilize a truncated nuclear norm. 2) Test Data and Experimental Environment: Eight benchmark color images, shown in Fig. 2, were selected from the SIPI Image Database and the McMaster Dataset to demonstrate the effectiveness of the method. To demonstrate this effectiveness more fully, 50 color images were also randomly selected from the Berkeley Segmentation Dataset (BSD) as further test samples. All experiments were implemented in MATLAB R2019a on a PC with a 3.00 GHz CPU and 8 GB RAM. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) were utilized as the quality indices, and the best numerical results are highlighted in bold font.
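Before turning to the results, the update loop of Algorithm 2 can be illustrated with a real-domain analogue: ordinary SVD and plain nuclear norm stand in for QSVD/QTNN, and the real 2-D DCT stands in for QDCT. This is a sketch of the update pattern under those substitutions, not the authors' quaternion implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def svt(A, tau):
    """Singular value thresholding (real-domain analogue of D_tau)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(A, tau):
    """Element-wise soft thresholding (real-domain analogue of S_tau)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def admm_complete(O, mask, lam=0.07, beta=1e-4, rho=1.01,
                  beta_max=1e10, iters=200, tol=1e-6):
    """Real-domain sketch of the Step-2 ADMM: low-rank term (plain nuclear
    norm here, instead of the truncated form) plus l1 sparsity in the
    orthonormal DCT domain, with observed pixels re-imposed each pass."""
    T = lambda X: dctn(X, norm='ortho')
    IT = lambda D: idctn(D, norm='ortho')
    X = O.copy()
    H = X.copy()
    D = T(X)
    Y = np.zeros_like(X)
    Z = np.zeros_like(X)
    for _ in range(iters):
        # X-step: SVT of the average of the two consistency targets
        # (Parseval lets the DCT-domain term be pulled back spatially).
        X_new = svt(0.5 * ((H - Y / beta) + IT(D + Z / beta)),
                    1.0 / (2.0 * beta))
        D = soft(T(X_new) - Z / beta, lam / beta)   # D-step
        H = X_new + Y / beta                        # H-step ...
        H = np.where(mask, O, H)                    # ... keep observed data
        Y = Y + beta * (X_new - H)                  # multiplier updates
        Z = Z + beta * (D - T(X_new))
        beta = min(rho * beta, beta_max)
        if np.linalg.norm(X_new - X) <= tol * max(1.0, np.linalg.norm(X)):
            X = X_new
            break
        X = X_new
    return X
```

The soft-threshold factor here follows the usual real-valued derivation (lam / beta); in the quaternion setting the paper's Theorem 3 carries its own constant, and the shrinkage acts on the quaternion modulus rather than the sign.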
When processing image recovery with random samples, a larger Sample Rate (SR) value means more observed pixels in a given image. B. Color Image Recovery 1) Simulations with different parameters: As different settings of the parameters (β^1, λ, and the truncated number r) offer different performance levels, a range of parameters was used to test the performance of the proposed LRQR-SR algorithm, based on recovering randomly sampled images from Fig. 2. The influence of different values β^1 = {1e−4, 5e−4, 1e−3, 5e−3, 1e−2, 5e−2, 1e−1, 5e−1} on the experimental results was first tested with the other parameters fixed (λ = 0.1, r = 30, ρ = 1.01) and SR = {0.5, 0.3, 0.1}. The relevant PSNR and SSIM results are plotted in Figs. 3-5, showing that when β^1 ≥ 1e−2 the recovery effect is poor, while the best recovery results at all degrees of sampling are obtained when β^1 = 1e−4. With suitable settings of the remaining parameters, good recovery results are also obtained. This is consistent with the fact that the more pixels are missing in the observed image, the more low-rank constraint is required to improve the recovery effect: intuitively, when the observed image is missing many pixels, the truncated part contains less useful information. 2) Image recovery with random sampling: The LRQR-SR algorithm was compared with the several other methods mentioned previously by setting SR = {0.3, 0.2, 0.1}. The parameters of LRQR-SR were set as β^1 = 1e−4, λ = 0.07, ρ = 1.01, while the truncation number r = {40, 30, 20} was decided by the SR: the lower the SR, the less truncation is required. Fig. 12 displays the visual comparisons between the designed novel LRQR-SR method and the other methods for the eight tested color images when SR = 0.2. The PSNR and SSIM results for recovering the images in Fig. 2 with SR = {0.3, 0.2, 0.1} are presented in Table I.
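For reference, the PSNR index reported in Table I and throughout can be computed as follows (the standard definition for 8-bit images; the function name is ours):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a recovered
    image, in dB; higher means a closer reconstruction."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4, 3), 100.0)
rec = ref + 10.0                      # constant per-pixel error of 10
print(round(psnr(ref, rec), 2))       # 20*log10(255/10) = 28.13 dB
```

SSIM, the second index, additionally weighs local luminance, contrast, and structure, which is why the two scores can rank methods slightly differently.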
As shown, compared with the other methods across all SR values, the results obtained by D-N and F-N do not show particularly clear recovery. However, such factorization techniques are more effective when operated in the quaternion domain, as in Q-FNN and Q-DNN; the validity of quaternion-based methods is thus illustrated by these results. The results for TNNR are also inferior to those of TNN-SR, highlighting that utilizing only low-rankness as a prior is insufficient to recover an image accurately. This supports the reasoning behind introducing sparsity into the LRQR-SR method. In comparison with the other options, the developed LRQR-SR method also provides the most visually satisfying results, with crisp details. As shown by the data presented in Table I, when the value of SR is very low, utilizing only the truncation technique cannot recover the underlying images accurately, while LRQR-SR uniformly outperforms its comparators in terms of both PSNR and SSIM values. Further figures compare the visual results of all the competing completion approaches for recovering Vegetable, House, Airplane, and Barbara under SR = 0.1; the corresponding observed images are shown in Fig. 13. The image in the green box is the zoomed-in version of that in the red box. These images show that restricting recovery only to low-rankness may lose some local details. Moreover, the TNNR and QTNN approaches restore the image only roughly when the sample rate is low, with other methods suffering from similar problems. Comparing just the two best algorithms (TNN-SR and LRQR-SR), although the gap is not visually apparent across the restored images, the corresponding PSNR and SSIM values prove the superiority of the proposed method. 3) Image recovery under a text mask: The LRQR-SR algorithm was then compared with the other methods under a text mask. The parameters of LRQR-SR were set to β^1 = 1e−4, λ = 0.07, ρ = 1.01, and the truncated number r = 30. Figs. 18-21 compare the visual results of all the competing completion approaches for recovering Tree, Beans, Flower, and Splash under the text mask. The image in the green box is a zoomed-in version of that in the red box. As shown, the recovered results from D-N and F-N are very blurry, especially where the content of the image is complex. It can also be observed from the zoomed-in portion of Fig. 18 that the results for TNNR, LRQA, Q-FNN, and Q-DNN still leave some obvious artifacts in the blue area. Similar problems can be observed in Figs. 19-21: in Fig. 19, there are obvious artifacts on the red beans in the zoomed-in portion; in Fig. 20, some vertical lines remain in the enlarged area after the restoration; and in Fig. 21, some visible blemishes appear on the white part of the zoomed-in portion. In comparison, the TNN-SR and the proposed LRQR-SR approaches obtain better performance; although there is generally not much visual difference between the two, the corresponding PSNR and SSIM results show that the proposed method restores the images more effectively. 4) Recovery of 50 images to further demonstrate the effectiveness of the method: A set of 50 images was also recovered to further demonstrate the effectiveness of the method. In this simulation, 50 images were randomly selected from BSD as test samples; in order to keep the parameters of the comparison algorithms as in the original setting, these images were resized to 256 × 256 × 3 and recovered under random sampling (SR = 0.25). The corresponding PSNR and SSIM results are reported in Fig. 22 and Fig. 23. The simulation results offer various items for discussion, summarized as follows: • The newly developed LRQR-SR algorithm outperforms comparable existing algorithms both visually and numerically.
The main reasons for this can be summarized in three points: the algorithm is developed in the quaternion domain, where the spatial structure information of the color image is not destroyed; the model uses QTNN to depict low-rankness, which helps preserve the information contained in the first few large singular values; and, finally, the l_1 norm is added as a regularizer in the algorithm, helping to model the sparseness of the underlying quaternion matrix. • When compared with matrix-based methods: TNNR, D-N, and F-N only depict low-rankness by means of modified NN or low-rank factorization, so their recovery results are generally not satisfactory; as TNN-SR is based on both low-rank and sparse priors, its recovered results are improved. In general, for matrix-based methods, the RGB channels of color images must be processed separately, which breaks the correlation among the channels. This paper proposed a novel low-rank quaternion recovery model incorporating sparse regularization that describes the connections within the three-dimensional structure of a color image to obtain better approximations. The proposed LRQR-SR method is based on the use of QTNN to depict low-rankness, as well as taking advantage of the l_1 norm under QDCT to restrict the sparseness. A modified two-step ADMM framework was adopted to optimize the model. The experimental results on real color images illustrated the effectiveness of the resulting LRQR-SR, suggesting that, as the quaternion-based method can exploit correlations among color channels, it may be possible to combine this framework with deep learning methods in future work [32]. APPENDIX According to [5], we need to prove that problem (22) has one unique optimal solution Ẋ and that Ẋ equals Ẋ_opt defined in (15). Proof: It can be observed that both terms in (22) are convex; hence, (22) has one unique optimal solution. Based on the rules of quaternion matrix derivatives in [33], Ẋ must satisfy the following condition: $\mathbf{0} \in \dot{\mathbf{X}} - \dot{\mathbf{Y}} + 2\lambda\,\partial\|\dot{\mathbf{X}}\|_1$, (23) where ∂‖Ẋ‖_1 represents the subgradient of ‖Ẋ‖_1.
Following [20], the subgradient of the l_1 norm at Ẋ is given by the set of matrices whose (i, j) entry equals ẋ_{ij}/|ẋ_{ij}| when ẋ_{ij} ≠ 0 and has modulus at most 1 when ẋ_{ij} = 0, where direc(Ẋ) denotes the M × N matrix with entries [ẋ_{ij}/|ẋ_{ij}|]_{M×N}. It remains to verify that Ẋ_opt satisfies (23); we check the condition entry-wise. When ẏ > 2λ, so that ẋ > 0, then ẏ − ẋ = ẏ − (ẏ/|ẏ|) max{|ẏ| − 2λ, 0} = ẏ − (ẏ − 2λ) = 2λ. When −2λ ≤ ẏ ≤ 2λ, so that ẋ = 0, then ẏ − ẋ = ẏ; letting Ḟ = Ẏ/(2λ), we have ‖Ḟ‖_∞ ≤ 1. When ẏ < −2λ, so that ẋ < 0, then ẏ − ẋ = ẏ + (−ẏ − 2λ) = −2λ = 2λ · (−1). Based on the above discussion, we obtain 0 ∈ Ẋ_opt − Ẏ + 2λ ∂‖Ẋ_opt‖_1, which means that Ẋ_opt = Ẋ. REFERENCES [1] Exact matrix completion via convex optimization. [2] Weighted nuclear norm minimization and its applications to low level vision. [3] Fast and accurate matrix completion via truncated nuclear norm regularization. [4] Logarithmic norm regularized low-rank factorization for matrix and tensor completion. [5] Low-rank quaternion approximation for color image processing. [6] Quaternion-based bilinear factor matrix norm minimization for color image inpainting. [7] Quaternion-based weighted nuclear norm minimization for color image denoising. [8] A robust blind color image watermarking in quaternion Fourier transform domain. [9] Quaternion collaborative and sparse representation with application to color face recognition. [10] Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition. [11] Sparse representation for computer vision and pattern recognition. [12] Face recognition via weighted sparse representation. [13] Robust face recognition via sparse representation. [14] Low rank matrix completion using truncated nuclear norm and sparse regularizer. [15] A trilateral weighted sparse coding scheme for real-world image denoising. [16] Matrix completion by least-square, low-rank, and sparse self-representations. [17] Repairing sparse low-rank texture. [18] Vector sparse representation of color image using quaternion matrix analysis. [19] Robust sparse representation in quaternion space. [20] Robust quaternion matrix completion with applications to image inpainting. [21] Weighted truncated nuclear norm regularization for low-rank quaternion matrix
completion. [22] II. On quaternions; or on a new system of imaginaries in algebra. [23] Singular value decomposition of quaternion matrices: a new tool for vector-sensor signal processing. [24] Quaternions and matrices of quaternions. [25] Quaternion principal component analysis of color images. [26] Image splicing detection based on Markov features in QDCT domain. [27] Quaternion discrete cosine transform and its application in color template matching. [28] Alternating direction method of multipliers for separable convex optimization of real functions in complex variables. [29] An uncertainty principle for quaternion Fourier transform. [30] Quaternion Fourier transform on quaternion fields and generalizations. [31] Bilinear factor matrix norm minimization for robust PCA: algorithms and applications. [32] Color random valued impulse noise removal based on quaternion convolutional attention denoising network. [33] The theory of quaternion matrix derivatives.