key: cord-1014302-vv6vjke7 authors: nan title: A Compressed Sensing Approach to Pooled RT-PCR Testing for COVID-19 Detection date: 2021-04-27 journal: IEEE Open Journal of Signal Processing DOI: 10.1109/ojsp.2021.3075913 sha: 9e77026e8fbb605d3b2f8737dc0698f96a5f30ee doc_id: 1014302 cord_uid: vv6vjke7 We propose 'Tapestry', a single-round pooled testing method with application to COVID-19 testing using quantitative Reverse Transcription Polymerase Chain Reaction (RT-PCR), which can result in shorter testing time and conservation of reagents and testing kits, at clinically acceptable false positive and false negative rates. Tapestry combines ideas from compressed sensing and combinatorial group testing to create a new kind of algorithm that is very effective in deconvoluting pooled tests. Unlike Boolean group testing algorithms, the input is a quantitative readout from each test, and the output is a list of viral loads for each sample relative to the pool with the highest viral load. For guaranteed recovery of k infected samples out of n being tested, Tapestry needs only m = O(k log n) tests with high probability, using random binary pooling matrices. However, we propose deterministic binary pooling matrices based on the combinatorial design ideas of Kirkman Triple Systems, which balance good reconstruction properties against matrix sparsity for ease of pooling, while requiring fewer tests in practice. This enables large savings using Tapestry at low prevalence rates, while maintaining viability at prevalence rates as high as 9.5%. Empirically, we find that single-round Tapestry pooling improves over two-round Dorfman pooling by almost a factor of 2 in the number of tests required. We evaluate Tapestry in simulations with synthetic data obtained using a novel noise model for RT-PCR, and validate it in wet lab experiments with oligomers in quantitative RT-PCR assays. Lastly, we describe use-case scenarios for deployment. The coronavirus disease of 2019 (COVID-19) crisis has led to widespread lockdowns in several countries, and has had a major negative impact on the economy. Early identification of infected individuals can enable quarantining of those individuals and thus control the spread of the disease. Such individuals may often be asymptomatic for many days. Widespread testing with the RT-PCR (reverse transcription polymerase chain reaction) method can help identify the infected individuals. However, widespread testing is not an available option in many countries due to constraints on resources such as testing time (∼ 3-4 hours per round), basic equipment, skilled manpower and reagents. The currently low rate of COVID-19 infection in the world population [1] means that most samples tested are not infected, so that most tests are wasted on uninfected samples. Group testing is the process of pooling together samples of n different people into multiple pools, and testing the pools instead of each individual sample. A negative result on a pool implies that all samples participating in it were negative. This saves a huge amount of testing resources, especially at low infection rates. Group testing for medical applications has a long history, dating back to the 1940s, when it was proposed for testing of blood samples for syphilis [2].
Simple two-round group testing schemes have already been applied in the field by several research labs [3], [4] for COVID-19 testing. Such two-round group testing schemes require pooling of samples and a second round of sample handling for all samples in positive pools. This second round of sample handling can increase the time to result, and is laborious to perform, since it requires the technician to wear PPE one more time and do another round of RNA extraction and PCR. In situations where the result needs to be delivered fast, a second round of sample handling and testing must be avoided; in such situations, these schemes are less attractive. We present Tapestry, a novel combination of ideas from combinatorial group testing and compressed sensing (CS) [5], which uses the quantitative output of PCR tests to reconstruct the viral load of each sample in a single round. Tapestry has been validated in wet lab experiments with oligomers [6]. In this work, we elaborate on the results from the algorithmic perspective for the computer science and signal processing communities. Tapestry has a number of salient features, which we enumerate below. 1) Tapestry delivers results in a single round of testing, without the need for a second confirmatory round, at clinically acceptable false negative and false positive rates. The number m of required tests is only O(k log n) for random binary pooling matrix constructions, as per compressed sensing theory for random binary matrices [7]. In the targeted use cases, where the number of infected samples k ≪ n, we see that m ≪ n. However, our deterministic pooling matrix constructions based on Kirkman Triple Systems [8], [9] require fewer tests in practice (see Section III-F8 for a discussion on why this may be the case). Consequently, we obtain significant savings in testing time and resources such as the number of tests, the quantity of reagents, and manpower. 2) Tapestry reconstructs relative viral loads, i.e., the ratio of the viral amount in each sample to the highest viral amount across pools. It is believed that super-spreaders and people with severe symptoms have higher viral load [10], [11], so this quantitative information might have epidemiological relevance. 3) Tapestry takes advantage of quantitative information in PCR tests. Hence it returns far fewer false positives than traditional binary group testing algorithms such as COMP (Combinatorial Orthogonal Matching Pursuit) [12], while maintaining clinically acceptable false negative rates. Furthermore, it takes advantage of the fact that a negative pool has viral load exactly zero; traditional CS algorithms do not take advantage of this information. Hence, Tapestry demonstrates better sensitivity and specificity than CS algorithms. 4) Because each sample is tested in three pools, Tapestry can detect some degree of noise in terms of cross-contamination of samples and pipetting errors. 5) Tapestry allows PCR test measurements to be noisy. We develop a novel noise model to describe noise in PCR experiments. Our algorithms are tested on this noise model in simulation. 6) All tuning parameters for execution of the algorithms are inferred on the fly in a data-driven fashion. 7) Each sample contributes to exactly three pools, and each pool has the same number of samples. This simplifies the experimental design, conserves samples, keeps pipetting overhead to a minimum, and makes sure that dilution due to pool size is in a manageable regime. The organization of the paper is as follows.
We first present a brief overview of the RT-PCR method in Section II. The precise mathematical definition of the computational problem being solved in this paper is then put forth in Section III-A. We describe traditional and CS-based group-testing algorithms for this problem in Sections III-B and III-C. The Tapestry method is described in Section III-D. The sensing matrix design problem, as well as theoretical guarantees using Kirkman Triple Systems or random binary matrices, are described in Section III-F. Results on synthetic data are presented in Section IV. This is followed by results on data from lab experiments performed with oligomers, to mimic the clinical situation as closely as possible. In Section V, we compare our work to recent related approaches. We conclude in Section VI with a glance through different scenarios where our work could be deployed. The supplemental material contains several additional experimental details, as well as proofs of some theoretical results. We present here a brief summary of the RT-PCR process, referring to [13] for more details. In the RT-PCR method for COVID-19 testing, a sample in the form of naso- or oropharyngeal swabs is collected from a patient. The sample is then dispersed into a liquid medium. The RNA molecules of the virus present in this liquid medium are converted into complementary DNA (cDNA) via a process called reverse transcription. DNA fragments called primers, complementary to cDNA from the viral genome, are then added. They attach themselves to specific sections of the cDNA from the viral genome if the virus is present in the sample. The cDNA of these specific viral genes then undergoes a process of exponential amplification in an RT-PCR machine. Here, cDNA is put through several cycles of alternate heating and cooling in the presence of Taq polymerase and appropriate reagents. This triggers the creation of many new identical copies of specific portions of the target DNA, roughly doubling in number with every cycle of heating and cooling. The reaction volume contains sequence-specific fluorescent markers which report on the total amount of amplified DNA of the appropriate sequence. The resulting fluorescence is measured, and the increase can be observed on a computer screen in real time. The cycle at which the amount of fluorescence exceeds the threshold level is known as the threshold cycle C_t, and is a quantitative readout from the experiment. A smaller C_t indicates a greater number of copies of the virus. Usually C_t takes values between 16 and 32 cycles in real experiments. PCR can detect even single molecules; a single molecule typically has a C_t value of around 40 cycles. A typical RT-PCR setup can test 96 samples in parallel, and the test takes about 3-4 hours to execute. Let x denote a vector of n elements, where x_i is the viral load (i.e. viral amount) of the i-th person. Throughout this paper we assume that only one sample per person is extracted; hence x contains the viral loads corresponding to n different people. Note that x_i = 0 implies that the i-th person is not infected. Due to the low infection rate for COVID-19 as yet, even in severely affected countries [1], x is considered to be a sparse vector with at most k ≪ n positive-valued elements. In group testing, small and equal volumes of the samples of a subset of these n people are pooled together according to a sensing or pooling matrix A = (A_{ji})_{m×n}, whose entries are either 0 or 1.
The viral loads of the pools are given by

$$z_j = A_j x = \sum_{i=1}^{n} A_{ji} x_i,$$

where A_{ji} = 1 if a portion of the sample of the i-th person is included in the j-th pool, and A_j is the j-th row of A. In all, some m < n pools are created and individually tested using RT-PCR. We now have the relationship z = Ax, where z is the m-element vector of viral loads in the mixtures, and A denotes an m × n binary 'pooling matrix' (also referred to as a 'sensing matrix' in the CS literature). Note that each positive RT-PCR test will yield a noisy version of z_j, which we refer to as y_j. The relation between the 'clean' and noisy versions is given as follows (also see Eqn. (7)):

$$y_j = z_j (1+q)^{e_j},$$

where e_j ∼ N(0, σ²) and q ∈ (0, 1) is the fraction of viral cDNA that replicates in each cycle. The factor (1+q)^{e_j} reflects the stochasticity in the growth of the number of DNA molecules during PCR. Here σ is known and constant. Equivalently, for positive tests, we have:

$$\log y_j = \log z_j + e_j \log(1+q).$$

In the case of negative tests, y_j as well as z_j are 0-valued, and no logarithms need be computed. In non-adaptive group testing, the core computational problem is to estimate x given y and A, without requiring any further pooled measurements. It should be noted that though we have treated each element of x as a fixed quantity, it is in reality a random variable of the form x_i ∼ Poisson(λ_i), where λ_i ≥ 0. If matrix A contains only ones and zeros, this implies that z_j ∼ Poisson(A_j x), because the sum of independent Poisson random variables is also a Poisson random variable. For a positive pool j, the quantitative readout from RT-PCR is not its viral load but the observed cycle time t_j at which its fluorescence reaches a given threshold F (see Section II). In order to be able to apply CS techniques (see Section III-C), we derive a relationship between the cycle time of a sample and its viral load. Because of exponential growth (see [14]), the number of molecules of viral cDNA in pool j at cycle time t, denoted by v_j(t), is given by:

$$v_j(t) = z_j (1+q)^{t}.$$

Here, t is a real number, with ⌊t⌋ indicating the number of PCR cycles that have passed, and t − ⌊t⌋ indicating the fraction of wall-clock time within the current cycle. The fluorescence of the pool, f_j(t), is directly proportional to the number of virus molecules v_j(t). That is,

$$f_j(t) = K v_j(t),$$

where K is a constant of proportionality. Suppose the fluorescence of pool j reaches the threshold value F at cycle time τ_j, according to Eqn. (5). Due to the stochastic nature of the reaction, as well as measurement error in the PCR machine, the threshold cycle output by the machine will not reflect this true cycle time. We model this discrepancy as Gaussian noise. Hence, the true cycle time τ_j and the observed cycle time t_j are related as τ_j = t_j + e_j, where e_j ∼ N(0, σ²) as before. Now, since f_j(τ_j) = F, using Eqn. (5) we have

$$z_j = \frac{F}{K} (1+q)^{-\tau_j} = \frac{F}{K} (1+q)^{-t_j - e_j}.$$

We use the noisy cycle threshold t_j to compute the viral load, defining

$$y_j := \frac{F}{K} (1+q)^{-t_j}$$

to be the noisy viral load of pool j. Hence we find y_j = z_j (1+q)^{e_j}, obtaining the relationship from Eqn. (2). Constants F and K are unknown. Hence it is not possible to directly obtain y_j from t_j without additional machine-specific calibration. However, we can find the ratio between the noisy viral loads of two pools using Eqn. (6). Let y_min be the noisy viral load of the pool with the minimum observed threshold cycle (t_min) among all pools. Then we define relative viral loads as:

$$y_j \leftarrow \frac{y_j}{y_{\min}} = (1+q)^{t_{\min} - t_j}, \qquad z_j \leftarrow \frac{z_j}{y_{\min}}, \qquad x \leftarrow \frac{x}{y_{\min}},$$

where z_j is the relative viral load of a pool, y_j is its noisy version, and x is the vector of relative viral loads of each sample.
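As a concrete illustration of this conversion, the short Python sketch below (our own illustrative example with hypothetical C_t values; the paper itself does not prescribe code) computes relative viral loads from observed threshold cycles via y_j = (1 + q)^(t_min − t_j), using q = 0.95 as in the experiments reported later.

```python
import numpy as np

q = 0.95  # fraction of cDNA replicating per cycle, as used in Section IV

# Hypothetical observed threshold cycles for five positive pools.
# Negative pools have no C_t readout; their relative load is exactly 0.
ct = np.array([18.2, 25.0, 21.7, 30.4, 19.9])

# Relative viral load of each pool w.r.t. the pool with the smallest C_t:
# the strongest pool gets load 1, and the load decays by a factor (1+q)
# for every additional threshold cycle.
y = (1.0 + q) ** (ct.min() - ct)
print(y)  # e.g. the pool at C_t = 25.0 has load (1.95)**(-6.8), about 0.01
```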
We note that, due to Eqn. (7), the following relation continues to hold for the relative quantities:

$$y_j = z_j (1+q)^{e_j}, \qquad z = Ax.$$

Hence we can apply CS techniques from Section III-C to determine the relative magnitudes of viral loads without knowing F and K. We provide more comments about the settings of various noise model parameters for our experiments in Section IV, particularly in Section IV-A6. The COMP (Combinatorial Orthogonal Matching Pursuit) algorithm declares every sample that participates in at least one negative pool to be negative, and declares all remaining samples to be positive; among the latter, a sample that is the sole remaining participant of some positive pool must truly be positive, and is termed a 'definite defective' (DD). Chan et al. [12] analyze COMP and show that COMP requires ek(1 + δ) log n tests for an error probability less than n^(−δ) (see Section III-F8). This analysis has been extended to include the case of noisy test results as well [12]. However, COMP can result in a large number of false positives if not enough tests are used, and it also does not predict viral loads. Group testing is intimately related to the field of compressed sensing (CS) [16], which has emerged as a significant sub-area of signal and image processing [5], with many applications in biomedical engineering [17]-[19]. In CS, an image or a signal x with n elements is directly acquired in compressed format via m linear measurements of the form y = Ax + η. Here, the measurement vector y has m elements, A is a matrix of size m × n, and η is a vector of noise values. If x is a sparse vector with k ≪ n non-zero entries, and A obeys the so-called restricted isometry property (RIP), then exact recovery of x from y, A is possible [20] if η = 0. In the case of measurement noise, the recovery of x produces a solution that is provably close to the original x. A typical recovery problem P0 consists of optimizing the following cost function:

$$\min_x \|x\|_0 \ \text{ subject to } \ \|y - Ax\|_2 \leq \varepsilon,$$

where ε is an upper bound (possibly a high-probability upper bound) on ‖η‖₂, and ‖x‖₀ is the number of non-zero elements in x. In the absence of noise, a unique and exact solution to this problem is possible with as few as 2k measurements in y if x has k non-zero elements [20]. Unfortunately, this optimization problem P0 is NP-Hard, and the algorithm requires brute-force subset enumeration. Instead, the following problem P1 (often termed 'Basis Pursuit Denoising' or BPDN) is solved in practice:

$$\min_x \|x\|_1 \ \text{ subject to } \ \|y - Ax\|_2 \leq \varepsilon.$$

P1 is a convex optimization problem which yields the same solution as the earlier problem (with similar conditions on x, A) at significantly lower computational cost, albeit with O(k log n) measurements (i.e. typically greater than 2k) [5], [20]. The order-k restricted isometry constant (RIC) of a matrix A is defined as the smallest constant δ_k for which the following relationship holds for all k-sparse vectors x (i.e. all vectors with at most k non-zero entries):

$$(1 - \delta_k) \|x\|_2^2 \leq \|Ax\|_2^2 \leq (1 + \delta_k) \|x\|_2^2.$$

The matrix A is said to obey the order-k restricted isometry property (RIP) if δ_k is close to 0. This property essentially implies that no k-sparse vector (other than the zero vector) can lie in the null-space of A. Unique recovery of k-sparse signals requires that no 2k-sparse vector (other than the zero vector) lies in the null-space of A [20]. A matrix A which obeys RIP of order 2k satisfies this property. It has been proved that matrices with entries randomly and independently drawn from distributions such as the Rademacher or Gaussian obey the RIP of order k with high probability [21], provided they have at least O(k log n) rows. There also exist deterministic binary sensing matrix designs (e.g. [22]) which require O(max(k², √n)) measurements. However, it has been shown recently [23] that the constant factors in the deterministic case are significantly smaller than those in the former random case when n < 10^5, making the deterministic designs more practical for typically encountered problem sizes.
The solutions to the optimization problems P0 and P1 in Eqns. (10) and (11), respectively, are provably robust to noise [5], and the recovery error decreases with decreasing noise magnitude. The error bounds for P0 in Eqn. (10) are of the form, for solution x̂ [24]:

$$\|\hat{x} - x\|_2 \leq \frac{2\varepsilon}{\sqrt{1 - \delta_{2k}}},$$

whereas those for P1 in Eqn. (11) have the form [24]:

$$\|\hat{x} - x\|_2 \leq \zeta(\delta_{2k})\, \varepsilon.$$

Here ζ(δ_2k) is a monotonically increasing function of δ_2k ∈ (0, 1), and has a small value in practice. The Restricted Isometry Property as defined above is also known as RIP-2, because it uses the ℓ2-norm. Many other sufficient conditions for recovery of k-sparse vectors exist. We define the following, which we use later in Section III-F and in supplemental Section S.V to prove theoretical guarantees of our method. Definition 1 (RIP-1) [25, Defn. 8]: An m × n matrix A is said to obey RIP-1 of order k if ∃ δ_k ∈ (0, 1) such that for all k-sparse vectors x ∈ R^n,

$$(1 - \delta_k) \|x\|_1 \leq \|Ax\|_1 \leq \|x\|_1.$$

Over the years, a variety of different techniques for compressive recovery have been proposed. We use some of these for our experiments in Section III-D. These algorithms use different forms of sparsity and incorporate different types of constraints on the solution. The complete pipeline of the Tapestry method is presented in Algorithm 1. First, a wet lab technician performs pooling of n samples into m pools according to an m × n pooling matrix A. Then they run the RT-PCR test on these m pools (in parallel). The output of the RT-PCR tests, namely the threshold cycle (C_t) value of each pool, is processed to find the relative viral load vector y of the m pools (as shown in Eqn. (8)). This is given as input to the Tapestry decoding algorithm, which outputs a sparse relative viral load vector x. The Tapestry decoding algorithm, our approach toward group testing for COVID-19, involves a two-stage procedure. In the first stage, we apply the COMP algorithm described in Section III-B to identify the sure negatives (if any) in x, which form a set X. Let Y be the set of zero-valued measurements in y (i.e. negative tests). Please refer to Section III-A1 for the definition of x, y. Moreover, we define X̄, Ȳ as the complement sets of X, Y respectively. Also, let y_Ȳ be the vector of m − |Y| measurements which yielded a positive result. Let x_X̄ be the vector of n − |X| samples, which does not include the |X| surely negative samples. Let A_X̄,Ȳ be the submatrix of A, of size (m − |Y|) × (n − |X|), which excludes rows corresponding to zero-valued measurements in y and columns corresponding to negative elements in x. In the second stage, we apply a CS algorithm to recover x_X̄ from y_Ȳ, A_X̄,Ȳ (a schematic sketch of this reduction is given at the end of this discussion). To avoid symbol clutter, we henceforth just stick to the notation y, x, A, m, n, even though they respectively refer to y_Ȳ, x_X̄, A_X̄,Ȳ, m − |Y|, n − |X|. Note that the CS stage following COMP is very important, for the following reasons: 1) COMP typically produces a large number of false positives. The CS algorithms help reduce the number of false positives, as we shall see in later sections. 2) COMP does not estimate viral loads, unlike CS algorithms. 3) In fact, unlike CS algorithms, COMP treats the measurements in y as also being binary, thus discarding a lot of useful information. 4) COMP preserves the RIP-1, RIP-2, RNSP, and ℓ2-RNSP of the pooling matrix; i.e., if A obeys any of RIP-1, RIP-2, RNSP or ℓ2-RNSP of order k, then A_X̄,Ȳ also obeys the same property of the same order k with the same parameters. We formalize and prove these claims in the supplemental Section S.V.
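To make the first stage concrete, the following is a minimal sketch of the COMP elimination and the problem-size reduction described above (our own illustration, not the authors' released implementation); A is the binary pooling matrix and y the vector of relative viral loads, with exact zeros for negative tests.

```python
import numpy as np

def comp_reduce(A, y):
    """COMP stage: eliminate sure negatives and shrink the CS problem.

    A : (m, n) binary pooling matrix; y : (m,) measurements,
    where y[j] == 0 means pool j tested negative.
    """
    neg_tests = (y == 0)
    # A sample is surely negative if it participates in any negative pool.
    sure_neg = A[neg_tests].sum(axis=0) > 0
    keep_cols = ~sure_neg        # candidate positives (the set X-bar)
    keep_rows = ~neg_tests       # positive tests (the set Y-bar)
    A_red = A[np.ix_(keep_rows, keep_cols)]
    y_red = y[keep_rows]
    # High-confidence positives: a positive pool with exactly one surviving
    # sample pins that sample as truly positive ('definite defective').
    surviving = np.flatnonzero(keep_cols)
    hcp = {surviving[np.flatnonzero(row)[0]]
           for row in A_red if np.count_nonzero(row) == 1}
    return keep_rows, keep_cols, A_red, y_red, hcp
```

The reduced pair (A_red, y_red) is what the CS decoder of the second stage operates on.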
However, the COMP algorithm prior to applying the CS algorithm is also very important, for the following reasons: 1) The viral load in negative pools is exactly 0. COMP identifies the sure negatives in x from the negative measurements in y. Traditional CS algorithms do not take advantage of this information, since they assume all tests to be noisy (Eqns. (10) and (11)). It is instead easier to discard the obvious negatives before applying the CS step. 2) Since COMP identifies the sure negatives, it effectively reduces the size of the problem to be solved by the CS step from (m, n) to (m − |Y|, n − |X|). 3) Some positive pools may be left with only one contributing sample in X̄ after the negatives have been eliminated by COMP. Such a sample is called a 'high-confidence positive', and we denote the list of high-confidence positives as HCP. In rare cases, the CS decoding algorithms we employed (see further in this section) did not recognize such a positive. However, such samples will still be returned by our algorithm as positives, in the set HCP (see the last step of Alg. 1, and 'definite defectives' in Section III-B). For CS recovery, we employ one of the following algorithms after COMP: the non-negative LASSO (NNLASSO), non-negative orthogonal matching pursuit (NNOMP), Sparse Bayesian Learning (SBL), and non-negative absolute deviation regression (NNLAD). For problems of small size, we also apply a brute-force (BF) search algorithm to solve a problem similar to P0 from Eqn. (10) combinatorially. The LASSO (least absolute shrinkage and selection operator) is a penalized version of the constrained problem P1 in Eqn. (11), and seeks to minimize the following cost function:

$$J(x) := \|y - Ax\|_2^2 + \lambda \|x\|_1.$$

Here λ is a regularization parameter which imposes sparsity on x. The LASSO has rigorous theoretical guarantees [26] (Chapter 11) for recovery of x, as well as recovery of the support of x (i.e. recovery of the set of non-zero indices of x). Given the non-negative nature of x, we implement a variant of LASSO with a non-negativity constraint, leading to the following optimization problem (a small solver sketch is given at the end of this subsection):

$$\min_{x \geq 0} \|y - Ax\|_2^2 + \lambda \|x\|_1.$$

There are criteria defined in [26] for the selection of λ under iid Gaussian noise, so as to guarantee statistical consistency. However, in practice, cross-validation (CV) can be used for an optimal choice of λ in a purely data-driven fashion from the available measurements. The details of this are provided in the supplemental Section S.III. Orthogonal Matching Pursuit (OMP) [27] is a greedy approximation algorithm to solve the optimization problem in Eqn. (10). Rigorous theoretical guarantees for OMP have been established in [28]. OMP proceeds by maintaining a set H of 'selected coefficients' in x, corresponding to columns of A. In each round, a column of A is picked greedily, based on the criterion of maximum absolute correlation with a residual vector r := y − Σ_{k∈H} A_k x̂_k. Each time a column is picked, all the coefficients extracted so far (i.e. in set H) are updated. This is done by computing the orthogonal projection of y onto the subspace spanned by the columns in H. The OMP algorithm can be quite expensive computationally. Moreover, in order to maintain non-negativity of x, the orthogonal projection step would require the solution of a non-negative least squares problem, further adding to computational costs. However, a fast implementation of a non-negative version of OMP (NNOMP) has been developed in [29], which is the implementation we adopt here. For the choice of ε in Eqn. (10), we can use CV as described in Section III-D1.
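To illustrate the non-negative LASSO objective above, here is a didactic projected proximal-gradient (ISTA-style) solver; the paper does not mandate a particular solver, and the fixed λ here stands in for the cross-validated choice described in the supplemental Section S.III.

```python
import numpy as np

def nnlasso(A, y, lam, n_iter=2000):
    """Minimize ||y - Ax||_2^2 + lam * ||x||_1 subject to x >= 0,
    via projected proximal gradient (ISTA). For x >= 0, the proximal
    step of the l1 penalty reduces to a shifted clamp at zero."""
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1/L for the quadratic
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y)   # gradient of ||y - Ax||_2^2
        x = np.maximum(0.0, x - step * (grad + lam))
    return x
```

In our notation, A and y here are the reduced matrix and measurement vector output by the COMP stage.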
Sparse Bayesian Learning (SBL) [30], [31] is a non-convex optimization algorithm based on Expectation-Maximization (EM) that has empirically shown reconstruction performance superior to most other CS algorithms, with manageable computation cost [32]. In SBL, we consider the case of Gaussian noise in y and a Gaussian prior on the elements of x, leading to:

$$y = Ax + \eta, \quad \eta \sim \mathcal{N}(0, \sigma^2 I), \quad x_i \sim \mathcal{N}(0, \varphi_i) \ \text{ for } i = 1, \ldots, n.$$

Since both x and φ (the vector of the {φ_i}_{i=1}^n values) are unknown, the optimization for these quantities can be performed using an EM algorithm. In the following, we shall denote Φ := diag(φ). Moreover, we shall use the notation Φ^(l) for the estimate of Φ in the l-th iteration. The E-step of the EM algorithm here involves computing Q(Φ | Φ^(l)) := E_{x|y;Φ^(l)}[log p(y, x; Φ)]. It is to be noted that the posterior distribution p(x | y; Φ^(l)) has the form N(μ, Σ), where

$$\mu := \sigma^{-2} \Sigma A^T y \quad \text{and} \quad \Sigma := \left( \sigma^{-2} A^T A + (\Phi^{(l)})^{-1} \right)^{-1}.$$

The M-step then maximizes Q(Φ | Φ^(l)) with respect to Φ, yielding the update φ_i^(l+1) = μ_i² + Σ_{ii}. The E-step and M-step are executed alternately until convergence (a minimal sketch of these updates is given at the end of this subsection). Convergence to a fixed point is guaranteed, though the fixed point may or may not be a local minimum. However, all local minima are guaranteed to produce sparse solutions for x (even in the presence of noise), because most of the φ_i values shrink towards 0. The SBL procedure can also be modified to dynamically update the noise variance σ² (as followed in this paper), if it is unknown. All these results can be found in [31]. Unlike NNLASSO or NNOMP, the SBL algorithm from [31] expressly requires Gaussian noise; however, we use it as-is in this paper for the simplicity it affords. Unlike NNOMP or NNLASSO, there is no explicit non-negativity constraint imposed in the basic SBL algorithm. In our implementation, the non-negativity is simply imposed at the end of the optimization by setting to 0 any negative-valued elements in μ, though more principled, albeit more computationally heavy, approaches such as [33] can be adopted. The Non-Negative Absolute Deviation Regression (NNLAD) [34] and Non-Negative Least Squares (NNLS) [7] approaches seek to respectively minimize

$$\min_{x \geq 0} \|y - Ax\|_1 \quad \text{(NNLAD)}, \qquad \min_{x \geq 0} \|y - Ax\|_2^2 \quad \text{(NNLS)}.$$

It has been shown in [34] that NNLAD is sparsity-promoting under certain conditions on the sensing matrix A, and that its minimizer x* obeys bounds of the form ‖x − x*‖₁ ≤ C‖η‖₁, where C is a constant independent of x, x*, η, y. A salient feature of NNLAD/NNLS is that they do not require any parameter tuning. This property makes them useful for matrices of smaller size, where cross-validation may be unreliable. There exist adaptive group testing techniques which can determine k infected samples in O(k log n) tests via repeated binary search. These techniques are impractical in our setting due to their sequential nature and large pool sizes. We provide details of these techniques in the supplemental Section S.II. We also compare with a two-stage approach called Dorfman's method [2] in Section IV-A7.
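The following is a minimal sketch of the SBL EM iteration described above (a paraphrase of the standard updates from [31], not the authors' exact implementation); for brevity it keeps the noise variance σ² fixed instead of updating it dynamically, and imposes non-negativity by clamping at the end, as in the text.

```python
import numpy as np

def sbl(A, y, sigma2=0.01, n_iter=100, floor=1e-8):
    """Sparse Bayesian Learning via EM.
    E-step: posterior N(mu, Sig) of x under the prior x_i ~ N(0, phi_i).
    M-step: phi_i <- mu_i^2 + Sig_ii. Most phi_i shrink towards zero,
    which is what makes the recovered x sparse."""
    m, n = A.shape
    phi = np.ones(n)
    mu = np.zeros(n)
    for _ in range(n_iter):
        Sig = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / phi))
        mu = Sig @ A.T @ y / sigma2
        phi = np.maximum(mu ** 2 + np.diag(Sig), floor)  # keep invertible
    return np.maximum(mu, 0.0)  # clamp negatives, as described in the text
```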
The sensing matrix A must obey some properties specific to this application, such as being non-negative. For ease and speed of pipetting, it is desirable that the entries of A be (1) binary (where A_{ji} = 0 indicates that sample i did not contribute to pool j, and A_{ji} = 1 indicates that a fixed volume of sample i was pipetted into pool j), and (2) sparse. Sparsity ensures that not too many samples contribute to a pool, and that a single sample does not contribute to too many pools. The former is important because the volume of sample added in a PCR reaction is typically fixed: increasing the pool size means each sample contributes a smaller fraction of that volume. This leads to dilution, which manifests as a shift of the C_t value towards larger numbers. If care is not taken in this regard, this can affect the power of PCR to discriminate between positive and negative samples. The latter is important because the contribution of one sample to a large number of pools could lead to depletion of the sample. The Restricted Isometry Property (RIP-2) of sensing matrices is a sufficient condition for good CS recovery, as described in Section III-C. However, matrices which obey the aforementioned physical constraints are not guaranteed to obey RIP-2. Instead, we consider sensing matrices which are adjacency matrices of expander graphs. A left-regular bipartite graph G(V_I, V_O, E), with the degree of each vertex in V_I being d, is said to be a (k, ε)-unbalanced expander graph for some integer k > 0 and some real-valued ε ∈ (0, 1) if, for every subset S ⊆ V_I with |S| ≤ k, we have |N(S)| ≥ (1 − ε) d |S|. Here N(S) denotes the union set of neighbors of all nodes in S. Intuitively, a bipartite graph is an expander if every 'not too large' subset has a 'large' boundary. It can be proved that a randomly generated left-regular bipartite graph with |V_O| ≥ O(k log n), n = |V_I|, is an expander with high probability [35], [36]. Moreover, it has been shown in [25, Thm. 1] that the scaled adjacency matrix A/d of a (k, ε)-unbalanced expander graph obeys RIP-1 (Defn. 1) of order k. Here columns of A correspond to vertices in V_I, and rows correspond to vertices in V_O. That is, for any k-sparse vector x, the following relationship holds:

$$(1 - C\varepsilon) \|x\|_1 \leq \|(A/d)\, x\|_1 \leq \|x\|_1$$

for some absolute constant C > 1. This property again implies that the null-space of A cannot contain vectors that are 'too sparse' (apart from the zero vector). This summarizes the motivation behind the use of expanders in compressive recovery of sparse vectors, and also in group testing [25]. Although randomly generated left-regular bipartite graphs are expanders, we would need to verify whether a particular such graph is a good expander, which may take prohibitively long in practice [35]. In the application at hand, this can prove to be a critical limitation, since matrices of various sizes may have to be served, depending on the number of samples arriving in a batch at the testing centre and the number of tests available to be performed. Hence, we have chosen to employ deterministic procedures to design such matrices, based on objects from combinatorial design theory known as Kirkman triples (see [8], [9]). We first recall Kirkman Triple Systems (an example of which is illustrated in Fig. 1), which are Steiner Triple Systems with an extra property. Steiner Triple Systems consist of n = m(m − 1)/6 column vectors with m elements each, with each entry being either 0 or 1, such that each column has exactly three 1s, every pair of rows has dot product equal to 1, and every pair of columns has dot product at most 1 [37]. This means that each column of a Steiner Triple System corresponds to a triplet of rows (i.e. contains exactly three 1s), and every pair of rows occurs together in exactly one such triplet (i.e. for every pair of rows indexed by i, j, there exists exactly one column index k for which A_{ik} = A_{jk} = 1). If the columns of a Steiner Triple System can be arranged such that the sum of columns i to i + m/3 − 1 equals the all-ones vector 1 ∈ R^m for every i ≡ 1 (mod m/3), then the Steiner Triple System is said to be resolvable, and is known as a Kirkman Triple System [8].
That is, the set of columns of a Kirkman Triple System can be partitioned into (m − 1)/2 disjoint groups, each consisting of m/3 columns, such that each row has exactly one 1 entry in a given such group of columns. Because of this property, we may choose any l such groups of columns of a Kirkman Triple System to form an m × n matrix, n > m, with n = lm/3 and 3 < l ≤ (m − 1)/2, while keeping the number of 1 entries in each row the same. From here on, we refer to such matrices as Kirkman matrices. If l = (m − 1)/2, then we refer to it as a full Kirkman matrix; otherwise it is referred to as a partial Kirkman matrix. Note that in a partial Kirkman matrix the dot product of any two rows may be at most 1, whereas in a full Kirkman matrix it must be equal to 1. Notice that m = 6t + 3 for some t ∈ Z≥0 is necessary for a Kirkman Triple System to exist, since m − 1 must be divisible by 2 and m must be divisible by 3. This, and the existence of Kirkman Triple Systems for all t ∈ Z≥0, have been proven in [9]. Explicit constructions of Kirkman Triple Systems for m ≤ 99 exist [8]. The generalization of Kirkman Triple Systems under the name of the Social Golfer Problem is an active area of research (see [38], [39]). The Social Golfer Problem asks if it is possible for g × p golfers to play in g groups of p players each for w weeks, such that no two golfers play in the same group more than once [40, Sec. 1.1]. Kirkman Triple Systems with m rows and m(m − 1)/6 columns are a solution to the Social Golfer Problem for the case when p = 3, g = m/3 and w = (m − 1)/2. Full or partial Kirkman matrices may be constructed via greedy search techniques used for solving the Social Golfer Problem (such as in [41]). Previously, Kirkman matrices have been proposed for use as Low-Density Parity Check codes in [42], due to the high girth of Kirkman matrix bipartite graphs (the girth of a graph is the length of its shortest cycle) and the ability to serve only part of the matrix while keeping the row weights (the number of 1 entries in each row) equal. Matrices derived from Steiner Triple Systems have previously been used for pooled testing for transcription regulatory network mapping in [43]. Further, matrices derived from Steiner Systems [44], a generalization of Steiner Triple Systems, have been proposed for optimizing 2-stage binary group testing in [45]. We show that Kirkman matrix bipartite graphs are (k, ε)-unbalanced expanders, with ε = (k − 1)/(2d), where d is the left-degree of the graph, equal to 3 for Kirkman matrices. Given a set S of column vertices such that |S| ≤ k, we note that the size of the union set of neighbours of S, |N(S)|, is at least |S|d − pr, where p = |S|(|S| − 1)/2 is the number of (unordered) pairs of columns in S, and r is the maximum number of row vertices in common between any two column vertices. For a Kirkman matrix, since any two columns have dot product at most 1, we have r = 1. Therefore,

$$|N(S)| \geq |S| d - \frac{|S|(|S| - 1)}{2} = |S| d \left( 1 - \frac{|S| - 1}{2d} \right) \geq \left( 1 - \frac{k - 1}{2d} \right) d |S|.$$

This implies that Kirkman matrix bipartite graphs are (k, ε)-unbalanced expanders, with ε = (k − 1)/(2d). If we put in the requirement that d = 3 for Kirkman matrices and ε < 1, we find that k < 7. Hence it follows from [25, Thm. 1] that the scaled Kirkman matrix has RIP-1 of order k for k < 7 and ε = (k − 1)/6. This suggests exact recovery for up to 3 infected samples using CS. However, in practice we observe that using our method we are able to recover a much higher number of positives, at the cost of an acceptable number of false positives and rare false negatives (Section IV).
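For intuition, the smallest interesting Kirkman Triple System, with m = 9, can be built from the parallel line classes of the affine plane AG(2, 3); the sketch below (a standard construction used here as our own illustration, not the paper's serving code) builds the 9 × 12 full Kirkman matrix this way and verifies the properties used above: column weight 3, pairwise row dot products equal to 1, resolvability, and balanced partial serving.

```python
import numpy as np

m = 9  # points of AG(2,3), indexed as p = 3*a + b with (a, b) in Z3 x Z3
classes = []
for s in range(3):  # lines y = s*x + c form three parallel classes
    classes.append([[3 * x + (s * x + c) % 3 for x in range(3)]
                    for c in range(3)])
classes.append([[3 * c + y for y in range(3)] for c in range(3)])  # verticals

# Full Kirkman matrix: one column per line, (m-1)/2 = 4 groups of m/3 = 3.
A = np.zeros((m, 12), dtype=int)
col = 0
for cls in classes:
    for line in cls:
        A[line, col] = 1
        col += 1

assert (A.sum(axis=0) == 3).all()               # each sample in 3 pools
G = A @ A.T
assert (G[~np.eye(m, dtype=bool)] == 1).all()   # all row dot products are 1
for g in range(4):                              # resolvability:
    assert (A[:, 3 * g:3 * g + 3].sum(axis=1) == 1).all()

# A partial Kirkman matrix serving n = 6 samples: take l = 2 column groups.
A_partial = A[:, :6]
assert (A_partial.sum(axis=1) == 2).all()       # pools stay balanced
```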
A Steiner Triple System bipartite graph does not have a cycle of length 4. If it did, then there would exist two rows a and b, and two columns u and v, of the Steiner Triple System matrix A such that A_{au} = A_{bu} = 1 and A_{av} = A_{bv} = 1. This would violate the property that the dot product of any two rows of the Steiner Triple System must be equal to 1. Furthermore, [42, Lemma 1] shows that Steiner Triple System bipartite graphs have girth equal to 6. Since Kirkman Triple Systems are resolvable Steiner Triple Systems (see the definitions earlier in this section), their bipartite graphs also have girth equal to 6. For a bipartite graph constructed from a partial Kirkman matrix, the girth is at least 6, since dropping some column vertices will not introduce new cycles in the graph. Furthermore, it is shown in [23, Thm. 10] that adjacency matrices of left-regular graphs with girth at least 6 satisfy RNSP (Defn. 2) of order k (for suitable k). Consequently, they may be used for CS decoding [23, Thm. 5]. The authors of [23] also give lower bounds on the number of rows m of left-regular bipartite graph matrices whose column weight (the number of 1 entries in a column) is more than 2, for them to have high girth and consequently satisfy RNSP of order k, given k and n [23, Eqns. 32, 33]. Given k and n, these lower bounds are minimized for graphs of girth 6 and 8, and the bounds are, respectively, m ≥ k√n and m ≥ k^(3/2)√n [23, Eqn. 37]. However, with the additional requirement that m < n for CS, it is found that girth-6 matrices can recover k < √n defects, while girth-8 matrices can only recover k < n^(1/3) defects. Hence, matrices whose bipartite graphs have girth equal to 6 are optimal in this sense. Full Kirkman matrix bipartite graphs are left-regular and have girth 6, as argued earlier; hence they satisfy RNSP, may be used for compressive sensing, and are optimal in the sense of being able to handle the largest number of defects while minimizing the number of measurements. We note that since we employ Kirkman triples, each column has only three 1s. The theoretical guarantees for such matrices hold for signals with ℓ0 norm less than or equal to 2. However, we have obtained acceptable false positive and false negative rates in practice for much larger sparsity levels, as will be seen in Section IV. In order for a matrix to be suitable for our method, it should not only be good for CS decoding algorithms, but also for COMP. Kirkman matrices are 2-disjunct, and can recover up to 2 defects exactly using COMP. In a k-disjunct matrix, there does not exist any column whose support is a subset of the union of the supports of k other columns [15]. Matrices which are k-disjunct have an exact support recovery guarantee for k-sparse vectors using COMP (see [15]). Disjunctness follows from the following properties of Kirkman matrices: any two columns of a Kirkman matrix have at most one row in common with an entry of 1, and each column has exactly three 1 entries. Consider R_a, R_b and R_c, the sets of rows in which three distinct columns a, b and c respectively have a 1 entry. Note that |R_a| = |R_b| = |R_c| = 3, while |R_a ∩ R_b| ≤ 1 and |R_a ∩ R_c| ≤ 1; hence |R_a ∩ (R_b ∪ R_c)| ≤ 2 < |R_a|, so the support of column a cannot be contained in the union of the supports of columns b and c. Empirically, we find that even for k > 2, COMP reports only a small fraction of the total number of samples as positive when using Kirkman matrices (Table I). In Section S.XIV (Proposition 6) of the supplemental material, we prove that if a fraction f ∈ (0, 1) of the tests come out positive, then COMP reports strictly less than a fraction f² of the samples as positive for a full Kirkman matrix.
This provides intuition as to why Kirkman matrices may be well-suited for our combined COMP + CS method, since most samples are already eliminated by COMP. On the other hand, CS decoding (without the earlier COMP step) on the full Kirkman matrix does not perform as well, as shown in the supplemental Section S.IX. As we have seen in earlier sections, Kirkman matrices are suitable for use in compressed sensing due to their expansion, RIP-1 and high-girth properties, as well as for binary group testing due to disjunctness. Furthermore, the dot product between two columns of a Kirkman matrix being at most 1 ensures that no two samples participate in more than one test together. This has favourable consequences in terms of placing an upper bound on the mutual coherence of the matrix, defined as:

$$\mu(A) := \max_{i \neq j} \frac{|A_i^T A_j|}{\|A_i\|_2 \|A_j\|_2},$$

where A_i refers to the i-th column of A. Matrices with lower μ(A) values have lower worst-case upper bounds on the reconstruction error [46]. These bounds are looser than those based on the RIC that we saw in previous sections; however, unlike the RIC, the mutual coherence is efficiently computable. A practical benefit of Kirkman triples that is not shared by Steiner triples is that the former can be served for a number of samples far less than n = m(m − 1)/6 while keeping pools balanced (i.e. ensuring that each pool is created from the same number of samples). In fact, we can choose n to be any integer multiple of m/3 and ensure that every pool gets the same number of samples, as discussed in Section III-F3. Notice that the expansion, RIP-1, high-girth and disjunctness properties hold for full as well as partial Kirkman matrices, as proven in the previous sections. This allows us to characterize the properties of the full Kirkman matrix, and use that analysis to predict how it will behave in the clinical situation, where the pooling matrix to be served may require very specific values of m, n depending on the prevalence rate. Column weight: Kirkman matrices have column weight equal to 3; that is, each sample goes into 3 pools. It is possible to construct matrices with a higher number of pools per sample (such as those derived from the Social Golfer Problem [38]) which retain several benefits of the Kirkman matrices: (1) they would have the ability to serve only part of the matrix; (2) they would retain the expander and RIP-1 properties, following a proof similar to the one in Section III-F4; (3) they would not have any 4-cycles in the corresponding bipartite graph, following an argument similar to that in Section III-F5; and (4) they would possess the disjunctness property, following a proof similar to the one in Section III-F6. Nevertheless, the time and effort needed for pooling increases with more pools per sample. Further, a higher number of pools per sample comes at the cost of a larger number of tests (if the pool size is kept constant) or a larger pool size (if the number of tests is kept constant). A higher number of tests is undesirable for obvious reasons, while a larger pool size may lead to dilution of the sample within a pool, leading to individual RT-PCR tests failing. While Kirkman matrices which satisfy RNSP of order k must have at least k√n measurements, we can get much better bounds in theory if we use random constructions. From [7, Prop. 10] we see that, with high probability, 0/1 Bernoulli(p) matrices need only O(k log n) measurements in order to satisfy ℓ2-RNSP (Defn. 3) of order k, with p ∈ (0, 1) being the probability with which each entry of the matrix is independently 1.
In the supplemental Section S.V, we prove that ℓ2-RNSP is preserved by COMP. That is, the reduced matrix A_X̄,Ȳ obeys ℓ2-RNSP of order k with the same parameters as the original matrix A. Hence our method needs only O(k log n) measurements for robust recovery of k-sparse vectors with such random matrix constructions. Bernoulli(p) matrices are also good for COMP: [12, Thm. 4] shows that Bernoulli(p) matrices with p = 1/k need only O(k log n) measurements for exact support recovery of k-sparse vectors with COMP, with vanishingly small probability of error. In practice, we observe that Kirkman matrices perform better than Bernoulli(p) matrices using our method in the regime of our problem size. This gap between theory and practice may arise for the following reasons: (1) the k√n lower bound for Kirkman triples is for a sufficient, but not necessary, condition for sparse recovery; (2) the O(k log n) bound may hide a very large constant factor, which affects the performance on moderately-sized problems such as the ones reported in this paper; and (3) the theoretical bounds are for exact recovery with vanishingly small error, whereas we allow some false positives and rare false negatives in our experiments. Similar comparisons between binary and Gaussian random matrices have recently been put forth in [23]. Moreover, the average column weight of Bernoulli(p) matrices is pm, where m is the number of measurements. This is typically much higher than the column weight of 3 of Kirkman matrices, and hence undesirable (see Section III-F7). In the supplemental Section S.VI, we compare the performance of Kirkman matrices with Bernoulli(0.1) and Bernoulli(0.5) matrices. As mentioned earlier, the mutual coherence from Eqn. (20) is efficient to compute and optimize over. Hence, there is a large body of literature on designing CS matrices by minimizing μ(A) with respect to A, for example [47]. We followed such a procedure for designing sensing matrices for some of our experimental results in Section IV-B. For this, we use simulated annealing to update the entries of A, starting with an initial condition in which A is a random binary matrix. For synthetic experiments, we compared such matrices with Bernoulli(p) random matrices, adjacency matrices of biregular random sparse graphs (i.e. matrices in which each column has the same weight and each row has the same weight, which may differ from the column weight), and Kirkman matrices. We found that matrices of Kirkman triples perform very well empirically in the regime of sizes we are interested in, besides facilitating easy pipetting; hence the results are reported using only Kirkman matrices. In this section, we show a suite of experimental results on synthetic data as well as on real data. Recall from Section II that a typical RT-PCR setup can test 96 samples in parallel. Three of these tests are used as controls by the RT-PCR technician, in order to have confidence that the RT-PCR process has worked. Hence, in order to optimize the available test bandwidth of the RT-PCR setup, the number of tests we perform in parallel should be ≤ 93, and as close to 93 as possible. Since in Kirkman matrices the number of rows must be 6t + 3 for some t ∈ Z≥0, we choose 93 rows. With this choice, the number of samples tested, n, has to be a multiple of 93/3 = 31; hence we chose n = 961. This matrix is not a full Kirkman matrix; a full Kirkman matrix with 93 rows would have 1426 columns.
However, we keep the number of columns of the matrix under 1000 due to the challenges in pooling a large number of samples. Furthermore, n = 961, m = 93 gives more than a 10x improvement in testing throughput while detecting 1% infected samples with reasonable sensitivity and specificity, and is in a regime of interest for widespread screening or repeated testing. We also present results with a 45 × 105 partial Kirkman matrix in the supplemental Section S.VIII. This matrix gives a 2.3x improvement in testing while detecting 9.5% infected samples with reasonable sensitivity and specificity. Further, two such batches of 105 tests in 45 pools may be run in parallel in a single RT-PCR setup. For the case of synthetic data, we generated k-sparse signal vectors x of dimension n = 961, for each k in {5, 8, 10, 12, 15, 17, 20}. We chose a wide range of k in order to demonstrate that not only do our algorithms have high sensitivity and specificity for large values of k, they also keep performing reasonably well beyond the typical operating regime. The support of each signal vector x, given k, was chosen by sampling a k-sparse binary vector uniformly at random from the set of all k-sparse binary vectors. The magnitudes of the non-zero elements of x were picked uniformly at random from the range [1, 32768]. This high dynamic range in the values of x was chosen to reflect the variance in the typical threshold cycle (C_t) values of real PCR experiments, which can lie between 16 and 32. From Eqn. (6), we can infer that viral loads vary roughly as 2^(−C_t) (setting q = 1), up to constant multiplicative factors. In all cases, m = 93 noisy measurements in y were simulated following the noise model in Eqn. (3), with σ = 0.1 and q = 0.95. A 93 × 961 Kirkman sensing matrix was used for generating the measurements. The Poisson nature of the elements of x in Eqn. (3) was ignored. This approximation was based on the principle that if X ∼ Poisson(λ), then Std. Dev.(X)/E(X) = √λ/λ = 1/√λ, which becomes smaller and smaller as λ increases. The recovery algorithms were tested on Q = 1000 randomly generated signals for each value of k. The following algorithms were compared: 1) COMP (see Table I); 2) COMP followed by NNLASSO (see Table II); 3) COMP followed by SBL (see Table III); 4) COMP followed by NNOMP (see Table IV); 5) COMP followed by NNLAD (see Table V); 6) COMP followed by NNLS (see Table S.VI in the supplemental material). For each algorithm, any positives missed during the CS stage but caught by DD were declared as positives, as mentioned in Section III-D. For small problem sizes, we also tested COMP followed by brute-force search (COMP-BF). In the following, x̂ denotes the estimate of x. Most numerical algorithms do not produce vectors that are exactly sparse; they output vectors with many entries of very tiny magnitude, due to issues such as the choice of convergence criterion. Since in this application support recovery is of paramount importance, in order to identify which samples in x were infected, we employed the following post-processing step: all entries in x̂ whose magnitude fell below a threshold τ := 0.2 × x_min were set to zero. Here x_min refers to the least possible value of the viral load, and this can be obtained offline from practical experiments on individual samples. In these synthetic experiments, we simply set x_min := 1. We observed that varying the value of τ over a fairly wide range had negligible impact on the results, as can be seen in Section S.XII of the supplemental material. For SBL, we set τ to 0 and also set negative entries in the estimate to 0.
For NNOMP, such thresholding was inherently not needed. The various algorithms were compared with respect to the following criteria: sensitivity, specificity, and the relative root mean squared error (RMSE) of the estimated viral loads. It should be noted that all algorithms were evaluated on 1000 randomly generated sparse signals, given the same sensing matrix. The average value as well as the standard deviation of all quality measures (over the 1000 signals) are reported in Tables I, II, III, IV, V and S.VI. A comparison of Table I with Tables II, III, IV, V and S.VI indicates that COMP followed by NNLASSO/SBL/NNOMP/NNLAD/NNLS significantly reduces the false positives at the cost of a rare false negative. The RMSE is also significantly improved, since COMP does not estimate viral loads. At the same time, COMP significantly reduces the size of the problem for the CS stage. For example, for the 93 × 961 Kirkman matrix, when the number of infected samples k is 12, the average size of the matrix after COMP filtering is ∼ 30 × 37. From Table I we see that Definite Defectives classifies many positives as high-confidence positives for k up to 8. We note that the experimental results reported in these tables are quite encouraging, since these experiments are challenging due to small m and fairly large k, n, albeit with testing on synthetic data. We noticed that running the CS algorithms without the COMP step did not perform as well; results for this are presented in the supplemental Section S.IX. We observed that the advantages of our combined group testing and compressed sensing approach hold regardless of the sensing matrix size. For comparison, results of running our algorithms using a 45 × 105 Kirkman matrix instead of the 93 × 961 Kirkman matrix are presented in the supplemental Section S.VIII. As mentioned earlier, the regularization parameters in the various estimators such as COMP-NNLASSO, COMP-NNLAD, COMP-NNOMP, etc. are estimated via cross-validation. For these estimators, we therefore do not require knowledge of the σ parameter in the noise model from Eqn. (3). The q parameter in the noise model is set to 0.95 in all our experiments. It is a reasonable choice, as the molecule count is known to roughly double in each cycle of RT-PCR [14]. Moreover, variation of q in the range from 0.7 to 1 showed negligible variation in the results of our wet-lab experiments, as can be seen in Section S.XI and Table S.VIII of the supplemental material. Also note that we only report viral loads relative to y_min (see Eqn. (8)); we do not attempt to estimate y_min itself. These relative viral loads are interpretable by RT-PCR technicians, since they know t_min, the minimum C_t (threshold cycle) value observed in that experiment. Note as well that since y_min is the viral load of the pool with the minimum C_t value, it corresponds to the pool with the maximum viral load in that experiment. We also performed a comparison of our algorithms with the popular two-stage Dorfman pooling method (an adaptive method [2]). In the first stage of Dorfman testing, the n samples are partitioned into pools of size g, and each pool is tested; samples in pools that test negative are declared negative. The pools that test positive are passed on to a second stage, where all members of those pools are individually tested. The optimal pool size g* minimizes the expected number of tests taken by this process (given that membership in each pool is decided randomly). A formula for the expected number of tests taken by Dorfman testing is derived in [2]. The derivation in [2] assumes the following: (1) any given sample may be positive with probability p, independently of the other samples; (2) the number of samples n is divisible by the pool size g.
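Under these two assumptions, the expected number of tests in Dorfman testing takes the classical closed form n/g + n(1 − (1 − p)^g): n/g first-stage pool tests, plus g individual retests for each pool that turns out positive. The sketch below finds the optimal pool size g* by direct search; it reproduces only the divisible case, and the modification for n not divisible by g is in the supplemental Section S.XIII.

```python
import numpy as np

def dorfman_expected_tests(n, p, g):
    """Expected number of tests for two-stage Dorfman testing with pool
    size g, assuming g divides n: n/g first-stage pool tests, plus g
    individual tests for each pool that is positive, which happens
    with probability 1 - (1 - p)**g."""
    return n / g + n * (1.0 - (1.0 - p) ** g)

def optimal_pool_size(n, p, g_max=50):
    gs = np.arange(2, g_max + 1)
    costs = [dorfman_expected_tests(n, p, g) for g in gs]
    return int(gs[int(np.argmin(costs))])

# Example: n = 961 samples with k = 10 expected infections (p ~ 0.0104).
n, k = 961, 10
g_star = optimal_pool_size(n, k / n)
print(g_star, dorfman_expected_tests(n, k / n, g_star))
```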
We modify the formula from [2] for the case where n is not divisible by g (supplemental Section S.XIII), and find g* by choosing the value of g which minimizes this number. We set p = k/n, so that out of n samples the number of infected samples is k in expectation. Table VI shows the expected number of tests computed from the formula in the supplemental Section S.XIII, assuming that the expected number of infected samples k (and thus the optimal pool size g*) is known in advance. We also empirically verified the expected number of tests by performing 1000 Monte Carlo simulations of Dorfman testing with the optimal pool size g* for each case, and did not observe much deviation from the numbers reported in Table VI. Comparing Tables I, II, III and IV with the two-stage Dorfman pooling results in Table VI shows that our methods require far fewer tests, albeit with a slight increase in the number of false negatives. Moreover, all our methods are single-stage methods, and therefore require less time for testing, unlike the Dorfman method, which requires two stages of testing. The number of CS measurements for successful recovery depends on the number of non-zero elements (the ℓ0 norm) of the underlying signal. For example, this varies as O(k log n) for randomized sensing matrices [5], or as O(max(k², √n)) for deterministic designs [22]. There is a lower bound of k√n measurements for certain types of expander matrices to satisfy a sufficient (but not necessary) condition for recovery [23]. However, in practice k is always unknown, which leads to the question of how many measurements are needed as a minimum for a particular problem instance. To address this, we adopt the technique from [48] to estimate k on the fly from the compressive measurements. This technique does not require signal recovery for estimating k. The relative error in the estimate of k is shown to be O(√(log m / m)) [49], which diminishes as m increases (irrespective of the true k). Table VII shows the accuracy of our sparsity estimate on synthetic data. The advantage of this estimate of k is that it can drive the COMP-BF algorithm, as well as act as an indicator of whether there exist any false negatives. We can use this knowledge to enable a graceful failure mode. In this mode, if our estimate of k is larger than what the CS algorithms can handle, we return only the output of the COMP stage. Hence, in such rare cases, it minimizes the number of false negatives at the cost of many false positives. In these cases, a second stage of individual testing must be done on the samples which were declared positive. Table VIII shows the effect of using the graceful failure mode with COMP followed by SBL for large values of k. In these experiments, the output of COMP is returned if the estimated sparsity k_est is greater than or equal to 20. We see that COMP-SBL with graceful failure mode matches the behaviour of COMP-SBL at sparsity values lower than 20, and that of COMP at sparsity values greater than 20. At sparsity equal to 20, it compromises between the high false positives of COMP and the high false negatives of COMP-SBL. This is because of the variability in k_est, which can occasionally be less than 20 even when k is equal to 20. We acquired real data in the form of test results on pooled samples from two labs: one at the National Centre for Biological Sciences (NCBS) in India, and the other at the Wyss Institute at Harvard Medical School, USA. In both cases, viral RNA was artificially injected into k of the n samples, where k ≪ n.
From these n samples, a total of m mixtures were created. For the datasets obtained from NCBS that we experimented with, we had m = 16, n = 40, k ∈ {1, 2, 3, 4}. For the data from the Wyss Institute, we had m = 24, n = 60, k = 2, and m = 30, n = 120, k = 2. The results for all these datasets are presented in Table IX. The 16 × 40 and 24 × 60 pooling matrices were obtained by performing a simulated annealing procedure to minimize the mutual coherence (see Section III-F9), starting with a random sparse binary matrix as the initial condition. The 30 × 120 pooling matrix was a Kirkman matrix. We used q = 0.95 in all cases to obtain relative viral loads from C_t values, using Eqn. (8). While q may be estimated from raw RT-PCR data (Section S.XI, supplemental material), we found q = 0.95 to be a reasonable choice, and did not observe any variation in the number of reported positives when this parameter was varied between 0.7 and 1. For NNLASSO, NNLS and NNLAD, we used τ = 0.2 × y_max as the threshold below which an estimated relative viral load is set to 0, since the value of x_min may not always be available for real experiments. Here y_max is the relative viral load of the pool with the largest C_t value, and consequently the smallest viral amount. We see that the CS algorithms reduce the false positives, albeit with the introduction of occasional false negatives for higher values of k. We also refer the reader to our work in [6] for a more in-depth description of results on real experimental data. Each algorithm we ran presented a different set of tradeoffs between sensitivity and specificity. While COMP provides sensitivity equal to 1, it suffers many false positives, especially for higher k. For the other algorithms, in general both the sensitivity and the specificity decrease as k is increased. COMP-NNOMP (Table IV) has the highest specificity, but it comes at the cost of sensitivity. COMP-SBL (Table III) has the best sensitivity for most values of k amongst the CS algorithms. COMP-NNLASSO (Table II) has better specificity than COMP-SBL for small values of k, but loses out for k ≥ 15. COMP-NNLAD and COMP-NNLS (Tables V and S.VI) start behaving like COMP for higher values of k, effectively bounding the number of false negatives; however, their number of false positives is almost as high as that of COMP. Ideally, we want both high sensitivity and high specificity while catching a large number of infected samples. Hence, we look at k*, the maximum number of infected samples k for which the sensitivity and specificity of the algorithm are greater than or equal to some threshold values. For the 45 × 105 Kirkman matrix, we chose a sensitivity threshold of 0.99 and a specificity threshold of 0.95. For the 93 × 961 Kirkman matrix, we chose both thresholds to be 0.99, since a specificity threshold of 0.95 gives too many false positives for 961 samples. We observed that COMP-SBL has k* = 10 for both matrices, which is the highest amongst all algorithms tested. Typically, we do not know the number of infections, but rather a prevalence rate of infection. The number of infected samples out of a given set of n samples may be treated as a binomial random variable with probability of success equal to the prevalence rate. Under this assumption, using COMP-SBL with the 93 × 961 Kirkman matrix, we observed that the maximum prevalence rate for which sensitivity and specificity are both above 0.99 is 1%.
Typically we do not know the number of infections, but rather a prevalence rate. The number of infected samples out of a given set of n samples may then be treated as a Binomial random variable with success probability equal to the prevalence rate. Under this assumption, using COMP-SBL with the 93 × 961 Kirkman matrix, we observed that the maximum prevalence rate for which sensitivity and specificity are both above 0.99 is 1%. Similarly, using COMP-SBL with the 45 × 105 Kirkman matrix, the maximum prevalence rate for which sensitivity is above 0.99 and specificity is above 0.95 is 9.5%. Thus, Tapestry remains viable at prevalence rates as high as 9.5% while reducing testing cost by a factor of 2.3 (105 samples in 45 tests); if the prevalence rate is 1% or less, it reduces testing cost by a factor of 10.3 (961 samples in 93 tests).
Comments about sensitivity and specificity: We observe that the sensitivity and specificity of our method on synthetic data are within the recommendations of the U.S. Food and Drug Administration (FDA), as provided in [50]. That document gives recommendations for the percent positive agreement (PPA) and percent negative agreement (PNA) of a COVID-19 test with a gold standard test (such as RT-PCR performed on individual samples). PPA and PNA are used instead of sensitivity and specificity when ground-truth positives are not known. Since for synthetic data we do know the ground-truth positives, we compare the PPA and PNA recommendations of [50] with the sensitivity and specificity observed by us, using COMP-SBL, which we consider our best method. For 'Testing patients suspected of COVID-19 by their healthcare provider' (point G.4.a, page 7 of [50]), the document considers positive and negative agreement of ≥ 95% as acceptable clinical performance (page 9, row 2 of the table in [50]). The sensitivity and specificity of our method on the 93 × 961 Kirkman matrix are within this range for k ≤ 17 infected samples (Table III); for the 45 × 105 matrix, they are within this range for k ≤ 10 infected samples (Table S.XII). For 'Screening individuals without symptoms or other reasons to suspect COVID-19 with a previously unauthorized test' (point G.4.c, page 10 of [50]), the document considers positive agreement of ≥ 95% and negative agreement of ≥ 98% as acceptable (with the lower bounds of the two-sided 95% confidence intervals required to be > 76% and > 95% respectively). For 'Adding population screening of individuals without symptoms or other reasons to suspect COVID-19 to an authorized test' (point G.4.d, page 12 of [50]), the document states the same criterion as for point G.4.c. Our sensitivity and specificity are within the ranges specified for the 93 × 961 Kirkman matrix for k ≤ 12 (Table III). While we do not report confidence intervals (as suggested for points G.4.c and G.4.d of [50]), the standard deviations of the sensitivity and specificity we report are fairly low, and we believe the performance of our method is within the recommendations of [50]. Since our numbers are obtained on synthetic data, they may change upon full clinical validation, especially considering that a real test may have more sources of error; nonetheless, we find these numbers encouraging. Further, we note that while our method incurs an occasional false negative, the viral loads of these false negatives are fairly small. This means that super-spreaders (who are believed to have high viral load [10]) will almost always be caught by our method. We discuss this in more detail in Sec. S.X of the supplemental material, and provide a table of the mean and standard deviation of the viral loads of false negatives (Table S.VII) for all our methods on synthetic data.
Tapestry can detect certain errors caused by incorrect pipetting, pool contamination, or failed RT-PCR amplification of some pools. This is done by performing a consistency check after the COMP stage: if a pool is positive but all of the samples participating in it have been declared negative by COMP, this is indicative of error. In case of error, we list all samples categorized by the number of tests in which they are positive. However, the COMP consistency check will not catch all errors. Alternatively, the noisy COMP [12] algorithm may be used to correct errors in the COMP stage. A full exposition on the detection and correction of errors is left as future work.
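A minimal version of this consistency check is sketched below: it flags every positive pool whose member samples have all been ruled out by COMP. The function name and interface are ours, for illustration.

```python
import numpy as np

def consistency_check(A, y, comp_negative):
    """Return indices of pools that test positive although every sample
    pooled into them was declared negative by COMP; such pools indicate
    pipetting errors, contamination, or failed amplification.

    A: m x n binary pooling matrix; y: length-m pool measurements;
    comp_negative: boolean mask of samples declared negative by COMP."""
    positive_pools = y > 0
    # a positive pool is 'explained' if it contains at least one candidate
    explained = A[:, ~comp_negative].sum(axis=1) > 0
    return np.flatnonzero(positive_pools & ~explained)
```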
Although Tapestry can work with a variety of sensing matrix designs, we found Kirkman matrices to be the most suitable for our purposes, owing to their lower sparsity and smaller pool sizes. Our algorithms also exhibit more stable behaviour over a wide range of the number of infected samples k when using Kirkman matrices. We compare some alternative matrix designs in Section S.VI.
We now review some recent work applying CS or combinatorial group testing to COVID-19 testing. The works in [51]-[53] adopt a nonadaptive CS-based approach, while those in [54]-[56] use combinatorial group testing. Compared to these methods, our work differs in the following ways (also see [6]):
1) Real/synthetic data: Our work, as well as that in [52], reports results on real data, while the rest present only numerical or theoretical results.
2) Quantitative noise model: Our work uses the physically derived noise model in Eqn. (3) (as opposed to only Gaussian noise). This noise model is not considered in [51]. The work in [53] considers unknown noise. Combinatorial group testing methods [54]-[56] do not make use of quantitative information, and the work in [52] uses only binary test information, even though its decoding algorithm is based on CS.
3) Algorithms: The work in [51] adopts the BPDN technique (i.e., P1 from Eqn. (11)) as well as the brute-force search method for reconstruction. The works in [52], [57] use the LASSO, albeit with a ternary representation for the viral loads. The work in [53] uses NNLAD. We use the LASSO with a non-negative constraint, the brute-force method, NNLAD, as well as other techniques such as SBL and NNOMP, all in combination with COMP. The work in [51] assumes knowledge of the (Gaussian) noise variance for selecting ε in the estimator in Eqn. (11), whereas we use cross-validation for all our estimators. The technique in [52] uses a slightly different form of cross-validation for selecting the regularization parameter in the LASSO. Amongst combinatorial algorithms, [56] uses COMP, while [54] and [55] use message passing.
4) Sensing matrix design: The work in [51] uses randomly generated expander graphs, whereas we use Kirkman matrices. The work in [52] uses randomly generated sparse Bernoulli matrices or Reed-Solomon codes, while [55] uses Low-Density Parity Check (LDPC) codes [58]. The work in [53] uses Euler square matrices [59], and the work in [56] uses the Shifted Transversal Design [60]; both are deterministic disjunct matrices like Kirkman matrices. Each sample in our matrix participates in 3 pools, as opposed to 5 pools in [55], 6 pools in [52] and [56], and 8 pools in [53], which is advantageous from the point of view of pipetting time.
5) Sparsity estimation: Our work uses an explicit sparsity estimator and does not rely on any assumption regarding the prevalence rate.
6) Numerical comparisons: We found that COMP-NNLAD works better than the NNLAD method used in [53] on our matrices (see Tables V and S.XIX). We also found that COMP-NNLASSO and COMP-SBL have better sensitivity and specificity than COMP-NNLAD (see Tables II, III, and V). The method in [52] can correctly identify up to 5/384 (1.3%) of samples with 48 tests, with an average number of false positives less than 2.75 and an average number of false negatives less than 0.33. On synthetic simulations with their 48 × 384 Reed-Solomon code based matrix (released by the authors), for a total of 100 x vectors with ℓ0 norm of 5, using COMP-NNLASSO we obtained 1.51 false positives and 0.02 false negatives on average, with standard deviations of 1.439 and 0.14 respectively. Using COMP-SBL instead of COMP-NNLASSO, with all other settings unchanged, we obtained 1.4 false positives and 0.0 false negatives on average, with standard deviations of 1.6 and 0.1 respectively. A direct numerical comparison between our work and that in [52] is therefore not possible due to the lack of available real data; however, these numbers give some indication of performance (the evaluation protocol is sketched after this list).
7) Number of tests: We use 93 tests for 961 samples while achieving more than 0.99 sensitivity and specificity for k = 10 infections using COMP-SBL. In a similar setting, [55] uses 108 tests for Q = 1000 samples under a prevalence rate of 0.01 for exact two-stage recovery. The work in [56] uses 186 tests for 961 samples under the same prevalence rate, albeit with sensitivity equal to 1 and very high specificity. Matrix sizes studied in other works are very different from ours. The work in [61] builds on top of our Tapestry scheme to reduce the number of tests, but it is a two-stage adaptive technique and hence will require much more testing time.
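To make the comparison protocol of item 6 concrete, the sketch below shows the form of our synthetic evaluation: random k-sparse non-negative signals are generated, measured, decoded, and scored by false positive and false negative counts. Here decode stands in for COMP-NNLASSO, COMP-SBL, etc.; noiseless measurements and an arbitrary load range are used for brevity, whereas our experiments use the RT-PCR noise model of Eqn. (3).

```python
import numpy as np

def evaluate(A, decode, k=5, trials=100, seed=0):
    """Monte Carlo false positive / false negative counts for a decoder on
    random k-sparse signals measured through the pooling matrix A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    fp, fn = [], []
    for _ in range(trials):
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.uniform(1.0, 100.0, size=k)  # arbitrary positive loads
        y = A @ x  # noiseless for brevity; Eqn. (3) noise in our experiments
        declared = decode(A, y)  # boolean mask of declared positives
        truth = x > 0
        fp.append(int(np.sum(declared & ~truth)))
        fn.append(int(np.sum(~declared & truth)))
    return np.mean(fp), np.std(fp), np.mean(fn), np.std(fn)
```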
We have presented a non-adaptive, single-round technique for the prediction of infected samples as well as their viral loads, from an array of n samples, using a compressed sensing approach. We have shown empirically, on synthetic data as well as on some real lab acquisitions, that our technique can correctly predict the positive samples with a very small number of false positives and false negatives. Moreover, we have presented techniques for the appropriate design of the mixing matrix. Our single-round testing technique can be deployed in many different scenarios, such as the following:
1) Testing of 105 symptomatic individuals in 45 tests.
2) Testing of 195 asymptomatic individuals in 45 tests, assuming a low rate of infection. A good use case is airport security personnel, delivery personnel, or hospital staff.
3) Testing of 399 individuals in 63 tests. This can be used to test students coming back to campuses, police forces, or asymptomatic people in housing blocks and localities currently under quarantine.
4) Testing of 961 people in 93 tests, assuming a low infection rate. This might be suitable for airports and other places where samples can be collected and tested immediately, and where it might be possible to obtain liquid handling robots.
Outputs: We have designed an Android app named Byom Smart Testing to make our Tapestry protocol easy to deploy in the future. The app can be accessed at [62]. We are also sharing our code and some data at [63]. More information is available at our website [64].
Future work: Future work will involve extensive testing on real COVID-19 data, as well as implementation of a variety of algorithms for sensing matrix design and signal recovery, keeping in mind the accurate statistical noise model and accounting for occasional pipetting errors.
ACKNOWLEDGMENT
The authors would like to thank the two anonymous reviewers as well as the Associate Editor for their careful review of the previous version of this article and for helpful suggestions which have greatly improved it.
REFERENCES
[1] Estimating COVID-19 prevalence in the United States: A sample selection model approach.
[2] The detection of defective members of large populations.
[3] Israelis introduce method for accelerated COVID-19 testing.
[4] 'Pool testing' increases worldwide capacities many times over.
[5] An introduction to compressive sampling.
[6] Tapestry: A single-round smart pooling technique for COVID-19 testing.
[7] Robust nonnegative sparse recovery and the nullspace property of 0/1 measurements.
[8] Kirkman's schoolgirl problem.
[9] Solution of Kirkman's schoolgirl problem.
[10] Do superspreaders generate new superspreaders? A hypothesis to explain the propagation pattern of COVID-19.
[11] Viral dynamics in mild and severe cases of COVID-19.
[12] Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms.
[13] How is the COVID-19 virus detected using real time RT-PCR?
[14] Efficiency of real-time PCR.
[15] Group testing algorithms: Bounds and simulations.
[16] Group testing and sparse signal recovery.
[17] Motion compensated dynamic MRI reconstruction with local affine optical flow estimation.
[18] Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning.
[19] Compressed sensing of multichannel EEG signals: The simultaneous cosparsity and low-rank optimization.
[20] The restricted isometry property and its implications for compressive sensing.
[21] A simple proof of the restricted isometry property for random matrices.
[22] Deterministic construction of compressed sensing matrices.
[23] Compressed sensing using binary matrices of nearly optimal dimensions.
[24] Introduction to compressed sensing.
[25] Combining geometry and combinatorics: A unified approach to sparse signal recovery.
[26] Statistical Learning With Sparsity: The LASSO and Generalizations.
[27] Orthogonal matching pursuit: Recursive function approximation with application to wavelet decomposition.
[28] Orthogonal matching pursuit for sparse signal recovery with noise.
[29] Fast non-negative orthogonal matching pursuit.
[30] Sparse Bayesian learning and the relevance vector machine.
[31] Sparse Bayesian learning for basis selection.
[32] A review of sparse recovery algorithms.
[33] Rectified Gaussian scale mixtures and the sparse non-negative least squares problem.
[34] Efficient noise-blind ℓ1-regression of nonnegative compressible signals.
[35] A fast noniterative algorithm for compressive sensing using binary measurement matrices.
[36] Performance bounds for expander-based compressed sensing in Poisson noise.
[37] Steiner triple systems.
[38] Social golfer problem.
[39] Math games: Social golfer problem.
[40] Solution methods for the social golfer problem.
[41] Scheduling social golfers locally.
[42] Construction of low-density parity-check codes from Kirkman triple systems.
[43] Matrix and Steiner-triple-system smart pooling assays for high-performance transcription regulatory network mapping.
[44] Steiner systems.
[45] Steiner systems for two-stage disjunctive testing.
[46] Stable restoration and separation of approximately sparse signals.
[47] On optimization of the measurement matrix for compressive sensing.
[48] On the fly estimation of the sparsity degree in compressed sensing using sparse sensing matrices.
[49] Sparsity estimation from compressive projections via sparse random matrices.
[50] In vitro diagnostics EUAs, section: Templates for EUA submissions/diagnostic templates (molecular and antigen), bullet: Molecular diagnostic template for laboratories.
[51] Low-cost and high-throughput testing of COVID-19 viruses and antibodies via compressed sensing: System concepts and computational experiments.
[52] Efficient high-throughput SARS-CoV-2 testing to detect asymptomatic carriers.
[53] Practical high-throughput, nonadaptive and noise-robust SARS-CoV-2 testing.
[54] Noisy pooled PCR for virus testing.
[55] Group testing-based robust algorithm for diagnosis of COVID-19.
[56] Rapid, large-scale, and effective detection of COVID-19 via non-adaptive testing.
[57] Highly efficient de novo mutant identification in a Sorghum bicolor TILLING population using the ComSeq approach.
[58] Good error-correcting codes based on very sparse matrices.
[59] Deterministic compressed sensing matrices: Construction via Euler squares and applications.
[60] A new pooling strategy for high-throughput screening: The shifted transversal design.
[61] Two-stage adaptive pooling with RT-qPCR for COVID-19 screening.
[62] Byom app.
[63] Tapestry code and data.
[64] Tapestry website.
[65] Combinatorial Group Testing and Its Applications.
[66] On the theoretical analysis of cross validation in compressive sensing.
[67] Minimax optimal convex methods for Poisson inverse problems under ℓq-ball sparsity.