key: cord-0440478-9nsmdhue
authors: Lin, Yi-Jheng; Yu, Che-Hao; Liu, Tzu-Hsuan; Chang, Cheng-Shang; Chen, Wen-Tsuen
title: Constructions and Comparisons of Pooling Matrices for Pooled Testing of COVID-19
date: 2020-09-30
journal: nan
DOI: nan
sha: 07817e0d2b297a6cf0bf722d09b652cb04b78225
doc_id: 440478
cord_uid: 9nsmdhue

Abstract: In comparison with individual testing, group testing (also known as pooled testing) is more efficient in reducing the number of tests and potentially leading to tremendous cost reduction. As indicated in the recent article posted on the US FDA website, the group testing approach for COVID-19 has received a lot of interest lately. There are two key elements in a group testing technique: (i) the pooling matrix that directs samples to be pooled into groups, and (ii) the decoding algorithm that uses the group test results to reconstruct the status of each sample. In this paper, we propose a new family of pooling matrices from packing the pencil of lines (PPoL) in a finite projective plane. We compare their performance with various pooling matrices proposed in the literature, including 2D-pooling, P-BEST, and Tapestry, using the two-stage definite defectives (DD) decoding algorithm. By conducting extensive simulations for a range of prevalence rates up to 5%, our numerical results show that there is no pooling matrix with the lowest relative cost in the whole range of the prevalence rates. To optimize the performance, one should choose the right pooling matrix, depending on the prevalence rate. The family of PPoL matrices can dynamically adjust their column weights according to the prevalence rates and could be a better alternative than using a fixed pooling matrix.

The COVID-19 pandemic has deeply affected the daily life of many people in the world. The current strategy for dealing with COVID-19 is to reduce the transmission rate of COVID-19 by preventive measures, such as contact tracing, wearing masks, and social distancing.
One problematic characteristic of COVID-19 is that there are asymptomatic infections [1]. As those asymptomatic infections are unaware of their contagious ability, they can infect more people if they have not yet been detected [2]. As shown in the recent paper [3], massive COVID-19 testing in South Korea on Feb. 24, 2020, can greatly reduce the proportion of undetectable infected persons and effectively reduce the transmission rate of COVID-19. Massive testing for a large population is very costly if it is done one at a time. For a population with a low prevalence rate, group testing (or pool testing, pooled testing, batch testing), which tests a group by mixing several samples together, can save testing resources to a great extent. As indicated in the recent article posted on the US FDA website [4], the group testing approach has received a lot of interest lately. Also, the US CDC's guidance for the use of pooling procedures in SARS-CoV-2 testing [5] defines three types of tests: (i) diagnostic testing that is intended to identify occurrence at the individual level and is performed when there is a reason to suspect that an individual may be infected, (ii) screening testing that is intended to identify occurrence at the individual level even if there is no reason to suspect an infection, and (iii) surveillance testing that includes ongoing systematic activities, including collection, analysis, and interpretation of health-related data. The general guidance for diagnostic or screening testing using a pooling strategy in [5] (quoted below) basically follows the two-stage group testing procedure invented by Dorfman in 1943 [6]: "If a pooled test result is negative, then all specimens can be presumed negative with the single test. If the test result is positive or indeterminate, then all the specimens in the pool need to be retested individually."
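The two-stage rule quoted above admits a simple cost analysis (worked out in [6] and revisited in Section III below): one pooled test is always used, and all M samples are retested individually with probability 1 − (1 − r_1)^M, where M is the group size and r_1 the prevalence rate. A minimal sketch in Python (the function names are our own):

```python
def dorfman_relative_cost(group_size: int, prevalence: float) -> float:
    """Expected number of tests per sample for Dorfman's two-stage rule.

    One pooled test is always run; if the pool is positive, which happens
    with probability 1 - (1 - prevalence)**group_size, all group_size
    samples are retested individually.
    """
    m, r1 = group_size, prevalence
    return (1 + (1 - (1 - r1) ** m) * m) / m


def best_group_size(prevalence: float, max_size: int = 100) -> int:
    """Group size minimizing the expected relative cost."""
    return min(range(2, max_size + 1),
               key=lambda m: dorfman_relative_cost(m, prevalence))


# At a 1% prevalence rate, the optimal group size is 11 with an
# expected relative cost of about 20%, matching Table I of [6].
print(best_group_size(0.01))                      # → 11
print(round(dorfman_relative_cost(11, 0.01), 3))  # → 0.196
```

Note how quickly the saving erodes as the prevalence rate grows: the same formula gives a relative cost above 40% already at a 5% prevalence rate, which motivates the more elaborate pooling matrices studied in this paper.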
The Dorfman two-stage algorithm is a very simple group testing strategy. Recently, more sophisticated group testing algorithms have been proposed in the literature, see, e.g., [7]-[10]. Instead of pooling a sample into a single group, these algorithms require diluting a sample and then splitting it into multiple groups (pooled samples). Such a procedure is specified by a pooling matrix that directs each diluted sample to be pooled into a specific group. The test results of the pooled samples are then used for decoding (reconstructing) the status of each sample. In short, there are two key elements in a group testing strategy: (i) the pooling matrix, and (ii) the decoding algorithm. As COVID-19 is a severe contagious disease, one should be very careful about the decoding algorithm used for reconstructing the testing results of persons. Though decoding algorithms that use soft information for group testing, including various compressed sensing algorithms in [8]-[12], might be more efficient in reducing the number of tests, they are more prone to false positives and false negatives. A false positive might cause a person to be quarantined for 14 days, thus losing 14 days of work. On the other hand, a false negative might leave an infected person wandering around the neighborhood and cause more people to be infected. In view of this, it is important to have group testing results that are as "definite" as individual testing results (in a noiseless setting). Following the CDC guidance [5], we use the decoding algorithm called the definite defectives (DD) algorithm in the literature (see Algorithm 2.3 of the monograph [13]), which can produce definite testing results. The DD algorithm first identifies negative samples from a negative testing result of a group (as advised by the CDC guidance [5]). Such a step is known as the combinatorial orthogonal matching pursuit (COMP) step in the literature [13].
Then the DD algorithm identifies positive samples if they are in a group with only one positive sample. Not every sample can be decoded by the DD algorithm. As in the Dorfman two-stage algorithm, samples that are not decoded by the DD algorithm go through the second stage, where they are tested individually. We call such an algorithm the two-stage DD algorithm. One of the main objectives of this paper is to compare the performance of various pooling matrices proposed in the literature, including 2D-pooling [7], P-BEST [8], and Tapestry [9], [10], using the two-stage DD decoding algorithm. In addition to these pooling matrices, we also propose a new construction of a family of pooling matrices from packing the pencil of lines (PPoL) in a finite projective plane. The family of PPoL pooling matrices has very nice properties: (i) both the column correlation and the row correlation are bounded by 1, and (ii) there is freedom to choose the construction parameters to optimize performance. To measure the amount of saving of a group testing method, we adopt the performance measure called the expected relative cost in [6]. The expected relative cost is defined as the ratio of the expected number of tests required by the group testing technique to the number of tests required by individual testing. We then measure the expected relative costs of these pooling matrices for a range of prevalence rates up to 5%. Some of the main findings of our numerical results are as follows: (i) There is no pooling matrix that has the lowest relative cost in the whole range of the prevalence rates considered in our experiments. To optimize the performance, one should choose the right pooling matrix, depending on the prevalence rate. (ii) The expected relative costs of the two pooling matrices used in Tapestry [9], [10] are high compared to the other pooling matrices considered in our experiments.
Their performance, in terms of the expected relative cost, is even worse than that of the (optimized) Dorfman two-stage algorithm. However, Tapestry is capable of decoding most of the samples in the first stage. In other words, the percentages of samples that need to go through the second stage are the smallest among all the pooling matrices considered in our experiments. (iii) P-BEST [8] has a very low expected relative cost when the prevalence rate is below 1%. However, its expected relative cost increases dramatically when the prevalence rate is above 1.3%. (iv) 2D-pooling [7] has a low expected relative cost when the prevalence rate is near 5%. Unlike Tapestry, P-BEST, and PPoL, which rely on robots for pipetting, the implementation of 2D-pooling is relatively easy for humans. (v) There is a PPoL pooling matrix with column weight 3 that outperforms the P-BEST pooling matrix for the whole range of the prevalence rates considered in our experiments (up to 5%). We suggest using that PPoL pooling matrix up to a prevalence rate of 2% and then switching to other PPoL pooling matrices as the prevalence rate increases. The detailed suggestions are shown in Table IV of Section V. The paper is organized as follows: in Section II, we briefly review the group testing problem, including the mathematical formulation and the DD decoding algorithm. In Section III, we introduce the related works that are used in our comparison study. We then propose the new family of PPoL pooling matrices in Section IV. In Section V, we conduct extensive simulations to compare the performance of various pooling matrices using the two-stage DD algorithm. The paper is concluded in Section VI, where we discuss possible extensions for future works. Consider the group testing problem with M samples (indexed from 1, 2, . . . , M) and N groups (indexed from 1, 2, . . . , N).
The M samples are pooled into the N groups (pooled samples) through an N × M binary matrix H = (h_{n,m}) so that the m-th sample is pooled into the n-th group if h_{n,m} = 1 (see Figure 1). Such a matrix is called the pooling matrix in this paper. Note that a pooling matrix corresponds to the biadjacency matrix of an N × M bipartite graph. Let x = (x_1, x_2, . . . , x_M) be the binary state vector of the M samples and y = (y_1, y_2, . . . , y_N) be the binary state vector of the N groups. Then y = H x, where the matrix operation is under the Boolean algebra (that replaces the usual addition by the OR operation and the usual multiplication by the AND operation). The main objective of group testing is to decode the vector x given the observation vector y under certain assumptions. In this paper, we adopt the following basic assumptions for binary samples: (i) Every sample is binary, i.e., it is either positive (1) or negative (0). (ii) Every group is binary, and a group is positive (1) if at least one sample in that group is positive. On the other hand, a group is negative (0) if all the samples pooled into that group are negative. If we test each sample one at a time, then the number of tests for M samples is M, and the average number of tests per sample is 1. The key advantage of using group testing is that the number of tests per sample can be greatly reduced. One important performance measure of group testing, called the expected relative cost in [6], is the ratio of the expected number of tests required by the group testing technique to the number of tests required by individual testing. The main objective of this paper is to compare the expected relative costs of various group testing methods. In this section, we briefly review the definite defectives (DD) algorithm (see Algorithm 2.3 of [13]). The DD algorithm first identifies negative samples from a negative testing result of a group.
Such a step is known as the combinatorial orthogonal matching pursuit (COMP) step. Then the DD algorithm identifies positive samples if they are in a group with only one positive sample. The detailed steps of the DD algorithm are outlined in Algorithm 1.

Algorithm 1 (the DD algorithm). Output: an M-vector for the test results of the M samples.
Step 0: Initially, every sample is marked "un-decoded."
Step 1: If there is a negative group, then all the samples pooled into that group are decoded to be negative.
Step 2: The edges of samples decoded to be negative in the bipartite graph are removed from the graph.
Step 3: Repeat from Step 1 until there is no negative group.
Step 4: If there is a positive group with exactly one (remaining) sample in that group, then that sample is decoded to be positive.
Step 5: Repeat from Step 4 until no more samples can be decoded.

In Figure 2, we provide an illustrative example for Algorithm 1. In Figure 2(a), the test result of G2 is negative, and thus the three samples S1, S4, and S5 are decoded to be negative. In Figure 2(b), the edges that are connected to the samples S1, S4, and S5 are removed from the bipartite graph. In Figure 2(c), the test results of the two groups G1 and G3 are positive. As S2 is the only sample in G3, S2 is decoded to be positive. Note that one might not be able to decode all the samples by the above decoding algorithm. For instance, if a particular sample is pooled into groups that all have at least one positive sample, then there is no way to know whether that sample is positive or negative. As shown in Figure 3, the sample S3 cannot be decoded by the DD algorithm, as the test results of the three groups are the same whether or not S3 is positive. As shown in Lemma 2.2 of [13], one important guarantee of the DD algorithm is that there are no false positives. In order to resolve all the "un-decoded" samples, we add another stage by individually testing each "un-decoded" sample. This leads to the following two-stage DD algorithm in Algorithm 2.
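The COMP and DD steps above can be sketched compactly in code. The following is a minimal noiseless-setting implementation (our own code; the small pooling matrix at the end is an illustrative stand-in, not the matrix of Figure 2):

```python
import numpy as np

def dd_decode(H, y):
    """Definite defectives (DD) decoding in a noiseless setting.

    H : (N, M) 0/1 pooling matrix; y : length-N 0/1 group results.
    Returns a length-M vector: 0 (negative), 1 (positive), or
    -1 (un-decoded; DD2 sends these samples to individual testing).
    """
    H, y = np.asarray(H), np.asarray(y)
    x = np.full(H.shape[1], -1)
    # COMP step: every sample pooled into a negative group is negative.
    for n in np.flatnonzero(y == 0):
        x[H[n] == 1] = 0
    # DD step: if a positive group has exactly one remaining
    # (not-yet-removed) sample, that sample must be positive.
    changed = True
    while changed:
        changed = False
        for n in np.flatnonzero(y == 1):
            remaining = np.flatnonzero((H[n] == 1) & (x != 0))
            if remaining.size == 1 and x[remaining[0]] == -1:
                x[remaining[0]] = 1
                changed = True
    return x

H = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])
x_true = np.array([0, 1, 0, 0])
y = (H @ x_true > 0).astype(int)   # Boolean OR of the pooled samples
print(dd_decode(H, y))             # → [0 1 0 0]
```

Samples left at -1 (e.g., when every group containing them is positive, as in the Figure 3 situation) are exactly those retested individually in the second stage of the two-stage DD algorithm.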
In [14]-[16], it was shown that a single positive sample can still be detected even in pools of 5-32 samples for the standard RT-qPCR test of COVID-19. Such an experimental result provides supporting evidence for group testing of COVID-19. In the following, we review four group testing strategies proposed in the literature for COVID-19. The Dorfman two-stage algorithm [17]: For the case that N = 1, i.e., every sample is pooled into a single group, the DD2 algorithm is simply the original Dorfman two-stage algorithm [6], i.e., if the group of M samples is tested negative, then all the M samples are ruled out. Otherwise, all the M samples are tested individually. Suppose that the prevalence rate is r_1. Then the expected number of tests to decode the M samples by the Dorfman two-stage algorithm is 1 + (1 − (1 − r_1)^M) M. As such, the expected relative cost (defined in [6] as the ratio of the expected number of tests required by the group testing technique to the number of tests required by individual testing) is 1/M + 1 − (1 − r_1)^M. As shown in Table I of [6], the optimal group size M is 11, with an expected relative cost of 20%, when the prevalence rate r_1 is 1%. 2D-pooling [7]: On a 96-well plate, there are 8 rows and 12 columns. Pool the samples in the same row (resp. column) into a group. This results in 20 groups for 96 samples. One advantage of this simple 2D-pooling strategy is to minimize pipetting errors. P-BEST [8]: P-BEST [8] uses a 48 × 384 pooling matrix constructed from the Reed-Solomon code [18] for pooled testing of COVID-19. For this pooling matrix, each sample is pooled into 6 groups, and each group contains 48 samples.
In [8], the authors proposed using a compressed sensing algorithm, called the Gradient Projection for Sparse Reconstruction (GPSR) algorithm, for decoding. Though it is claimed in [8] that the GPSR algorithm can detect up to 1% of positive carriers, there is no guarantee that every decoded sample (by the GPSR algorithm) is correct. Tapestry [9], [10]: The Tapestry scheme [9], [10] uses Kirkman triples to construct its pooling matrices. For the pooling matrices in [9], [10], each sample is pooled into 3 groups (in their experiments, some samples are only pooled into 2 groups). As such, they are sparser than the one used by P-BEST. However, one of the restrictions for the pooling matrices constructed from Kirkman triples is that the column weights must be 3. Such a restriction limits the freedom to optimize performance according to the prevalence rate. We note that a compressed-sensing-based decoding algorithm was proposed in [9], [10]. Such a decoding algorithm further exploits the viral load (Ct value) of each pool and reconstructs the Ct value of each positive sample. It is claimed to be viable not just for low (< 4%) prevalence rates, but even for moderate prevalence rates (5%-10%). In this section, we propose a new family of pooling matrices from packing the pencil of lines (PPoL) in a finite projective plane. Our idea of constructing PPoL pooling matrices was inspired by the constructions of channel hopping sequences for the rendezvous search problem in cognitive radio networks and the constructions of grant-free uplink transmission schedules in 5G networks (see, e.g., [19]-[22]), in particular, the channel hopping sequences constructed by the PPoL algorithm in [19]. A pooling matrix is said to be (d_1, d_2)-regular if there are exactly d_1 (resp. d_2) nonzero elements in each column (resp. row). In other words, the degree of every left-hand (resp. right-hand) node in the corresponding bipartite graph is d_1 (resp. d_2).
The total number of edges in the bipartite graph is d_1 M = d_2 N, and thus the (compressing) gain of the pooling matrix is G = M/N = d_2/d_1. (2)

A. Perfect difference sets and finite projective planes

As our construction of the pooling matrix is from packing the pencil of lines in a finite projective plane, we first briefly review the notions of difference sets and finite projective planes. Definition 3: (Finite projective planes) A finite projective plane of order m, denoted by PG(2, m), is a collection of m^2 + m + 1 lines and m^2 + m + 1 points such that (P1) every line contains m + 1 points, (P2) every point is on m + 1 lines, (P3) any two distinct lines intersect at exactly one point, and (P4) any two distinct points lie on exactly one line. When m is a prime power, Singer [23] established the connection between an (m^2 + m + 1, m + 1, 1)-perfect difference set and a finite projective plane of order m through a collineation that maps points (resp. lines) to points (resp. lines) in a finite projective plane. Specifically, suppose that D = {a_0, a_1, . . . , a_m} is an (m^2 + m + 1, m + 1, 1)-perfect difference set, and let the points be the elements of Z_p with p = m^2 + m + 1 and the lines be D_ℓ = {a_0 + ℓ, a_1 + ℓ, . . . , a_m + ℓ} (mod p), ℓ = 0, 1, . . . , p − 1. Then these m^2 + m + 1 points and m^2 + m + 1 lines form a finite projective plane of order m. In this section, we propose the PPoL algorithm for constructing pooling matrices. For this, one first constructs an (m^2 + m + 1, m + 1, 1)-perfect difference set D = {a_0, a_1, . . . , a_m} with a_0 = 0 < a_1 = 1 < a_2 < . . . < a_m < m^2 + m + 1. (4) Let p = m^2 + m + 1, and let D_ℓ, ℓ = 0, 1, 2, . . . , p − 1, be the p lines in the corresponding finite projective plane. It is easy to see that the m + 1 lines in the corresponding finite projective plane that contain point 0 are D_0, D_{p−a_1}, D_{p−a_2}, . . . , D_{p−a_m}. These m + 1 lines are called the pencil of lines that contain point 0 (as the pencil point). As the only intersection of the m + 1 lines is point 0, these m + 1 lines, excluding point 0, are disjoint, and thus can be packed into Z_p. This is formally proved in the following lemma. Lemma 4: The m + 1 lines D_0, D_{p−a_1}, . . . , D_{p−a_m}, excluding point 0, are disjoint, and together with point 0 their union covers all the m^2 + m + 1 points of the finite projective plane. Proof. First, note that {D_0, D_{p−a_1}, . . . , D_{p−a_m}} are the m + 1 lines that contain point 0.
As any two distinct lines intersect at exactly one point, we know that D_0 ∩ D_{p−a_i} = {0} for i ≠ 0, and that D_{p−a_i} ∩ D_{p−a_j} = {0} for i ≠ j. Thus, excluding point 0, they are disjoint. As there are m + 1 points in D_0 and m points in each of D_{p−a_i}\{0}, i = 1, 2, . . . , m, the union D_0 ∪ D_{p−a_1} ∪ . . . ∪ D_{p−a_m} contains m + 1 + m^2 points. These m + 1 + m^2 points are exactly the set of m^2 + m + 1 points in the finite projective plane of order m. In Algorithm 3, we show how one can construct a pooling matrix from a finite projective plane. The idea is to first construct a bipartite graph with the line nodes on the left and the point nodes on the right. There is an edge between a point node and a line node if that point is on that line. Then we start trimming this line-point bipartite graph to achieve the needed compression ratio. Specifically, we select the subgraph with the m^2 line nodes that do not contain point 0 (on the left) and the d_1 m point nodes in the union of d_1 lines of the pencil (on the right). Note that in Algorithm 3, the number of samples has to be m^2. However, this restriction may not be met in practice. A simple way to tackle this problem is to add dummy samples to ensure that the total number of samples is m^2. In the literature, there are some sophisticated methods (see, e.g., the recent work [24]) that further consider the "balance" issue, i.e., samples should be pooled into groups as evenly as possible. An illustrative example with m = 2 is shown in Figure 4(a). In Step 4, first remove point 0 and line 0, along with the edges attached to these two nodes, from the bipartite graph. The nodes and the edges that need to be removed are marked in red in Figure 4(b), and the trimmed bipartite graph is shown in Figure 4(c). Then, let G = (g_{n,ℓ}) be the 6 × 6 biadjacency matrix of the trimmed bipartite graph. The two lines that need to be removed are marked in red in Figure 4(d), and the bipartite graph after removing the two lines is shown in Figure 4. Proposition 6: There is at most one common nonzero element in two rows (resp.
columns) in the pooling matrix H from Algorithm 3, i.e., the inner product of two row vectors (resp. column vectors) is at most 1. Proof. This is because the bipartite graph with the biadjacency matrix H is a subgraph of the line-point bipartite graph corresponding to a finite projective plane. From (P3) and (P4) of Definition 3, any two distinct lines intersect at exactly one point, and any two distinct points lie on exactly one line. Thus, there is at most one common nonzero element in two rows (resp. columns) in H from Algorithm 3. Corollary 7: The girth (the minimum length of a cycle) of the bipartite graph with biadjacency matrix H is at least 6. Proof. As the length of a cycle in a bipartite graph must be an even number, it suffices to show that there does not exist a cycle of length 4. We prove this by contradiction. Suppose that there is a cycle of length 4, and that this cycle contains two line nodes L_1 and L_2 and two point nodes P_1 and P_2. Then the intersection of the two lines L_1 and L_2 contains the two points P_1 and P_2. This contradicts (P3) in Definition 3. Theorem 8: Consider using the d_1 m × m^2 pooling matrix H from Algorithm 3 for a binary state vector x in a noiseless setting. If the number of positive samples in x is not larger than d_1 − 1, then every sample can be correctly decoded by the DD algorithm in Algorithm 1. Proof. Suppose that there are at most d_1 − 1 positive samples. We first show that every negative sample can be correctly decoded by the DD algorithm in Algorithm 1. Consider a negative sample. Since there are at most d_1 − 1 positive samples that can be pooled into the d_1 groups of this negative sample, and two different samples can be in a common group at most once (Proposition 6), there must be at least one group without positive samples (among the d_1 groups of this negative sample). Thus, this negative sample can be correctly decoded. Now consider a positive sample.
Since there are at most d_1 − 2 other positive samples that can be pooled into the d_1 groups of this positive sample, and two different samples can be in a common group at most once (Proposition 6), there must be at least one group in which this positive sample is the only positive sample. Thus, every positive sample can be correctly decoded. We note that there are other methods that can also generate bipartite graphs satisfying the property in Proposition 6. For instance, in the recent paper [25], Täufer used the shifted transversal design to generate "multipools" (Definition 1 of [25]) that satisfy the property in Proposition 6 when m is a prime (Theorem 3 of [25]). In this section, we establish the connection between the PPoL design and the shifted transversal design when m is restricted to a prime. We do this by identifying a mapping between these two designs in the following example. Example 2: Consider m = 3 in the PPoL algorithm. Then let p = m^2 + m + 1 = 13, and D_0 = {a_0, a_1, a_2, a_3} = {0, 1, 4, 6} be a perfect difference set in Z_13. By using the PPoL algorithm in Algorithm 3, we obtain a bipartite graph with 9 samples (lines) and 12 groups (points) in Figure 5. In the following, we discuss the four cases with d_1 = 1, 2, 3, 4, respectively (see Figure 5). The above PPoL pooling strategy is the same as the (N, n, k) = (m^2, m, d_1)-multipool in the shifted transversal design [25] if we arrange the 9 samples in the 3 × 3 square in Table I. Specifically, pooling along rows (resp. columns) yields three groups. In fact, these two constructions are closely related to orthogonal Latin squares [26], where the entry in position (i, j) of the Latin square with slope r is (r · i + j) in GF(3). Including the "vertical" and "horizontal" cases, the maximum multiplicity k in the shifted transversal design is n + 1 = 4. Similarly, the maximum value of d_1 in the PPoL algorithm is m + 1 = 4. Moreover, pooling matrices that satisfy the decoding property in Theorem 8 are known as superimposed codes [27].
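The m = 3 construction in Example 2 is easy to reproduce in code. The sketch below builds the lines D_ℓ from the perfect difference set {0, 1, 4, 6} in Z_13, keeps the m² = 9 lines avoiding point 0 as samples and the nonzero points of d_1 pencil lines as groups, and checks the degrees and Proposition 6. The function names are our own, and taking the first d_1 pencil lines is an arbitrary choice; this is a sketch of Algorithm 3, not a verbatim reproduction:

```python
from itertools import combinations

def ppol_matrix(m, D, d1):
    """PPoL pooling matrix sketch: groups (rows) x samples (columns)."""
    p = m * m + m + 1
    lines = [frozenset((a + l) % p for a in D) for l in range(p)]
    samples = [L for L in lines if 0 not in L]    # the m^2 lines avoiding 0
    pencils = [L for L in lines if 0 in L][:d1]   # d1 lines through point 0
    points = [x for L in pencils for x in sorted(L) if x != 0]
    # H[n][j] = 1 iff point points[n] lies on line samples[j].
    return [[1 if x in L else 0 for L in samples] for x in points]

H = ppol_matrix(3, {0, 1, 4, 6}, d1=4)   # the d1 = 4 case of Example 2
print(len(H), len(H[0]))                 # → 12 9

# Every sample is pooled into exactly d1 = 4 groups...
assert all(sum(row[j] for row in H) == 4 for j in range(9))
# ...and every group contains exactly m = 3 samples.
assert all(sum(row) == 3 for row in H)
# Proposition 6: any two columns have inner product at most 1.
for j, k in combinations(range(9), 2):
    assert sum(r[j] * r[k] for r in H) <= 1
```

By Theorem 8, this d_1 = 4 matrix decodes every sample correctly whenever there are at most 3 positive samples among the 9.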
In this section, we conduct a probabilistic analysis of the PPoL pooling matrices. We make the following assumption: (A1) All the samples are i.i.d. Bernoulli random variables. A sample is positive (resp. negative) with probability r_1 (resp. r_0). The probability r_1 is known as the prevalence rate in the literature. Note that r_1 + r_0 = 1. Also, let q_1 (resp. q_0) be the probability that the group end of a randomly selected edge is positive (resp. negative). Excluding the randomly selected edge, there are d_2 − 1 remaining edges in that group, and thus q_0 = r_0^{d_2−1} and q_1 = 1 − r_0^{d_2−1}. Let p_0 be the conditional probability that a sample cannot be decoded, given that the sample is a negative sample. Note that a negative sample can be decoded if at least one of its edges is in a negative group, excluding its own edge (see Figure 6). Consider a negative sample, called the tagged sample. Since the girth of the bipartite graph of the pooling matrix is 6 (as shown in Corollary 7), the samples in the d_1 groups of the subtree of the tagged sample are distinct (see the tree expansion in Figure 6). Thus, p_0 = (1 − q_0)^{d_1}. (11) Let p̂_0 be the conditional probability that the sample end of a randomly selected edge cannot be decoded, given that the sample end is a negative sample. Note that the excess degree of a sample (excluding the randomly selected edge) is d_1 − 1. Analogous to the argument for (11) (see the bottom subtree of the tree expansion in Figure 7), we have p̂_0 = (1 − q_0)^{d_1−1}. (12) Let p_1 be the conditional probability that a sample cannot be decoded given that the sample is a positive sample. Note that a positive sample can be decoded if at least one of its edges is in a group in which all the edges are removed except the edge of the positive sample. Since an edge is removed if its sample end is a negative sample and that sample end is decoded to be negative, the probability that an edge is removed is (1 − p̂_0) r_0.
If the tree expansion in Figure 7 is actually a tree, then p_1 = (1 − ((1 − p̂_0) r_0)^{d_2−1})^{d_1}. (13) We note that the tree expansion in Figure 7 may not be a tree for a PPoL pooling matrix generated from Algorithm 3, and thus the identity in (13) is only an approximation. A sufficient condition for the tree expansion in Figure 7 to be a tree of depth 4 is that the girth of the bipartite graph is larger than 8. (If the graph in Figure 7 is not a tree, i.e., there is a loop in that graph, then the girth of the bipartite graph is less than or equal to 8.) Unfortunately, the girth of a PPoL pooling matrix can only be proved to be at least 6. Since a sample cannot be decoded with probability r_0 p_0 + r_1 p_1, the average number of tests needed for the DD2 algorithm in Algorithm 2 to decode the M samples is N + M (r_0 p_0 + r_1 p_1). The expected relative cost for the DD2 algorithm with an N × M pooling matrix is thus (N + M (r_0 p_0 + r_1 p_1))/M = 1/G + r_0 p_0 + r_1 p_1, (14) where G = M/N is the (compressing) gain of the pooling matrix in (2). Note that for a (d_1, d_2)-regular pooling matrix, we have from (2) that G = d_2/d_1. Thus, we can use (11), (13), and (14) to find the (d_1, d_2)-regular pooling matrix that has the lowest expected relative cost (though (13) is only an approximation for the pooling matrices constructed from the PPoL algorithm). In Table II, we use a grid search to find the (d_1, d_2)-regular pooling matrix with the lowest expected relative cost for various prevalence rates r_1 up to 10%. The search regions for the grid search are 2 ≤ d_1 ≤ 8 and d_1 ≤ d_2 ≤ 31. In the last column of this table, we also show the expected relative cost of the Dorfman two-stage algorithm (Table I of [6]). As shown in this table, using the DD2 algorithm (with the optimal pooling matrices) has significant gains over the Dorfman two-stage algorithm. Unfortunately, not every optimal (d_1, d_2)-regular pooling matrix in Table II can be constructed by using the PPoL algorithm in Algorithm 3.
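The grid search just described can be reproduced directly from (11), (13), and (14). A sketch with our own function names (recall that (13), and hence the resulting cost, is a tree approximation):

```python
def relative_cost(d1, d2, r1):
    """Approximate expected relative cost of DD2 for a (d1, d2)-regular
    pooling matrix, following (11)-(14)."""
    r0 = 1.0 - r1
    q0 = r0 ** (d2 - 1)                    # the group end is negative
    p0 = (1 - q0) ** d1                    # (11)
    p0_hat = (1 - q0) ** (d1 - 1)          # (12): excess degree d1 - 1
    p1 = (1 - ((1 - p0_hat) * r0) ** (d2 - 1)) ** d1   # (13), tree approx.
    return d1 / d2 + r0 * p0 + r1 * p1     # (14), with 1/G = d1/d2


def best_design(r1, d1_max=8, d2_max=31):
    """Grid search over 2 <= d1 <= d1_max and d1 <= d2 <= d2_max."""
    return min(((d1, d2) for d1 in range(2, d1_max + 1)
                for d2 in range(d1, d2_max + 1)),
               key=lambda d: relative_cost(d[0], d[1], r1))


print(best_design(0.01))    # → (3, 31) under this approximation
```

At a 1% prevalence rate, the approximate cost of the (3, 31) design is about 0.12, i.e., roughly an 8-fold saving over individual testing, in line with the numbers reported later in Table IV.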
In the next section, we will look for suboptimal pooling matrices that have small performance degradation. In this section, we compare the performance of various pooling matrices by using the DD2 algorithm in Algorithm 2. The first four pooling matrices are constructed by using the PPoL algorithm in Algorithm 3 with the parameters (m, d_1) = (31, 3), (23, 4), (13, 3), and (7, 2), respectively. The fifth pooling matrix is the pooling matrix used in P-BEST [8]. The sixth matrix is the 15 × 35 pooling matrix constructed from the Kirkman triples. The next two pooling matrices are used in Tapestry [9], [10]. The last pooling matrix is the 2D-pooling matrix in [7]. In Table III, we show the basic information of these pooling matrices. The size of an N × M pooling matrix indicates that the number of groups is N, and the number of samples is M. The parameter d_1 is the number of groups into which a sample is pooled. On the other hand, d_2 is the number of samples in a group. Note that some pooling matrices are not (d_1, d_2)-regular. For instance, in the 2D-pooling matrix, there are 8 groups with 12 samples and 12 groups with 8 samples. Also, both the 16 × 40 matrix and the 24 × 60 matrix used in Tapestry are not (d_1, d_2)-regular. The column marked with row cor. (resp. col. cor.) is the maximum of the inner product of two rows (resp. columns) in a pooling matrix. For a pooling matrix, the column marked with girth is the minimum length of a cycle in the bipartite graph corresponding to that pooling matrix. The column marked with (comp.) gain is the compressing gain G of a pooling matrix, which is the ratio of the number of columns (samples) to the number of rows (groups), i.e., G = M/N. As shown in Table III, both the row correlation and the column correlation of the pooling matrices constructed from the PPoL algorithm in Algorithm 3 are 1. So is the 15 × 35 pooling matrix constructed from the Kirkman triples.
Such a correlation result is expected from Proposition 6. On the other hand, the row correlation and the column correlation of the pooling matrix in P-BEST [8] are 6 and 2, respectively. Also, the girth of the pooling matrix in P-BEST is only 4, which is smaller than that of the other four matrices. The girth of the 16 × 40 pooling matrix in Tapestry is also 4. This shows that the pooling matrices from the PPoL algorithm are more "spread out" than the pooling matrix in P-BEST and the 16 × 40 pooling matrix in Tapestry. To compare the performance of these pooling matrices, we conduct 10,000 independent experiments for each value of the prevalence rate r_1, ranging from 0% to 5%. Each numerical result is obtained by averaging over these 10,000 independent experiments. In Figure 8, we show the (measured) conditional probability p_0 (that a sample cannot be decoded given it is a negative sample) for these pooling matrices. For the PPoL pooling matrices, the measured p_0's match extremely well with the theoretical results from (11). As shown in this figure, the Kirkman matrix and the two matrices in Tapestry have the best performance. This is because their d_2's (the numbers of samples in a group) are small (below 9 for these three matrices). As such, the probability that a group is tested negative is higher than for the other pooling matrices. Note that these three matrices also have low (compressing) gains, 2.33-2.5. On the other hand, P-BEST has the worst performance for p_0, as the number of samples in a group for that matrix is 48, which is the largest among all these pooling matrices. In Figure 9, we show the (measured) conditional probability p_1 (that a sample cannot be decoded given it is a positive sample) for these pooling matrices. Once again, the Kirkman matrix and the two matrices in Tapestry have the best performance. This is mainly due to the low (compressing) gains of these three matrices.
Though not shown in Figure 9, we note that the measured p1's are very close to those from (13), and thus the tree expansion in Figure 7 is indeed tree-like.

As discussed in Section IV-D, the probability that a sample cannot be decoded is r0 p0 + r1 p1. This is also the probability that a sample needs to go through the second stage for individual testing. In Figure 10, we show the probability r0 p0 + r1 p1 as a function of the prevalence rate r1 for various pooling matrices. As shown in this figure, the Kirkman matrix and the two matrices in Tapestry have the best performance; once again, this is mainly due to the low (compressing) gains of these three matrices. We note that it takes time to do the second-stage test, so the numerical results in Figure 10 imply that using the Kirkman matrix (or the two matrices in Tapestry) yields the shortest expected time to obtain a testing result.

A fair comparison of these pooling matrices is to measure their expected relative costs (defined in [6]). Recall that the expected relative cost is the ratio of the expected number of tests required by the group testing technique to the number of tests required by individual testing. In Figure 11, we show the (measured) expected relative costs of these pooling matrices. In this figure, we also plot the curve for the Dorfman two-stage algorithm (the black curve), with the optimal group size M chosen from Table 1 of [6] for the prevalence rates 1%, 2%, ..., 5%. To our surprise, the curves for the Kirkman matrix and the two matrices in Tapestry lie above the black curve. This means that the expected relative costs of these three matrices are higher than that of the (optimized) Dorfman two-stage algorithm. Thus, if the additional amount of time to go through the second stage is not critical, using other pooling matrices could lead to more cost reduction than using these three matrices.
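The two cost curves being compared can be sketched in closed form. The DD2 expression below is our reading of the text (the exact form of (14) is in the paper): the first stage spends N = M/G tests, i.e., 1/G per sample, and a fraction r0 p0 + r1 p1 of samples is retested individually. The Dorfman cost per sample for group size M is the classical 1/M + 1 - (1 - r1)^M, minimized over M:

```python
def dd2_relative_cost(G, r1, p0, p1):
    """Expected tests per sample for DD2, relative to individual testing:
    1/G for the first stage plus the expected retest fraction (hedged
    reading of the paper's Eq. (14))."""
    r0 = 1.0 - r1
    return 1.0 / G + r0 * p0 + r1 * p1

def dorfman_relative_cost(r1, max_group=100):
    """Dorfman two-stage cost per sample, minimized over the group size M:
    one pooled test per group, plus M individual retests whenever the
    pooled test is positive (probability 1 - (1 - r1)^M)."""
    return min(1.0 / M + 1.0 - (1.0 - r1) ** M
               for M in range(2, max_group + 1))
```

For example, at r1 = 1% the optimized Dorfman cost is about 0.196 (roughly a 5-fold reduction), so any matrix whose first-stage budget 1/G plus retest fraction stays below that, such as a PPoL matrix with G near 10 and small p0, p1, beats Dorfman; the low-gain Kirkman and Tapestry matrices start at 1/G of 0.4 or more and cannot.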
There are several pooling matrices that have very low relative costs when the prevalence rates are below 1%; the P-BEST pooling matrix is one of them. However, the relative cost of the P-BEST pooling matrix increases dramatically when the prevalence rate exceeds 1.3%, and it is higher than that of the (optimized) Dorfman two-stage algorithm when the prevalence rate is above 2.5%. On the other hand, 2D-pooling has a very low relative cost when the prevalence rates are above 2.5%.

To summarize, there does not exist a pooling matrix that has the lowest relative cost over the whole range of prevalence rates considered in our experiments. To optimize the performance, one should choose the right pooling matrix depending on the prevalence rate. However, this might be difficult, as the exact prevalence rate of a new outbreak of COVID-19 in a region might not be known in advance. Our suggestion is to use suboptimal PPoL matrices for a range of prevalence rates, as shown in Table IV. As shown in this table, the costs computed from the theoretical approximations in (14) and the costs measured from simulations are very close, and they are within 2% of the minimum costs for (d1, d2)-regular pooling matrices in Table II.

From our numerical results in Figure 11, we suggest using the PPoL matrix with d1 = 3 and d2 = 31 when the prevalence rate r1 is below 2%. In this range of prevalence rates, its expected relative cost is even smaller than that of P-BEST. Moreover, it can achieve an 8-fold reduction in test costs when the prevalence rate is near 1% (as shown in Table IV), and most samples can be decoded in the first stage (as shown in Figure 10). When the prevalence rate r1 is between 2% and 4%, we suggest using the PPoL matrix with d1 = 4 and d2 = 23. In this range of prevalence rates, such a pooling matrix can still achieve (at least) a 3-fold reduction in test costs.
Roughly 17% of samples need to go through the second stage when the prevalence rate is near 4% (as shown in Figure 10). When the prevalence rate r1 is between 4% and 7%, we suggest using the PPoL matrix with d1 = 3 and d2 = 13, which can still achieve (at least) a 2-fold reduction in test costs. When the prevalence rate r1 is between 7% and 10%, we suggest using the PPoL matrix with d1 = 2 and d2 = 7. Though its expected relative cost is still lower than that of the Dorfman two-stage algorithm, the difference is small.

In this paper, we proposed a new family of PPoL pooling matrices that have maximum column correlation and row correlation of 1 for a wide range of column weights. Using the two-stage definite defectives (DD2) decoding algorithm, we compared their performance with various pooling matrices proposed in the literature, including 2D-pooling [7], P-BEST [8], and Tapestry [9], [10]. Our numerical results showed that no pooling matrix has the lowest expected relative cost over the whole range of prevalence rates. To optimize the performance, one should choose the right pooling matrix depending on the prevalence rate. As the family of PPoL matrices can dynamically adjust their construction parameters according to the prevalence rates, using such a family of pooling matrices might lead to better cost reduction than using a fixed pooling matrix.

There are several research directions for future work: (i) Other decoding algorithms: in this paper, we only evaluated the performance of pooling matrices using the DD2 algorithm. To probe further, we are currently investigating the possibility of using other decoding algorithms, in particular the GPSR algorithm in [8] and the belief propagation (BP) algorithm in [28]. (ii) Noisy decoding: the DD2 algorithm works very well in the noiseless setting. However, it is not clear whether it can continue to perform well in a noisy setting.
There are several noise models proposed in the literature (see, e.g., the monograph [13]). Among them, the dilution noise model is of particular interest to us. Our preliminary numerical results show that the number of false negatives of the DD2 algorithm might increase significantly as the dilution noise increases. As such, one should be cautious about using the DD2 algorithm when the dilution noise is not negligible. Another recent work [29] deals with noisy pooled testing, where a noisy communication channel causes false positives and false negatives. To decode samples, the authors of [29] proposed the generalized approximate message passing (GAMP) algorithm, which requires both the false-negative and false-positive probabilities. However, the exact channel conditions, including the false-negative and false-positive probabilities, are very difficult to estimate in practice. (iii) Ternary samples: in this paper, we only considered binary samples. For ternary samples, there are three test outcomes: negative (0), weakly positive (1), and strongly positive (2). It seems possible to extend the DD2 algorithm for binary samples to the setting with ternary samples by using successive cancellations.
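To make the dilution effect concrete, the following is a minimal Monte Carlo sketch. The noise model is an assumption on our part (one common variant from the group-testing literature, not necessarily the paper's exact model): each positive sample in a pool is independently missed with probability q, and a pool tests positive iff at least one of its positive samples survives the dilution. A positive sample that is diluted out of all its positive pools can land in a falsely negative pool and be eliminated by the first DD step, producing a false negative:

```python
import numpy as np

rng = np.random.default_rng(1)

def dd_false_negatives(A, x, q, trials=500):
    """Average number of positives that the first DD step wrongly declares
    negative under a dilution noise model (assumed here, see lead-in):
    each positive contribution to a pool is missed w.p. q, and a pool
    tests positive iff at least one positive contribution survives."""
    N, M = A.shape
    fn = 0
    for _ in range(trials):
        survive = (A > 0) & (x == 1) & (rng.random((N, M)) >= q)
        y = survive.any(axis=1)               # noisy group outcomes
        pd = ~((A[~y] > 0).any(axis=0))       # eliminated by "negative" pools
        fn += int(((x == 1) & ~pd).sum())     # diluted positives get eliminated
    return fn / trials
```

At q = 0 the model reduces to the noiseless setting (no false negatives for a decodable input), while at q = 1 every positive is missed, which is the direction of the degradation described above.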
REFERENCES
[1] Coronavirus disease (COVID-19) outbreak.
[2] Estimation of the asymptomatic ratio of novel coronavirus infections (COVID-19).
[3] A time-dependent SIR model for COVID-19 with undetectable infected persons.
[4] Pooled sample testing and screening testing for COVID-19.
[5] Interim guidance for use of pooling procedures in SARS-CoV-2 diagnostic, screening, and surveillance testing.
[6] The detection of defective members of large populations.
[7] Evaluation of group testing for SARS-CoV-2 RNA.
[8] Efficient high-throughput SARS-CoV-2 testing to detect asymptomatic carriers.
[9] Tapestry: A single-round smart pooling technique for COVID-19 testing.
[10] A compressed sensing approach to group-testing for COVID-19 detection.
[11] Low-cost and high-throughput testing of COVID-19 viruses and antibodies via compressed sensing: System concepts and computational experiments.
[12] Two-stage adaptive pooling with RT-qPCR for COVID-19 screening.
[13] Group testing: An information theory perspective.
[14] Pooling of samples for testing for SARS-CoV-2 in asymptomatic people.
[15] Assessment of specimen pooling to conserve SARS CoV-2 testing resources.
[16] Evaluation of COVID-19 RT-qPCR test in multi-sample pools.
[17] Group testing against COVID-19.
[18] Polynomial codes over certain finite fields.
[19] PPoL: A periodic channel hopping sequence with nearly full rendezvous diversity.
[20] On the multichannel rendezvous problem: Fundamental limits, optimal hopping sequences, and bounded time-to-rendezvous.
[21] Asynchronous grant-free uplink transmissions in multichannel wireless networks with heterogeneous QoS guarantees.
[22] On the theoretical gap of channel hopping sequences with maximum rendezvous diversity in the multichannel rendezvous problem.
[23] A theorem in finite projective geometry and some applications to number theory.
[24] HYPER: Group testing via hypergraph factorization applied to COVID-19.
[25] Rapid, large-scale, and effective detection of COVID-19 via non-adaptive testing.
[26] Verhandelingen uitgegeven door het zeeuwsch Genootschap der Wetenschappen te Vlissingen.
[27] Nonrandom binary superimposed codes.
[28] Note on noisy group testing: Asymptotic bounds and belief propagation reconstruction.
[29] Noisy pooled PCR for virus testing.