title: Noisy Adaptive Group Testing via Noisy Binary Search
authors: Teo, Bernard; Scarlett, Jonathan
date: 2021-06-23
cord_uid: g7gtkfxx

Abstract: The group testing problem consists of determining a small set of defective items from a larger set of items based on a number of possibly-noisy tests, and has numerous practical applications. One of the defining features of group testing is whether the tests are adaptive (i.e., a given test can be chosen based on all previous outcomes) or non-adaptive (i.e., all tests must be chosen in advance). In this paper, building on the success of binary splitting techniques in noiseless group testing (Hwang, 1972), we introduce noisy group testing algorithms that apply noisy binary search as a subroutine. We provide three variations of this approach with increasing complexity, culminating in an algorithm that succeeds using a number of tests matching the best previously known (Scarlett, 2019), while overcoming fundamental practical limitations of the existing approach, and more precisely capturing the dependence of the number of tests on the error probability. We provide numerical experiments demonstrating that adaptive group testing strategies based on noisy binary search can be highly effective in practice, using significantly fewer tests than state-of-the-art non-adaptive strategies.

The group testing problem consists of determining a small subset S of defective items within a larger set of items {1, . . . , n}, based on a number of possibly-noisy tests. This problem has a long history in medical testing [1], and has regained significant attention following new applications in areas such as communication protocols [2], pattern matching [3], database systems [4], and COVID-19 testing [5], as well as connections with compressive sensing [6]. In the noiseless setting, each test takes the form
$$Y = \bigvee_{i \in S} X_i, \tag{1}$$
where the test vector $X = (X_1, \dots, X_n) \in \{0,1\}^n$ indicates which items are included in the test, and $Y$ is the resulting observation. That is, the output indicates whether at least one defective item was included in the test. One wishes to design a sequence of tests $X^{(1)}, \dots, X^{(t)}$, with $t$ ideally as small as possible, such that the outcomes can be used to reliably recover the defective set $S$ with probability close to one.

One of the defining features of the group testing problem is the distinction between the non-adaptive and adaptive settings. In the non-adaptive setting, every test must be designed prior to observing any outcomes, whereas in the adaptive setting, a given test $X^{(i)}$ can be designed based on the previous outcomes $Y^{(1)}, \dots, Y^{(i-1)}$. It is of considerable interest to determine the extent to which this additional freedom helps in reducing the number of tests. In the noiseless setting, the problem of finding a near-optimal adaptive algorithm was solved long ago by Hwang [7], who proposed an algorithm based on binary splitting (see Section I-B) and showed that it is guaranteed to succeed with $t = k \log_2 \frac{n}{k} + O(k)$ tests. Due to a well-known counting argument, this is asymptotically optimal whenever $k = o(n)$. In addition, Hwang's algorithm has the advantage of degrading gracefully (i.e., still successfully identifying many defectives) when $t$ falls below the given threshold [8].
Although perhaps not as ubiquitous as its noiseless counterpart, noisy binary search algorithms have also attracted significant research attention outside the context of group testing [9]-[12]. Given the importance of Hwang's noiseless group testing algorithm that uses binary splitting as a subroutine, it is natural to ask the following: Do there exist near-optimal noisy adaptive group testing strategies based on noisy binary search? In this paper, we partially answer this question in the affirmative, with near-optimality holding at least in certain scaling regimes, and the number of tests matching the best previously known practical algorithm more generally. We will see that, depending on the desired recovery guarantees and number of tests, somewhat more care is needed in the testing strategy compared to the noiseless setting.

We let the defective set S be a fixed but unknown subset of {1, . . . , n} of cardinality k. Throughout the paper, we adopt the widely-used assumption k = o(n) as n → ∞, which is convenient for absorbing certain "nuisance" terms into the lower-order asymptotic terms. More care would be needed with these terms when handling the regime k = Θ(n) (see [13] for the analog in the noiseless setting), and this is left for future work. An adaptive algorithm iteratively designs a sequence of tests $X^{(1)}, \dots, X^{(t)}$, with $X^{(i)} \in \{0,1\}^n$, and the corresponding outcomes are denoted by $Y = (Y^{(1)}, \dots, Y^{(t)})$, with $Y^{(i)} \in \{0,1\}$. A given test is allowed to depend on all of the previous outcomes. Generalizing (1), we consider the following widely-adopted symmetric noise model:
$$Y = \bigg( \bigvee_{i \in S} X_i \bigg) \oplus Z, \tag{2}$$
where $Z \sim \mathrm{Bernoulli}(\rho)$ for some $\rho \in (0, \frac{1}{2})$, and ⊕ denotes modulo-2 addition. While symmetric noise is not always realistic, it is commonly considered in the literature on noisy group testing (e.g., [14], [15]). The main reason for this restriction is that we use the noisy binary search algorithm of [11] as a subroutine, and that work focuses on symmetric noise. However, we use their algorithm in a "black-box" manner, meaning that if any noisy binary search algorithm is devised for other noise models (e.g., Z-channel models [16]), our algorithms and analysis can be adapted to utilize them accordingly.

Given the tests and their outcomes, a decoder forms an estimate $\hat{S}$ of $S$. We consider the exact recovery criterion, in which the error probability is given by
$$P_e = \mathbb{P}[\hat{S} \neq S], \tag{3}$$
where the probability is taken over the randomness of the tests $X^{(1)}, \dots, X^{(t)}$ (if randomized) and the noisy outcomes. In the adaptive setting, the algorithm may choose when to stop, and accordingly, the number of tests used may be random. In such cases, we denote the random number of tests by $T$, and we are interested in characterizing its average $\mathbb{E}[T]$.

While there exist extensive works on group testing (e.g., see [17], [18] for surveys), we restrict our attention here to adaptive algorithms, since these are the focus of our work.

Noiseless adaptive group testing. As mentioned above, one of the most well-known strategies for noiseless adaptive group testing is Hwang's generalized binary splitting algorithm [7], which works as follows:
1. Arbitrarily partition the n items into k groups of size $\frac{n}{k}$.
2. For each of the k partitions:
(a) Test the entire partition, and if the outcome is negative, then no further steps are performed for this partition.
(b) If the outcome is positive, then test the left half of the partition, use the outcome to identify a defective sub-partition (left half or right half), and recursively continue until a single item remains.
(c) Declare the item just found as defective, remove it from the partition, and return to Step (a).
This algorithm succeeds using $k \log_2 \frac{n}{k}(1+o(1))$ tests whenever $k = o(n)$.
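To make the procedure concrete, the following Python sketch simulates the noiseless model (1) and runs the splitting steps above. It is an illustration rather than a faithful reproduction of Hwang's algorithm, which additionally optimizes the group sizes; all function names and parameter choices here are our own.

```python
import math

def noiseless_test(pool, defectives):
    """Noiseless group test (1): returns 1 iff the pool contains a defective."""
    return int(any(i in defectives for i in pool))

def find_defective(pool, defectives):
    """Steps (b)-(c): halve a pool known to contain a defective until one item remains."""
    while len(pool) > 1:
        left = pool[:len(pool) // 2]
        if noiseless_test(left, defectives):
            pool = left                        # a defective lies in the left half
        else:
            pool = pool[len(pool) // 2:]       # otherwise one lies in the right half
    return pool[0]

def binary_splitting(n, k, defectives):
    """Steps 1-2: process k partitions of size ~n/k, repeatedly extracting defectives."""
    found = set()
    size = math.ceil(n / k)
    for start in range(0, n, size):
        partition = list(range(start, min(start + size, n)))
        while noiseless_test(partition, defectives):   # Step (a): still non-empty?
            d = find_defective(partition, defectives)
            found.add(d)
            partition.remove(d)                        # Step (c): remove and repeat
    return found

print(binary_splitting(64, 3, {3, 17, 40}))  # -> {3, 17, 40}
```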
Subsequent works attained the same threshold using distinct four-stage [19] and two-stage [20], [21] adaptive designs, though the two-stage designs only attain $P_e \to 0$ instead of $P_e = 0$.

Noisy adaptive group testing. The most closely related existing theoretical results on noisy adaptive group testing are outlined as follows:
• The capacity bound states that any adaptive algorithm attaining $P_e \to 0$ must have an average number of tests lower bounded as follows when $k = o(n)$ [22]:
$$\mathbb{E}[T] \geq \frac{k \log \frac{n}{k}}{I(\rho)}(1 - o(1)), \tag{4}$$
where $I(\rho) = \log 2 - H_2(\rho)$ (with $H_2(\rho) = \rho \log \frac{1}{\rho} + (1-\rho)\log\frac{1}{1-\rho}$ denoting the binary entropy function) is the capacity of a binary symmetric channel with parameter ρ. Moreover, this bound can be strengthened in the regime $k = \Theta(n)$ by replacing $k \log \frac{n}{k}$ by the larger quantity $\log \binom{n}{k}$. The capacity bound is usually stated for the case that the number of tests is fixed (i.e., non-random), but the case of a variable number of tests follows similarly from the fact that $I(\rho)$ is the capacity of the binary symmetric channel even when variable-length coding is allowed [23].
• The best known upper bound for a computationally efficient algorithm is given in [24], and states that there exists a four-stage algorithm that succeeds with probability approaching one using a number of tests satisfying
$$\mathbb{E}[T] \leq \bigg( \frac{k \log \frac{n}{k}}{I(\rho)} + \frac{k \log k}{D_2(1-\rho \| \rho)} \bigg)(1 + o(1)), \tag{5}$$
where $D_2(a \| b) = a \log \frac{a}{b} + (1-a)\log\frac{1-a}{1-b}$ is the binary relative entropy function. A refinement of this result is also given in [20, Thm. 3], but it is based on a computationally expensive brute-force search over $\binom{n}{k}$ subsets in the first stage, and the difference between the resulting bound and (5) is almost imperceptible when plotted (see Figure 1). Since the refined bound is cumbersome to state, we omit the details here.
• An additional converse bound is given in [20], stating that for any algorithm attaining $P_e \to 0$ both in the case of k defectives and k-1 defectives, the number of tests must satisfy
$$\mathbb{E}[T] \geq \frac{k \log k}{\log \frac{1-\rho}{\rho}}(1 - o(1)). \tag{6}$$
This exceeds the capacity bound (4) in sufficiently dense scaling regimes (namely, $k = \Omega(n^\theta)$ with θ sufficiently close to one). In Figure 1, this bound is represented by the diagonal dashed line, whereas the capacity bound is the horizontal dashed line. In the commonly-considered regime $k = \Theta(n^\theta)$ with $\theta \in (0,1)$, the upper bound (5) coincides with (4) in the limit $\theta \to 0$, and coincides with (6) when $\rho \to 0$ and $\theta \to 1$ simultaneously [20], though a gap still remains for fixed $\rho \in (0, \frac{1}{2})$ and $\theta \in (0,1)$.

Beyond the above works, a noisy adaptive version of the GROTESQUE algorithm [25] attains an order-optimal number of tests and decoding time, but the underlying constants are left unspecified. On the other hand, there are notable algorithmic similarities between GROTESQUE and our approach, which we discuss in detail in Section VI. Recent practical noisy adaptive group testing algorithms include [26], [27], but these do not currently come with theoretical guarantees on the number of tests, which is the main focus of our work.

Noisy binary search. A relatively early study of noisy binary search was given by Karp and Kleinberg [10].
While the constant factors in their theoretical guarantees were not optimized and are too large for our purposes, we will see in Section VI that the algorithm itself (which is based on the method of multiplicative weight updates) is very effective in practice.

Figure 1: Asymptotic performance bounds for noisy group testing with noise level ρ = 0.11. This figure is replicated from [24], with the non-adaptive achievability result coming from [15].

Feige [9] devised a noisy binary search algorithm based on traversing a balanced binary tree. The root node contains all items, each child node contains half of the parent subset, and the leaf nodes are singletons. The idea is to use the noisy test outcomes to traverse the tree until a defective leaf is found with high probability. It was shown that for fixed $\rho \in (0, \frac{1}{2})$, this algorithm requires $O(\log n) + O(\log \frac{1}{\delta})$ queries, where n is the number of elements in the array being searched, and δ is the target error probability. Ben-Or [11] builds on the work of [9] to devise a noisy binary search algorithm with a more precise constant in the $O(\log n)$ term. For fixed $\rho \in (0, 1 - \frac{1}{\sqrt{2}})$, their algorithm solves the noisy binary search problem in $\frac{\log n}{I(\rho)} + O(\log\log n) + O(\log\frac{1}{\delta})$ queries. We use this algorithm as a building block for our group testing algorithms.

Noisy binary search for group testing. To our knowledge, the only previous work applying noisy binary search ideas to adaptive group testing is [28], in the context of private anomaly detection. However, the analysis therein focuses on the simpler setting of k = 1, allowing the direct application of feedback communication strategies [29], [30]. To our understanding, this is no longer directly possible in the case that k > 1. In a recent work [31], a non-adaptive noisy group testing algorithm based on binary splitting was proposed, building on other recent works handling the noiseless setting [32], [33]. As a result of being non-adaptive, the algorithm differs significantly from our adaptive approach. In addition, the analysis in [31] only establishes scaling laws on the number of tests and decoding time with unspecified constants, whereas in this paper we are interested in the precise constant factors.

In this paper, we present three algorithms for noisy adaptive group testing via noisy binary search, in increasing order of complexity:
• The first variant (Section III) applies noisy binary search (NBS) in a direct manner, and uses a number of tests whose first term matches the first term in (5), but whose second term is $O(k \log k)$ with an unspecified implied constant.
• The second variant (Section IV) combines low-confidence NBS with high-confidence repetition testing, and uses a number of tests nearly matching (5) (when $P_e \to 0$ sufficiently slowly) but with a higher constant in the second term.
• The third variant (Section V) allows the repetition testing to make some errors and corrects for these in a later step, and uses a number of tests matching (5) when $P_e \to 0$ sufficiently slowly, while also giving a more general bound specifying the precise dependence on $P_e$.
The advantages of our approach over the one in [24] (attaining (5)) are discussed in more detail in Section VI. In addition, we provide a numerical example indicating that our general approach can indeed be effective in practice. From the perspective of our theoretical results, the first two variants mentioned above are primarily introduced as stepping stones towards the third.
On the other hand, the first and simplest variant is seen to perform well experimentally, suggesting that the gaps in the theoretical results may be primarily due to the analysis techniques used, rather than inherent strengths and weaknesses in the approaches themselves. See Section VI for further discussion.

In this section, we introduce a number of useful theoretical and algorithmic tools that will be used throughout the paper. We first momentarily depart from group testing and discuss the noisy binary search (NBS) algorithm that we will make use of [11]. The variant of NBS that we consider is as follows (see [10] for more general formulations): There exists an unknown index $i^* \in \{1, \dots, n\}$, and the goal is to locate it via adaptive queries of the form "Is $i^* \leq i$?". Each query independently returns the correct answer with probability 1-ρ, and the incorrect answer with probability ρ. We will make use of the following main result from [11].

Lemma 1. (NBS Guarantee [11]) There exists an NBS algorithm that, given any $\delta \in (0,1)$, succeeds with probability at least 1-δ while satisfying
$$\mathbb{E}[T] \leq \frac{\log n}{I(\rho)} + O(\log\log n) + O\Big(\log\frac{1}{\delta}\Big). \tag{7}$$

Note that we re-use the symbol n here as standard notation, but we will typically apply this result to the group testing problem with a smaller quantity, such as $\frac{n}{k}$ in place of n (i.e., considering subsets of the entire set of items). The inner workings of the algorithm in [11] are not directly relevant for our main goal of deriving theoretical bounds on the number of tests. However, in practice, the choice of NBS algorithm is naturally very important. We are not aware of any works performing a practical implementation of the algorithm in [11], and doing so may be difficult due to the existence of several "nuisance" parameters therein. In our own experiments (Section VI), we instead use an algorithm from [10] with much looser theoretical guarantees, but excellent practical performance.

To make NBS more directly useful for group testing, we modify it to consider an array of items in which any number of items (possibly zero or greater than one) may be defective. Indexing these items as {1, . . . , n} (again keeping in mind that later we will substitute $\frac{n}{k}$ or similar), we modify the goal as follows:
• If there are one or more defective items in the array, then the algorithm should return the index $i^*$ of the left-most one (e.g., if items 2, 6, and 12 are defective, then $i^* = 2$).
• If there are no defective items in the array, then the algorithm should return φ, a symbol indicating that it believes all items to be non-defective.
In addition, the queries are now changed to regular group testing queries, and the overall problem is termed modified noisy binary search (MNBS). Defining the notion of error/success probability for an MNBS procedure according to the above criteria, we can obtain the following as a simple consequence of Lemma 1.

Lemma 2. (MNBS Guarantee) There exists an MNBS algorithm using noisy group tests that, given any $\delta \in (0,1)$, succeeds with probability at least 1-δ while satisfying
$$\mathbb{E}[T] \leq \frac{\log n}{I(\rho)} + O(\log\log n) + O\Big(\log\frac{1}{\delta}\Big). \tag{8}$$

Proof. The idea is to reduce the problem to regular NBS by (i) always performing group tests with pools of the form {1, . . . , i}, and (ii) adding an (n+1)-th "dummy" defective to detect when to declare all items as non-defective by returning φ. The details are given in Appendix A.
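To illustrate the reduction in the above proof, the following self-contained Python sketch implements MNBS using only (simulated) noisy group tests on prefixes {1, . . . , i}, with the dummy defective realized as position n + 1. For simplicity, each query is resolved by naive majority voting rather than the algorithm of [11], so the query counts here do not reflect the bound (8); the repetition parameter `reps` and the noise level are purely illustrative choices of ours.

```python
import random

RHO = 0.11  # illustrative noise level

def noisy_prefix_test(i, defectives, n, rho=RHO):
    """Noisy group test of the pool {1, ..., i}; index n + 1 acts as a dummy defective."""
    u = 1 if (i >= n + 1 or any(d <= i for d in defectives)) else 0
    return u ^ (random.random() < rho)  # symmetric noise: flip with probability rho

def majority_prefix_test(i, defectives, n, reps):
    """Repeat the noisy prefix test and take a majority vote (illustration only)."""
    return sum(noisy_prefix_test(i, defectives, n) for _ in range(reps)) > reps / 2

def mnbs(n, defectives, reps=15):
    """Return the left-most defective in {1, ..., n}, or None (phi) if none exists."""
    lo, hi = 1, n + 1            # the dummy defective at n + 1 makes hi a valid answer
    while lo < hi:               # invariant: the left-most defective (or dummy) is in [lo, hi]
        mid = (lo + hi) // 2
        if majority_prefix_test(mid, defectives, n, reps):
            hi = mid             # the query "Is i* <= mid?" was answered yes
        else:
            lo = mid + 1
    return None if lo == n + 1 else lo   # n + 1 means only the dummy was "found"

print(mnbs(1000, {306, 512}))  # -> 306 (with high probability)
print(mnbs(1000, set()))       # -> None, i.e., phi
```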
It will be useful to have a procedure for estimating the number of defectives k among items {1, . . . , n} (and similarly for subsets of items), so that we do not need to assume a priori knowledge regarding k. While numerous works have given algorithms for estimating k in the noiseless setting [34]-[36], analogous results for the noisy setting appear to be very limited. The following result is based on a simple approach that can likely be improved, but is sufficient for our purposes.

Lemma 3. Fix any constant c > 0. There exists an adaptive algorithm that, given n items among which k are defective, outputs $\hat{k}$ satisfying $\frac{\hat{k}}{2} \leq k \leq \hat{k}$ with probability $1 - O(n^{-c})$, using an average of $O(\log k \cdot \log n)$ tests. Moreover, this can be improved to $(1-\epsilon)\hat{k} \leq k \leq \hat{k}$ for any fixed $\epsilon \in (0,1)$, provided that the implicit constant in the number of tests is suitably modified as a function of ε.

Proof. The idea is to iteratively perform a sequence of sufficiently reliable tests to check whether $k \leq \frac{k_0}{\sqrt{2}}$ for an increasing sequence of values $k_0$, until the answer is affirmative. The details are given in Appendix B.

Along with noisy binary search itself, we will additionally use suitably-chosen repeated tests to combat the noise. The relevant auxiliary results for this purpose are given as follows.

Basic result. Suppose that we observe a binary value v through a noisy symmetric channel that produces the correct output with probability 1-ρ, where $\rho \in (0, \frac{1}{2})$. We would like to determine v with error probability at most δ. The following lemma gives a standard upper bound on the number of repeated observations that we require.

Lemma 4. Suppose that we take $\tilde{t} = \frac{\log\frac{1}{\delta}}{D_2(\frac{1}{2}\|\rho)}$ noisy observations of v, and estimate v according to a majority vote. Then, the estimate equals v with probability at least 1-δ.

Proof. See Appendix C.

Guarantee for multiple uses. A standard approach to applying Lemma 4 multiple times is to choose δ small enough for a union bound to keep the probability of any error small. In one version of our algorithm, we will instead consider an approach that uses fewer tests at the expense of making a small number of errors (which are later corrected). Formally, we have the following.

Lemma 5. Suppose that we perform $\tilde{t}$ individual tests of each of $k_0$ non-defectives and $k_1$ defectives (i.e., $\tilde{t}(k_0+k_1)$ tests in total). For any fixed constants $\zeta \in (\rho, 1-\rho)$ and $\epsilon_1 \in (0,1)$, and any $\delta_0 \in (0,1)$ and $\delta_1 \in (0,1)$, let
$$\tilde{t} = \max\bigg\{ \frac{\log\frac{k_0}{\delta_0}}{D_2(\zeta\|\rho)}, \frac{\log\frac{1}{\epsilon_1\delta_1}}{D_2(\zeta\|1-\rho)} \bigg\}. \tag{9}$$
Then, if $k_0 = o(k_1)$, we have for sufficiently large n that, with probability at least $1-\delta_0-\delta_1$, the $(1-\epsilon_1)k_1$ items that returned positive the highest number of times are all defective.

Proof. See Appendix C.

As a simple starting point, we consider following the structure of Hwang's noiseless algorithm (Section I-B), while applying modified NBS (Section II-B) with a small enough error probability that every invocation simultaneously succeeds with high probability. The algorithm is described in Algorithm 1. Here and subsequently, we use the terminology that a partition is empty if it contains no defectives.

Algorithm 1 Description of Approach 1
Input: Number of items n and defectives k, confidence parameter δ
1: Split the n items into k partitions of size n/k each.
2: Use repetition testing (Lemma 4) with confidence $\frac{\delta}{k}$ to test whether each partition contains a defective or not (i.e., for each partition, repeatedly test all the items in the partition together).
3: For each partition that was declared to be non-empty, run MNBS (Lemma 2) with confidence parameter $\frac{\delta}{k}$, and add the result (if not given by φ) to the estimated defective set.
4: Discard all partitions declared empty and all items declared defective, and return to Step 2 (or terminate once all partitions are discarded).
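The following Python sketch brings the pieces together in simplified form. The function `lemma4_reps` computes the repetition count $\log\frac{1}{\delta}/D_2(\frac{1}{2}\|\rho)$ from Lemma 4, while the `mnbs` stand-in again uses per-query majority voting in place of the NBS algorithm of [11]; consequently, the sketch reflects the control flow of Algorithm 1 but not the test counts of the theorem below. The values of RHO and DELTA are illustrative.

```python
import math, random

RHO, DELTA = 0.11, 0.01  # illustrative noise level and target confidence

def d2(a, b):
    """Binary relative entropy D2(a || b) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def lemma4_reps(delta, rho=RHO):
    """Repetition count log(1/delta) / D2(1/2 || rho) from Lemma 4."""
    return math.ceil(math.log(1 / delta) / d2(0.5, rho))

def noisy_test(pool, defectives, rho=RHO):
    """One noisy group test of `pool` under the symmetric noise model (2)."""
    u = int(any(i in defectives for i in pool))
    return u ^ (random.random() < rho)

def majority_test(pool, defectives, reps):
    return sum(noisy_test(pool, defectives) for _ in range(reps)) > reps / 2

def mnbs(items, defectives, reps):
    """Majority-vote noisy binary search for the left-most defective in the sorted
    list `items` (a crude stand-in for Lemma 2); returns None in place of phi."""
    lo, hi = 0, len(items)  # position len(items) plays the role of the dummy defective
    while lo < hi:
        mid = (lo + hi) // 2
        if majority_test(items[:mid + 1], defectives, reps):
            hi = mid
        else:
            lo = mid + 1
    return None if lo == len(items) else items[lo]

def approach1(n, k, defectives, delta=DELTA):
    """Algorithm 1: repetition testing plus MNBS on each of k partitions."""
    reps = lemma4_reps(delta / k)          # confidence delta/k per invocation
    size = math.ceil(n / k)
    estimate = set()
    for start in range(0, n, size):
        partition = list(range(start, min(start + size, n)))
        # Steps 2-4: while the partition tests non-empty, extract one defective.
        while majority_test(partition, defectives, reps):
            d = mnbs(partition, defectives, reps)
            if d is None:
                break              # MNBS believes the partition is empty after all
            estimate.add(d)
            partition.remove(d)
    return estimate

print(approach1(n=1024, k=4, defectives={5, 300, 777, 1000}))  # w.h.p. all four
```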
We state the following recovery guarantee for the above algorithm.

Theorem 1. Suppose that $k = o(n)$ as $n \to \infty$. For any $\delta \in (0,1)$ such that $\frac{\delta}{k} = o(1)$, Algorithm 1 succeeds with probability at least 1-3δ using an average number of tests satisfying
$$\mathbb{E}[T] \leq \bigg( \frac{k\log\frac{n}{k}}{I(\rho)} + O\Big(k\log\frac{k}{\delta}\Big) \bigg)(1+o(1)), \tag{10}$$
and, more explicitly,
$$\mathbb{E}[T] \leq \frac{k\log\frac{n}{k}}{I(\rho)} + O\Big(k\log\frac{k}{\delta}\Big) + O(k\log\log n). \tag{11}$$

This result already matches the capacity bound (4) when $\delta \geq \frac{1}{\mathrm{poly}(k)}$ and $k = o(n^c)$ for arbitrarily small c > 0 (e.g., $k = \mathrm{poly}(\log n)$), in which case (11) reduces to $\frac{k\log\frac{n}{k}}{I(\rho)}(1+o(1))$. However, it leaves an unspecified constant in the $O(k\log\frac{k}{\delta})$ term. To obtain an explicit constant, we need refined techniques, and these are explored in the subsequent sections.

Analysis of correctness. Consider the algorithm shown above. We observe that as long as Steps 2 and 3 never make incorrect decisions, the returned output will be correct. Moreover, as long as these decisions remain correct, Step 2 will be executed 2k times (once per defective, and once more per partition for when it becomes empty), and Step 3 will be executed k times (once per defective). Hence, by a union bound over these 3k calls and the choice of confidence parameter $\frac{\delta}{k}$, we find that the algorithm succeeds with probability at least 1-3δ.

Number of tests. The number of tests contributed by the first 2k calls to Step 2 is $\frac{2k\log\frac{k}{\delta}}{D_2(\frac{1}{2}\|\rho)}$ by Lemma 4, and the average number of tests contributed by the first k calls to Step 3 is $k\big(\frac{\log\frac{n}{k}}{I(\rho)} + O(\log\log n) + O(\log\frac{k}{\delta})\big)$ by Lemma 2. While further tests may occur if some of these calls to Steps 2 and 3 are erroneous, we claim that this only increases the overall average number of tests by a multiplicative $1+O(\frac{\delta}{k})$ factor (which behaves as 1+o(1) by assumption). To see this, note that subsequent calls to these steps fail independently of one another, so the number of failures before the first success follows a geometric distribution with success probability $1-O(\frac{\delta}{k})$. Since the mean of the Geometric(p) distribution is $\frac{1}{p}$, the desired claim follows. Hence, the overall average number of tests is given as in (11) up to a multiplicative factor of 1+o(1). Since the first two terms collectively scale as $O(k\log n)$, the final $O(k\log\log n)$ term can also be factored into the multiplication by 1+o(1), thus establishing (10).

While we presented this approach assuming knowledge of k, the analysis goes through unchanged when only an upper bound $\bar{k} \geq k$ is known to the algorithm (again assuming $\bar{k} = o(n)$), and accordingly k is replaced by $\bar{k}$ in the number of tests and the error probability. In fact, the first term in (10) can be kept as $\frac{k\log\frac{n}{\bar{k}}}{I(\rho)}$ (with a leading factor of k instead of $\bar{k}$), but the second term $O(\bar{k}\log\frac{\bar{k}}{\delta})$ may dominate if the upper bound $\bar{k}$ is much larger than k.
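For a rough sense of the magnitudes involved, the following snippet evaluates the two explicit leading-order quantities appearing in the proof above (the Step 3 MNBS cost and the Step 2 repetition cost) at illustrative parameter values of ours; the lower-order terms hidden in (10) and (11) are ignored.

```python
import math

def h2(p):  # binary entropy in nats
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def d2(a, b):  # binary relative entropy in nats
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

n, k, rho, delta = 10**6, 100, 0.11, 0.1
capacity = math.log(2) - h2(rho)                       # I(rho)

nbs_term = k * math.log(n / k) / capacity              # Step 3 (MNBS) leading cost
rep_term = 2 * k * math.log(k / delta) / d2(0.5, rho)  # Step 2 (repetition) leading cost

print(f"I(rho) = {capacity:.4f} nats/test")
print(f"MNBS term       ~ {nbs_term:8.0f} tests")
print(f"repetition term ~ {rep_term:8.0f} tests")
```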
A key weakness in Approach 1 is that applying noisy binary search with a confidence of $O(\frac{1}{k})$ leads to a number of tests with the $O(k\log k)$ term having an unspecified implied constant, since the main result on NBS in [11] does not specify the coefficient in the $O(\log\frac{1}{\delta})$ term. In this section, we consider a refined approach that applies MNBS with lower confidence (namely, $O(\frac{1}{\log n})$) and then uses repetition testing to certify any items that MNBS declares to be defective. To help better understand this approach, we first outline a simpler version that uses such repetition testing for certifying both defective items and non-defective partitions.

A simple refinement of Approach 1 is to first use a higher target error probability (e.g., $O(\frac{1}{\log n})$ instead of $O(\frac{\delta}{k})$) in noisy binary search and repetition testing, and to then use the smaller $O(\frac{\delta}{k})$ confidence parameter only for certification, i.e., double-checking that a partition is empty, or double-checking that an estimated defective is indeed defective. For brevity, we do not study this variant in detail, but we mention that it leads to the following number of tests when δ decays to zero sufficiently slowly:
$$\bigg( \frac{k\log\frac{n}{k}}{I(\rho)} + \frac{2k\log\frac{k}{\delta}}{D_2(\frac{1}{2}\|\rho)} \bigg)(1+o(1)).$$
While this is a significant improvement on Theorem 1, the factor of 2 in the second term is not ideal. Roughly speaking, this comes from paying a price of $\frac{k\log\frac{k}{\delta}}{D_2(\frac{1}{2}\|\rho)}$ in total for certifying the defectives (see Lemma 4), and the same again for certifying each partition as having no remaining defectives. This motivates the main algorithm of this section, described in the following, which only performs high-probability certification of defectives, and reduces the above-mentioned constant from 2 to 1.

We describe the algorithm in two parts. Note that here we do not assume any prior knowledge of k, not even an upper bound.

Description of outer loop. We first introduce an outer loop, where the goal of each iteration is to identify a constant fraction of the remaining defectives. The outer loop is described in Algorithm 2a.

Algorithm 2a Outer algorithm for Approach 2
Input: Number of items n, confidence parameters $\delta_{\mathrm{est}}$ and δ, constant C. (We will choose $\delta_{\mathrm{est}} = O(n^{-c})$ for arbitrarily large c > 0, and leave δ as a free parameter, leading to the algorithm succeeding with probability $1 - O(\delta) - O(n^{-c})$.)
I: Run the sub-routine for estimating k (Lemma 3), with confidence parameter $\delta_{\mathrm{est}}$. Let the returned value be $\hat{k}$, which should ideally satisfy $\frac{\hat{k}}{2} \leq k \leq \hat{k}$.
II: If $\hat{k} \leq C\log n$, then estimate the remaining defectives using Approach 1, with $\hat{k}$ in place of k and confidence level δ, and terminate the algorithm, returning all estimated defectives.
III: Otherwise, run the group testing subroutine (inner algorithm) below, append the returned subset to the overall set of estimated defectives, remove that subset from further consideration, and return to Step I.

Description of inner algorithm. In the following, we refer to running repetition testing according to Lemma 4, and declaring the defectivity status according to a majority vote, as repetition-based certification. The group testing subroutine for Step III is described in Algorithm 2b. Note that in contrast to Approach 1, we only attempt to find a single defective in each partition, rather than returning to Step 2 in order to seek any further ones; the same effect is instead captured by the outer loop. In addition, in view of the discussion in Section IV-A, any partitions believed to be empty are simply ignored, rather than seeking to certify that they are non-defective.

Algorithm 2b Inner algorithm for Approach 2
Input: Number of items n, estimated upper bound $\hat{k}$, confidence parameter δ
1: Split the items into $\hat{k}$ partitions of size $\frac{n}{\hat{k}}$, uniformly at random.
2: Use repetition testing with confidence $\frac{1}{\log n}$ to test whether each partition contains a defective.
3: For each partition tested in Step 2, if it was declared non-empty, then:
(a) Run modified noisy binary search (MNBS) on the partition with confidence parameter $\frac{1}{\log n}$.
(b) If MNBS does not return φ (recall that MNBS returns φ to indicate that it believes all items to be non-defective), perform repetition-based certification on the returned item with confidence parameter $\frac{\delta}{\hat{k}}$. If the item is still declared to be defective, add it to the estimated defective subset.
4: Return the estimated subset of defectives.
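The control flow of Algorithms 2a and 2b is summarized in the Python sketch below, which reuses `majority_test`, `mnbs`, and `lemma4_reps` from the Approach 1 sketch in Section III. For brevity, the Step I estimator is a stub that reads off the ground truth (the real algorithm uses the noisy procedure of Lemma 3), and the reversion to Approach 1 in Step II is replaced by high-confidence individual testing; the constant C and all other values are arbitrary illustrative choices of ours.

```python
import math, random

C = 5.0  # threshold constant in Step II (illustrative value)

def estimate_k_stub(items, defectives):
    """Stand-in for the estimator of Lemma 3 (which uses O(log k * log n) noisy
    tests); for brevity, this illustration simply reads off the ground truth."""
    return max(1, len(defectives & set(items)))

def inner_pass(items, k_hat, defectives, delta, n):
    """Algorithm 2b: low-confidence screening/MNBS, then high-confidence certification."""
    random.shuffle(items)                      # random partition via contiguous slices
    size = math.ceil(len(items) / k_hat)
    low_reps = lemma4_reps(1 / math.log(n))    # confidence 1/log n (Steps 2 and 3(a))
    cert_reps = lemma4_reps(delta / k_hat)     # confidence delta/k_hat (Step 3(b))
    certified = set()
    for start in range(0, len(items), size):
        part = sorted(items[start:start + size])
        if part and majority_test(part, defectives, low_reps):        # Step 2
            cand = mnbs(part, defectives, low_reps)                   # Step 3(a)
            if cand is not None and majority_test([cand], defectives, cert_reps):
                certified.add(cand)                                   # Step 3(b)
    return certified

def approach2(n, defectives, delta=0.1):
    items, found = list(range(n)), set()
    while True:
        k_hat = estimate_k_stub(items, defectives)               # Step I
        if k_hat <= C * math.log(n):                             # Step II
            # Stand-in for reverting to Approach 1: high-confidence
            # individual testing of every remaining item.
            reps = lemma4_reps(delta / max(1, len(items)))
            found |= {i for i in items if majority_test([i], defectives, reps)}
            return found
        newly = inner_pass(items, k_hat, defectives, delta, n)   # Step III
        if not newly:          # extremely unlikely; avoid a futile infinite loop
            return found
        found |= newly
        items = [i for i in items if i not in newly]

random.seed(3)
defs = set(random.sample(range(2048), 40))
print(approach2(2048, defs) == defs)   # usually True; every subroutine is noisy
```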
We state the following recovery guarantee for the above algorithm. We focus on the scaling regime $k = \omega(\log n)$, noting that for any smaller k, the same result (with the number of tests simplifying to $\frac{k\log\frac{n}{k}}{I(\rho)}(1+o(1))$) can be obtained via Approach 1, at least under the assumptions on δ used here.

Theorem 2. Suppose that $k = \omega(\log n)$ and $k = o(n)$ as $n \to \infty$, and fix any constant c > 0. For any $\delta \in (0,1)$ satisfying $\delta \geq e^{-\psi_k}$ for some $\psi_k = o(k)$, Algorithm 2 (with suitably-chosen parameters) succeeds with probability at least $1 - O(\delta) - O(n^{-c})$, using an average number of tests satisfying
$$\mathbb{E}[T] \leq \bigg( \frac{k\log\frac{n}{k}}{I(\rho)} + \frac{k\log\frac{k}{\delta}}{D_2(\frac{1}{2}\|\rho)} \bigg)(1+o(1)). \tag{12}$$

Compared to Theorem 1, this result is much more reminiscent of the best known existing result (5), though it still falls slightly short due to the denominator of $D_2(\frac{1}{2}\|\rho)$ instead of $D_2(1-\rho\|\rho)$ in the second term. We note that the assumption $\delta \geq e^{-\psi_k}$ with $\psi_k = o(k)$ is fairly mild, allowing the target error probability to be nearly exponentially small in k. The additive term $O(n^{-c})$ in the error probability amounts to a slightly stronger restriction, but it is still fairly mild since we allow c to be arbitrarily large.

Analysis of correctness. We first establish the correctness of the algorithm; in accordance with the theorem statement, we would specifically like to show that the algorithm succeeds with probability
$$1 - O(\delta) - O(n^{-c}), \tag{13}$$
where c > 0 may be made arbitrarily large by suitable choices of the algorithm's parameters. Observe that if the estimate $\hat{k}$ in the outer algorithm (Step I) remains valid (i.e., with k current defectives, it holds that $\frac{\hat{k}}{2} \leq k \leq \hat{k}$), and the inner algorithm always finds at least a constant fraction of the remaining defectives, then the number of outer iterations will be $O(\log k)$. We will show that this is indeed the case, with high probability.

Regarding $\hat{k}$, we know that a single invocation of Lemma 3 gives the desired event with probability $1 - O(n^{-c})$ for arbitrary c > 0, and hence, its first $O(\log k)$ (or even poly(n)) invocations simultaneously succeed with probability at least $1 - O(n^{-c})$, where again c > 0 can be arbitrarily large. Hence, any errors here only contribute to the $O(n^{-c})$ part in (13).

To characterize the inner algorithm, we first show that with high probability, a constant fraction of the partitions are non-empty. Recall that the partitions are formed uniformly at random. Fix $\alpha \in (0,1)$, and note that if $k \in [\frac{\hat{k}}{2}, \hat{k}]$ is the current remaining number of defectives (the current number of non-discarded items could be denoted by a different symbol $n' \leq n$, but we use n, as doing so does not impact the final result; moreover, one could always pad the current list of items with dummy non-defectives to keep the total at n throughout), the probability of $(1-\alpha)\hat{k}$ specific partitions of size $\frac{n}{\hat{k}}$ all being empty is at most
$$(e\alpha)^k. \tag{14}$$
Setting $\alpha = \frac{1}{9e}$, this simplifies to $9^{-k} \leq 9^{-\hat{k}/2} = 3^{-\hat{k}}$. Taking the union bound over all $\binom{\hat{k}}{(1-\alpha)\hat{k}} \leq 2^{\hat{k}}$ choices of the $(1-\alpha)\hat{k}$ partitions, it follows that the probability of having below a $\frac{1}{9e}$ fraction of non-empty partitions behaves as $e^{-\Omega(k)}$. Moreover, since we designed the outer algorithm to ensure that $\hat{k} \geq C\log n$ for C that we can choose to our liking, the preceding probability reduces to $O(n^{-c})$ for arbitrary c > 0. Similarly to the analysis of $\hat{k}$ above, we can then apply the union bound over the first $O(\log k)$ invocations of the inner algorithm, and the resulting expression only contributes to the $O(n^{-c})$ part in (13).

Now, given that at least a $\frac{1}{9e}$ fraction of the partitions are non-empty, we claim that at least half of these non-empty partitions will reach Step 3(b) (i.e., repetition testing will identify that a defective is present in Step 2, and MNBS will not return φ in Step 3(a)). This is because when a defective is present, the probability of one or both of Step 2 or Step 3(a) failing is at most $\frac{2}{\log n}$. Hence, the number of successes dominates a binomial distribution with $\frac{\hat{k}}{9e}$ trials and success probability $1-\frac{2}{\log n}$. By a standard Chernoff bound, this implies that at least $\frac{\hat{k}}{10e}$ trials succeed, with probability $1 - e^{-\Omega(\hat{k})}$ (see Appendix D). Again using $\hat{k} \geq C\log n$ with arbitrarily large C, the $e^{-\Omega(\hat{k})}$ term here only contributes to the $O(n^{-c})$ term in (13). For each empty partition, Step 3 is only entered with probability $\frac{1}{\log n}$, i.e., the confidence parameter used in Step 2.
In any given invocation of the inner algorithm, there are only $\hat{k}$ partitions, and each empty one independently enters Step 3 with probability $\frac{1}{\log n}$. From this, we can again apply the concentration bound of Appendix D to conclude that, with probability $1 - e^{-\Omega(\hat{k})}$, at most $\frac{\hat{k}}{\log\log n}$ of the empty partitions enter Step 3; as above, this failure event only contributes to the $O(n^{-c})$ term.

Next, we sum over the multiple invocations of the inner algorithm, denoting the $\hat{k}$ value in the i-th invocation by $\hat{k}_i$, and the true number of remaining defectives by $k_i$. Under the above high-probability events, we have $k_{i+1} \leq k_i - \frac{\hat{k}_i}{10e}$ and $\frac{\hat{k}_i}{2} \leq k_i \leq \hat{k}_i$, and combining these facts gives $\sum_i \hat{k}_i = O(k)$ (noting that $\sum_{i=1}^{\infty} a^i < \infty$ for any $a \in (0,1)$). Thus, even when we sum all $\hat{k}_i$ values across all invocations, the total remains linear in k.

Finally, consider the repetition-based certifications in Step 3(b). By the choice of confidence parameter $\frac{\delta}{\hat{k}}$ and a union bound over the O(k) items that ever reach this step, all of the associated decisions are correct with probability at least $1 - O(\delta)$, thus leading to the O(δ) term in (13). In addition, we observe that this implies no errors being made by Step III of the outer algorithm. This is because whenever a defective is tested in Step 3(b), it is correctly added to the estimate, whereas when a non-defective is tested, it is not added. Applying the union bound over all of the failure events above, we deduce that the success probability takes the desired form $1 - O(\delta) - O(n^{-c})$. The reversion to Approach 1 in Step II additionally contributes to the O(δ) term.

Number of tests. We first study the number of tests conditioned on the high-probability events used in the analysis above, and then turn to the unconditional average. Under the above high-probability events, we have the following:
• In Step I, we use an average of $O(\log k \cdot \log n)$ tests per invocation for estimating the number of defectives (see Lemma 3), for a total of $O(\log^2 k \cdot \log n)$ in the $O(\log k)$ invocations.
• Step 2 is invoked once per partition, and having established that $\sum_i \hat{k}_i = O(k)$, the resulting total number of tests is
$$O\bigg( k \cdot \frac{\log\log n}{D_2(\frac{1}{2}\|\rho)} \bigg) = O(k\log\log n), \tag{15}$$
where the $\log\log n$ term comes from $\log\frac{1}{\delta}$ with $\delta = \frac{1}{\log n}$.
• There are k invocations of Step 3 that lead to identifying a defective in Step 3(b), and by Lemma 2, these incur a total average number of tests given by
$$\bigg( \underbrace{\frac{k\log\frac{n}{k}}{I(\rho)} + O(k\log\log n)}_{\text{Step 3(a)}} + \underbrace{\frac{k\log\frac{k}{\delta}}{D_2(\frac{1}{2}\|\rho)}}_{\text{Step 3(b)}} \bigg)(1+o(1)). \tag{16}$$
Here the 1+o(1) term is included partly for later convenience, but also due to the subtle issue that in Step 3(a), the average number of tests conditioned on the decision being correct could in principle differ from the unconditional average. However, denoting the random number of tests by T and the error event by E, we have $\mathbb{E}[T] \geq \mathbb{E}[T\,|\,E^c]\,\mathbb{P}[E^c]$, and hence $\mathbb{E}[T\,|\,E^c] \leq \frac{\mathbb{E}[T]}{1-o(1)} = \mathbb{E}[T](1+o(1))$.
• Regarding the "incorrect" invocations of Step 3 in which the item under consideration is non-defective, we have established that their number is smaller by a factor of $\frac{1}{\log\log n} = o(1)$. Hence, combining these invocations with those considered in the previous dot point, the number of tests is still (16), with the differences only amounting to the 1+o(1) term. By a similar argument, the number of tests corresponding to Step 3(a) failing for a defective item also only contributes to this multiplicative 1+o(1) term.
• In Step II of the outer algorithm, we run Approach 1 with $\hat{k} = O(\log n)$. By Theorem 1 (with $\hat{k}$ in place of k as discussed in Section III-D), this requires $O\big((\log n)^2 + \log n \cdot \log\frac{1}{\delta}\big)$ tests.
We now sum the number of tests listed above. Note that (16) already contributes $O\big(k\log\frac{n}{k} + k\log k\big) = O(k\log n)$. This means that the $O(k\log\log n)$ and $O(\log^2 k \cdot \log n)$ terms immediately amount to at most multiplication by 1+o(1).
Moreover, our assumption on δ ensures that the same is true of $\log n \cdot \log\frac{1}{\delta}$, since $\delta \geq e^{-\psi_k}$ with $\psi_k = o(k)$ is equivalent to $\log\frac{1}{\delta} \leq o(k)$. Thus, we are left with the simplified number of tests given in (12), as desired.

It remains to argue that the unconditional average number of tests also satisfies the same bound. Similarly to the analysis of Approach 1, this follows from the fact that subsequent invocations of all steps use independent tests. Thus, for any given step, the number of failures until the first success follows a geometric distribution with success probability 1-o(1), amounting to an average of 1+o(1) invocations until the first success. The events of these steps failing have no "knock-on" effects for future steps, with the possible exception that the incorrect estimation of k could lead to using an unusually large number of tests. However, the estimation subroutine only fails with probability $O(n^{-c})$ for arbitrarily large c, so even the extreme case of returning $\hat{k} = n$ (and therefore performing one-by-one testing of items, each being in their own "partition") only amounts to a negligible o(1) contribution to the overall average number of tests.

We are now in a position to state the version of our algorithm that provides the strongest recovery guarantee, matching that of [24] but with notable advantages that we will discuss in Section VI. Inspired by the adaptive algorithms in [20], [24] with 3 or 4 stages, the idea is to use fewer tests in the certification (repetition testing) step at the expense of misclassifying a small number of items, but correcting those mistakes using a small number of additional tests at the end. We again avoid assuming any prior knowledge on k (even an upper bound). In this final variant, we modify Approach 2 as detailed in Algorithm 3a.

Algorithm 3a Outer algorithm for Approach 3
Input: Number of items n, confidence parameters δ and $\delta_{\mathrm{est}}$, individual testing parameter $t_{\mathrm{indiv}}$, and constants C and ε
I: Run the sub-routine for estimating k (second part of Lemma 3), with confidence parameter $\delta_{\mathrm{est}}$ and approximation parameter ε, such that the returned value $\hat{k}$ should satisfy $(1-\epsilon)\hat{k} \leq k \leq \hat{k}$. On the first invocation of this step, additionally set $\hat{k}_{\mathrm{init}} = \hat{k}$ for later use.
II: If $\hat{k} \leq C\log n$, then estimate the remaining defectives using Approach 1, with $\hat{k}$ in place of k and confidence parameter δ, then proceed to Step IV.
III: (If $\hat{k} > C\log n$) Run the group testing subroutine (inner algorithm) below with parameter $t_{\mathrm{indiv}}$, and append the returned set of (item, count) pairs to a list L. Remove all such items from further consideration; return to Step I.
IV: For the (item, count) pairs in L with the $(1-2\epsilon)\hat{k}_{\mathrm{init}}$ highest counts, add each corresponding item to the set of estimated defectives.
V: For the remaining $|L| - (1-2\epsilon)\hat{k}_{\mathrm{init}}$ items, perform individual testing with confidence parameter $\frac{\delta}{k}$, and declare the items with at least half positive tests as defective.
VI: Return the set of all defectives identified in Steps II, IV, and V.

The inner algorithm is almost identical to that of Approach 2, but we provide the full details for convenience; see Algorithm 3b.

Algorithm 3b Inner algorithm for Approach 3
Input: Number of items n, estimated upper bound $\hat{k}$, confidence parameter δ, individual testing parameter $t_{\mathrm{indiv}}$
1: Split the items into $\hat{k}$ partitions of size $\frac{n}{\hat{k}}$, uniformly at random.
2: Use repetition testing with confidence $\frac{1}{\log n}$ to test whether each partition contains a defective or not.
3: For each partition tested in Step 2, if it was declared non-empty, then:
(a) Run modified noisy binary search (MNBS) on the partition with confidence parameter $\frac{1}{\log n}$.
(b) If MNBS does not return φ, perform $t_{\mathrm{indiv}}$ individual tests on the returned item, and append the resulting (item, count) pair (with "count" being the number of positive tests) to the output list.
4: Return the produced list of (item, count) pairs.
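The distinctive ingredient of Approach 3 is the count-based classification in Steps IV and V, which the following self-contained Python sketch isolates: every candidate is tested t_indiv times, the highest-count candidates are accepted outright, and only the remainder are re-tested at high confidence. All parameter values below are illustrative choices of ours rather than the theoretically-motivated ones analyzed next.

```python
import math, random

RHO = 0.11  # illustrative noise level

def noisy_item_test(item, defectives, rho=RHO):
    """One noisy individual test of a single item (symmetric noise)."""
    return int(item in defectives) ^ (random.random() < rho)

def count_based_classification(candidates, defectives, k_init, eps, t_indiv, t_cert):
    """Steps IV and V: accept the (1 - 2*eps)*k_init highest counts outright,
    and resolve the remaining candidates by high-confidence individual testing."""
    counts = {c: sum(noisy_item_test(c, defectives) for _ in range(t_indiv))
              for c in candidates}
    ranked = sorted(candidates, key=lambda c: counts[c], reverse=True)
    cutoff = int((1 - 2 * eps) * k_init)
    accepted = set(ranked[:cutoff])              # Step IV: top counts -> defective
    for c in ranked[cutoff:]:                    # Step V: individual re-testing
        positives = sum(noisy_item_test(c, defectives) for _ in range(t_cert))
        if positives > t_cert / 2:
            accepted.add(c)
    return accepted

random.seed(0)
defectives = set(range(40))                      # 40 defectives among the candidates
candidates = list(range(50))                     # plus 10 non-defective candidates
out = count_based_classification(candidates, defectives, k_init=40,
                                 eps=0.1, t_indiv=10, t_cert=25)
print(sorted(out) == sorted(defectives))         # True with high probability
```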
We state the following recovery guarantee for the above algorithm. Similarly to Theorem 2, we focus on the scaling regime $k = \omega(\log n)$.

Theorem 3. Suppose that $k = \omega(\log n)$ and $k = o(n)$ as $n \to \infty$, and fix any constant c > 0. For any $\delta \in (0,1)$ satisfying $\delta \geq e^{-\psi_k}$ for some $\psi_k = o(k)$, and any $\delta_0, \delta_1 \in (0,1)$ satisfying $\delta_0 + \delta_1 = O(\delta)$, Algorithm 3 (with suitably-chosen parameters) succeeds with probability at least $1 - O(\delta) - O(n^{-c})$, using an average number of tests satisfying
$$\mathbb{E}[T] \leq \bigg( \frac{k\log\frac{n}{k}}{I(\rho)} + \min_{\zeta\in(\rho,1-\rho)} \max\bigg\{ \frac{k\log\frac{k}{\delta_0}}{D_2(\zeta\|\rho)}, \frac{k\log\frac{1}{\delta_1}}{D_2(\zeta\|1-\rho)} \bigg\} \bigg)(1+o(1)). \tag{17}$$

While the minimization over ζ and the freedom in choosing $\delta_0$ and $\delta_1$ make this expression somewhat more complicated than our previous results, we can highlight two important special cases: (i) Setting $\delta_0 = \delta_1 = \delta$ and $\zeta = \frac{1}{2}$, we recover the bound in Theorem 2. (ii) If $\delta_0 = \delta_1 = \delta$ and $\delta \to 0$ with $\delta = k^{-o(1)}$ (e.g., $\delta = \frac{1}{\mathrm{poly}(\log k)}$), then the max{·, ·} in (17) is asymptotically attained by the first term for any fixed $\zeta \in (\rho, 1-\rho)$. Hence, we can take ζ to be arbitrarily close to 1-ρ, and the overall bound simplifies to
$$\mathbb{E}[T] \leq \bigg( \frac{k\log\frac{n}{k}}{I(\rho)} + \frac{k\log k}{D_2(1-\rho\|\rho)} \bigg)(1+o(1)). \tag{18}$$
Thus, we recover the existing bound (5) (note that $D_2(1-\rho\|\rho) = D_2(\rho\|1-\rho)$). A more detailed comparison is given in Section VI.

Re-used findings. Since the inner algorithm and Steps I-III of the outer algorithm coincide with those of Approach 2, we are able to re-use the following high-probability findings from the proof of Theorem 2:
• The estimate $\hat{k}_i$ in each iteration i satisfies the desired bound $(1-\epsilon)\hat{k}_i \leq k_i \leq \hat{k}_i$, where $k_i$ is the true number of remaining defectives. Moreover, $k_{i+1} \leq (1-\Omega(1))k_i$, and there are $O(\log k)$ total invocations.
• Step 3(b) is reached by all defectives (except those deferred to Approach 1 in Step II), but is only reached by o(k) non-defectives.
• Assuming $t_{\mathrm{indiv}} = \Omega(\log k)$ (which will indeed be the case when we set its value), the overall number of tests resulting from Steps I-III (including those in the inner algorithm) is at most
$$\bigg( \frac{k\log\frac{n}{k}}{I(\rho)} + k\,t_{\mathrm{indiv}} \bigg)(1+o(1)), \tag{19}$$
and the overall contribution to the error probability from Steps I-III is $O(\delta) + O(n^{-c})$.

Analysis of correctness. With the above findings in place, we seek to apply Lemma 5 to deduce that Step IV succeeds, in the sense that the top $(1-2\epsilon)\hat{k}_{\mathrm{init}}$ ranked items in the list L are indeed defective. By the assumption $k = \omega(\log n)$ and the third finding above, we have that L contains $k_1 = k(1-o(1))$ defectives and $k_0 = o(k)$ non-defectives; in particular, compared to $k = \omega(\log n)$, the $C\log n$ (or fewer) items deferred to Approach 1 can be absorbed into the 1-o(1) term. Hence, the condition $k_0 = o(k_1)$ in Lemma 5 is satisfied. In accordance with (9), and crudely upper bounding $k_0 \leq k$, we fix $\epsilon_1 \in (0,1)$ and $\zeta \in (\rho, 1-\rho)$ and set
$$t_{\mathrm{indiv}} = \max\bigg\{ \frac{\log\frac{k}{\delta_0}}{D_2(\zeta\|\rho)}, \frac{\log\frac{1}{\epsilon_1\delta_1}}{D_2(\zeta\|1-\rho)} \bigg\}. \tag{20}$$
Noting that $\epsilon_1$ is a fixed positive constant (but arbitrarily small), we claim that this simplifies to
$$t_{\mathrm{indiv}} = \max\bigg\{ \frac{\log\frac{k}{\delta_0}}{D_2(\zeta\|\rho)}, \frac{\log\frac{1}{\delta_1}}{D_2(\zeta\|1-\rho)} \bigg\}(1+o(1)). \tag{21}$$
This is immediate when $\delta_1 = o(1)$, and otherwise, the assumption $k \to \infty$ implies that the maximum is achieved by the first term in both (20) and (21) anyway. Recalling the assumption $\delta_0 + \delta_1 = O(\delta)$, Lemma 5 implies that with probability $1-O(\delta)$, the top $(1-\epsilon_1)k_1$ ranked items are all defective. Having established that $k_1 = k(1-o(1))$ and $(1-\epsilon)\hat{k}_{\mathrm{init}} \leq k \leq \hat{k}_{\mathrm{init}}$, we observe that we can choose $\epsilon_1$ sufficiently small such that $(1-2\epsilon)\hat{k}_{\mathrm{init}} \leq (1-\epsilon_1)k_1$. Hence, the top $(1-2\epsilon)\hat{k}_{\mathrm{init}}$ ranked items are all defective, as desired for Step IV of the algorithm to succeed.

Moving on to Step V, the remaining number of defectives is $k - (1-2\epsilon)\hat{k}_{\mathrm{init}} \leq C'\epsilon k$, where $C'$ is an absolute constant, and the remaining number of non-defectives is o(k).
We apply the basic form of repetition testing in Lemma 4 with confidence $\frac{\delta}{k}$. By a union bound, every item is classified correctly with probability at least 1-δ when we test each item $\frac{\log\frac{k}{\delta}}{D_2(\frac{1}{2}\|\rho)}$ times, leading to a total number of tests for Step V given by
$$\big( C'\epsilon k + o(k) \big)\,\frac{\log\frac{k}{\delta}}{D_2(\frac{1}{2}\|\rho)}. \tag{22}$$

Number of tests. We combine (19) with (21) and (22). Since ε can be arbitrarily small, the term (22) is arbitrarily small compared to the contribution of (21) to (19), and can be absorbed into the multiplicative 1+o(1) term. Choosing $\zeta \in (\rho, 1-\rho)$ to minimize the total, we obtain the desired bound (17). We have derived this bound conditioned on high-probability events, but the unconditional average again follows via an analogous argument to that of Approach 2.

In this section, we first compare our approach to that of [24] (in which (5) was attained), and then provide an experimental example indicating that NBS-based group testing can be effective in practice.

Comparison to [24]. As we already noted, if our goal is simply to attain $P_e \to 0$ as $n \to \infty$ (with $k \to \infty$ and $k = o(n)$), then the number of tests required by Approach 3 in this paper matches that of [24]. An immediate advantage of our approach is that we also explicitly characterize how the number of tests depends on the target error probability δ, whereas adapting the analysis of [24] to characterize this dependence appears to be difficult. In addition, an inspection of the approach and analysis in [24] reveals the following limitations:
(i) As a subroutine, the algorithm in [24] uses a capacity-achieving channel code with block length $O(\log n)$. In view of the fundamental limits imposed at finite block lengths [37], this short block length is likely to require a significant backoff from capacity at practical problem sizes. For instance, even $n = 10^{10}$ gives $\log n \approx 23$, which is an extremely short block length for a communication code.
(ii) The analysis of the error probability in [24] contains terms of the form $k^{-\epsilon\alpha}$, where both ε and α are taken to zero at the end of the proof. This indicates that k may need to be extremely large (and n in turn even larger) to attain a small error probability.
Regarding item (i), our use of the NBS subroutine overcomes this limitation and exploits the benefits of full adaptivity and variable-length stopping, analogously to how variable-length feedback communication codes require significantly less finite-length backoff from capacity than fixed-length codes without feedback [38]. The caveat is that requiring full adaptivity serves as a notable disadvantage compared to the approach of [24], which uses only 4 stages of adaptivity. In fairness, our analysis may have some similar limitations to item (ii) above (e.g., absorbing $O(\log\log n)$ factors into lower-order terms), but nevertheless, we believe it to be a significantly more practical approach overall. In particular, it does not suffer from any notion of "short effective code length" or similar, and it permits the use of NBS algorithms in a black-box manner. In the following subsection, we corroborate this discussion with an experimental example. Another minor advantage is that our algorithms (Approaches 2 and 3) do not use any prior knowledge regarding the number of defectives k, though it should be straightforward to overcome this limitation by first estimating k and then applying the approach in [24].

Comparison to GROTESQUE. The theoretical guarantees for GROTESQUE [25] are less comparable to ours, since they only seek scaling laws and not precise constants.
Nevertheless, it is interesting to note the algorithmic similarities between our approach (particularly Approach 1 in Section III-A) and that of GROTESQUE (adaptive version). Both share a similar high-level structure, with the following main differences:
(i) We use repetition testing to distinguish between the cases of {0, ≥ 1} defectives in each partition, whereas GROTESQUE uses i.i.d. Bernoulli($\frac{1}{2}$) testing to distinguish between the cases of {0, 1, ≥ 2} defectives, and only proceeds with 1-sparse recovery if the answer is "1".
(ii) We use noisy binary search to identify the left-most defective in the "≥ 1" case, whereas GROTESQUE uses an expander code of length $O(\log n)$ to identify the unique defective in the "1" case.
(iii) Similarly to our Approach 2, GROTESQUE continually re-shuffles the partitions in order to ensure that each item ends up in the "1" case after enough outer iterations.
Regarding item (i), our sub-problem appears to be easier to solve, since distinguishing the cases "0" vs. "≥ 1" can be done with a simple majority vote, whereas distinguishing "1" vs. "0 or ≥ 2" requires more carefully choosing two appropriate thresholds (deciding "1" if the fraction of positive tests is in between them). More importantly, item (ii) implies that the same practical limitation of [24] discussed above also applies to GROTESQUE. While our approach has the advantage of avoiding this limitation, this again comes with the caveat that one should also consider the number of stages of adaptivity: our approach is fully adaptive, but GROTESQUE only requires $O(\log n)$ stages.

Here we present a simple proof-of-concept experiment to show that our general approach can be effective; note that we do not seek to be comprehensive in terms of diverse experimental settings or detailed comparisons. We follow the experimental setup of [18, Sec. 3.7], and compare against non-adaptive group testing with the two best-performing decoding algorithms therein: belief propagation (BP) [39] and linear programming (LP) [40]. For the non-adaptive test matrix, we consider both an i.i.d. design and a constant column weight design, with the parameters chosen such that each test has a 50% chance of being positive, as suggested by numerous theoretical studies [15], [21], [41]-[43]. Due to the practical challenges discussed above, we do not attempt to compare to the adaptive algorithms in [24], [25].

For a practical implementation, we aim to keep our adaptive algorithm as simple as possible, and accordingly use Approach 1 (Section III-A) with the following modifications:
• In Step 2 (repetition testing), we simply set the number of repetitions to a fixed odd number r, separately considering the values r ∈ {3, 5, 7, 9, 11, 13}. To slightly reduce the number of tests, we stop early when the outcome is already determined (e.g., if r = 5 and the first three outcomes are the same, then we can skip the final two); see the code sketch following this list.
• In Step 3 (applying NBS), we apply Karp and Kleinberg's multiplicative weights algorithm [10] (we used the implementation available at https://github.com/adamcrume/robust-binary-search), which we found to be the most practical despite its analysis providing highly suboptimal constant factors. This algorithm has a single parameter $\delta_{\mathrm{NBS}}$, which we set to $\delta_{\mathrm{NBS}} = \frac{\delta}{3k}$ with various δ ∈ [0.002, 5]. We do not make use of the modification in which φ may be returned, as we found this to make little difference here.
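For concreteness, the early-stopping rule from the first modification can be implemented as follows; this is a self-contained sketch with a simulated noisy test, and the noise level is an illustrative choice of ours.

```python
import random

RHO = 0.11  # illustrative noise level

def noisy_pool_test(pool, defectives, rho=RHO):
    u = int(any(i in defectives for i in pool))
    return u ^ (random.random() < rho)

def early_stopping_majority(pool, defectives, r=5):
    """Majority vote over at most r tests (r odd), stopping as soon as the
    outcome is decided, e.g., after 3 identical outcomes when r = 5."""
    ones = zeros = 0
    for _ in range(r):
        if noisy_pool_test(pool, defectives):
            ones += 1
        else:
            zeros += 1
        if max(ones, zeros) > r // 2:   # remaining tests cannot change the vote
            break
    return ones > zeros

print(early_stopping_majority([1, 2, 3], defectives={2}, r=5))  # True w.h.p.
```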
Approaches 2 and 3 appear to be more difficult to implement, and since our experiments are only meant to serve as a simple proof of concept, we do not attempt to do so; see Remark 1 below for further discussion. In addition to the error probability under exact recovery, we report the fraction of mistakes $\frac{|S\setminus\hat{S}| + |\hat{S}\setminus S|}{2k}$, which is zero when $\hat{S} = S$ and one when none of the defectives are found.

Under both recovery criteria, we see that adaptivity is able to provide significant gains over the state-of-the-art non-adaptive algorithms (at least for suitably-chosen δ and r), despite having only considered the simplest version of our algorithm with no fine-tuning. We note that the distinct "curves" for the adaptive algorithm correspond to the six different values of r ∈ {3, 5, 7, 9, 11, 13} mentioned above. We also found that the fraction of mistakes for the adaptive algorithm is often close to $\frac{1}{k}$ times the error probability, indicating that when an error is made, it may only be due to having a single false positive or a single false negative.

A slight caveat to these performance gains is that the gap may be reduced if hard limits are placed on the number of tests used in each trial. In such cases, it is crucial to understand not only the average number of tests used by the adaptive algorithm, but also the variance. To visualize this, we give an example histogram of the number of tests used by one configuration of (r, δ) among $10^6$ trials (namely, r = 5 and δ = 0.2). We observe that the standard deviation is small compared to the average number of tests, though non-negligible.

Remark 1. A notable difficulty in implementing Approaches 2 and 3 is that the theoretical choices of the extra parameters may be less suitable at practical problem sizes (e.g., $\delta_{\mathrm{est}}$, C, the $\frac{1}{\log n}$ confidence term, and ε in Approach 3), amounting to having significantly more parameters to tune instead of just r and δ. The main reason for requiring these approaches in our theoretical analysis is to overcome the unspecified constant in the $O(\log\frac{1}{\delta})$ dependence in [11]. However, the algorithm in [10] appears to behave very favorably with respect to decreasing δ in practice (significantly better than the associated theoretical bounds). Thus, it may be that the requirement of two confidence levels (low-confidence NBS and high-confidence certification) is purely theoretical and not beneficial at practical problem sizes; answering this definitively is beyond the scope of our work.

We have introduced and analyzed noisy group testing algorithms that rely on noisy binary search as a subroutine, serving as natural noisy counterparts to the famous splitting algorithm of Hwang [7]. The most sophisticated variant of our algorithm matches the best known theoretical guarantee attained by any practical algorithm, while also overcoming a key practical limitation and explicitly specifying the dependence on the target error probability. We have observed that even the simplest variant of our algorithm can perform very well in practice, and in future work, it may be of interest to refine it to attain further improved practical variants.

Appendix A (proof of Lemma 2). We describe the two ways in which the algorithm of [11] is modified as follows.

Implementing NBS queries via group testing queries. Noisy binary search (NBS) works with an array of length n, and seeks to identify an unknown index $i^* \in \{1, \dots, n\}$ via noisy answers to questions of the form "Is $i^* \leq i$?".
In contrast, in group testing, we have a binary vector in $\{0,1\}^n$ (where 1 indicates defectivity), and we can observe noisy "OR" queries on any chosen subset. To apply NBS using group testing queries, we need to account for the fact that there may be multiple defectives (or no defectives, but this is handled separately below). To do so, we specifically seek to identify the left-most defective in the list, thus making the notion of a specific item $i^*$ well-defined. Then, if we want to ask the query "Is $i^* \leq i$?", we simply test all of the items in {1, . . . , i}. Thus, we can search for $i^*$ using NBS in a black-box manner, even if $i^*$ is not the only defective item.

Allowing the possibility of no defectives. In some cases, we will apply the NBS subroutine on arrays where every item is non-defective, and we would like NBS to be able to detect this scenario. To achieve this, we add a dummy defective to the end of the array, i.e., if the original length was n, then we artificially add another defective indexed by n+1, and manually set the test outcome to be Bernoulli(1-ρ) whenever this item is included. Then, if NBS returns n+1 as its estimate of $i^*$, we take this to indicate that no defectives are present among indices {1, . . . , n}, and return φ. The bound on the number of tests from Lemma 1 is unchanged, because the change from n to n+1 only affects the higher-order terms.

Appendix B (proof of Lemma 3). We first analyze a useful subroutine, and then present the full algorithm.

A useful subroutine. Let $k_0 \geq 2$ be a putative number of defective items, and consider the goal of declaring whether $k \geq k_0$ or $k \leq \frac{k_0}{\sqrt{2}}$. For any $k \in [\frac{k_0}{\sqrt{2}}, k_0]$, either declaration is considered adequate. We make use of i.i.d. Bernoulli testing, in which each item is independently placed into each test with probability $\frac{\nu}{k_0}$ for some ν > 0. We set $\nu = \nu_0$, where $\nu_0$ is defined to be the parameter that would lead to i.i.d. Bernoulli testing having positive and negative test outcomes being equally likely if exactly $k_0$ defectives were present, i.e., $(1-\frac{\nu_0}{k_0})^{k_0} = \frac{1}{2}$ (e.g., see [8]). It follows immediately that when $k \geq k_0$, the probability of a positive test is at least $\frac{1}{2}$. On the other hand, if $k \leq \frac{k_0}{\sqrt{2}}$, then in the noiseless group testing model (whose output we denote by U to distinguish it from Y), the probability of a positive test satisfies
$$\mathbb{P}[U=1] = 1 - \Big(1-\frac{\nu_0}{k_0}\Big)^{k} \leq 1 - \Big(1-\frac{\nu_0}{k_0}\Big)^{k_0/\sqrt{2}} < \frac{1}{2},$$
where the strict inequality holds since replacing $\frac{k_0}{\sqrt{2}}$ by $k_0$ would give exactly $\frac{1}{2}$ due to our choice of ν. The precise gap to $\frac{1}{2}$ is not important for our purposes. Next, we observe that a strict gap to $\frac{1}{2}$ in the noiseless case implies the same in the noisy case, since if U is flipped with probability ρ to produce Y, then
$$\mathbb{P}[Y=1] = (1-\rho)\,\mathbb{P}[U=1] + \rho\,(1-\mathbb{P}[U=1]).$$
The right-hand side is strictly increasing with respect to $\mathbb{P}[U=1]$, and equals $\frac{1}{2}$ when $\mathbb{P}[U=1] = \frac{1}{2}$, which establishes the desired claim. Thus, distinguishing between $k \geq k_0$ and $k \leq \frac{k_0}{\sqrt{2}}$ simply amounts to distinguishing independent Bernoulli random variables with parameters known to be at least $\frac{1}{2}$ and strictly less than $\frac{1}{2}$, respectively. By standard Chernoff bounds, this can be achieved with success probability decaying exponentially in the number of tests. In particular, we can attain success probability $1-O(n^{-c_0})$ for any constant $c_0 > 0$, using $O(\log n)$ tests with an implied constant depending on $c_0$.
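The following self-contained Python sketch simulates this subroutine; the number of tests t is left as a free parameter controlling the reliability (the analysis above takes $t = O(\log n)$ with a constant depending on $c_0$), and the noise level is an illustrative choice of ours.

```python
import math, random

RHO = 0.11  # illustrative noise level

def threshold_test(n, k0, defectives, t, rho=RHO):
    """Declare whether k >= k0 (True) or k <= k0/sqrt(2) (False), using t noisy
    i.i.d. Bernoulli-design tests. The inclusion probability nu0/k0 is chosen so
    that a positive outcome has probability exactly 1/2 when k = k0."""
    nu0 = k0 * (1 - 2 ** (-1 / k0))          # solves (1 - nu0/k0)^k0 = 1/2
    q = nu0 / k0                             # per-item inclusion probability
    positives = 0
    for _ in range(t):
        pool = {i for i in range(n) if random.random() < q}
        u = int(bool(pool & defectives))             # noiseless outcome U
        positives += u ^ (random.random() < rho)     # symmetric noise gives Y
    return positives >= t / 2                # at least half positive => k >= k0

random.seed(0)
print(threshold_test(10000, k0=8, defectives=set(range(60)), t=200))
# -> True w.h.p., since k = 60 >= 8
print(threshold_test(10000, k0=128, defectives=set(range(60)), t=200))
# -> False w.h.p., since k = 60 <= 128/sqrt(2)
```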
Algorithm for estimating k. We turn the above procedure into an algorithm that estimates the total number of defectives to within a factor of 2. To do this, we simply run the above procedure with $k_0 = 2$, then again with $k_0 = 2\sqrt{2}$ if the first step declares $k \geq 2$, and so on until the procedure declares $k \leq \frac{k_0}{\sqrt{2}}$, at which point we return $\hat{k} = k_0$. As long as no errors are made, we have the following:
• It holds that $\frac{\hat{k}}{2} \leq k \leq \hat{k}$ (otherwise, we would contradict one of the final two decisions made);
• The procedure is called $O(\log k)$ times.
By considering a geometric random variable counting the number of possible failures after $k_0$ indeed exceeds 2k, the average number of calls is also $O(\log k)$, for a total of $O(\log k \cdot \log n)$ tests on average. In addition, success is guaranteed as long as the first $O(\log k)$ calls to the procedure succeed, so by the union bound and a suitable choice of $c_0$ above, we are guaranteed to succeed with probability $1-O(n^{-c})$ for any fixed c > 0. This establishes the first claim in Lemma 3. The second part of the lemma with a general value of $\epsilon \in (0,1)$ follows similarly; the above analysis corresponds to $\epsilon = \frac{1}{2}$, but there is no significant change for any other fixed value in (0,1).

Appendix C (proofs of Lemmas 4 and 5).

Proof of Lemma 4. Suppose that we use a number of repeated observations given by some generic value $\tilde{t}$, and that we estimate that v is 1 if more than half of the tests returned 1, and 0 otherwise. Then, by the Chernoff bound for binomial random variables [44, Sec. 2.2], the error probability is at most $e^{-\tilde{t}D_2(\frac{1}{2}\|\rho)}$. We set $\tilde{t} = \frac{\log(1/\delta)}{D_2(\frac{1}{2}\|\rho)}$, and observe that the error probability is upper bounded by $\exp\big(-\frac{\log(1/\delta)}{D_2(\frac{1}{2}\|\rho)} \cdot D_2(\frac{1}{2}\|\rho)\big) = \delta$, thus proving Lemma 4.

Proof of Lemma 5. Similarly to the above, suppose that we individually test some item j a total of $\tilde{t}$ times. As is well-known and was also used in [20], the Chernoff bound implies for any $\zeta \in (\rho, 1-\rho)$ that:
• If j is defective, the probability of $\zeta\tilde{t}$ or fewer positive outcomes is at most $e^{-\tilde{t}D_2(\zeta\|1-\rho)}$.
• If j is non-defective, the probability of $\zeta\tilde{t}$ or more positive outcomes is at most $e^{-\tilde{t}D_2(\zeta\|\rho)}$.
Note that setting $\zeta = \frac{1}{2}$ recovers the bound used in the proof of Lemma 4 above. Now suppose that this repetition testing is applied separately to each of the $k_1$ defectives and $k_0$ non-defectives. Then, we have the following:
• (Non-defectives) By the union bound, the probability of any non-defective giving $\zeta\tilde{t}$ or more positive outcomes is at most $k_0 e^{-\tilde{t}D_2(\zeta\|\rho)}$. This is at most $\delta_0$ as long as
$$\tilde{t} \geq \frac{\log\frac{k_0}{\delta_0}}{D_2(\zeta\|\rho)}. \tag{27}$$
• (Defectives) The average number of defectives returning $\zeta\tilde{t}$ or fewer positive outcomes is upper bounded by $k_1 e^{-\tilde{t}D_2(\zeta\|1-\rho)}$. Hence, by Markov's inequality, the probability of this occurring for more than $\epsilon_1 k_1$ defectives is at most $\frac{1}{\epsilon_1}e^{-\tilde{t}D_2(\zeta\|1-\rho)}$. This is at most $\delta_1$ as long as
$$\tilde{t} \geq \frac{\log\frac{1}{\epsilon_1\delta_1}}{D_2(\zeta\|1-\rho)}. \tag{28}$$
Recalling that $k_0 = o(k_1)$ and $\epsilon_1$ is constant, the above high-probability events ensure that the top $(1-\epsilon_1)k_1$ ranked items are all defective. Taking $\tilde{t}$ to equal the more stringent of the two values in (27) and (28), this establishes Lemma 5.

Appendix D (a concentration bound). In generic notation, consider a binomial random variable Z with K trials and success probability $q = \frac{1}{\log^c N}$ for some c > 0. Hence, $\mathbb{E}[Z] = \frac{K}{\log^c N}$. A standard multiplicative form of the Chernoff bound states that, for any α > 0, we have [44, Ch. 2]
$$\mathbb{P}\big[Z \geq (1+\alpha)\mathbb{E}[Z]\big] \leq \exp\Big(-\mathbb{E}[Z]\big((1+\alpha)\log(1+\alpha) - \alpha\big)\Big). \tag{29}$$
Fix any function f(N) satisfying $f(N) = o(\log^c N)$, and consider the probability $\mathbb{P}\big[Z \geq \frac{K}{f(N)}\big]$. Since $f(N) = o(\log^c N)$ and $\mathbb{E}[Z] = \frac{K}{\log^c N}$, this amounts to choosing $\alpha = \omega(1)$ in (29); specifically, $\alpha = \frac{\log^c N}{f(N)} - 1 = \frac{\log^c N}{f(N)}(1+o(1)) = \omega(1)$. In addition, we have the simplification $(1+\alpha)\log(1+\alpha) - \alpha = (\alpha\log\alpha)(1+o(1))$.
Hence, (29) yields
$$\mathbb{P}\Big[Z \geq \frac{K}{f(N)}\Big] \leq \exp\Big( -\frac{K}{f(N)}\log\frac{\log^c N}{f(N)}\,(1+o(1)) \Big). \tag{30}$$
We use this result in the following two special cases:
• If $f(N) = C$ for some constant C > 0, then we get $\mathbb{P}[Z \geq K/C] \leq e^{-\omega(K)}$.
• If $f(N) = \log\log N$, then we get $\mathbb{P}[Z \geq K/\log\log N] \leq e^{-\Omega(K)}$, since the $\log\frac{\log^c N}{f(N)} = \Theta(\log\log N)$ term in (30) cancels with f(N) (up to a constant factor).

References
[1] The detection of defective members of large populations.
[2] Unbounded contention resolution in multiple-access channels.
[3] Pattern matching with don't cares and few errors.
[4] What's hot and what's not: Tracking most frequent items dynamically.
[5] Pooled testing and its applications in the COVID-19 pandemic.
[6] Group testing and sparse signal recovery.
[7] A method for detecting all defective members in a population by group testing.
[8] On the all-or-nothing behavior of Bernoulli group testing.
[9] Computing with noisy information.
[10] Noisy binary search and its applications.
[11] The Bayesian learner is optimal for noisy binary search (and pretty good for quantum as well).
[12] Noisy generalized binary search.
[13] Rates of adaptive group testing in the linear regime.
[14] Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms.
[15] Phase transitions in group testing.
[16] Noisy non-adaptive group testing: A (near-)definite defectives approach.
[17] Combinatorial group testing and its applications.
[18] Group testing: An information theory perspective.
[19] Randomized group testing both query-optimal and minimal adaptive.
[20] Noisy adaptive group testing: Bounds and algorithms.
[21] Optimal group testing.
[22] The capacity of adaptive group testing.
[23] Variable-rate channel capacity.
[24] An efficient algorithm for capacity-approaching noisy adaptive group testing.
[25] Efficient algorithms for noisy group testing.
[26] Noisy adaptive group testing using Bayesian sequential experimental design.
[27] Crackovid: Optimizing group testing.
[28] Using noisy binary search for differentially private anomaly detection.
[29] Sequential transmission using noiseless feedback.
[30] On the problem of interval estimation under controlled observations.
[31] Fast splitting algorithms for sparsity-constrained and noisy group testing.
[32] A fast binary splitting approach to non-adaptive group testing.
[33] Combinatorial group testing and sparse recovery schemes with near-optimal decoding time.
[34] Competitive group testing and learning hidden vertex covers with minimum adaptivity.
[35] Estimating the number of defectives with group testing.
[36] Adaptive group testing algorithms to estimate the number of defectives.
[37] Channel coding rate in the finite blocklength regime.
[38] Feedback in the non-asymptotic regime.
[39] Note on noisy group testing: Asymptotic bounds and belief propagation reconstruction.
[40] Boolean compressed sensing: LP relaxation for group testing.
[41] Group testing with random pools: Phase transitions and optimal strategy.
[42] Performance of group testing algorithms with near-constant tests-per-item.
[43] Information-theoretic and algorithmic thresholds for group testing.
[44] Concentration Inequalities: A Nonasymptotic Theory of Independence.

The authors are with the Department of Computer Science and the Department of Mathematics, National University of Singapore (e-mail: bernardteo@u.nus.edu, scarlett@comp.nus.edu.sg). J. Scarlett is also with the Institute of Data Science, National University of Singapore. This work was supported by the Singapore National Research Foundation (NRF) under grant number R-252-000-A74-281.