title: Private Prediction Sets
authors: Angelopoulos, Anastasios N.; Bates, Stephen; Zrnic, Tijana; Jordan, Michael I.
date: 2021-02-11

In real-world settings involving consequential decision-making, the deployment of machine learning systems generally requires both reliable uncertainty quantification and protection of individuals' privacy. We present a framework that treats these two desiderata jointly. Our framework is based on conformal prediction, a methodology that augments predictive models to return prediction sets that provide uncertainty quantification: they provably cover the true response with a user-specified probability, such as 90%. One might hope that when used with privately-trained models, conformal prediction would yield privacy guarantees for the resulting prediction sets; unfortunately this is not the case. To remedy this key problem, we develop a method that takes any pre-trained predictive model and outputs differentially private prediction sets. Our method follows the general approach of split conformal prediction; we use holdout data to calibrate the size of the prediction sets but preserve privacy by using a privatized quantile subroutine. This subroutine compensates for the noise introduced to preserve privacy in order to guarantee correct coverage. We evaluate the method on large-scale computer vision datasets.

The impressive predictive accuracies of black-box machine learning algorithms on tightly-controlled test beds do not sanctify their use in consequential applications. For example, given the gravity of medical decision-making, automated diagnostic predictions must come with rigorous instance-wise uncertainty to avoid silent, high-consequence failures. Furthermore, medical data science requires privacy guarantees, since individuals would suffer material harm were their data to be accessed or reconstructed by a nefarious actor. While uncertainty quantification and privacy are generally dealt with in isolation, they arise together in many real-world predictive systems, and, as we discuss, they interact. Accordingly, the work that we present here involves a framework that addresses uncertainty and privacy jointly. Specifically, we develop a differentially private version of conformal prediction that results in private, rigorous, finite-sample uncertainty quantification for any model and any dataset at little computational cost.

Our approach builds on the notion of prediction sets: subsets of the response space that provably cover the true response variable with pre-specified probability (e.g., 90%). Formally, for a test point with feature vector X ∈ X and response Y ∈ Y, we compute an uncertainty set function, C(·), mapping a feature vector to a subset of Y such that

P{Y ∈ C(X)} ≥ 1 − α,   (1)

for a user-specified confidence level 1 − α ∈ (0, 1). We use the output of an underlying predictive model (e.g., a pre-trained, privatized neural network) along with a held-out calibration dataset, {(X_i, Y_i)}_{i=1}^n, from the same distribution as (X, Y), to fit the set-valued function C(·). The probability in expression (1) is therefore taken over both the randomness in (X, Y) and in {(X_i, Y_i)}_{i=1}^n. If the underlying model expresses uncertainty, C will be large, signaling skepticism regarding the model's prediction.
Moreover, we introduce a differentially private mechanism for fitting C, such that the sets that we compute have low sensitivity to the removal of any calibration point. This will allow an individual to contribute a calibration data point without fear that the prediction sets will reveal their sensitive information. Note that even if the underlying model is trained in a privacy-preserving fashion, this provides no privacy guarantee for the calibration data. Therefore, we will provide an adjustment that masks the calibration dataset with additional randomness, addressing both privacy and uncertainty simultaneously.

Figure 1: Examples of private conformal prediction sets on COVID-19 data. We show three examples of lung X-rays taken from the CoronaHack dataset [1] with their corresponding private prediction sets at α = 10% from a ResNet-18. All three patients had viral pneumonia (likely COVID-19). The classes in the prediction sets appear in ranked order according to the softmax score of the model; the center and right images are incorrectly classified if the predictor returns only the most likely class, but are correctly covered by the private prediction sets. See Experiment 4.4 for details.

See Figure 1 for a concrete example of private prediction sets applied to the automated diagnosis of COVID-19. In this setting, the prediction sets represent a set of plausible diagnoses based on an X-ray image: either viral pneumonia (presumed COVID-19), bacterial pneumonia, or normal. We guarantee that the true diagnosis is contained in the prediction set with high probability, while simultaneously ensuring that an adversary cannot detect the presence of any one of the X-ray images used to train the predictive system.

Our main contribution is a privacy-preserving algorithm which takes as input any predictive model together with a calibration dataset, and outputs a set-valued function C(·) that maps any input feature vector X to a set of labels such that the true label Y is contained in the predicted set with probability at least 1 − α, as per equation (1). In order to generate prediction sets satisfying this property, we will use ideas from split conformal prediction [2, 3, 4], modifying this approach to ensure privacy. Importantly, if the provided predictive model is also trained in a differentially private way, then the whole pipeline that maps data to a prediction set function C(·) is differentially private as well.

In Algorithm 1, we sketch our main procedure. Algorithm 1 first computes the conformity scores for all training samples. Informally, these scores indicate how well a feature-label pair "conforms" to the provided model f̂, a low score implying high conformity and a high score being indicative of an atypical point from the perspective of f̂. Then, the algorithm generates a certain carefully chosen private quantile of the scores. Finally, it returns a prediction set function C(·) which, for a given input feature vector, returns all labels that result in a conformity score below the critical threshold ŝ. Our main theoretical result asserts that Algorithm 1 has strict coverage guarantees and is differentially private. In addition, we show that the coverage is almost tight, that is, not much higher than 1 − α.
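As a reading aid, the following minimal Python sketch mirrors the three steps just described; the names (score_fn, quantile_fn, empirical_quantile) are illustrative placeholders rather than the paper's code, and the actual private quantile subroutine is specified later as Algorithm 2:

import numpy as np

def empirical_quantile(scores, q):
    # Non-private stand-in for the private quantile subroutine of step 2.
    k = min(int(np.ceil(q * len(scores))), len(scores))
    return np.sort(scores)[k - 1]

def calibrate(cal_scores, q_level, quantile_fn=empirical_quantile):
    # Step 2: compute a (privatized) quantile of the calibration scores at an
    # inflated level q_level; swapping in a differentially private quantile_fn
    # is what makes the full pipeline private.
    return quantile_fn(np.asarray(cal_scores), q_level)

def prediction_set(x, candidate_labels, score_fn, s_hat):
    # Step 3: keep every label whose conformity score falls below the cutoff.
    return [y for y in candidate_labels if score_fn(x, y) <= s_hat]

# Step 1, computing the conformity scores s_i = score_fn(X_i, Y_i) on the
# held-out calibration set, depends on the underlying model; a concrete
# classification example appears in the next section.

With a non-private quantile function and q_level equal to ⌈(n+1)(1−α)⌉/n, this skeleton reduces to standard split conformal prediction.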
Theorem 1 (Informal preview). The prediction set function C(·) returned by Algorithm 1 is ε-differentially private and satisfies the coverage property (1); moreover, its coverage exceeds 1 − α by at most Õ((nε)⁻¹).

The gap we obtain between the lower and upper bounds on the probability of coverage is roughly of the order O((nε)⁻¹), similar to the standard gap O(n⁻¹) without the privacy requirement. With this, we provide the first theoretical insight into the cost of privacy in conformal prediction. To shed further light on the properties of our procedure, we perform an extensive empirical study in which we evaluate the tradeoff between the level of privacy on one hand, and the coverage and size of prediction sets on the other.

Differential privacy [5] has become the de facto standard for privacy-preserving data analysis, as witnessed by its widespread adoption in large-scale systems such as those by Google [6, 7], Apple [8], Microsoft [9], and the US Census Bureau [10, 11]. This increasing adoption of differential privacy goes hand in hand with steady progress in differentially private model training, ranging across both convex [12, 13] and non-convex [14, 15] settings. Our work complements these efforts by proposing a procedure that can be combined with any differentially private model training algorithm to account for the uncertainty of the resulting predictive model by producing a prediction set function with formal guarantees. At a technical level, closest to our algorithm on the privacy side are existing methods for reporting histograms and quantiles in a privacy-preserving fashion [5, 16, 17, 18, 19]. Finally, there have also been significant efforts to quantify uncertainty with formal privacy guarantees through various types of private confidence intervals [20, 21, 22, 23]. While prediction sets resemble confidence intervals, they are fundamentally different objects, as they do not aim to cover a fixed parameter of the population distribution, but rather a randomly sampled outcome. As a result, existing methods for differentially private confidence intervals do not generalize to our problem setting.

Prediction sets as a way to represent uncertainty are a classical idea, going back at least to tolerance regions in the 1940s [24, 25, 26, 27]. See Krishnamoorthy & Mathew [28] for an overview of tolerance regions and Park et al. [29] for a recent application to deep learning models. Conformal prediction [30, 3, 31] is a related way of producing predictive sets with finite-sample guarantees. Most relevant to the present work, split conformal prediction [2, 32, 4] is a convenient version that uses data splitting to give prediction sets in a computationally efficient way. Vovk [33] and Barber et al. [34] refine this approach to re-use data for both training and calibration, improving statistical efficiency. Recent work has targeted desiderata such as small set sizes [35, 36], coverage that is approximately balanced across feature space [37, 38, 39, 40, 41, 42, 43], and coverage that is balanced across classes [44, 35, 45, 46]. Further extensions address problems in distribution estimation [47, 48], handling or testing distribution shift [49, 50, 51], causal inference [52], and controlling other notions of statistical error [53]. We suggest [54] and [31] as introductory tutorials on conformal prediction for the unfamiliar reader. Lastly, we highlight two alternative approaches with a similar goal to conformal prediction. First, the calibration technique in Jung et al. [55] and Gupta et al.
[56] generates prediction sets via the estimation of higher moments across many overlapping sub-populations. Second, there is a family of techniques that define a utility function balancing set size and coverage and then search for set-valued predictors to maximize this utility [57, 58, 59]. The present work builds on split conformal prediction, but modifies the calibration step to preserve privacy.

In this section, we formally introduce the main concepts in our problem setting. Split conformal prediction assumes access to a predictive model, f̂, and aims to output prediction sets that achieve coverage by quantifying the uncertainty of f̂ and the intrinsic randomness in X and Y. It quantifies this uncertainty using a calibration dataset consisting of n i.i.d. samples, {(X_i, Y_i)}_{i=1}^n, that were not used to train f̂. The calibration proceeds by defining a score function S_f̂ : X × Y → R. Without loss of generality we take the range of this function to be the unit interval [0, 1]. The reader should think of the score as measuring the degree of consistency of the response Y with the features X based on the predictive model f̂ (e.g., the size of the residual in a regression model), but any score function would lead to correct coverage. To simplify notation we will write S(·, ·) to denote the score, where we implicitly assume an underlying model f̂. From this score function, one forms prediction sets as follows:

C(x) = {y ∈ Y : S(x, y) ≤ ŝ},   (2)

for a choice of ŝ based on the calibration dataset. In particular, ŝ is taken to be a quantile of the calibration scores s_i = S(X_i, Y_i) for i = 1, . . . , n. In non-private conformal prediction, one simply takes ŝ to be the ⌈(n+1)(1−α)⌉/n quantile of the calibration scores, and then a standard argument shows that the coverage property in (1) holds. In this work we show how to take a modified private quantile that maintains this coverage guarantee.

As a concrete example of standard split conformal prediction, consider classifying an image in X = R^{m×d} into one of a thousand classes, Y = {1, ..., 1000}. Given a standard classifier outputting a probability distribution over the classes, f̂ : X → [0, 1]^{1000} (e.g., the output of a softmax layer), we can define a natural score function based on the activation of the correct class, S(x, y) = 1 − f̂(x)_y. Then we take ŝ as the ⌈0.9(n+1)⌉/n empirical quantile of the calibration scores s_1, . . . , s_n and define C as in equation (2). That is, we take as the cutoff ŝ the value such that, if we include all classes with estimated probability greater than 1 − ŝ, our sets have (only slightly more than) 90% coverage on the calibration data. The result C(x) on a test point is then a set of plausible classes guaranteed to contain the true class with probability 90%. Our proposed method will follow a similar workflow, but with a slightly different choice of ŝ to guarantee both coverage and privacy.
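To make the quantile arithmetic concrete, here is a small numerical sketch of this non-private recipe; the array probs is a synthetic stand-in for the softmax outputs f̂(x) rather than the output of a real classifier:

import numpy as np

rng = np.random.default_rng(0)
n, K, alpha = 1000, 10, 0.1
probs = rng.dirichlet(np.ones(K), size=n + 1)            # rows: n calibration points + 1 test point
labels = np.array([rng.choice(K, p=p) for p in probs])   # synthetic labels drawn from those probabilities

cal_scores = 1.0 - probs[np.arange(n), labels[:n]]       # s_i = 1 - f_hat(X_i)_{Y_i}
k = int(np.ceil((n + 1) * (1 - alpha)))                  # index of the ceil((n+1)(1-alpha))/n quantile
s_hat = np.sort(cal_scores)[k - 1]

# The prediction set keeps every class with estimated probability at least 1 - s_hat.
test_set = np.where(probs[n] >= 1.0 - s_hat)[0]
print(s_hat, test_set)

Averaged over draws of the calibration set, the true test label lands in such a set at least 90% of the time; the private procedure developed below changes only how the cutoff s_hat is computed.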
We next formally define differential privacy. We say that two datasets D and D′ are neighboring if they differ in a single element, i.e., either dataset can be obtained from the other by removing a single entry. Differential privacy then requires that two neighboring datasets produce similar distributions on the output.

Definition 1 (Differential privacy [5]). A randomized algorithm A is ε-differentially private if for all neighboring datasets D and D′, it holds that

P{A(D) ∈ O} ≤ e^ε · P{A(D′) ∈ O}

for all measurable sets O.

In short, if no adversary observing the algorithm's output can distinguish between D and a dataset D′ with the i-th entry removed, the presence of individual i in the analysis cannot be detected and hence their privacy is not compromised.

A key ingredient of our procedure is a privatized quantile of the conformity scores. We obtain this private quantile by discretizing the scores into bins and applying the exponential mechanism [60], one of the most ubiquitous tools in differential privacy. Our private quantile routine is then an extension of the private median routine proposed by Feldman and Steinke [19] to handle arbitrary quantiles. Specifically, let us fix a number of bins m ∈ N, as well as edges 0 ≡ e_0 < e_1 < ... < e_{m−1} < e_m ≡ 1. The edges define the bins I_j = (e_{j−1}, e_j], j = 1, ..., m. We use Algorithm 2 with an appropriately chosen quantile level q as a subroutine of our main conformal procedure.

We next precisely state our main algorithm and its formal guarantees. First, our algorithm has a calibration step, Algorithm 3, carried out one time using the calibration scores s_1, . . . , s_n as input; this is the heart of our proposed procedure. The output of this step is a cutoff ŝ learned from the calibration data. With this in hand, one forms the prediction set for a test point x as in equation (2), which for completeness we state in Algorithm 4.

Algorithm 3 (Differentially private calibration)
input: calibration scores {s_1, . . . , s_n}, privacy parameter ε, coverage level α, bins {I_1, . . . , I_m}
Compute the q̃-quantile of {s_1, . . . , s_n} via Algorithm 2, where q̃ is defined in (3); denote it ŝ.
output: calibrated score cutoff ŝ

Algorithm 4 (Differentially private prediction set)
input: test point x, calibrated score cutoff ŝ
output: prediction set as in (2), C(x) = {y : S(x, y) ≤ ŝ}

This algorithm both satisfies differential privacy and guarantees correct coverage, as stated next in Proposition 1 and Theorem 2, respectively.

Proposition 1 (Privacy guarantee). Algorithm 3 is ε-differentially private; consequently, so is the prediction set function C(·) formed from its output.

The privacy property is a straightforward consequence of the privacy guarantees of the exponential mechanism [60]. Therefore, the main challenge for the theory lies in understanding how to compensate for the added differentially private noise in order to get strict, distribution-free coverage guarantees.

Theorem 2 (Coverage guarantee). Fix the differential privacy level ε > 0 and miscoverage level α ∈ (0, 0.5), as well as a free parameter γ ∈ (0, 1). Let

q̃ = (n + 1)(1 − α) / (n(1 − γα)) + 2 log(m/(γα)) / (εn),   (3)

and let ŝ be the output of Algorithm 2 at level min{q̃, 1}. Then, the prediction sets in (2) with cutoff ŝ satisfy the coverage property in (1).

Remark 1. We can choose γ to minimize q̃, which leads to the smallest prediction sets. The optimal value γ* depends only on n, m, ε, and α, and can be found by taking a derivative of (3); see Appendix C.

Note that the significance level q̃ in (3) is just a slightly inflated version of the non-private conformal quantile: q̃ ≥ (n+1)(1−α)/n ≥ 1 − α. Indeed, taking ε → ∞ and γ → 0 in (3) recovers the non-private quantile. Intuitively, we must raise the significance level to compensate for the noise introduced to preserve privacy.
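As one concrete possibility for such a subroutine, here is a sketch of a Feldman-Steinke-style exponential mechanism over the bin edges together with the level q̃ from (3); the construction, names, and parameter values are illustrative and may differ in details from Algorithm 2:

import numpy as np

def q_tilde(n, eps, alpha, m, gamma):
    # Inflated quantile level from equation (3).
    return (n + 1) * (1 - alpha) / (n * (1 - gamma * alpha)) + 2 * np.log(m / (gamma * alpha)) / (eps * n)

def private_quantile(scores, q, eps, bin_edges, rng):
    # Exponential mechanism over the bin edges e_1, ..., e_m: the utility of edge e_j
    # is minus the distance between its empirical rank and the target rank q*n.
    # Adding or removing one score changes each rank by at most 1, so the utility
    # has sensitivity 1 and the weights exp(eps * utility / 2) give eps-DP.
    counts, _ = np.histogram(scores, bins=bin_edges)
    ranks = np.cumsum(counts)                        # number of scores at or below each right edge
    utility = -np.abs(ranks - q * len(scores))
    weights = np.exp(0.5 * eps * (utility - utility.max()))
    j = rng.choice(len(ranks), p=weights / weights.sum())
    return bin_edges[j + 1]

# Calibration in the style of Algorithm 3, on synthetic scores with m uniform bins.
rng = np.random.default_rng(0)
n, eps, alpha, m, gamma = 5000, 2.0, 0.1, 1000, 1e-3
edges = np.linspace(0.0, 1.0, m + 1)
scores = rng.random(n)
s_hat = private_quantile(scores, min(q_tilde(n, eps, alpha, m, gamma), 1.0), eps, edges, rng)
print(s_hat)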
We informally sketch the main ideas in the proof, deferring the details to the Appendix.

Proof sketch. We can write the probability of coverage as

P{Y ∈ C(X)} = E[F(ŝ)],

where F is the distribution of the appropriately discretized scores. We observe that, for any q̃, the exponential mechanism with input q̃ and s_1, . . . , s_n returns, with high probability, an empirical quantile no smaller than the q̃ − O(1/(nε)) empirical quantile. This allows us to write

E[F(ŝ)] ≥ E[F(F̂⁻¹(q̃ − O(1/(nε))))] − γα,

where F̂ denotes the empirical distribution of the discretized scores. For any q, the random variable F(F̂⁻¹(q)) is distributed as the ⌈nq⌉-th order statistic of a super-uniform sample, which implies that it can be stochastically lower bounded by the ⌈nq⌉-th order statistic of a uniform sample. This order statistic follows a beta distribution with known parameters, whose expectation can hence be evaluated analytically. Carefully choosing q̃ as a function of this expectation completes the proof of the theorem.

With the validity of Algorithm 3 established, we next prove that the algorithm is not too conservative, in the sense that the coverage is not far above 1 − α. A key quantity in our upper bound is

p^m_max = max_{1≤j≤m} P{S(X, Y) ∈ I_j},

the largest probability mass that the score distribution places on any single bin. This quantity captures the impact of the score discretization. Smaller p^m_max corresponds to mass spread more evenly throughout the bins. For well-behaved score functions, we expect p^m_max to scale as O(m⁻¹). Indeed, if the scores have any continuous density on [0, 1] bounded above and we take uniformly spaced bins, then p^m_max = O(m⁻¹). In terms of p^m_max, we have the following upper bound.

Theorem 3 (Coverage upper bound). In the setting of Theorem 2, the prediction sets in (2), with ŝ as in Theorem 2, satisfy the following coverage upper bound:

P{Y ∈ C(X)} ≤ (1 − α)/(1 − γα) + 2(1 + max{q̃/(1 − q̃), 1}) log(m/(γα)) / (ε(n + 1)) + p^m_max + γα,

where q̃ is defined in (3).

If we further assume a weak regularity condition on the scores, then by balancing the rates in the expression above we arrive at an explicit upper bound.

Corollary 1 (Coverage upper bound, simplified form). Suppose that the input scores follow a continuous distribution on [0, 1] with a density that is bounded above. Take m ∝ nε and γ = 1/m. Then, the prediction sets in (2), with ŝ as in Theorem 2, satisfy

P{Y ∈ C(X)} ≤ 1 − α + O(log(nε)/(nε)).

We emphasize that the assumptions on the score distribution are only needed to prove the upper bound; the coverage lower bound holds for any distribution. In any case, these assumptions are very weak, essentially requiring only that the score distribution contains no point masses. In fact, this requirement could even be enforced ex post facto by adding a small amount of tiebreaking noise, in which case we would need no restrictions on the input distribution of scores whatsoever.

The upper bound answers an important practical question: how many bins should we take? If m is too small, then the histogram only coarsely approximates the empirical distribution of the scores. On the other hand, if m is too large, then the histogram is accurate, but the private quantile level in (3) grows as well. This tension can be observed in the terms in Theorem 3 that have a dependence on m. Corollary 1 suggests that the correct balance, which leads to minimal excess coverage, is to take m ∝ nε. In practice, because the dependence of q̃ on m is only logarithmic, m is often very large.

This upper bound also gives insight into an important theoretical question: what is the cost of privacy in conformal prediction? In non-private conformal prediction, the upper bound is 1 − α + O(n⁻¹) [4]. In private conformal prediction, we achieve an upper bound of 1 − α + Õ((nε)⁻¹), a relatively modest cost incurred by privacy-preserving calibration.
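The following snippet makes this tension concrete by evaluating the level q̃ from (3) over a range of bin counts, with the Corollary-1-style choice γ = 1/m; the particular values of n, ε, and α are illustrative only:

import numpy as np

def q_tilde(n, eps, alpha, m, gamma):
    return (n + 1) * (1 - alpha) / (n * (1 - gamma * alpha)) + 2 * np.log(m / (gamma * alpha)) / (eps * n)

n, eps, alpha = 5000, 8.0, 0.1
for m in (10, 100, 1000, 10_000, 100_000):
    print(m, round(min(q_tilde(n, eps, alpha, m, 1.0 / m), 1.0), 4))
# The level changes only logarithmically in m, whereas a small m forces the
# calibrated cutoff to round up to a coarse bin edge; hence m is typically
# taken large, on the order of n*eps.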
We now turn to an empirical evaluation of differentially private conformal prediction for image classification problems. In this setting, each image X_i has a single unique class label Y_i ∈ {1, ..., K}, estimated by a predictive model f̂ : X → [0, 1]^K. We seek to create private prediction sets, C(X_i) ⊆ {1, ..., K}, achieving coverage as in equation (1), using the following score function: S(x, y) = 1 − f̂(x)_y, as in Sadinle et al. [35]. This section evaluates the prediction sets generated by Algorithm 3 by quantifying the cost of privacy and the effects of the model, the number of calibration points, and the number of bins used in our procedure. We use the CIFAR-10 dataset [61] wherever we require a privately trained neural network. Otherwise, we use a non-private model on the ImageNet dataset [62], to investigate the performance of our procedure in a more challenging setting with a large number of possible labels. Except where otherwise mentioned, we use an automatically chosen number of uniformly spaced bins m* to construct the privatized CDF. Appendix C describes the algorithm for choosing an approximately optimal value of m* when the conformal scores are roughly uniform, based on fixed values of n, ε, and α. We finish the section by providing private prediction sets for diagnosing viral pneumonia on the CoronaHack dataset [1]. The reader can reproduce the experiments exactly using our public GitHub repository.

We would like to disentangle the effects of private conformal prediction from those of private model training. To that end, we report the coverage and set sizes of the following four procedures: private conformal prediction with a private model, non-private conformal prediction with a private model, private conformal prediction with a non-private model, and non-private conformal prediction with a non-private model. The non-private model and the private model are both the same stock convolutional architecture from the Opacus library. The private model is trained with private SGD [14], as implemented in the Opacus library, with privacy parameters ε = 8 and δ = 1e−5. We used the suggested private model training parameters from the Opacus library (see Appendix C), as our work does not aim to improve private model training. The non-private model's accuracy (73%) was significantly higher than that of the private model (67%).

Figure 3: Coverage and set size with private/non-private models and private/non-private conformal prediction. We show histograms of coverage and set size of non-private/private models and non-private/private conformal prediction at the level α = 0.1, with ε = 8, δ = 1e−5, and n = 5000.

Figure 3 shows histograms of the coverages and set sizes of these procedures over 1000 random splits of the CIFAR-10 validation set with n = 5000. Notably, the results show that the price of private conformal prediction is very low, as evidenced by the minuscule increase in set size caused by private conformal prediction. However, private model training causes a larger set size due to the private model's comparatively poor performance. Note that a user desiring a fully private pipeline will use the procedure in the bottom right quadrant of the plot.
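Coverage and set-size histograms of this kind can be reproduced in outline with the loop below; the arrays softmax and labels are random stand-ins for real model outputs, the cutoff shown is the non-private one, and for the private rows of the comparison one would instead obtain s_hat from the private calibration at level min{q̃, 1}:

import numpy as np

rng = np.random.default_rng(0)
N, K, alpha, n_cal = 10000, 10, 0.1, 5000
softmax = rng.dirichlet(np.ones(K), size=N)              # stand-in for f_hat(X_i)
labels = rng.integers(0, K, size=N)                      # stand-in for true labels Y_i
scores = 1.0 - softmax[np.arange(N), labels]             # conformity scores

coverages, sizes = [], []
for _ in range(100):                                     # random calibration/validation splits
    perm = rng.permutation(N)
    cal, val = perm[:n_cal], perm[n_cal:]
    k = int(np.ceil((n_cal + 1) * (1 - alpha)))
    s_hat = np.sort(scores[cal])[k - 1]                  # swap in the private cutoff for the private rows
    coverages.append(np.mean(scores[val] <= s_hat))
    sizes.append(np.mean((1.0 - softmax[val] <= s_hat).sum(axis=1)))
print(np.mean(coverages), np.mean(sizes))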
Here we probe the performance of private prediction sets as the number of uniformly spaced bins m in our procedure changes. Based on our theoretical results, m should be on the order of nε, with the exact number dependent on the underlying model and the choices of α, n, and ε. A too-small choice of m coarsely quantizes the scores, so Algorithm 4 may be forced to round up to a very conservative private quantile. A too-large choice of m increases the logarithmic term in (3). The optimal choice of m balances these two factors. To demonstrate this tradeoff, we performed experiments on ImageNet. We used a non-private, pre-trained ResNet-152 from the torchvision repository as the base model. Figure 4 shows the coverage and set size of the resulting private prediction sets over a range of choices of m.

Next we quantify how the coverage changes with the privacy parameter ε. We used n = 30000 calibration points and 20000 evaluation points, as in Experiment 4.3. For each value of ε we choose a different value of m*. Figure 5 shows the coverage and set size of private prediction sets over 100 splits of ImageNet's validation set for several choices of ε. As ε grows, the procedure becomes less conservative. Overall the procedure exhibits little sensitivity to ε.

Next we show results on the CoronaHack dataset, a public chest X-ray dataset containing 5908 X-rays labeled as normal, viral pneumonia (primarily COVID-19), or bacterial pneumonia. Using 4408 training pairs over 14 epochs, we (non-privately) fine-tuned the last layer of a pretrained ResNet-18 from torchvision to predict one of the three diagnoses. The private conformal calibration procedure saw a further n = 1000 examples, and we used the remaining 500 for validation. The ResNet-18 had a final accuracy of 75% after fine-tuning. Figure 6 plots the coverage and set size of this procedure over 1000 different train/calibration/validation splits of the dataset, and Figure 1 shows selected examples of these sets.

We introduce a method to produce differentially private prediction sets that contain the true response with a user-specified probability by blending split conformal prediction with differentially private quantile computation. The primary challenge we resolve in this work is simultaneously satisfying the coverage property and the privacy property, which requires a careful choice of the conformal score threshold to account for the added privacy noise. Our corresponding upper bound shows that the coverage does not greatly exceed the nominal level 1 − α, meaning that our procedure is not too conservative. Moreover, our upper bound gives insight into the price of privacy in conformal prediction: the upper bound scales as Õ((nε)⁻¹) compared to O(n⁻¹) for non-private conformal prediction, a mild decrease in efficiency. This is confirmed in our experiments, where we show that there is little difference between private and non-private conformal prediction when using the same predictive model. We also observe the familiar phenomenon that there is a substantial decrease in accuracy for private model fitting compared to non-private model fitting. We conclude that the cost of privacy lies primarily in the model fitting; private calibration has a comparatively minor effect on performance. We also note that any improvement in private model training would immediately translate to smaller prediction sets returned by our method. In sum, we view private conformal prediction as an appealing method for uncertainty quantification with differentially private models.

We start with a result about the error of the private quantile mechanism, stated in Algorithm 2. The following is an extension of the analogous result for the private median due to Feldman and Steinke [19].

Lemma 1. For any δ ∈ (0, 1), the differentially private quantile algorithm (Algorithm 2), run at level q, returns a cutoff ŝ that with probability at least 1 − δ satisfies

(1/n) #{i : s_i ≤ ŝ} ≥ q − (2 max{(1−q)/q, 1} / (nε)) log(m/δ)   and   (1/n) #{i : s_i < ŝ} ≤ q + (2 max{q/(1−q), 1} / (nε)) log(m/δ).

Going back to equation (4), we have that with probability at least 1 − δ the first inequality holds; the second inequality follows similarly.
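As a rough empirical sanity check of this type of guarantee, the snippet below repeatedly runs the same kind of exponential-mechanism quantile sketched earlier (again, not necessarily identical to Algorithm 2) and compares the worst observed shortfall of the returned cutoff's empirical level against the O(log(m/δ)/(nε)) scale appearing in the bound:

import numpy as np

def private_quantile(scores, q, eps, edges, rng):
    counts, _ = np.histogram(scores, bins=edges)
    ranks = np.cumsum(counts)
    utility = -np.abs(ranks - q * len(scores))
    w = np.exp(0.5 * eps * (utility - utility.max()))
    return edges[rng.choice(len(ranks), p=w / w.sum()) + 1]

rng = np.random.default_rng(0)
n, m, eps, q, delta = 5000, 1000, 2.0, 0.9, 0.05
edges = np.linspace(0.0, 1.0, m + 1)
tol = 2 * np.log(m / delta) / (n * eps)                  # the scale in the Lemma 1 bound
shortfalls = []
for _ in range(200):
    scores = rng.random(n)
    s_hat = private_quantile(scores, q, eps, edges, rng)
    shortfalls.append(q - np.mean(scores <= s_hat))      # positive if the cutoff fell below level q
print(max(shortfalls), tol)                              # the worst shortfall should not exceed tol by much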
Next, we package some classical facts about the distribution of order statistics in a form helpful for analyzing conformal prediction.

Lemma 2. Fix q ∈ (0, 1), and let Z_1, . . . , Z_n be i.i.d. random variables taking values in {a_1, . . . , a_m} ⊂ R, with CDF F; that is, Z_1, . . . , Z_n ∼ F. Let F̂ denote the empirical CDF corresponding to Z_1, . . . , Z_n. Denote also p^m_max = max_{1≤i≤m} P{Z_1 = a_i}. Then,

Z_Beta ⪯ F(F̂⁻¹(q)) ⪯ Z_Beta + p^m_max,

where Z_Beta follows the beta distribution Beta(⌈nq⌉, n − ⌈nq⌉ + 1) and ⪯ denotes first-order stochastic dominance.

Proof. Since we take F̂⁻¹(q) = inf{z : F̂(z) ≥ q} by definition, it follows that F̂⁻¹(q) = Z_(⌈nq⌉), where Z_(i) denotes the i-th non-decreasing order statistic of Z_1, . . . , Z_n. By monotonicity of F, we further have that F(Z_(⌈nq⌉)) is identical to the ⌈nq⌉-th non-decreasing order statistic of F(Z_1), . . . , F(Z_n). By a standard argument, the samples F(Z_1), . . . , F(Z_n) are super-uniform, i.e. P{F(Z_1) ≤ u} ≤ u for all u ∈ [0, 1]. In other words, they are stochastically larger than a uniform distribution on [0, 1], and thus their ⌈nq⌉-th order statistic is stochastically lower bounded by the ⌈nq⌉-th order statistic of a uniform sample, which follows the Beta(⌈nq⌉, n − ⌈nq⌉ + 1) distribution. This completes the proof of the lower bound. For the upper bound, we use the fact that P{F(Z_1) ≤ u} ≥ u − p^m_max, and so the F(Z_i) are stochastically dominated by U_i + p^m_max, where U_1, . . . , U_n are i.i.d. uniform on [0, 1]. Their ⌈nq⌉-th order statistic is distributed as Z_Beta + p^m_max, which completes the proof.

B.1 Proof of Theorem 2

First we introduce some notation. By F we will denote the discretized CDF of the scores; in particular, for any i ∈ {1, . . . , n},

F(s) = P{[s_i] ≤ s}.

Here, by [s_i] we denote a discretized version of s_i, where we set [s_i] = e_j if s_i ∈ I_j. We also let F̂ denote the empirical distribution of the discretized scores:

F̂(s) = (1/n) #{i : [s_i] ≤ s}.

By convention, we let F⁻¹(δ) denote the left-continuous inverse of F, i.e. F⁻¹(δ) := inf{s : F(s) ≥ δ}, and we similarly define F̂⁻¹(δ).

We can write

P{Y ∈ C(X)} = P{S(X, Y) ≤ ŝ} = E[F(ŝ)],

where the second equality uses the fact that ŝ is always one of the bin edges, so that S(X, Y) ≤ ŝ if and only if [S(X, Y)] ≤ ŝ. Denote the event

E = { (1/n) #{i : [s_i] ≤ ŝ} ≥ q̃ − (2/(nε)) log(m/(γα)) },

and note that by Lemma 1 and the fact that q̃ ≥ 0.5, P{E} ≥ 1 − γα. By splitting up the analysis depending on E, we obtain the following:

E[F(ŝ)] ≥ E[F(ŝ) 1{E}] ≥ E[F(F̂⁻¹(q̃ − (2/(nε)) log(m/(γα)))) 1{E}] ≥ (1 − γα) E[F(F̂⁻¹(q̃ − (2/(nε)) log(m/(γα))))],

where the second inequality follows by the definition of E, and the final inequality holds because, conditional on the calibration scores, E has probability at least 1 − γα. Thus, it suffices to show that

E[F(F̂⁻¹(q̃ − (2/(nε)) log(m/(γα))))] ≥ (1 − α)/(1 − γα).   (5)

Let j* = ⌈n(q̃ − (2/(nε)) log(m/(γα)))⌉. Then, by Lemma 2,

F(F̂⁻¹(q̃ − (2/(nε)) log(m/(γα)))) ⪰ Beta(j*, n − j* + 1),

so

E[F(F̂⁻¹(q̃ − (2/(nε)) log(m/(γα))))] ≥ j*/(n + 1) ≥ n(q̃ − (2/(nε)) log(m/(γα)))/(n + 1).

By the definition of q̃, we see that

n(q̃ − (2/(nε)) log(m/(γα)))/(n + 1) ≥ (1 − α)/(1 − γα)

holds, which implies equation (5) and thus completes the proof.

B.2 Proof of Theorem 3

We adopt the definitions of F, F̂ from the proof of Theorem 2, and define E as the event

{ (1/n) #{i : [s_i] < ŝ} ≤ q̃ + (2 max{q̃/(1−q̃), 1} / (nε)) log(m/(γα)) },

which by Lemma 1 has probability at least 1 − γα. By a similar reasoning as in the proof of Theorem 2, we obtain the following:

P{Y ∈ C(X)} = E[F(ŝ)] ≤ E[F(ŝ) 1{E}] + γα ≤ E[F(F̂⁻¹(q̃ + (2 max{q̃/(1−q̃), 1}/(nε)) log(m/(γα))))] + γα,   (6)

where the final inequality follows by the definition of E. Let j* = ⌈n(q̃ + (2 max{q̃/(1−q̃), 1}/(nε)) log(m/(γα)))⌉. By Lemma 2, we have

F(F̂⁻¹(q̃ + (2 max{q̃/(1−q̃), 1}/(nε)) log(m/(γα)))) ⪯ Beta(j*, n − j* + 1) + p^m_max,

so

E[F(F̂⁻¹(q̃ + (2 max{q̃/(1−q̃), 1}/(nε)) log(m/(γα))))] ≤ j*/(n + 1) + p^m_max = ⌈n(q̃ + (2 max{q̃/(1−q̃), 1}/(nε)) log(m/(γα)))⌉/(n + 1) + p^m_max.   (7)

By the definition of q̃, we see that

⌈n(q̃ + (2 max{q̃/(1−q̃), 1}/(nε)) log(m/(γα)))⌉/(n + 1) ≤ [ ((1 − α)/(1 − γα))(n + 1) + 2(1 + max{q̃/(1−q̃), 1}) log(m/(γα))/ε ] / (n + 1) = (1 − α)/(1 − γα) + 2(1 + max{q̃/(1−q̃), 1}) log(m/(γα)) / (ε(n + 1)).   (8)

Putting together equations (6), (7), and (8) completes the proof.
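The beta-distribution fact used in both proofs is easy to verify numerically: the ⌈nq⌉-th order statistic of n i.i.d. uniforms follows Beta(⌈nq⌉, n − ⌈nq⌉ + 1), whose mean is ⌈nq⌉/(n + 1). A quick Monte Carlo check, with arbitrary illustrative values:

import numpy as np
from math import ceil

rng = np.random.default_rng(0)
n, q = 5000, 0.92
k = ceil(n * q)                                          # the ceil(nq)-th order statistic
analytic = k / (n + 1)                                   # mean of Beta(k, n - k + 1)
empirical = np.sort(rng.random((1000, n)), axis=1)[:, k - 1].mean()
print(analytic, empirical)                               # the two values should agree closely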
Choosing m* and γ. Algorithm 5 gives automatic choices of the optimal number of uniformly spaced bins, m*, and the tuning parameter γ, that work well for approximately uniformly distributed scores. In a moment, we will show how to find the optimal value γ* for a fixed value of m. Once γ* is chosen, we simulate uniformly distributed scores to choose the value m* that results in the best quantile for specific, pre-determined values of α, ε, and n. In practice, m* can be chosen from a relatively coarse grid of, say, 50 values logarithmically spaced from 10² to 10⁶.

We find the optimal value γ* by solving for the zeros of the derivative dq̃/dγ, which leads to the quadratic equation

dq̃/dγ = 0 ⟺ α²γ² − ( α(1 − α)ε(n + 1)/2 + 2α ) γ + 1 = 0.   (9)

Letting Γ be the set of roots of (9), we then choose the optimal value γ* as

γ* = argmin_{γ ∈ (Γ ∩ (0,1)) ∪ {1e−12}}  (n + 1)(1 − α)/(n(1 − γα)) + (2/(εn)) log(m/(γα)),   (10)

where the number 1e−12 takes care of the case that both roots lie outside the interval (0, 1).

Algorithm 5 (Get optimal number of bins and γ)
input: number of calibration points n, privacy level ε > 0, confidence level α ∈ (0, 1)
Simulate n uniform conformity scores s_i ∼ Unif(0, 1), i = 1, ..., n.
Choose m* to be the value of m minimizing the output of Algorithm 3 on the s_i, with the optimal γ* chosen by (10).
output: m*, γ*
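A sketch in the spirit of Algorithm 5, using the expressions in (3), (9), and (10); the grid of candidate bin counts is illustrative, and the deterministic rounded quantile below is only a stand-in for actually running Algorithm 3 on the simulated scores:

import numpy as np

def q_tilde(n, eps, alpha, m, gamma):
    return (n + 1) * (1 - alpha) / (n * (1 - gamma * alpha)) + 2 * np.log(m / (gamma * alpha)) / (eps * n)

def gamma_star(n, eps, alpha, m):
    # Roots of the quadratic (9); fall back to 1e-12 when neither root lies in (0, 1).
    roots = np.roots([alpha ** 2, -(alpha * (1 - alpha) * eps * (n + 1) / 2 + 2 * alpha), 1.0])
    inside = [r.real for r in roots if abs(r.imag) < 1e-12 and 0 < r.real < 1] or [1e-12]
    return min(inside, key=lambda g: q_tilde(n, eps, alpha, m, g))      # equation (10)

def rounded_cutoff(sim_scores, n, eps, alpha, m):
    # Stand-in for the output of Algorithm 3 on simulated uniform scores: the
    # smallest bin edge whose empirical CDF reaches the level min(q_tilde, 1).
    level = min(q_tilde(n, eps, alpha, m, gamma_star(n, eps, alpha, m)), 1.0)
    edges = np.linspace(0.0, 1.0, m + 1)[1:]
    cdf = np.searchsorted(np.sort(sim_scores), edges, side="right") / n
    return edges[np.argmax(cdf >= level)] if np.any(cdf >= level) else 1.0

rng = np.random.default_rng(0)
n, eps, alpha = 5000, 2.0, 0.1
sim_scores = rng.random(n)                               # Algorithm 5: simulate uniform conformity scores
grid = np.unique(np.logspace(2, 6, 50).astype(int))      # coarse grid of candidate m values
m_star = min(grid, key=lambda m: rounded_cutoff(sim_scores, n, eps, alpha, int(m)))
print(m_star, gamma_star(n, eps, alpha, int(m_star)))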
Private training procedure. We used the Opacus library with the default parameter choices included in the CIFAR-10 example code. The only difference in the non-private model training is the use of the --disable-dp flag, turning off the added noise but preserving all other settings.

References

[1] Databiology Lab CORONAHACK: Collection of public COVID-19 data
[2] Inductive confidence machines for regression
[3] Algorithmic Learning in a Random World
[4] Distribution-free predictive inference for regression
[5] Calibrating noise to sensitivity in private data analysis
[6] Rappor: Randomized aggregatable privacy-preserving ordinal response
[7] Prochlo: Strong privacy for analytics in the crowd
[8] Differential Privacy Team, Apple
[9] Collecting telemetry data privately
[10] The US census bureau adopts differential privacy
[11] Differential privacy and the US census
[12] Differentially private empirical risk minimization
[13] Private empirical risk minimization: Efficient algorithms and tight error bounds
[14] Deep learning with differential privacy
[15] Oracle efficient private non-convex optimization
[16] Differentially private histogram publication
[17] Differentially private m-estimators
[18] Privacy-preserving statistical estimation with optimal convergence rates
[19] Generalization for adaptively-chosen estimators via stable median
[20] Finite sample differentially private confidence intervals
[21] Differentially private ordinary least squares
[22] Locally private mean estimation: z-test and tight confidence intervals
[23] Differentially private confidence intervals for empirical risk minimization
[24] Determination of sample sizes for setting tolerance limits
[25] Statistical prediction with special reference to the problem of tolerance limits
[26] An extension of Wilks' method for setting tolerance limits
[27] Non-parametric estimation II: Statistically equivalent blocks and tolerance regions - the continuous case
[28] Statistical Tolerance Regions: Theory, Applications, and Computation
[29] PAC confidence sets for deep neural networks via calibrated prediction
[30] Machine-learning applications of algorithmic randomness
[31] A tutorial on conformal prediction
[32] A conformal prediction approach to explore functional data
[33] Cross-conformal predictors
[34] Predictive inference with the jack-knife+
[35] Least ambiguous set-valued classifiers with bounded error levels
[36] Uncertainty sets for image classifiers using conformal prediction
[37] Conditional validity of inductive conformal predictors
[38] The limits of distribution-free conditional predictive inference
[39] Conformalized quantile regression
[40] Flexible distribution-free conditional predictive bands using density estimators
[41] Classification with valid and adaptive coverage
[42] Conformal prediction with localization
[43] Knowing what you know: valid and validated confidence sets in multiclass and multilabel prediction
[44] Classification with confidence
[45] Cautious deep learning
[46] Prediction and outlier detection in classification problems
[47] Nonparametric predictive distributions based on conformal prediction
[48] Conformal calibrators
[49] Conformal prediction under covariate shift
[50] Robust validation: Confident predictions even when distributions shift
[51] A distribution-free test of covariate shift using conformal prediction
[52] Conformal inference of counterfactuals and individual treatment effects
[53] Distribution-free, risk-controlling prediction sets
[54] A gentle introduction to conformal prediction and distribution-free uncertainty quantification
[55] Moment multicalibration for uncertainty estimation
[56] Online multivalid learning: Means, moments, and prediction intervals
[57] Classification with set-valued decision functions
[58] Learning nondeterministic classifiers
[59] Efficient set-valued prediction in multi-class classification
[60] F. McSherry and K. Talwar, Mechanism design via differential privacy
[63] C. Dwork and A. Roth, The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3-4, pp. 211-407, 2014.