Google COVID-19 Community Mobility Reports: Anonymization Process Description (version 1.0)

Ahmet Aktay, Shailesh Bavadekar, Gwen Cossoul, John Davis, Damien Desfontaines, Alex Fabrikant, Evgeniy Gabrilovich, Krishna Gadepalli, Bryant Gipson, Miguel Guevara, Chaitanya Kamath, Mansi Kansal, Ali Lange, Chinmoy Mandayam, Andrew Oplinger, Christopher Pluntke, Thomas Roessler, Arran Schlosberg, Tomer Shekel, Swapnil Vispute, Mia Vu, Gregory Wellenius, Brian Williams, Royce J. Wilson

April 8, 2020

This document describes the aggregation and anonymization process applied to the initial version of the Google COVID-19 Community Mobility Reports (published at http://google.com/covid19/mobility on April 2, 2020), a publicly available resource intended to help public health authorities understand what has changed in response to work-from-home, shelter-in-place, and other recommended policies aimed at flattening the curve of the COVID-19 pandemic. Our anonymization process is designed to ensure that no personal data, including an individual's location, movement, or contacts, can be derived from the resulting metrics. At a high level, the procedure is as follows: we first generate a set of anonymized metrics from the data of Google users who opted in to Location History. We then compute percentage changes of these metrics from a baseline derived from the historical part of the anonymized metrics, discard the subset that does not meet our bar for statistical reliability, and release the rest publicly in a format that compares each result to the private baseline.

1 Introduction

COVID-19 Community Mobility Reports provide insights into changes in mobility patterns. These reports use anonymized, aggregated data to chart movement trends over time by geography, as well as by place category, showing trends over several weeks. This works similarly to existing Google products and features. For example, Google Maps uses aggregated, anonymized data to show how busy certain types of places are, including when a local business tends to be the most crowded. Public health officials have suggested that this same type of aggregated, anonymized data could also be helpful as they make critical decisions to combat COVID-19.

The COVID-19 Community Mobility Reports provide insights into what has changed in response to work-from-home, stay-at-home, and other recommended policies aimed at flattening the curve of the COVID-19 pandemic. They analyze trends in visits made to high-level categories of places, including workplaces, retail and recreational venues, groceries and pharmacies, parks, transit centers, and places of residence. Each version of the report shows trends over several weeks, with the most recent data representing 48 hours prior.

As explained in greater technical detail below, the anonymization process for these reports includes differential privacy [1], which is well suited to producing analytics in contexts where the categories of data are known in advance. Our rigorous approach intentionally adds random noise to metrics in a way that maintains both users' privacy and the overall accuracy of the aggregated data.

This paper is structured as follows: we first introduce our method for producing anonymized metrics with differential privacy, and then explain how we post-process the anonymized metrics to generate the reports.
Location History users
The metrics in these reports are based on the data of Google users who have opted in to Location History [2] ("LH users"), a feature which is off by default.

Differential privacy [3]
Let ε be a positive real number and A be a randomized algorithm that computes a metric. In the context of this report, A is ε-differentially private if, for all input datasets D₁ and D₂ such that D₂ can be obtained from D₁ by adding or removing the data of a single user, and for all subsets S of the possible outputs of A:

P[A(D₁) ∈ S] ≤ e^ε · P[A(D₂) ∈ S].

Granularity levels
The metrics are aggregated per day and per geographic area. There are three levels of geographic areas; in this paper, we call these granularity levels.
• Granularity level 0 corresponds to metrics aggregated by country / region.
• Granularity level 1 corresponds to metrics aggregated by top-level geopolitical subdivisions (e.g. U.S. states).
• Granularity level 2 corresponds to metrics aggregated at higher resolution (e.g. U.S. counties).
Granularity levels 1 and 2 are defined differently in different countries, to account for knowledge of local public-health needs. Note that, in general, the geographic area represented gets smaller as the granularity number increases. No metrics are published for geographic regions smaller than 3 km².

We are releasing aggregated, anonymized data that is designed to ensure that no personal data, including an individual's location, movement, or contacts, can be derived from the resulting metrics. To that end, we anonymize the statistics with differential privacy. We query the underlying data using our open-source differential privacy library [4], which adds Laplace noise [5] to protect each metric with differential privacy.

We count the number of unique LH users who visited a public place of a given category on a given day, at each granularity level. There are seven different categories derived from the data: retail, recreation, and eateries (reported as part of "Retail & recreation"); groceries and pharmacies; transit; and parks. We add Laplace noise to each count according to the following table.

Granularity level   Scale of Laplace noise        Corresponding ε parameter
0                   1/0.11 ≈ 9.09 (σ ≈ 12.86)     0.11
1                   1/0.11 ≈ 9.09 (σ ≈ 12.86)     0.11
2                   1/0.22 ≈ 4.55 (σ ≈ 6.43)      0.22

Table 1: Noise parameters used for the visit metrics

For each location (at all granularity levels), each LH user can contribute at most once to each category. We also bound the contribution of each LH user to 4 ⟨geographic area, category⟩ pairs per day and per granularity level, using a process similar to the one described in [6]: if an LH user contributes to more than 4 pairs in a given day at a given granularity level, we randomly select 4 of them and discard the others. For example, suppose that on the same day, an LH user goes to public places in all 7 categories in two distinct neighboring countries. This makes a total of 14 pairs at country level; we would randomly discard 10 of these pairs when computing country-level statistics. This process does not significantly affect data accuracy: in the US, at the county level, 99% of LH users contribute 3 or fewer pairs per day on average. Thus, each daily place visit is protected by differential privacy with ε = 0.44 (summed across the three granularity levels), and the total daily contribution of each user is protected with at most ε = 4 × 0.44 = 1.76.
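To make the mechanism above concrete, the following sketch shows one way the contribution bounding and Laplace noise addition could be implemented. It is only an illustration, not the production pipeline and not the API of the open-source library [4]: the function names, data layout, and use of Python/numpy are assumptions made here, while the per-level ε values and the 4-pair cap come from the description above.

```python
import random

import numpy as np

# Minimal sketch of the visit-count mechanism described above: bound each
# user's contribution to 4 (geographic area, category) pairs per day and
# granularity level, count unique users per pair, and add Laplace noise of
# scale 1/epsilon. Names and data layout are illustrative assumptions.

EPSILON_PER_LEVEL = {0: 0.11, 1: 0.11, 2: 0.22}  # per granularity level
MAX_PAIRS_PER_USER_PER_DAY = 4


def bound_contributions(pairs, max_pairs=MAX_PAIRS_PER_USER_PER_DAY):
    """Keep at most `max_pairs` distinct (area, category) pairs, chosen uniformly."""
    pairs = sorted(set(pairs))
    if len(pairs) <= max_pairs:
        return pairs
    return random.sample(pairs, max_pairs)


def noisy_visit_counts(visits_by_user, level, rng=None):
    """visits_by_user: {user_id: [(geo_area, category), ...]} for one day at one
    granularity level. Returns {(geo_area, category): noisy unique-user count}."""
    rng = rng or np.random.default_rng()
    scale = 1.0 / EPSILON_PER_LEVEL[level]  # Laplace scale 1/epsilon for counts
    counts = {}
    for pairs in visits_by_user.values():
        for pair in bound_contributions(pairs):
            counts[pair] = counts.get(pair, 0) + 1
    return {pair: c + rng.laplace(scale=scale) for pair, c in counts.items()}
```

Because each user contributes at most once to each count, Laplace noise of scale 1/ε protects each individual count with parameter ε; the cap of 4 pairs per day then bounds each user's total daily budget by composition, as stated above.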
For the purposes of this analysis, we use signals like the relative frequency, time, and duration of visits to calculate metrics related to places of residence. We calculate the average amount of time LH users spent at places of residence, in hours. This computation is performed for each day and geographic area, using the differentially private mean mechanism from our open-source library [7]. This mechanism works as follows:
• We compute the total amount of time spent at places of residence in a given day and geographic area, in hours, by summing the individual per-user values, each offset by 12 hours so that all individual values fall into the range [−12, 12]. We then add Laplace noise to this sum; the scale of the noise is indicated in the table below. We denote the real sum S and the noisy sum NS.
• We compute the count of unique users who spent any time at their residence in a given day and geographic area. We then add Laplace noise to this count; the scale of the noise is indicated in the table below. We denote the real count C and the noisy count NC.
• Finally, we compute the ratio NS/NC for each day and each geographic area, add back the offset of 12, and clamp the result to the range [0, 24] hours/day.
For example, at the county level, NS is obtained by first sampling a random number from a Laplace distribution of scale 109.1 and then adding that number to S. In the table below, we also indicate the standard deviation σ of the noise added to each value. Each user can contribute to at most one region per granularity level, so these metrics are protected by differential privacy with a total budget of ε = 0.44 across all granularity levels. A description of the differentially private mean mechanism and a proof of its privacy guarantees are given in [8] (Algorithm 2.4).
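To illustrate the three steps above, here is a minimal sketch of the noisy-sum over noisy-count computation, in the same illustrative spirit as the previous sketch. It is not the implementation in the open-source library [7]: the function name and data layout are assumptions, and the count-noise scale used in the example is a hypothetical placeholder, since the full noise table for this metric is not reproduced here. Only the offsetting and clamping logic and the county-level sum scale of 109.1 come from the description above.

```python
import numpy as np

# Sketch of the differentially private mean described above: add Laplace noise
# to the offset sum of per-user hours (NS) and to the unique-user count (NC),
# then release the clamped ratio. Names and layout are illustrative.


def dp_mean_residence_hours(hours_per_user, sum_noise_scale, count_noise_scale,
                            rng=None):
    """hours_per_user: per-user time at residence (hours in [0, 24]) for one day
    and one geographic area. Returns a noisy mean clamped to [0, 24]."""
    rng = rng or np.random.default_rng()
    # Offset by 12 so each individual value falls into [-12, 12].
    values = np.clip(np.asarray(hours_per_user, dtype=float), 0.0, 24.0) - 12.0
    noisy_sum = values.sum() + rng.laplace(scale=sum_noise_scale)      # NS
    noisy_count = len(values) + rng.laplace(scale=count_noise_scale)   # NC
    mean = noisy_sum / noisy_count + 12.0   # undo the offset
    return float(np.clip(mean, 0.0, 24.0))  # clamp to [0, 24] hours/day


# Example at county level (granularity 2): the sum scale 109.1 is quoted in the
# text; COUNT_NOISE_SCALE is an assumed placeholder, not a documented value.
COUNT_NOISE_SCALE = 9.09
estimate = dp_mean_residence_hours([14.5, 22.0, 9.0], 109.1, COUNT_NOISE_SCALE)
```

Note that in the published reports this ratio is only released when the noisy count of contributing users is at least 100 (see the additional privacy protections below), which in particular keeps the noisy denominator far from zero.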
For the purposes of this analysis, we use signals like the relative frequency, time, and duration of visits to calculate metrics related to places of residence and places of work of LH users. We calculate how many LH users spent more than 1 hour at their places of work. This computation is performed for each day and geographic area. We then add Laplace noise to each count according to the following table.

Granularity level   Scale of Laplace noise        Corresponding ε parameter
0                   1/0.11 ≈ 9.09 (σ ≈ 12.86)     0.11
1                   1/0.11 ≈ 9.09 (σ ≈ 12.86)     0.11
2                   1/0.22 ≈ 4.55 (σ ≈ 6.43)      0.22

Table 3: Noise parameters used for the work places metrics

The count is aggregated by the LH users' places of residence. Since each user can contribute to at most one geographic area per granularity level, these metrics are protected by differential privacy with ε = 0.44.

3 Generating the report from the anonymized metrics

The metrics described above are generated for each day, starting on 2020-01-01. They are then used to generate the percentage changes, relative to a day-of-week baseline, that are published in the reports. All operations described below use only the output of the differentially private mechanisms described in the previous section, so they do not consume any privacy budget.

Additional privacy protections
We discard all metrics for which the geographic region is smaller than 3 km², or for which the differentially private count of contributing users (after noise addition) is smaller than 100. Geographic regions smaller than 3 km² may be merged such that the union of their areas is above the 3 km² threshold. This merging does not occur across country boundaries, except for Vatican City and Italy.

For each individual metric generated using the mechanisms described above, we compute the ratio between the metric for a given day D and the same metric computed for the baseline period. The reference baseline is defined as follows:
• We consider the 5-week range from 2020-01-03 through 2020-02-06.
• Within this 5-week range, we consider the 5 days with the same day of the week as D. For example, if D is 2020-03-20, a Friday, we consider the 5 Fridays in this range (January 3 to January 31, inclusive).
• We compute the median of the differentially private metrics for these 5 baseline days.
• This median is the baseline metric for D.
We then compute and publish the ratio between the metric for D and the baseline metric, as a percentage.

In some regions, the noise added to obtain differential privacy can reduce our confidence that we are capturing a meaningful change, typically when there is not a lot of data for the metric. When, because of this uncertainty, the percentage change for a metric has a 5% chance (or higher) of being wrong by more than ±10 absolute percentage points, we do not publish it; instead, we include an asterisk denoting that there is not enough data available to present privacy-safe information. More precisely:
• Before releasing a ratio metric/baseline, we compute 97.5% confidence intervals for the metric and its baseline. We denote these confidence intervals [metric_min, metric_max] and [baseline_min, baseline_max], respectively.
• We compute the ratios metric_min/baseline_max and metric_max/baseline_min.
• If either of these ratios differs from the differentially private ratio by more than 10 absolute percentage points, we do not publish the corresponding percentage change.
If neither ratio differs by more than 10 absolute percentage points, and the percentage change is therefore published, then the probability of being wrong by more than 10 absolute percentage points in each direction is lower than 2.5%. By the union bound, this means that there is at most a 5% risk of being wrong by more than 10 absolute percentage points. Note that the confidence intervals are based on an already differentially private value and on public data (the scale and shape of the noise), so no privacy budget is consumed by this operation.

References
• Enabling developers and organizations to use differential privacy.
• Calibrating Noise to Sensitivity in Private Data Analysis.
• Google's C++ Differential Privacy Library.
• Differentially Private SQL with Bounded User Contribution.
• Differential Privacy: From Theory to Practice.