Distributed Transition Systems with Tags for Privacy Analysis

Siva Anantharaman, Sabine Frittella, Benjamin Nguyen (April 6, 2022)

We present a logical framework that formally models how a given piece of private information P, stored in a given database D, can get captured progressively by an agent/adversary querying the database repeatedly. Named DLTTS (Distributed Labeled Tagged Transition System), the framework borrows ideas from several domains: the Probabilistic Automata of Segala, Probabilistic Concurrent Systems, and Probabilistic labelled transition systems. To every node of a DLTTS is attached a tag that represents the 'current' knowledge of the adversary, acquired from the responses of the answering mechanism of the DBMS to his/her queries at the nodes traversed earlier along any given run; this knowledge is completed at the same node with further relational deductions, possibly in combination with 'public' information from other databases given in advance. A 'blackbox' mechanism is also part of a DLTTS, and is meant as an oracle; its role is to tell if the private information has been deduced by the adversary at the current node, and if so to terminate the run. An additional special feature is that the blackbox also gives information on how 'close', or how 'far', the knowledge of the adversary is from the private information P at the current node. A metric is defined for that purpose on the set of all 'type-compatible' tuples from the given database, the data themselves being typed with the headers of the base. Despite the transition-systems flavor of our framework, this metric is not 'behavioral' in the sense presented in some other works. It is exclusively database-oriented, and allows us to define new notions of adjacency and of ε-indistinguishability between databases, more general than those usually based on the Hamming metric (and a restricted notion of adjacency). Examples are given all along to illustrate how our framework works.

Keywords: Database, Privacy, Transition System, Probability, Distribution.

Our aim in this paper is to propose a logical framework for formally modeling how the information stored in a database can get captured progressively by an agent/adversary querying the database repeatedly. The data can be of any of the following types: numerical, non-numerical, or literal. In practice however, some of the literals representing 'sensitive data' could be in a taxonomical relation; moreover, part of the data could be presented, for 'anonymization' purposes, as finite intervals or sets over the basic types. We shall therefore agree to consider the types of the data in an extended, 'overloaded' sense; cf. Example 1 below. (Only tree-structured taxonomies will be considered in this work.) We assume given a database D, with its attribute set A, usually divided into three disjoint groups: the subgroup A^(i) of identifiers, A^(qi) of quasi-identifiers, and A^(s) of sensitive attributes. The tuples of the base D will be generally denoted as t, and their attributes in the three subgroups of A denoted respectively as t^i, t^qi, and t^s. The attributes t^i on any tuple t of D are conveniently viewed as defining a 'user' or a 'client' of the database D.
Quasi-identifiers are informally defined, in general, as a set of public attributes which, in combination with other attributes and/or external information, can allow the re-identification of all or some of the users to whom the information refers. The base D itself could be 'distributed probabilistically' over a finite set (referred to then as a 'universe', its elements named possible 'worlds'). By a privacy policy P = P_A(D) on D with respect to a given agent/adversary A is meant the stipulation that, for a certain given set of tuples {t ∈ P ⊂ D}, the sensitive attributes t^s on any such t shall remain inaccessible to A ('even after further deduction'; see below). It is assumed that A is not the user identified by the attributes t^i on these t's.

The logical framework we propose in this work, to model the evolution of the 'knowledge' that an adversary A can gain by repeatedly querying the given database D - with a view to getting access to sensitive data meant to remain hidden for him/her under the given privacy policy P - will be called Distributed Labeled-Tagged Transition System (DLTTS). The underlying logic for DLTTS is first-order, with countably many variables and finitely many constants (including certain usual dummy symbols like ' ', '$', '#'). In this work, the basic signature Σ for the framework is assumed to have no non-constant function symbols. By 'knowledge' of A we shall mean the data that A retrieves as answers to his/her successive queries, as well as other data that can be deduced/derived under relational operations on these answers; and in addition, some others derivable from these, using relational combinations with data (possibly involving some of the users of D) from finitely many external DBs given in advance, denoted B_1, ..., B_m, to which the adversary A is assumed to have free access. These relational and querying operations are all assumed done within a well-delimited fragment of the language SQL; it is assumed that this fragment of SQL is part of the signature Σ underlying the DLTTSs. In addition, if n ≥ 1 is the length of the tuples forming the data in D, finitely many predicate symbols K_i, 1 ≤ i ≤ n, each K_i of arity i, will be part of the signature Σ; in the work presented here they will be the only predicate symbols in Σ. The role of these symbols is to allow us to see any data tuple of length r, 1 ≤ r ≤ n, as a variable-free first-order formula with top symbol K_r, with all arguments assumed typed implicitly (with the help of the headers of the base D). In practice, however, we shall choose to drop these top symbols K_i, and see any data tuple that is not part of the given privacy policy P_A(D) directly as a first-order variable-free formula over Σ; data tuples t that are elements of the policy P_A(D) will in practice just be written as ¬t.

As we shall see, the DLTTS framework is well suited for capturing the ideas on acquiring knowledge and on policy violation, in an elegant and abstract setup. A preliminary definition of this framework (Section 2) considers only the case where the data, as well as the answers to the queries, do not involve any notion of 'noise'. (By 'noise' we shall mean the perturbation of data by some external random (probabilistic) mechanism.) But we shall extend this definition in a later section, as an option to also handle noisy data. The notion of violation of any given privacy policy on a database can then be (optionally) extended into a notion of violation up to some given ε ≥ 0 (ε-violation, for short).
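As a small illustration of this view of 'knowledge' (ground tuples, a policy given as a negated tuple, saturation by relational operations with an external base), here is a minimal sketch; it is not part of the formal framework, and the data and attribute names are purely hypothetical, loosely patterned on the Hospital example discussed below.

```python
# A naive illustration (not the authors' implementation) of knowledge as ground
# tuples, a privacy policy as a negated tuple, and saturation as a relational join.
# All table contents and attribute names below are hypothetical.
from itertools import product

# Answers retrieved so far from D: (Name, Dept) pairs.
knowledge = {("John", "CompSc")}

# A public external base B1: (Name, Age, Gender) triples.
external_B1 = {("John", 46, "M"), ("Paul", 53, "M")}

# Privacy policy P_A(D): this tuple is 'negated', i.e. it must remain undeducible.
policy = {("John", 46, "M", "CompSc")}

def saturate(known, external):
    """Naive 'saturation': join tuples that agree on the Name field."""
    derived = set(known)
    for (name, dept), (name2, age, gender) in product(known, external):
        if name == name2:
            derived.add((name, age, gender, dept))   # a derived 4-tuple
    return derived

saturated = saturate(knowledge, external_B1)
print("policy violated:", bool(saturated & policy))   # True: the negated tuple was derived
```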
In the first part of the work, we will be modeling the lookout for the sensitive attributes of certain given users, by a single adversary. In the second part (Section 5 onwards), we propose a method for comparing the evolution of the knowledge of an adversary at two different nodes on a given run, or on two different possible runs; the same method also applies for comparing the evolution of the knowledge of two different adversaries A_1, A_2, both querying repeatedly (and independently) the given database. But before formally defining the DLTTS, a couple of examples might help. They will in particular throw some light on how to properly delimit the fragment of SQL that we want included in our logical setup.

Example 1. Table 1 below is the record kept by the central Hospital of a Faculty, with three Departments, in a University, on recent consultations by the faculty staff. In this record, 'Name' is an identifier attribute, 'Ailment' is sensitive, and the others are QI; 'Ailment' is categorical with 3 branches: Heart-Disease, Cancer, and Viral-Infection; this latter in turn is categorical too, with 2 branches: Flu and CoVid. By convention, such taxonomical relations are assumed known to the public. (For simplicity of the example, we assume that all Faculty staff are on the consultation list of the Hospital.) The Hospital intends to keep 'secret' the information concerning CoVid-infected faculty members; the tuple ¬(John, 46, M, #, CoVid) therefore constitutes its privacy policy. Suppose now that an adversary A suspects the faculty member John of having contracted CoVid, and queries the Hospital's record; the (suitably anonymized) answer returned by the DBMS is Table 2, on which the lines concerning male patients are lines 2, 4 and 5. And that, as detailed next, is what confirms A's suspicion concerning John.

In this case, the DLTTS framework would function as follows. At the starting state s, a transition with three branches would a priori be possible, corresponding to the three ('M') lines 2, 4 and 5 of Table 2, which represent the knowledge that would be acquired respectively along these branches. Now, A is on the lookout for a possible CoVid case, so (s)he rules out the 'line 2 branch' (i.e., gives this branch probability 0). As for the remaining two branches (corresponding to lines 4 and 5 of Table 2), A chooses to go by the 'line 5 branch', considering it twice as likely to be successful as the other (A had the impression that 'John was not too old'). That leads to the probability distribution 0, 1/3, 2/3 assigned respectively to the three possible branches of the transition. If s_0, s_1, s_2 are the respective successor states for the transition considered, the privacy policy of the Hospital (concerning John's CoVid information) would thus be violated at state s_2 (with probability 2/3); it wouldn't be at s_1 (probability 1/3); and no information is deduced at state s_0.

As just seen, modeling an adversary's search for some specific information on a given database D - as 'runs' on a suitable DLTTS, with probability distributions over the successor steps along the runs - depends in general on the nature (structure) of the information looked for. The probability distributions on the transitions along the runs would generally depend on some random mechanism, which could also reflect the choices the adversary might make. The role of our next example is to point out that specifying privacy policies will in general have some serious side effects on the functioning of the primitives and aggregate procedures of SQL. If the policies are to have some 'content', operationally speaking, the DBMS may have to stipulate that the queries employing these primitives either should have 'void outputs' in certain contexts, or should 'get filtered by the privacy policy'.
Example 2. Table 3 below is an imaginary record D of a bank L, containing a list of its clients: with client ids, their names, and their monthly balances. (Client-id is the identifier attribute, Monthly-balance is sensitive.) The privacy policy P of the bank is that client Jean's Monthly-balance should 'be invisible' to others; formally, the policy P is the negated formula ¬(Jean, ≥ 420). On the other hand, the bank is obliged administratively to render public a monthly statement on its minimum total Monthly-balance; that is Table 4.

Table 3: the bank's record D
   Name     Monthly-balance
1  Claude   320
2  Paul     270
3  Jean     420
4  Martin   150
5  Michel   420

An adversary A wants to know if Jean is a client of the bank, and if so, whether his Monthly-balance is among the highest. So A first queries the bank to get the list of its clients with their Monthly-balances. The Bank-DBMS's answer to A's query will be, say, as in Table 5.

Remark 1: In the above example, if the external DB (Table 4) was unavailable to A, the DBMS could have answered his/her query with a Table 5' where the entire tuple on Jean is deleted; in such a case, the privacy policy P on D (concerning Jean) would a priori remain unviolated; except if we assume that the DBMS accepts queries with aggregate operations on the database D that 'do not explicitly look' for Jean's sensitive attribute: for instance, A could first retrieve the SUM over the entire Monthly-balance column, then ask for SUM(Monthly-balance) where 'Name <> Jean'. A relational deduction then leads to the violation of the policy P.
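To make Remark 1 concrete, here is a small sketch of that aggregate 'differencing' deduction, replayed in SQLite on the data of Table 3; the table and column names, and the use of SQLite itself, are illustrative choices of ours.

```python
# A sketch of the aggregate 'differencing' deduction of Remark 1, on the data of
# Table 3 (table/column names and the use of SQLite are illustrative assumptions).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bank (name TEXT, monthly_balance INTEGER)")
con.executemany("INSERT INTO bank VALUES (?, ?)",
                [("Claude", 320), ("Paul", 270), ("Jean", 420),
                 ("Martin", 150), ("Michel", 420)])

total = con.execute("SELECT SUM(monthly_balance) FROM bank").fetchone()[0]
all_but_jean = con.execute(
    "SELECT SUM(monthly_balance) FROM bank WHERE name <> 'Jean'").fetchone()[0]

# Neither query names Jean's sensitive attribute explicitly, yet their difference
# yields it, violating the policy concerning Jean:
print("Jean's balance:", total - all_but_jean)   # 420
```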
The above two examples show that the violation of privacy policies needs, in general, some additional 'outside knowledge'. We may assume wlog that the given external bases B_1, ..., B_m - to which A could resort, with relational operations, for deducing additional information - are also of the same signature Σ as D; so all the knowledge A can deduce/derive from his/her repeated queries can be expressed as a first-order variable-free formula over the signature Σ.

The DLTTS framework presented in this section synthesizes ideas coming from various domains, such as the Probabilistic Automata of Segala [12], Probabilistic Concurrent Systems, and Probabilistic labelled transition systems [3, 4]. Although the underlying signature for a DLTTS can be rich in general, for the purposes of our current work we shall be working with a limited first-order signature (as mentioned in the Introduction), denoted Σ, with countably many variables, finitely many constants (including some 'standard dummies'), no non-constant function symbols, and a finite limited set of predicate (propositional) symbols. Let E be the set of all variable-free formulas over Σ, and Ext a given subset of E. We assume given a decidable procedure C whose role is to 'saturate' any finite set G of variable-free formulas into a finite set Ḡ, by adding a finite (possibly empty) set of variable-free formulas, using relational operations on G and Ext. This procedure C will be internal at every node of a DLTTS; in addition, there will also be a 'blackbox' mechanism O, acting as an oracle telling if the given privacy policy on a given database is violated at the current node. In a later paragraph (Section 5) more details will be given on an additional role the oracle will play, in a privacy analysis schema, on any given DB.

Definition 1. A Distributed Labeled-Tagged Transition System (DLTTS), over a given signature Σ, is formed of:
- a finite (or denumerable) set S of states, an 'initial' state s_0 ∈ S, and a special state ⊗ ∈ S named 'fail';
- a finite set Act of action symbols (disjoint from Σ), with a special action δ ∈ Act called 'violation';
- a (probabilistic) transition relation T ⊆ S × Act × Distr(S), where Distr(S) is the set of all probability distributions over S, with finite support;
- a tag τ(s) attached to every state s ∈ S \ {⊗}, formed of finitely many first-order variable-free formulas over Σ; the tag τ(s_0) at the initial state is the singleton set {⊤};
- at every state s, a special action symbol ι = ι_s ∈ Act, said to be internal at s, which completes/saturates τ(s) into a set τ̄(s) with the procedure C, by using relational operations between the formulas in τ(s) and Ext.
A (probabilistic) transition t ∈ T will generally be written as a triple (s, a, t(s)); t will be said to be 'from' (or 'at') the state s, and the states of t(s) will be the 'successors' of s under t. The formulas in the tag τ(s) attached to any state s will all be assigned the same probability as the state s in Distr(S). If the set τ̄(s) of formulas turns out to be inconsistent, then the oracle mechanism O will intervene and impose (s, δ, ⊗) as the only transition from s, standing for 'violation' and 'fail', by definition. Nondeterminism of transitions can be defined without difficulty on a DLTTS, as a nondeterministic choice between the possible probabilistic transitions at any given state. We shall assume that nondeterminism is managed by the choice of a suitable scheduler; and in addition, that at most one probabilistic transition is firable from any state s ∈ S \ {⊗}, and none from the halting state ⊗.

DLTTS and repeated queries on a database: The states of the DLTTS will stand for the various 'moments' of the querying sequence, while the tags attached to the states will stand for the knowledge A has acquired on the data of D 'thus far'. This knowledge consists partly of the answers to the queries (s)he made so far, completed with additional knowledge using the internal 'saturation' procedure C of the framework. In the context of DBs, this procedure would consist of relational-algebraic operations between the answers retrieved by A for his/her repeated queries on D, all seen as tuples (variable-free formulas), and suitable tuples from the given external databases B_1, ..., B_m. If the saturated knowledge of A at a current state s on the DLTTS (i.e., the tag τ̄(s) attached to the current state s) is not inconsistent, then the transition from s to its successor states represents the probability distribution of the likely answers A would expect to get for his/her next query. Note that we make no assumption on whether the repeated queries by A on D are treated interactively or non-interactively by the DBMS; the logical framework would function exactly alike in both cases.
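Before turning to further remarks, here is a minimal illustration of the ingredients of Definition 1, instantiated on the single transition of Example 1 (branch probabilities 0, 1/3, 2/3). The concrete Python representation is entirely ours; only the John tuple and the distribution come from the example, the other tags being placeholders.

```python
# A minimal, illustrative rendering of Definition 1 (representation choices ours).
from dataclasses import dataclass, field

@dataclass(eq=False)               # identity-based hash: states usable as dict keys
class State:
    name: str
    tag: set = field(default_factory=set)      # tau(s): ground formulas known at this state

@dataclass
class Transition:
    source: State
    action: str
    dist: dict                                  # successor State -> probability (finite support)

def oracle_violation(state, policy):
    """Oracle O: 'fail' if the (saturated) tag contains a tuple negated by the policy."""
    return bool(state.tag & policy)

# Example 1: the adversary weights the three possible answers 0, 1/3 and 2/3.
policy = {("John", 46, "M", "#", "CoVid")}
s  = State("s")
s0 = State("s0", {("line-2 tuple",)})           # placeholder tag
s1 = State("s1", {("line-4 tuple",)})           # placeholder tag
s2 = State("s2", {("John", 46, "M", "#", "CoVid")})
t  = Transition(s, "answer-to-query", {s0: 0.0, s1: 1/3, s2: 2/3})

print({succ.name: oracle_violation(succ, policy) for succ in t.dist})
# {'s0': False, 's1': False, 's2': True}: the policy is violated at s2 (probability 2/3).
```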
Remark 2: (a) Suppose t is a transition from a state s, on the DLTTS corresponding to a querying sequence by an adversary A, and s' is one of the successors of s under t; then, by definition, the 'fresh' knowledge τ(s') of A at s', resulting from this transition, is obtained by adding to A's saturated knowledge τ̄(s) at s the part of the response of the DBMS's answering mechanism for A's current query that is represented by the branch of t going from s to s'.

(b) As already mentioned, we assume that the relational operations needed for gaining further knowledge are done using a well-delimited finite subset of the functionalities of SQL, and that 'no infinite set can get generated from a finite set' under these functionalities, assumed included in the signature Σ. (This corresponds to the bounded inputs/outputs assumption, as in e.g. [1, 2].)

Proposition 1. Suppose given a database D, a finite sequence of repeated queries on D by an adversary A, and a first-order relational formula P = P_A(D) over the signature Σ of D, expressing the privacy policy of D with respect to A. Let W be the DLTTS modeling the various queries of A on D, and the evolution of the knowledge of A on the data of D resulting from these queries and the internal actions at the states of W, as described above. Then:
(i) the policy P is violated by the querying sequence of A if and only if some run on W, from the initial state, reaches the 'fail' state ⊗;
(ii) it is decidable whether the policy P is violated at any given state of W.

Proof: Assertion (i) is a restatement. Observe now that, at any state s on W, the tags τ(s), τ̄(s) are both finite sets of first-order variable-free formulas over Σ, without non-constant function symbols. For, to start with, the knowledge of A consists of the responses received for his/her queries, in the form of a finite set of data tuples from the given databases, and some subtuples. And by our assumptions of Remark 2 (b), no infinite set can be generated by saturating this initial knowledge with procedure C. Assertion (ii) follows then from the known result that the inconsistency of any given finite set of variable-free first-order Datalog formulas is decidable, e.g., by the analytic tableaux procedure. (Only the absence of variables is essential.)

Our objective now is to extend the result of Proposition 1 to the case where the violation to be considered can be up to some given ε ≥ 0, in a sense to be made precise. We stick to the same notation as above. The set E of all variable-free formulas over Σ is thus a disjoint union of subsets of the form E^K_i, with i standing for the common length of the formulas in the subset, and K for the common root symbol of its formulas; each set E^K_i will be seen as a database of i-tuples. We shall first look at the situation where the queries intend to capture certain (sensitive) values on a given tuple t in the database D. Two different tuples in E might correspond to two likely answers to such a query, but with possibly different probabilities in the distribution assigned for the transitions by the probabilistic mechanism M (e.g., as in Example 1). Given two such instances, and a real ε ≥ 0, we can also define a notion of their ε-local-indistinguishability, wrt the tuple t and the mechanism M answering the queries. This can be done in a slightly extended setup, where the answering mechanism may, as an option, also add 'noise' to certain numerical data values, for several reasons, among which the safety of data. We shall then assume that the internal procedure C of the DLTTS at each of its states (meant to saturate the current knowledge of the adversary querying the database) incorporates the following three well-known noise-adding mechanisms: the Laplace, Gauss, and exponential mechanisms, with the stipulation that these optional noise additions to numerical values are done in a bounded fashion, so as to be from a finite prescribed domain around the values; it will then be assumed that tuples formed of such noisy data are also in E.

Definition 2. (i) Suppose that, while answering a given query on the base D, at two instances v, v', the probabilistic answering mechanism M outputs the same tuple α ∈ E.
Given ε ≥ 0, these two instances are said to be ε-local-indistinguishable wrt α, if and only if:
    e^(-ε) ≤ Prob[M(v) = α] / Prob[M(v') = α] ≤ e^ε.
(ii) The probabilistic answering mechanism M is said to satisfy ε-local differential privacy (ε-LDP) for ε ≥ 0, if and only if: for any two instances v, v' of M that lead to the same output, and any set S ⊂ Range(M), we have
    Prob[M(v) ∈ S] ≤ e^ε · Prob[M(v') ∈ S].

We shall also be needing the following notion of ε-indistinguishability (and of ε-distinguishability) of two different outputs of the mechanism M. (These definitions, as well as that of ε-DP given below, are essentially reformulations of the same or similar notions defined in [7, 8].)

Definition 3. Given ε ≥ 0, two outputs α, α' of the mechanism M are said to be ε-indistinguishable if and only if
    e^(-ε) ≤ Prob[M = α] / Prob[M = α'] ≤ e^ε.
Otherwise, the outputs α, α' will be said to be ε-distinguishable.

Remark 3: Given an ε ≥ 0, one may assume, as an option, that at every state on the DLTTS the retrieval of answers to the current query (from the mechanism M) is done up to ε-indistinguishability; this will then be implicitly part of what was called the saturation procedure C at that state. The procedure thus enhanced for saturating the tags at the states will then be denoted as C_ε, when necessary (it will still be decidable, under the finiteness assumptions of Remark 2 (b)). Inconsistency of the set of formulas in the 'C_ε-saturated' tag at any state will be checked up to ε-indistinguishability, and referred to as ε-inconsistency, or ε-failure. The notion of privacy policy will not need to be modified; that of its violation will be referred to as ε-violation. Under these optional extensions of ε-failure and ε-violation, it must be clear that the statements of Proposition 1 continue to be valid.

Two small examples of ε-local-indistinguishability, before closing this section. Example 3. (i) In Example 1, the two instances corresponding to lines 4 and 5 of Table 2, answered with the respective probabilities 1/3 and 2/3, both point to Viral-Infection as output; they can thus be seen as log(2)-local-indistinguishable for the adversary A. (ii) The 'Randomized Response' mechanism RR ([13]) can be modelled as follows. The input is (X, F_1, F_2), where X is a Boolean (the truthful answer) and F_1, F_2 are two independent fair coin flips: the mechanism outputs X if F_1 comes up heads, and outputs F_2 otherwise.

The notion of ε-indistinguishability of two given databases D, D' for an adversary is more general than that of ε-local-indistinguishability (of pairs of instances of a probabilistic answering mechanism giving the same output, defined in the previous section). ε-indistinguishability is usually defined only for pairs of databases D, D' that are adjacent in a certain sense (cf. below). There is no uniquely defined notion of adjacency on pairs of databases; in fact, several are known and in use in the literature. Actually, a notion of adjacency can be defined in a generic, parametrizable manner (as in e.g. [5]), as follows. We assume given a map f from the set D of all databases of m-tuples (for some given m > 0) into some given metric space (X, d_X). The binary relation on pairs of databases in D defined by f_adj(D, D') = d_X(f(D), f(D')) is then said to define a measure of adjacency on these databases; the relation f_adj is said to define an 'adjacency relation'.

Definition 4. Let f_adj be a given adjacency relation on a set D of databases, and M a probabilistic mechanism answering queries on the databases in D.
- Given ε ≥ 0, two databases D, D' in D are said to be ε-indistinguishable under M (wrt f_adj) if and only if, for any set S ⊂ Range(M):
    Prob[M(D) ∈ S] ≤ e^(ε·f_adj(D, D')) · Prob[M(D') ∈ S].
- The mechanism M is said to satisfy f_adj-differential privacy (f_adj-DP), if and only if the above condition is satisfied for every pair of databases D, D' in D, and any possible output S ⊂ Range(M).

Comments: (i) Given ε ≥ 0, the 'usual' notions of ε-indistinguishability and ε-DP correspond to the choice of adjacency f_adj = d_h, where d_h is the Hamming metric on databases (namely, the number of 'records' where D and D' differ), plus the assumption d_h(D, D') ≤ 1 (cf. [5]).
(ii) In Section 6, we propose a more general notion of adjacency, based on a different metric defined 'value-wise', to serve other purposes as well. (iii) On disjoint databases, one can work with different adjacency relations, using different maps to the same (or different) metric space(s). (iv) The mechanism RR described above is actually log(3)-LDP.

In the previous two sections, we looked at the issue of 'quantifying' the indistinguishability of two data tuples or databases, under repeated queries of an adversary A. In this section, our concern will be in a sense 'orthogonal': the issue will be that of quantifying how different the probabilistic mechanism's answers can be at different moments of A's querying sequence. Remember that the knowledge of A, at any node on the DLTTS of the run corresponding to the query sequence, is represented as a set of tuples; and also that the data forming any tuple are assumed implicitly typed, 'labeled with' (i.e., under) the headers of the database D. To be able to compare two tuples of the same length, we shall assume that there is a natural, injective, type-preserving map from one of them onto the other; this map will remain implicit in general; two such tuples will be said to be type-compatible. If the two tuples are not of the same length, one of them will be projected onto (or restricted to) a suitable subtuple, so as to be type-compatible and comparable with the other; if this turns out to be impossible, the two tuples will be said to be uncomparable.

The quantification looked for will be based on a suitable notion of 'distance' between two sets of type-compatible tuples. For that, we shall first define a 'distance' between any two type-compatible tuples; more precisely, define such a notion of distance between any two data values under every given header of D. As a first step, we shall therefore begin by defining, for every given header of D, a binary 'distance' function on the set of all values that get assigned to the attributes under that header, along the sequence of A's queries. This distance function will be a metric: non-negative, symmetric, and satisfying the so-called Triangle Inequality (cf. below). The 'direct sum' of these metrics, taken over all the headers of D, will then define a metric d on the set of all type-compatible tuples of data assigned to the various attributes, under all the headers of D, along the sequence of A's queries. The 'distance' d(t, t'), from any given tuple t in this set to another type-compatible tuple t', will be defined as the value of this direct-sum metric on the pair of tuples (t, t'); it will, by definition, be calculated 'column-wise' on the base D, and also on the intermediary databases along A's query sequence; note that it gives us a priori an m-tuple of numbers, where m is the number of headers (or columns) in the database D. A single number can then be derived as the sum of the entries in the m-tuple d(t, t'). This sum will be denoted as d̄(t, t'), and defined as the distance from the tuple t to the tuple t' in the database D. Finally, if S, S' are any two given finite sets of type-compatible tuples of data that get assigned to the various attributes (along the queries), we shall define the distance from the set S to the set S' as the number
    ρ(S, S') = min{ d̄(t, t') | t ∈ S, t' ∈ S' }.
Some preliminaries are needed before we can define the 'distance' function between the data values under every given header of D.
We begin by dividing the headers of the base D into four classes, for clarity of presentation:
. 'Nominal': identities, names, attributes receiving literal data not in any taxonomy (e.g., gender, city, ...), finite sets of such data;
. 'Numerval': attributes receiving numerical values, or bounded intervals of (finitely many) numerical values;
. 'Numerical': attributes receiving single numerical values (numbers);
. 'Taxoral': attributes receiving literal data in a taxonomy relation.
For defining the 'distance' between any two values v, v' assigned to an attribute under a given 'Nominal' header of D, for the sake of uniformity we agree to consider every value as a finite set of singleton values. (In particular, a singleton value 'x' will be seen as the set {x}.) Given two such values v, v', note first that the so-called Jaccard index between them is the number jacc(v, v') = |v ∩ v'| / |v ∪ v'|, which is a 'measure of their similarity'; but this index is not a metric: the Triangle Inequality is not satisfied. However, the Jaccard metric d_Nom(v, v') = 1 − jacc(v, v') = |v Δ v'| / |v ∪ v'| does satisfy that property, and will suit our purposes. Thus defined, d_Nom(v, v') is a 'measure of the dissimilarity' between the sets v and v'. Let τ_Nom be the set of all data assigned to the attributes under the 'Nominal' headers of D, along the sequence of A's queries. Then the above-defined binary function d_Nom extends to a metric on the set of all type-compatible data-tuples from τ_Nom, defined as the 'direct sum' taken over the 'Nominal' headers of D. If τ_Num is the set of all data assigned to the attributes under the 'Numerval' headers along the sequence of queries by A, we also define a 'distance' metric d_Num on the type-compatible data-tuples from τ_Num, in the same manner. Between numerical data x, x' under the 'Numerical' headers, the distance we shall work with is the euclidean metric |x − x'|, normalized as d_eucl(x, x') = |x − x'| / D, where D > 0 is a fixed finite number, bigger than the maximal euclidean distance between the numerical data on the databases and on the answers to A's queries. On the data under the 'Taxoral' headers, we choose as distance function the metric d_wp, defined in Lemma 1 (cf. Appendix), between the nodes of any taxonomy tree. Note that the 'data-wise' distance functions defined above all take values in the real interval [0, 1]. (This is also one reason for our choice of the distance metric on taxonomy trees.) This fact is of importance for comparing the metric ρ defined above with the Hamming metric, cf. Section 6.

An additional role for the oracle O: In Section 5.1 below, we present a procedure for comparing the knowledge of an adversary A at different nodes of the DLTTS that models the 'distributed sequence' of A's queries on a given database D. The comparison can be with respect to any given 'target' dataset T (e.g., a privacy policy P on D). In operational terms, so to say, the oracle mechanism O of the DLTTS keeps the target dataset 'in store'; and as said earlier, a first role for the oracle O of the DLTTS is to keep a watch on the deduction of the target dataset by the adversary A at some node. The additional, second role that we assign now to the oracle O is to publish information on the distance of A's saturated knowledge τ̄(s), at any given node s, to the target dataset T. This distance is calculated wrt the distance ρ defined above, as the minimal distance d̄(t, t') between the tuples t ∈ τ̄(s), t' ∈ T, where d̄ is the sum of the 'column-wise distances' between the data on the tuples.
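The column-wise metrics just defined, together with the distances d̄ and ρ, can be summarized in the following sketch; the encoding of the taxonomy (a child-to-parent map, here for the 'Ailment' taxonomy of Example 1) and the normalizing bound are illustrative assumptions of ours, while the formulas themselves are those given above.

```python
# A sketch of the column-wise metrics and of the set distance rho.
def d_nom(v, w):
    """Jaccard metric on finite sets; singleton values are wrapped into sets."""
    v = v if isinstance(v, (set, frozenset)) else {v}
    w = w if isinstance(w, (set, frozenset)) else {w}
    return len(v ^ w) / len(v | w)

BIG = 1000.0                    # any bound exceeding the largest numerical gap in the data
def d_eucl(x, y):
    """Normalised euclidean metric on single numerical values."""
    return abs(x - y) / BIG

# Child -> parent map for the 'Ailment' taxonomy of Example 1.
parent = {"Heart-Disease": "Ailment", "Cancer": "Ailment", "Viral-Infection": "Ailment",
          "Flu": "Viral-Infection", "CoVid": "Viral-Infection", "Ailment": None}

def depth(x):                   # c_x: number of nodes from the root down to x
    return 1 if parent[x] is None else 1 + depth(parent[x])

def ancestors(x):               # x, parent(x), ..., root
    return [x] if parent[x] is None else [x] + ancestors(parent[x])

def d_wp(x, y):
    """Metric of Lemma 1: 1 - 2*c_xy / (c_x + c_y)."""
    c_xy = depth(next(a for a in ancestors(x) if a in ancestors(y)))
    return 1 - 2 * c_xy / (depth(x) + depth(y))

def d_bar(t, u, metrics):
    """Direct-sum ('column-wise') distance between two type-compatible tuples."""
    return sum(m(a, b) for m, a, b in zip(metrics, t, u))

def rho(S, T, metrics):
    """Distance between two sets of type-compatible tuples: minimum over all pairs."""
    return min(d_bar(t, u, metrics) for t in S for u in T)

# e.g. comparing (Gender, Ailment) sub-tuples, with one metric per column:
print(rho({("M", "Flu")}, {("M", "CoVid"), ("F", "Cancer")}, [d_nom, d_wp]))  # 0.333... (= 1/3)
```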
Before presenting the comparison schema, here is an example to illustrate how the notions developed above operate in practice.

Example 1 bis. We go back to the Hospital-CoVid example seen earlier, more particularly to its public record, recalled as Table 6. 'Gender' and 'Dept.' are the 'Nominal' headers in this record, 'Age' is 'Numerval' and 'Ailment' is 'Taxoral'. We are interested in the second, fourth and fifth tuples on the record, respectively referred to as l_2, l_4, l_5. The 'target set' of (type-compatible) tuples in this example is taken as the (negation of the) privacy policy specified, namely the tuple T = (John, 46, M, #, CoVid). We compute now the distances d̄ between the target T and the three tuples l_2, l_4, l_5. The tuple l_2 is the farthest from the target, while l_5 is the closest. This 'explains' that the adversary can choose the branch on the transition that leads to a state where l_5 is added to his/her knowledge. This is more formally detailed in the procedure presented below.

· Given: the DLTTS associated with a querying sequence by an adversary A on a given database D; a target set of tuples T; and two configurations config_1, config_2 to be compared (e.g., the saturated tags at two nodes s, s' on given runs), formed respectively of the tuples l_1, ..., l_n and l'_1, ..., l'_m.
(i) Compute d_i = ρ(l_i, T), i ∈ 1···n, and d'_j = ρ(l'_j, T), j ∈ 1···m. Define d_min(s, T) = min_i d_i and d_min(s', T) = min_j d'_j.
(ii) Select config_1, and discard config_2, IF the following two conditions hold: ∃ an i, 1 ≤ i ≤ n, such that d_i = d_min(s, T), and d_min(s, T) ≤ d_min(s', T).
(iii) IF the conditions are not satisfied, EXIT.

Given a randomized/probabilistic mechanism M answering the queries on databases, and an ε ≥ 0, recall that the ε-indistinguishability of any two given databases under M, and the notion of ε-DP for M, were both defined in Definition 4 (Section 4), based first on a hypothetical map f from the set of all the databases concerned into some given metric space (X, d_X), and an 'adjacency relation' on databases defined as f_adj(D, D') = d_X(f(D), f(D')), which was subsequently instantiated to f_adj = d_h, where d_h is the Hamming metric between databases. It must be observed here that the Hamming metric is defined only between databases with the same number of columns, and usually only with all data of the same type. In this subsection, our objective is to propose a more general notion of adjacency, based on the distance metric ρ defined above between type-compatible tuples, on databases with data of multiple types. In other words, our D here will be the set of all databases, not necessarily all with the same number of columns, and with data of several possible types, as mentioned in the Introduction. We then define a binary relation f^ρ_adj(D, D') between databases D, D' in the set D by setting f^ρ_adj(D, D') = ρ(D, D'), visualizing D, D' as sets of type-compatible data tuples. Given ε, we can then define the notion of ρ_ε-indistinguishability of two databases D, D' under a (probabilistic) answering mechanism M, as well as the notion of ρ_ε-DP for M, exactly as in Definition 4, by replacing f_adj first with the relation f^ρ_adj, and subsequently with ρ. The notions thus defined are more general than those presented earlier in Section 4 with the choice f_adj = d_h. An example will illustrate this point.

Example 4. We go back to the 'Hospital's public record' of our previous example, with the same notation. For this example, we shall assume that the mechanism M, answering a query for 'ailment information involving men' on that record, returns the tuples l_2, l_4, l_5 with the probability distribution 0, 2/5, 3/5, respectively.
Let us look for the minimum value of ε ≥ 0 for which these three tuples will be ρ_ε-indistinguishable under the mechanism M. The output l_2, with probability 0, will be ρ_ε-distinguishable for any ε ≥ 0. Only the two other outputs l_4, l_5 need to be considered. We first compute the ρ-distance between these two tuples:
    d̄(l_4, l_5) = (1 − 1/20) + 0 + 1 + 0 = 39/20.
The condition for l_4 and l_5 to be ρ_ε-indistinguishable under M is thus:
    (3/5) ≤ e^((39/20)·ε) · (2/5),
i.e., ε ≥ (20/39)·ln(3/2). In other words, for any ε ≥ (20/39)·ln(3/2), the two tuples l_4 and l_5 will be ρ_ε-indistinguishable; and for values of ε with 0 ≤ ε < (20/39)·ln(3/2), these tuples will be ρ_ε-distinguishable. For the ε-indistinguishability of these tuples wrt the Hamming metric d_h, we proceed similarly: the distance d_h(l_4, l_5) is by definition the number of 'records' where these tuples differ, so d_h(l_4, l_5) = 2. So the condition on ε ≥ 0 for their ε-indistinguishability wrt d_h is: (3/5) ≤ e^(2ε) · (2/5), i.e., ε ≥ (1/2)·ln(3/2). In other words, if these two tuples are ρ_ε-indistinguishable wrt ρ under M for some ε, then they will be ε-indistinguishable wrt d_h for the same ε. But the converse is not true, since (1/2)·ln(3/2) < (20/39)·ln(3/2). Said otherwise: M ε-distinguishes more finely with ρ than with d_h.

Remark 4: The statement "M ε-distinguishes more finely with ρ than with d_h" is always true (not just in Example 4), for the following reasons. The records that differ 'at some given position' on two bases D, D' are always at distance 1 for the Hamming metric d_h, by definition, whatever the type of data stored at that position. Now, if the data stored at that position happened to be numerical, the usual euclidean distance between the two data could have been (much) bigger than their Hamming distance 1; precisely to avoid such a situation, our definition of the metric d_eucl on numerical data 'normalized' the euclidean distance, to ensure that their d_eucl-distance will not exceed their Hamming distance. Thus, all the 'record-wise' metrics we have defined above have their values in [0, 1], as mentioned earlier; so, whatever the type of data at corresponding positions on any two bases D, D', the ρ-distance between the records will never exceed their Hamming distance. That suffices to prove the statement above. The Proposition below formulates all this more precisely:

Proposition 2. Let D_m be the set of all databases with the same number m of columns, over a finite set of given data, and M a probabilistic mechanism answering queries on the bases in D_m. Let ρ be the metric defined above and d_h the Hamming metric between the databases in D_m, and suppose given an ε ≥ 0.
- If two databases D, D' ∈ D_m are ρ_ε-indistinguishable under M wrt ρ, then they are also ε-indistinguishable under M wrt d_h.
- If the mechanism M is ρ_ε-DP on the bases in D_m (wrt ρ), then it is also ε-DP (wrt d_h) on these bases.

The idea of 'normalizing' the Hamming metric between numerical databases (with the same number of columns) was already suggested in [5], for the same reasons. When only numerical databases are considered, the metric ρ that we have defined above is the same as the 'normalized Hamming metric' of [5]. Our metric ρ must actually be seen as a generalization of that notion, one that directly handles bases with more general types of data: anonymized data, taxonomies, ...
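A quick numerical check of the two thresholds computed in Example 4 above (output probabilities 2/5 and 3/5, ρ-distance 39/20, Hamming distance 2):

```python
# Numerical check of the two epsilon thresholds of Example 4.
import math

p4, p5   = 2/5, 3/5
rho_dist = 39/20
d_h      = 2

eps_rho = math.log(p5 / p4) / rho_dist   # (20/39) * ln(3/2) ~ 0.2079
eps_ham = math.log(p5 / p4) / d_h        # (1/2)   * ln(3/2) ~ 0.2027
print(eps_rho, eps_ham, eps_ham < eps_rho)
# True: for eps in [eps_ham, eps_rho) the tuples are d_h-indistinguishable
# but still rho_eps-distinguishable, i.e. rho distinguishes more finely.
```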
A starting point for the work presented here was the observation that databases could in general be distributed over several 'worlds', and that querying such bases leads to answers which would also be distributed. Conceivably, to such distributed answers one could assign probability distributions of relevance or pertinence to the query. The probabilistic automata of Segala ([11, 12]) are perhaps among the first logical structures proposed to model such a vision, in particular with outputs. Distributed Transition Systems (DTS) appeared a little later; most of them had as objective the behavioral analysis of the distributed transitions, based on traces or on simulation/bisimulation. Quasi-, pseudo- or hemi-metrics, suitably defined, were essential for the reasonings employed, e.g., as in [3, 4, 6]. Our quest for a metric-based vision for privacy analysis has certainly been influenced by these works, although not for the same objectives. As the developments presented in our work show, our lookout was for a syntax-based metric that can directly handle data of 'mixed types': they can be numbers or literals, but can also be 'anonymized' as intervals or sets; they can also be taxonomically related to each other (in a tree structure). Moreover, the metric we were looking for was intended as a syntactic measure to express how far or how close a querying process on a database gets to a given privacy policy on that base. Part of future work could be to define a 'divergence measure' (as a quasi-metric?) between two given nodes on a DLTTS that models a querying process, in terms of the respective knowledge distributions at the two nodes, independently of any notion of a given target data set.

What we are interested in, for the purposes of our current paper, is a metric between the nodes of a taxonomy tree which, in addition, will suit our semantic considerations. This is the objective of our Lemma below. (A result that seems to be new, to our knowledge.)

Lemma 1. On any taxonomy tree T, the binary function between its nodes defined by d_wp(x, y) = 1 − 2·c_xy/(c_x + c_y) (notation as in the Wu-Palmer measure, recalled further below) is a metric.

Proof: We drop the suffix wp for this proof, and just write d. Clearly d(x, y) = d(y, x); and d(x, y) = 0 if and only if x = y. We only have to prove the Triangle Inequality, i.e. show that d(x, z) ≤ d(x, y) + d(y, z) holds for any three nodes x, y, z on T. A 'configuration' can typically be represented, in its 'most general form', as follows (the diagram is omitted here): a node w, at depth h + 1 below the root, is the deepest common ancestor of x with each of y and z; a node u, reached from w by a path of a arcs, is the deepest common ancestor of y and z; and X, Y, Z stand for the number of arcs on the paths from w to x, from u to y, and from u to z, respectively. (Other possible configurations are all particular instances of this general case, as can be easily checked; see the Comment below.) The characters X, Y, Z, a, h thus all stand for numbers of arcs on the corresponding paths, so that, for the depths of the three nodes x, y, z, and of their deepest common ancestors on the tree T, we get:
    c_x = X + h + 1,  c_y = Y + h + a + 1,  c_z = Z + h + a + 1,
    c_xy = c_xz = h + 1,  c_yz = h + a + 1.
The '+1' in these equalities is because X, Y, Z, a, h stand for numbers of arcs on the paths, whereas the depths are defined as numbers of nodes. Also note that X, Y, Z, a, h must all be integers ≥ 0.
Now, for the Triangle Inequality to hold on the three nodes x, y, z of T, the following inequality should be satisfied:
    2(h + 1)/(X + Y + 2h + a + 2) + 2(h + a + 1)/(Y + Z + 2h + 2a + 2) − 2(h + 1)/(X + Z + 2h + a + 2) ≤ 1.
Eliminating the denominators, which are all strictly positive, we write this out as an inequality between two polynomials eq1, eq2 in X, Y, Z, h, a, which must be satisfied under the constraint that all of them are non-negative integers:
    eq1 : (X + Y + 2h + a + 2) · (Y + Z + 2h + 2a + 2) · (X + Z + 2h + a + 2)
    eq2 : (h + 1) · (Y + Z + 2h + 2a + 2) · (X + Z + 2h + a + 2)
          + (h + a + 1) · (X + Y + 2h + a + 2) · (X + Z + 2h + a + 2)
          − (h + 1) · (X + Y + 2h + a + 2) · (Y + Z + 2h + 2a + 2)
    eq : eq1 − 2·eq2.
To check: eq ≥ 0? The polynomial eq, once expanded (under Maxima), appears as:
    eq : YZ² + XZ² + aZ² + Y²Z + 2XYZ + 4hYZ + 2aYZ + 4YZ + X²Z + 4hXZ + 2aXZ + 4XZ + a²Z
         + XY² + 4hY² + aY² + 4Y² + X²Y + 4hXY + 2aXY + 4XY + 8h²Y + 8ahY + 16hY + a²Y + 8aY + 8Y.
The coefficients are all positive; and our Lemma is proved.

References
[1] Probabilistic relational reasoning for differential privacy.
[2] Deciding Differential Privacy for Programs with Finite Inputs and Outputs.
[3] A Logical Characterization of Differential Privacy via Behavioral Metrics.
[4] The metric linear-time branching-time spectrum on nondeterministic probabilistic processes.
[5] Broadening the Scope of Differential Privacy Using Metrics.
[6] Linear and Branching System Metrics.
[7] Differential privacy.
[8] The Algorithmic Foundations of Differential Privacy.
[9] The Bounded Laplace Mechanism in Differential Privacy.
[10] Modeling and Verification of Randomized Distributed Real-Time Systems.
[11] A compositional trace-based semantics for probabilistic automata.
[12] Probabilistic simulations for probabilistic processes.
[13] Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias.

Comment: The other possible configurations for the three nodes x, y, z on the tree T are all special cases of the one above.

Taxonomies are frequent in machine learning. Data mining and clustering techniques employ reasonings based on measures of similarity, or on metrics, depending on the objective. The Wu-Palmer similarity measure on tree-structured taxonomies is one of those in use; it is defined as follows ([14]): Let T be a given taxonomy tree. For any node x on T, define its depth c_x as the number of nodes from the root to x (both included), along the path from the root to x. For any pair x, y of nodes on T, let c_xy be the depth of the common ancestor of x, y that is farthest from the root. The Wu-Palmer similarity measure between the nodes x, y on T is then defined as WP(x, y) = 2·c_xy/(c_x + c_y).
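The polynomial computation in the proof of Lemma 1 can also be checked mechanically; the sketch below redoes the expansion of eq = eq1 − 2·eq2 (here with sympy, whereas the proof above used Maxima) and verifies that all coefficients are non-negative, whence eq ≥ 0 for all non-negative integer values of X, Y, Z, h, a.

```python
# Mechanical check of the computation in the proof of Lemma 1 (sympy used here
# as a convenience; the paper performed the expansion with Maxima).
import sympy as sp

X, Y, Z, h, a = sp.symbols("X Y Z h a", nonnegative=True)

cx, cy, cz = X + h + 1, Y + h + a + 1, Z + h + a + 1
cxy = cxz = h + 1
cyz = h + a + 1

eq1 = (cx + cy) * (cy + cz) * (cx + cz)
eq2 = (cxy * (cy + cz) * (cx + cz)
       + cyz * (cx + cy) * (cx + cz)
       - cxz * (cx + cy) * (cy + cz))
eq = sp.expand(eq1 - 2 * eq2)

print(eq)                                                            # the expansion above
print(all(c >= 0 for c in sp.Poly(eq, X, Y, Z, h, a).coeffs()))      # True
```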