title: Data Integration through outcome adaptive LASSO and a collaborative propensity score approach
authors: Bahamyirou, Asma; Schnitzer, Mireille E.
date: 2021-03-28

Administrative data, or non-probability sample data, are increasingly being used to obtain official statistics due to their many benefits over survey methods. In particular, they are less costly, provide a larger sample size, and are not reliant on the response rate. However, it is difficult to obtain an unbiased estimate of the population mean from such data due to the absence of design weights. Several estimation approaches have been proposed recently using an auxiliary probability sample which provides representative covariate information on the target population. However, when this covariate information is high-dimensional, variable selection is not a straightforward task even for a subject matter expert. In the context of efficient and doubly robust approaches for estimating a population mean, we develop two data-adaptive methods for variable selection using the outcome adaptive LASSO and a collaborative propensity score, respectively. Simulation studies are performed in order to verify the performance of the proposed methods versus competing methods. Finally, we present an analysis of the impact of Covid-19 on Canadians.

Administrative data, or non-probability sample data, are increasingly being used in practice to obtain official statistics due to their many benefits over survey methods (lower cost, larger sample size, no reliance on a response rate). However, it is difficult to obtain unbiased estimates of population parameters from such data due to the absence of design weights. For example, the sample mean of an outcome in a non-probability sample does not necessarily represent the population mean of the outcome. Several approaches have been proposed recently using an auxiliary probability sample which provides representative covariate information on the target population. For example, one can estimate the mean outcome in the probability sample by using an outcome-regression based approach. Unfortunately, this approach relies on the correct specification of a parametric outcome model. Valliant & Dever (2011) used inverse probability weighting to adjust a volunteer web survey to make it representative of a larger population. Elliott & Valliant (2017) proposed an approach to model the indicator of inclusion in the non-probability sample by adapting Bayes' rule. Rafei et al. (2020) extended the Bayes' rule approach using Bayesian Additive Regression Trees (BART). Chen (2016) proposed to calibrate non-probability samples using probability samples with the least absolute shrinkage and selection operator (LASSO). In the same context, Beaumont & Chu (2020) proposed a tree-based approach for estimating the propensity score, defined as the probability that a unit belongs to the non-probability sample. Wisniowski et al. (2020) developed a Bayesian approach for integrating probability and non-probability samples for the same goal. Doubly robust semiparametric methods such as the augmented inverse propensity weighted (AIPW) estimator (Robins, Rotnitzky & Zhao, 1994) and targeted minimum loss-based estimation (TMLE; van der Laan & Rubin, 2006; van der Laan & Rose, 2011) have been proposed to reduce the potential bias of the outcome-regression based approach.
The term doubly robust refers to the fact that these methods require estimation of both the propensity score model and the outcome expectation conditional on covariates, of which only one needs to be correctly specified to allow for consistent estimation of the parameter of interest. Chen, Li & Wu (2019) developed doubly robust inference with non-probability survey samples by adapting the Newton-Raphson procedure to this setting. Reviews and discussions of related approaches can be found in Beaumont (2020) and Rao (2020). Chen, Li & Wu (2019) considered the situation where the auxiliary variables are given, i.e. where the set of variables to include in the propensity score model is known. However, in practice or in high-dimensional data, variable selection for the propensity score may be required, and it is not a straightforward task even for a subject matter expert. In order to obtain unbiased estimation of the population mean, it is important to control for the variables that influence selection into the non-probability sample and are also causes of the outcome (VanderWeele & Shpitser, 2011). Studies have shown that including instrumental variables (those that affect selection into the non-probability sample but not the outcome) in the propensity score model leads to inflation of the variance of the estimator relative to estimators that exclude such variables (Schisterman et al., 2009; Schneeweiss et al., 2009). However, including variables that are only related to the outcome in the propensity score model will increase the precision of the estimator without affecting bias (Brookhart et al., 2006; Shortreed & Ertefaie, 2017). Using the Chen, Li & Wu (2019) estimator for doubly robust inference with a non-probability sample, Yang, Kim & Song (2020) proposed a two-step approach to variable selection for the propensity score using the smoothly clipped absolute deviation penalty (SCAD; Fan & Li, 2001). Briefly, they used SCAD to select variables for the outcome model and the propensity score model separately; the union of the two sets is then taken as the final set of selected variables. To the best of our knowledge, their paper is the first to investigate a variable selection method in this context. In causal inference, multiple variable selection methods have been proposed for the propensity score model. We consider two in particular. Shortreed & Ertefaie (2017) developed the outcome adaptive LASSO (OALASSO). This approach uses the adaptive LASSO (Zou, 2006) but with weights in the penalty term that are the inverses of the estimated covariate coefficients from a regression of the outcome on the treatment and the covariates. Benkeser, Cai & van der Laan (2019) proposed a collaborative TMLE (CTMLE) that is robust to extreme values of propensity scores in causal inference. Rather than estimating the true propensity score, this method fits a model for the probability of receiving the treatment (or, in our context, being in the non-probability sample) conditional on the estimated conditional mean outcome. Because the treatment model is conditional on a single-dimensional covariate, this approach avoids the challenges related to variable and model selection in the propensity score model. In addition, it relies only on sub-parametric rates of convergence of the outcome model predictions. In this paper, we first propose a variable selection approach for high-dimensional covariate settings by extending the outcome adaptive LASSO (Shortreed & Ertefaie, 2017).
The gain of the present proposal relative to the existing SCAD estimator (Yang, Kim & Song, 2020) is that the OALASSO can accommodate both the outcome and the selection mechanism in a one-step procedure. Secondly, we adapt the Benkeser, Cai & van der Laan (2019) collaborative propensity score to our setting. Finally, we perform simulation studies in order to verify the performance of our two proposed estimators and compare them with the existing SCAD estimator for the estimation of the population mean. The remainder of the article is organized as follows. In Section 2, we define our setting and describe our proposed estimators. In Section 3, we present the results of the simulation study. We present an analysis of the impact of Covid-19 on Canadians in Section 4. A discussion is provided in Section 5.

In this section, we present the two proposed estimators in our setting: 1) an extension of the OALASSO for the propensity score (Shortreed & Ertefaie, 2017) and 2) the application of Benkeser, Cai & van der Laan's (2020) alternative propensity score. Let U = {1, 2, ..., N} be indices representing members of the target population. Define {X, Y} as the auxiliary and response variables, respectively, where X = (1, X^(1), X^(2), ..., X^(p)) is a vector of covariates (plus an intercept term) for an arbitrary individual. The finite target population data consist of {(X_i, Y_i), i in U}. Let the parameter of interest be the finite population mean mu = (1/N) sum_{i in U} Y_i. Let A be the indices of the non-probability sample and let B be those of the probability sample. As illustrated in Figure 1, A and B are possibly overlapping subsets of U. Let d_i = 1/pi_i be the design weight for unit i, with pi_i = P(i in B) known. The data corresponding to B consist of observations {(X_i, d_i) : i in B} with sample size n_B. The data corresponding to the non-probability sample A consist of observations {(X_i, Y_i) : i in A} with sample size n_A. Let Delta_i = I(i in A) be the indicator that the i-th subject belongs to the non-probability sample, and let p_i = P(Delta_i = 1 | X_i) be the propensity score (the probability of the unit belonging to A). In order to identify the target parameter, we assume the following conditions in the finite population: (1) ignorability, such that the selection indicator Delta and the response variable Y are independent given the set of covariates X (i.e. Delta is independent of Y given X); and (2) positivity, such that p_i > epsilon > 0 for all i. Note that assumption (1) implies that E(Y|X) = E(Y|X, Delta = 1), which means that the conditional expectation of the outcome can be estimated using only the non-probability sample A. Assumption (2) guarantees that all units have a non-zero probability of belonging to the non-probability sample. Let us assume for now that the propensity score follows a logistic regression model, p_i = p(X_i, beta) = exp(X_i^T beta) / {1 + exp(X_i^T beta)}. The true parameter value beta_0 is defined as the argument of the minimum (arg min) of the risk function given by the negative log-likelihood of Delta given X, with summation taken over the target population. One can rewrite this, using the observed membership in A, as

l(beta) = - [ sum_{i in A} log{ p_i / (1 - p_i) } + sum_{i in U} log(1 - p_i) ].    (1)

Equation (1) cannot be solved directly since X has not been observed for all units in the finite population. However, using the design weights of the probability sample B, beta_0 can be estimated by minimizing the pseudo risk function,

beta_hat = arg min_beta L(beta),  where  L(beta) = - [ sum_{i in A} log{ p_i / (1 - p_i) } + sum_{i in B} d_i log(1 - p_i) ].    (2)

Let X_B be the matrix of auxiliary information (i.e. the design matrix) of the sample B and L(beta) the pseudo risk function defined above.
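To make this estimation step concrete, the following is a minimal sketch (not the authors' code) of fitting the propensity model by minimizing the design-weighted pseudo risk in (2) with a general-purpose optimizer. The arrays XA, XB and dB are hypothetical inputs holding the covariates of sample A and the covariates and design weights of sample B; in practice one would use the Newton-Raphson procedure of Chen, Li & Wu (2019) described next, a quasi-Newton optimizer is used here only to keep the sketch short.

```python
# Sketch: estimate beta by minimizing the design-weighted pseudo risk
#   L(beta) = -[ sum_{i in A} log{p_i/(1-p_i)} + sum_{i in B} d_i log(1-p_i) ],
# with p_i = expit(X_i' beta), as written in equation (2).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, log_expit  # log_expit(x) = log(expit(x)), SciPy >= 1.6

def pseudo_risk(beta, XA, XB, dB):
    """Negative pseudo log-likelihood; XA and XB include an intercept column."""
    lin_A = XA @ beta                     # linear predictors on sample A
    lin_B = XB @ beta                     # linear predictors on sample B
    # log{p/(1-p)} = X'beta  and  log(1-p) = log_expit(-X'beta)
    return -(np.sum(lin_A) + np.sum(dB * log_expit(-lin_B)))

def pseudo_score(beta, XA, XB, dB):
    """Gradient of the pseudo risk with respect to beta."""
    pB = expit(XB @ beta)
    return -(XA.sum(axis=0) - XB.T @ (dB * pB))

def fit_propensity(XA, XB, dB):
    beta0 = np.zeros(XA.shape[1])
    res = minimize(pseudo_risk, beta0, args=(XA, XB, dB),
                   jac=pseudo_score, method="BFGS")
    return res.x

# Hypothetical usage with simulated placeholder inputs:
rng = np.random.default_rng(1)
XB = np.column_stack([np.ones(500), rng.normal(size=(500, 3))])  # sample B covariates
dB = np.full(500, 20.0)                                          # design weights of B
XA = np.column_stack([np.ones(800), rng.normal(size=(800, 3))])  # sample A covariates
beta_hat = fit_propensity(XA, XB, dB)
p_hat_A = expit(XA @ beta_hat)            # estimated propensity scores on sample A
```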
Define U(beta) = dL(beta)/dbeta as the score (gradient) and H(beta) = d^2 L(beta)/dbeta dbeta^T as the Hessian of the pseudo risk. The parameter beta in equation (2) can be obtained with the Newton-Raphson iterative procedure proposed in Chen, Li & Wu (2019), by setting beta^(m+1) = beta^(m) - {H(beta^(m))}^(-1) U(beta^(m)) at each iteration m until convergence.

In a high-dimensional setting, suppose that an investigator would like to choose relevant auxiliary variables for the propensity score that could help to reduce the selection bias and standard error when estimating the finite population mean. In the causal inference context of estimating the average treatment effect, Shortreed & Ertefaie (2017) proposed the OALASSO to select among the X^(j)'s in the propensity score model. They penalized the aforementioned risk function with the adaptive LASSO penalty (Zou, 2006), where the coefficient-specific weights are the inverses of estimated outcome regression coefficients representing the association between the outcome, Y, and the related covariates. In our setting, let the true coefficient values of a regression of Y on X be denoted alpha_j. The parameters beta = (beta_0, beta_1, ..., beta_p), corresponding to the covariate coefficients in the propensity score, can be estimated by minimizing the pseudo risk function in (2) penalized by the adaptive LASSO penalty:

beta_hat = arg min_beta { L(beta) + lambda sum_{j=1}^{p} omega_hat_j |beta_j| },    (3)

where omega_hat_j = 1/|alpha_hat_j|^gamma for some gamma > 0 and alpha_hat_j is a root-n-consistent estimator of alpha_j. Consider a situation where variable selection is not needed (lambda = 0). Chen, Li & Wu (2019) proposed to estimate beta by solving the Newton-Raphson iterative procedure. One can rewrite the gradient in matrix form; the Newton-Raphson update step, equation (4), is then equivalent to the estimator of a weighted least squares problem with Y* as the new working response and S_i = -d_i p_i (1 - p_i) as the weight associated with unit i. Thus, in our context as well, we can select the important variables in the propensity score by solving a weighted least squares problem penalized with an adaptive LASSO penalty. We now describe how our proposal can be easily implemented in a two-stage procedure. In the first stage, we construct the pseudo-outcome by using the Newton-Raphson estimate of beta defined in equation (2) and the probability sample B. In the second stage, using sample B, we solve a weighted penalized least squares problem with the pseudo-outcome as the response variable. The selected variables correspond to the non-zero coefficients of the adaptive LASSO regression. The proposed algorithm for estimating the parameters beta in equation (3) with a given value of lambda is as follows:

Algorithm 1: OALASSO for propensity score estimation
1: Use the Newton-Raphson algorithm for the unpenalized logistic regression in Chen, Li & Wu (2019) to estimate beta_hat in (2).
2: Obtain the estimated propensity score p_hat_i = p(X_i, beta_hat) for each unit.
3: Construct an estimate of the new working response Y* by plugging in the estimated beta_hat.
4: Select the useful variables by following steps (a)-(d) below:
(b) Run a parametric regression of Y on X using sample A. Obtain alpha_hat_j, the estimated coefficient of X^(j), j = 1, ..., p.
(c) Define the adaptive LASSO weights omega_hat_j = 1/|alpha_hat_j|^gamma, j = 1, ..., p, for gamma > 0.
(d) Using sample B, run a LASSO regression of Y* on X with omega_hat_j as the penalty factor associated with X^(j) and the given lambda.
The non-zero coefficient estimates from (d) are the selected variables. The final estimate of the propensity score is obtained by plugging the resulting coefficient estimates into the logistic model. For the adaptive LASSO tuning parameters, we choose gamma = 1 (the non-negative garrote; Yuan & Lin, 2007) and lambda is selected using V-fold cross-validation in the sample B.
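The variable selection step of Algorithm 1 can be sketched as follows (a minimal illustration, not the authors' implementation). Because the exact working-response construction is given by equation (4), the working response Y* and positive working weights for sample B are treated here as hypothetical inputs already produced by steps 1-3 (using the magnitude of S_i as the weight is our own simplification), and the per-coefficient penalty factors are handled through the usual column-rescaling trick.

```python
# Sketch of steps (b)-(d) of Algorithm 1. Hypothetical inputs:
#   XA, YA : covariates (no intercept column) and outcomes on sample A
#   XB     : covariates (no intercept column) on sample B
#   Ystar  : working response Y* on sample B (from step 3)
#   wB     : positive working weights on sample B, e.g. d_i * p_i * (1 - p_i)
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def oalasso_select(XA, YA, XB, Ystar, wB, lam, gamma=1.0):
    # (b) outcome regression of Y on X in sample A -> alpha_hat_j
    alpha_hat = LinearRegression().fit(XA, YA).coef_
    # (c) adaptive LASSO weights omega_j = 1/|alpha_hat_j|^gamma
    #     (small constant added only as a numerical safeguard)
    omega = 1.0 / (np.abs(alpha_hat) ** gamma + 1e-12)
    # (d) LASSO of Y* on X with per-coefficient penalty factors omega_j,
    #     implemented by rescaling each column X_j by 1/omega_j, fitting a
    #     plain weighted LASSO, and rescaling the coefficients back.
    XB_scaled = XB / omega
    fit = Lasso(alpha=lam).fit(XB_scaled, Ystar, sample_weight=wB)
    beta = fit.coef_ / omega
    selected = np.flatnonzero(beta != 0)   # indices of the selected covariates
    return beta, selected
```

Here the scikit-learn penalty parameter alpha plays the role of lambda up to a scaling constant; as described next, the value of lambda would be chosen by V-fold cross-validation with folds that respect the sampling design of B.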
The sampling design needs to be taken into account when creating the V folds, in the same way that random groups are formed for variance estimation (Wolter, 2007). For cluster or stratified sampling, for example, all elements in a cluster or stratum should be placed in the same fold. Yang, Kim & Song (2020) proposed a two-step approach to variable selection using SCAD. In the first step, they use SCAD to select relevant variables for the propensity score and the outcome model, respectively. Denote by C_p (respectively C_m) the selected set of relevant variables for the propensity score (respectively the outcome model). The final set of variables used for estimation is C = C_p union C_m.

Horvitz & Thompson (1952) proposed the idea of weighting observed values by inverse probabilities of selection in the context of sampling methods. The same idea is used to estimate the population mean in the missing outcome setting. Recall that p(X) = P(Delta = 1|X) is the propensity score. In order to estimate the population mean, the units in the non-probability sample A are assigned the weights w_i = 1/p_hat_i, where p_hat_i = p(X_i, beta_hat) is the estimated propensity score obtained using Algorithm 1. The inverse probability weighted (IPW) estimator of the population mean is given by

mu_hat_IPW = (1/N) sum_{i in A} w_i Y_i.

For the estimation of the variance, we use the variance estimator proposed by Chen, Li & Wu (2019), in which V_p(.) denotes the design-based variance of the total under the probability sampling design for B.

Doubly robust semi-parametric methods such as AIPW (Scharfstein, Rotnitzky & Robins, 1999) or targeted minimum loss-based estimation (TMLE; van der Laan & Rubin, 2006; van der Laan & Rose, 2011) have been proposed to potentially reduce the error resulting from misspecified outcome regressions while also avoiding total dependence on the propensity score model specification. We denote m(X) = E(Y|X) and let m_hat(X) be an estimate of m(X). Under the current setting, the AIPW estimator proposed in Chen, Li & Wu (2019) for mu is

mu_hat_AIPW = (1/N) [ sum_{i in A} {Y_i - m_hat(X_i)} / p_hat_i + sum_{i in B} d_i m_hat(X_i) ],

and its variance estimator again involves V_p(.), the design-based variance of the total under the probability sampling design for B.

We consider a simulation setting similar to that of Chen, Li & Wu (2019). However, we add 40 pure binary noise covariates (unrelated to the selection mechanism or the outcome) to our set of covariates. We generate a finite population {(X_i, Y_i), i = 1, ..., N} with N = 10,000, where Y is the outcome variable and X = {X^(1), ..., X^(p)}, p = 44, represents the auxiliary variables. Define Z_1 ~ Bernoulli(0.5), Z_2 ~ Uniform(0, 2), Z_3 ~ Exponential(1) and Z_4 ~ chi-squared(4). The observed outcome Y is Gaussian with a mean theta given by a linear combination of the covariates (with intercept 2) and error distributed N(0, 1). From the finite population, we select a probability sample B of size n_B ~ 500 under Poisson sampling with inclusion probability pi proportional to {0.25 + X^(2) + 0.03Y}.
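As a reference point for the simulations that follow, the two point estimators described above can be written in a few lines. This is a minimal sketch under our own choice of normalizing by N (normalizing by the estimated population size is a common alternative); the inputs YA, pA, mA, mB, dB and N are hypothetical arrays and scalars.

```python
# Sketch of the IPW and AIPW point estimators of the population mean.
#   YA     : outcomes on the non-probability sample A
#   pA     : estimated propensity scores p_hat_i on A
#   mA, mB : outcome-model predictions m_hat(X_i) on samples A and B
#   dB     : design weights d_i of the probability sample B
#   N      : finite population size (sum(dB) could be used as an estimate)
import numpy as np

def ipw_mean(YA, pA, N):
    # mu_IPW = (1/N) * sum_{i in A} Y_i / p_hat_i
    return np.sum(YA / pA) / N

def aipw_mean(YA, pA, mA, mB, dB, N):
    # mu_AIPW = (1/N) * [ sum_{i in A} (Y_i - m_hat(X_i)) / p_hat_i
    #                     + sum_{i in B} d_i * m_hat(X_i) ]
    return (np.sum((YA - mA) / pA) + np.sum(dB * mB)) / N
```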
We also consider three scenarios for selecting a non-probability sample A with the inclusion indicator Delta ~ Bernoulli(p):

• Scenario 1 considers a situation in which the confounders X^(1) and X^(2) (common causes of inclusion and the outcome) have a weaker relationship with inclusion (Delta = 1) than with the outcome: P(Delta = 1|X) = expit{-2 + 0.3X^(1) + 0.3X^(2) - X^(5) - X^(6)}.
• Scenario 2 considers a situation in which both confounders X^(1) and X^(2) have a weaker relationship with the outcome than with inclusion.
• Scenario 3 involves a stronger association between the instrumental variables X^(5) and X^(6) and inclusion: P(Delta = 1|X) = expit{-2 + X^(1) + X^(2) - 1.8X^(5) - 1.8X^(6)}.

To evaluate the performance of our method in a nonlinear setting (Scenario 4), we simulate a fourth setting following exactly Kang & Schafer (2007). In this scenario, we generate independent Z^(i) ~ N(0, 1), i = 1, ..., 4. The observed outcome is generated as Y = 210 + 27.4Z^(1) + 13.7Z^(2) + 13.7Z^(3) + 13.7Z^(4) + e, where e ~ N(0, 1), and the true propensity model is P(Delta = 1|Z) = expit{-Z^(1) + 0.5Z^(2) - 0.25Z^(3) - 0.1Z^(4)}. However, the analyst observes the variables X^(1) = exp{Z^(1)/2}, X^(2) = Z^(2)/[1 + exp{Z^(1)}] + 10, X^(3) = {Z^(1)Z^(3)/25 + 0.6}^3, and X^(4) = {Z^(2) + Z^(4) + 20}^2 rather than the Z^(j)'s.

Under each scenario, we use a correctly specified outcome regression model for the estimation of m(X). For the estimation of the propensity score, we perform logistic regression with all 44 auxiliary variables as main terms, LASSO, and OALASSO, respectively. For the Benkeser method, we also use logistic regression for the propensity score. Because the fourth scenario involves model selection but not variable selection, we only compare logistic regression with the Benkeser method for the propensity score, and we fit a misspecified model and the highly adaptive LASSO (Benkeser & van der Laan, 2016) for the outcome model. The performance of each estimator is evaluated through the percent bias (%B), the mean squared error (MSE) and the coverage rate (COV), computed as

%B = 100 (1/R) sum_{r=1}^{R} (mu_hat_r - mu)/mu,   MSE = (1/R) sum_{r=1}^{R} (mu_hat_r - mu)^2,   COV = 100 (1/R) sum_{r=1}^{R} I(mu in CI_r),

respectively, where mu_hat_r is the estimator computed from the r-th simulated sample, CI_r = (mu_hat_r - 1.96 sqrt(v_r), mu_hat_r + 1.96 sqrt(v_r)) is the confidence interval with v_r the estimated variance, computed using the method proposed by Chen, Li & Wu (2019), for the r-th simulated sample, and R = 1000 is the total number of simulation runs.

Tables 2, 3 and 4 contain the results for the first three scenarios. In all three, the IPW estimators performed the worst overall in terms of percent bias. Similar to Chen, Li & Wu (2019), the coverage rates of IPW were suboptimal in all scenarios and the standard error was substantially underestimated. The AIPW estimator, implemented with logistic regression, LASSO and OALASSO for the propensity score, performed very well in all scenarios, with unbiased estimates and coverage rates close to the nominal 95%. In comparison to IPW and AIPW with logistic regression, incorporating the LASSO or the OALASSO did not improve the bias but did lower the variance and allowed for better standard error estimation. The Benkeser method slightly increased the bias of AIPW and had underestimated standard errors, leading to lower coverage. The Yang method had the highest bias compared to the other implementations of AIPW and greatly overestimated the standard error in all three scenarios. For the first three scenarios, Figure 2 displays the percent selection of each covariate (1, ..., 44), defined as the percentage of estimated coefficients that are non-zero across the 1000 generated datasets.
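A minimal sketch of how these performance measures, together with the percent selection displayed in Figure 2, can be computed over R simulation replicates; the input arrays (point estimates, variance estimates and a coefficient matrix across replicates) are hypothetical.

```python
# Sketch: performance measures over R simulation replicates.
import numpy as np

def percent_bias(mu_hat, mu_true):
    # %B = 100 * mean( (mu_hat_r - mu) / mu )
    return 100.0 * np.mean((mu_hat - mu_true) / mu_true)

def mse(mu_hat, mu_true):
    # MSE = mean( (mu_hat_r - mu)^2 )
    return np.mean((mu_hat - mu_true) ** 2)

def coverage(mu_hat, var_hat, mu_true):
    # COV = 100 * proportion of 95% Wald intervals containing mu
    half = 1.96 * np.sqrt(var_hat)
    return 100.0 * np.mean((mu_hat - half <= mu_true) & (mu_true <= mu_hat + half))

def percent_selection(coef_matrix):
    # coef_matrix: R x p array of estimated propensity coefficients; a
    # covariate counts as selected in a run when its coefficient is non-zero.
    return 100.0 * np.mean(coef_matrix != 0, axis=0)
```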
Overall, the LASSO tended to select the true predictors of inclusion: X^(1), X^(2), X^(5) and X^(6). For example, in Scenario 2, the confounders (X^(1), X^(2)) were selected in around 94% of simulations and the instruments (X^(5), X^(6)) in around 90%. However, the percent selection of the pure causes of the outcome (X^(3), X^(4)) was around 23%. On the other hand, when OALASSO was used for the propensity score, the percent selection of the confounders (X^(1), X^(2)) was around 98% and that of the instruments (X^(5), X^(6)) was 64%, while the percent selection of the pure causes of the outcome (X^(3), X^(4)) increased to 83%. When using Yang's proposed selection method, X^(1), X^(2) and X^(3) were selected 100 percent of the time.

Table 5 contains the results for the Kang and Schafer (2007) setting. AIPW with HAL for the outcome model and either the collaborative propensity score (AIPW-Benkeser) or a propensity score from logistic regression with main terms (AIPW-Logistic (2)) achieved lower percent bias and MSE compared to IPW. However, when the outcome model was misspecified, AIPW with logistic regression (AIPW-Logistic (1)) performed like IPW. In this scenario, the true outcome expectation and propensity score functionals were nonlinear, making typical parametric models misspecified. Consistent estimation of the outcome expectation can be obtained by using flexible models. The collaborative propensity score was able to reduce the dimension of the space and collect the necessary information through the estimated conditional mean outcome, giving unbiased estimation of the population mean with a coverage rate that was close to nominal.

Tables 2, 3 and 4 report, for Scenarios 1, 2 and 3 respectively, estimates taken over 1000 generated datasets: %B (percent bias), MSE (mean squared error), MC SE (Monte Carlo standard error), SE (mean standard error) and COV (percent coverage). The estimators compared are IPW-Logistic, IPW-LASSO and IPW-OALASSO (IPW with logistic, LASSO and OALASSO regression for the propensity score, respectively); AIPW-Logistic, AIPW-LASSO and AIPW-OALASSO (AIPW with logistic, LASSO and OALASSO regression for the propensity score, respectively); AIPW-Benkeser (AIPW with the collaborative propensity score); and AIPW-Yang (Yang's proposed AIPW).
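For completeness, here is a minimal sketch of the collaborative propensity score compared above (AIPW-Benkeser): the selection model is fit on the single covariate m_hat(X) rather than on X. Plugging this one-dimensional model into the same design-weighted pseudo risk as before is our own reading of how the approach transfers to this setting; mA, mB and dB are hypothetical inputs holding the outcome-model predictions on samples A and B and the design weights of B.

```python
# Sketch of the collaborative propensity score: a logistic selection model
# with intercept and the estimated conditional mean outcome m_hat(X) as the
# only covariate, fit by minimizing the design-weighted pseudo risk.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, log_expit

def collaborative_ps(mA, mB, dB):
    ZA = np.column_stack([np.ones_like(mA), mA])   # intercept + m_hat(X) on A
    ZB = np.column_stack([np.ones_like(mB), mB])   # intercept + m_hat(X) on B
    def risk(beta):
        return -(np.sum(ZA @ beta) + np.sum(dB * log_expit(-(ZB @ beta))))
    beta = minimize(risk, np.zeros(2), method="BFGS").x
    return expit(ZA @ beta)                        # collaborative p_hat on sample A
```

The resulting scores would then replace p_hat_i in the AIPW estimator sketched earlier.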
In this section, we apply our proposed methods to a survey conducted by Statistics Canada to measure the impacts of COVID-19 on Canadians. The main topic was the level of trust Canadians have in others (elected officials, health authorities, other people, businesses and organizations) in the context of the COVID-19 pandemic. Data were collected from May 26 to June 8, 2020. The dataset was completely non-probabilistic, with a total of 35,916 individuals responding, and a wide range of basic demographic information was collected from participants along with the main topic variables. The dataset is referred to as Trust in Others (TIO). We consider the Labour Force Survey (LFS) as a reference dataset, which consists of n_B = 89,102 subjects with survey weights. This dataset does not contain measurements of the study outcome variables of interest; however, it contains a rich set of auxiliary information in common with the TIO. Summaries (unadjusted sample means for TIO and design-weighted means for LFS) of the common covariates are listed in Tables 8 and 9 in the appendix. It can be seen that the distributions of the common covariates differ between the two samples. Therefore, using TIO alone to obtain any estimate about the Canadian population may be subject to selection bias. We apply the proposed methods and the sample mean to estimate the population mean of two response variables. Both of these variables were assessed as ordinal: Y_1, "trust in decisions on reopening, provincial/territorial government" (1: cannot be trusted at all, 2, 3: neutral, 4, 5: can be trusted a lot); and Y_2, "when a COVID-19 vaccine becomes available, how likely is it that you will choose to get it?" (1: very likely, 2: somewhat likely, 3: somewhat unlikely, 4: very unlikely, 7: don't know). Y_1 was converted to a binary outcome equal to 1 for a value less than or equal to 3 (neutral) and 0 otherwise. The same type of conversion was applied to Y_2, set to 1 for a value less than or equal to 2 (somewhat likely) and 0 otherwise. We used logistic regression, the outcome adaptive group LASSO (Wang & Leng, 2008; Hastie et al., 2008; as we have categorical covariates), and the Benkeser method for the propensity score. We also fit the group LASSO for the outcome regression when implementing AIPW. Each categorical covariate in Tables 8 and 9 was converted to binary dummy variables. Using 5-fold cross-validation, the group LASSO variable selection procedure retained all available covariates in the propensity score model. Table 6 below presents the point estimates, the standard errors and the 95% Wald-type confidence intervals. For estimating the standard error, we used the variance estimator for IPW and the asymptotic variance for AIPW proposed in Chen, Li & Wu (2019). For both outcomes, we found significant differences in estimates between the naive sample mean and our proposed methods, for both AIPW with the OA group LASSO and the Benkeser method.
For example, the adjusted estimates for Y_1 suggested that, on average, at most 40% (using either the outcome adaptive group LASSO or the Benkeser method) of the Canadian population have no trust at all or are neutral with regard to the decisions on reopening taken by their provincial/territorial government, compared to 43% if we had used the naive mean. The adjusted estimates for Y_2 suggested that at most 80% using the Benkeser method (or 82% using the outcome adaptive group LASSO) of the Canadian population are very or somewhat likely to get the vaccine, compared to 83% if we had used the naive mean. On the other hand, there were no significant differences for the OA group LASSO and group LASSO compared to the naive estimator. The package IntegrativeFPM (Yang, 2019) threw errors when applied to these data, which is why it is not included.

In this paper, we proposed an approach to variable selection for propensity score estimation through penalization when combining a non-probability sample with a reference probability sample. We also illustrated the application of the collaborative propensity score method of Benkeser, Cai & van der Laan (2020) with AIPW in this context. Through the simulations, we studied the performance of the different estimators and compared them with the method proposed by Yang. We showed that the LASSO and the OALASSO can reduce the standard error and mean squared error in a high-dimensional setting. The collaborative propensity score produced good results, but the related confidence intervals were suboptimal since the true propensity score is not estimated there. Overall, in our simulations, we observed that doubly robust estimators generally outperformed the IPW estimators. Doubly robust estimators incorporate the outcome expectation in a way that can help to reduce the bias when the propensity score model is not correctly specified. Our observations point to the importance of using doubly robust methodologies in this context. In our application, we found statistically significant differences between the results of our proposed estimators and the corresponding naive estimator for both outcomes. This analysis used the variance estimator proposed by Chen, Li & Wu (2019), which, for IPW estimators, relies on the correct specification of the propensity score model. For future research, it would be of interest to develop a variance estimator that is robust to propensity score misspecification and that can be applied to the Benkeser method. Other possible future directions include post-selection variance estimation in this setting.
Doubly robust estimation in missing data and causal inference models
Les enquêtes probabilistes sont-elles vouées à disparaître pour la production de statistiques officielles?
Statistical data integration through classification trees
The highly adaptive LASSO estimator
A nonparametric super-efficient estimator of the average treatment effect
Variable selection for propensity score models
Random Forests
Using LASSO to Calibrate Non-probability Samples using Probability Samples
Doubly robust inference with Nonprobability survey samples
A generalization of sampling without replacement from a finite universe
Constructing Inverse Probability Weights for Marginal Structural Models
Inference for nonprobability samples
Variable selection via nonconcave penalized likelihood and its oracle properties
An application of collaborative targeted maximum likelihood estimation in causal inference and genomics
The Elements of Statistical Learning
Robust inference on the average treatment effect using the outcome highly adaptive lasso
Demystifying Double Robustness: A Comparison of Alternative Strategies for Estimating a Population Mean from Incomplete Data
Weight Trimming and Propensity Score Weighting
Big data for finite population inference: Applying quasi-random approaches to naturalistic driving data using Bayesian Additive Regression Trees
On making valid inferences by integrating data from surveys and other sources
Estimation of regression coefficients when some regressors are not always observed
Estimating causal effects of treatments in randomized and nonrandomized studies
Adjusting for nonignorable dropout using semiparametric nonresponse models (with discussion and rejoinder)
Overadjustment bias and unnecessary adjustment in epidemiologic studies
High-dimensional propensity score adjustment in studies of treatment effects using health care claims data
Outcome-adaptive lasso: Variable selection for causal inference
Regression shrinkage and selection via the lasso
Estimating propensity adjustments for volunteer web surveys
Collaborative double robust targeted maximum likelihood estimation
Targeted learning: causal inference for observational and experimental data
Targeted maximum likelihood learning
A new criterion for confounder selection
A Note on Adaptive Group Lasso
Integrating probability and nonprobability samples for survey inference
Introduction to variance estimation
Doubly Robust Inference when Combining Probability and Non-probability Samples with High-dimensional Data
On the non-negative garrotte estimator
The adaptive LASSO and its Oracle Properties

Table 7: Distributions of common covariates from the two samples.