Meta-analysis of field experiments shows no change in racial discrimination in hiring over time

Lincoln Quillian (a,b,1), Devah Pager (c,d), Ole Hexel (a,e), and Arnfinn H. Midtbøen (f)

(a) Department of Sociology, Northwestern University, Evanston, IL 60208; (b) Institute for Policy Research, Northwestern University, Evanston, IL 60208; (c) Department of Sociology, Harvard University, Cambridge, MA 02138; (d) Kennedy School of Government, Harvard University, Cambridge, MA 02138; (e) Sciences Po, Observatoire Sociologique du Changement (OSC), CNRS, 75007 Paris, France; and (f) Institute for Social Research, N-0208 Oslo, Norway

Edited by Douglas S. Massey, Princeton University, Princeton, NJ, and approved August 8, 2017 (received for review April 14, 2017)

This study investigates change over time in the level of hiring discrimination in US labor markets. We perform a meta-analysis of every available field experiment of hiring discrimination against African Americans or Latinos (n = 28). Together, these studies represent 55,842 applications submitted for 26,326 positions. We focus on trends since 1989 (n = 24 studies), when field experiments became more common and improved methodologically. Since 1989, whites receive on average 36% more callbacks than African Americans, and 24% more callbacks than Latinos. We observe no change in the level of hiring discrimination against African Americans over the past 25 years, although we find modest evidence of a decline in discrimination against Latinos. Accounting for applicant education, applicant gender, study method, occupational groups, and local labor market conditions does little to alter this result. Contrary to claims of declining discrimination in American society, our estimates suggest that levels of discrimination remain largely unchanged, at least at the point of hire.

discrimination | labor markets | field experiments | race | ethnicity

The American racial landscape has changed in fundamental ways since the Civil Rights Movement of the 1960s. During that time, sweeping legal and social reforms reduced the barriers facing African Americans in many important domains (1, 2). A rising African American middle class and a growing acceptance of the principles of inclusion led some to conclude that racial discrimination had declined to the point that it was no longer a primary determinant of life chances for African Americans and Latinos (2, 3).

Supporting this perspective, a variety of indicators pointed toward a reduction of discriminatory treatment. Surveys indicated that whites increasingly endorsed the principle of equal treatment regardless of race (4). Rates of high school graduation for whites and African Americans converged substantially, and the black–white test score gap declined (5, 6). Large companies increasingly recognized diversity as a goal and revamped their hiring to curtail practices that disadvantaged minority applicants (7). With the election of the country's first African-American president in 2008, many concluded that the country had finally moved beyond its troubled racial past (8).

Despite clear signs of racial progress, however, on several key dimensions racial inequality persists and has even increased. For example, racial gaps in unemployment have shown little change since 1980 (9, 10), and the black–white gap in labor force participation rates among young men widened during this time (11).
Recently, the Black Lives Matter movement shone a spotlight on the ongoing struggles with racism and discrimination experienced by people of color in interactions with law enforcement. The election of Donald J. Trump as the 45th President of the United States with the support of antiimmigrant and white nationalist groups highlighted the persistence of racial resentment (12).

In light of persistent racial gaps in key social and economic indicators, some scholars have challenged prevailing assumptions about waning discrimination. Indeed, while expressions of explicit prejudice have declined precipitously over time, measures of stereotypes and implicit bias appear to have changed little over the past few decades (13–15). In this view, far from disappearing, racial bias has taken on new forms, becoming more contingent, subtle, and covert (15–18).

What can we reliably say about trends in discrimination over time? Has the role of race appreciably diminished across the board, or are there important domains in which little racial progress has been achieved? Answers to these questions are important for understanding the sources of persistent racial inequality.

In this study, we examine trends in racial and ethnic discrimination in American labor markets based on a meta-analysis of every available field experiment of hiring discrimination (with fieldwork dates through December 2015). Meta-analysis is a body of formal methods to synthesize data from a population of existing studies. Field experiments of hiring discrimination are experimental studies in which fictionalized matched candidates from different racial or ethnic groups apply for jobs. These studies include both resume audits, in which fictionalized resumes with distinct racial names are submitted online or by mail (e.g., ref. 19), and in-person audits, in which racially dissimilar but otherwise matched pairs of trained testers apply for jobs (e.g., ref. 20).

The field experimental method is a design with high causal (internal) validity because it benefits from aspects of experimental design. The experimenter carefully manages the application process, which provides control over many potential confounding variables. The exact basis of causal inference across the two main forms of field experiment, resume and in-person audits, is somewhat different. In the typical resume audit, clues indicating race (such as a racially identifiable name) are randomly assigned to otherwise similar resumes, allowing for treatment and control groups to be equated through randomization. In in-person audits, matched pairs of trained testers who differ on the basis of race but are otherwise similar apply for jobs; the between-race contrast is grounded in matching pairs of applicants to make them as similar as possible in all employment-relevant characteristics except race.

Significance

Many scholars have argued that discrimination in American society has decreased over time, while others point to persisting race and ethnic gaps and subtle forms of prejudice. The question has remained unsettled due to the indirect methods often used to assess levels of discrimination. We assess trends in hiring discrimination against African Americans and Latinos over time by analyzing callback rates from all available field experiments of hiring, capitalizing on the direct measure of discrimination and strong causal validity of these studies. We find no change in the levels of discrimination against African Americans since 1989, although we do find some indication of declining discrimination against Latinos. The results document a striking persistence of racial discrimination in US labor markets.

Author contributions: L.Q. designed research; L.Q., O.H., and A.H.M. performed research; L.Q. and O.H. analyzed data; and L.Q., D.P., O.H., and A.H.M. wrote the paper.
The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

Freely available online through the PNAS open access option.

See Commentary on page 10815.

(1) To whom correspondence should be addressed. Email: l-quillian@northwestern.edu.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1706255114/-/DCSupplemental.

Both resume and in-person audit methods provide a strong basis from which to draw conclusions about hiring discrimination, particularly relative to the nonexperimental methods widely used in the literature, including by all prior studies of discrimination trends over time (ref. 21 and SI Appendix, section 1).

We use meta-analytic techniques to investigate change in hiring discrimination over time based on all existing US field experimental studies of labor market discrimination. Our procedure follows three basic stages: First, we identified all existing studies, published or unpublished, that use a field experimental method and that provide contrasts in hiring-related outcomes between equally qualified candidates from different racial or ethnic groups. Second, we coded key characteristics of the studies into a database for our analysis based on a coding rubric. This produced 24 studies containing 30 estimates of discrimination against African Americans and Latinos since 1989, together representing 54,318 applications submitted for 25,517 positions. Finally, we performed a random-effects meta-regression to identify trends over time.

We assess discrimination for each study using the ratio of the proportion of applications that received "callbacks"—or invitations to interview—by white applicants relative to African-American or Latino applicants. We calculated the proportions based on counts of the number of callbacks received by each group (white/African American/Latino) within each study. This discrimination ratio measured at the study level is the outcome in our meta-regression. Other methods of calculating hiring disparities between groups produced substantively similar results (SI Appendix, section 8).

We analyze the relationship of discrimination ratios to the years in which the data were gathered to provide an estimate of the trend in discrimination. Specifically, we regress the log of the discrimination ratio on year of survey, with controls for key characteristics of the studies, using meta-regression.
Meta-regression is a procedure similar to standard regression, except covariates are measured at the level of the study rather than the level of the individual, and the outcome is an effect from the study of interest (in our case, the outcome is the estimate of discrimination against African Americans or Latinos). Materials and Methods discusses further methodological and modeling details.

Results

To explore trends over time, we estimate a series of meta-regressions. We take the natural log of the discrimination ratio (our outcome variable) to account for skew. In the simplest meta-regression models, the only covariate is the time trend. In later models, we include a more extensive set of predictors to control for other factors that might confound the time trend. To capture sources of variability not covered by the covariates, we use a random effects specification (22). Random effects incorporate a variance component capturing variation in outcomes across studies that are due to unobserved study-level factors (Materials and Methods).

Our core analysis focuses on studies that conducted their fieldwork from 1989 to 2015, allowing us to observe trends in discrimination over the past 25 years. For some supplementary analyses, we also add four field experiments conducted before 1989, although these studies use less standardized methodologies. On average, white applicants receive 36% more callbacks than equally qualified African Americans (95% confidence interval of 25–47% more), based on random-effects meta-analysis of data since 1989, representing a substantial degree of direct discrimination. White applicants receive on average 24% more callbacks than Latinos (95% confidence interval of 15–33% more). For more detailed results, see SI Appendix, section 2 and Figs. S1 and S2.

Do we find evidence of change over time in rates of hiring discrimination? With respect to African Americans, the answer is no. Fig. 1 plots estimates of discrimination by year, with linear trends of best fit and 95% confidence regions (detailed estimates are in SI Appendix, section 3 and Table S3; in Fig. 1, we exponentiate predictions to present predicted values as discrimination ratios rather than less interpretable log discrimination ratios). The solid line captures the trend since 1990. The dashed line extends this time trend back to 1972, adding four resume audits conducted from 1972 to 1980. The size of each symbol is proportional to the weight it is given in the meta-analysis. The line of best fit for studies since 1990 is close to flat, sloping slightly upward, suggesting no change in the rate of discrimination over the past 25 years. The longer time series includes studies that use a more heterogeneous set of procedures (Materials and Methods), but even here we see no clear change over time in the level of hiring discrimination against African Americans.

Is there sufficient power based on 21 studies to conclude that discrimination against African Americans did not decline? The confidence interval of the annual change provides a way to answer this question. The 95% confidence interval of the slope for 1989–2015 is −0.007 to 0.015. [This is the confidence interval of the slope of "year" (our time trend variable) with the log discrimination ratio outcome. The regression is shown in SI Appendix, Table S3.] The lower end of this interval indicates a decline in the discrimination ratio of 0.7% per year.
If we take this number as the smallest slope consistent with the data based on the confidence interval, this suggests only a slight decline in discrimination each year. We conclude that this evidence rules out all but a slow decline in discrimination—with the most likely estimate being the point estimate, which indicates no decline in discrimination at all.

Fig. 2 presents the trend for Latinos (as with Fig. 1, model predictions have been exponentiated to allow interpretation as discrimination ratios rather than log ratios). Here, we see the line slopes downward, indicating a possible decline in discrimination, although this trend is outside of conventional levels of significance (P = 0.099). The point estimate suggests a decline from whites receiving 30% more callbacks than Latinos in 1990 to 15% more callbacks in 2010 (1.30 vs. 1.15). Because of the small number of Latino field experiments (n = 9), there is high uncertainty in characterizing this trend. (Using the difference in proportions or the odds ratio as the outcome, rather than the discrimination ratio, results in downward slopes in discrimination against Latinos over time that are statistically significant at the P < 0.05 level; see SI Appendix, section 8 and Table S9. However, sensitivity checks that modified the outcome sample counts slightly result in nonsignificant year coefficients for the difference in proportions or the odds ratio; see SI Appendix, section 4 and Table S5.)

Is it possible that key aspects of study design changed over time, influencing our estimates of changes in discrimination? To consider this question, we estimate a meta-regression model of discrimination rates as a function of a time trend plus other study characteristics. We discuss only models for African Americans, because the number of studies with Latinos (n = 9) is too small to produce reasonably precise estimates in a meta-regression model with multiple covariates.

Fig. 3 graphs estimates of change over time when the outcome discrimination ratio is modified and when controls are added. Full coefficients of the models are shown in SI Appendix, Table S4, with additional discussion in SI Appendix, section 4. The coefficients can be interpreted as the one-year percentage change in the discrimination ratio. [Because the outcome is logged, and exp(b) ≈ 1 + b for b < 0.1, coefficients with values less than about 0.1 can be multiplied by 100 to closely approximate percentage changes with a one-unit change in x.] The first coefficient graphed shows the annual percentage change in discrimination from 1990 to 2015, corresponding to the solid line in Fig. 1. The second shows the annual change for the longer time period 1972–2015, corresponding to the dashed line in Fig. 1.
The next few models alter the dependent variable to see if this changes our results (using our base sample of 1990–2015). In one modification, we use "job offer" in place of callback as the outcome for studies for which the job offer outcome is available (n = 3), retaining callbacks as the outcome for studies in which the measure of job offer is not available. This makes the outcome variable less uniform across studies, although closer to the outcome of greatest substantive interest, getting a job. With this modification, the trend line for African Americans slants more downward, but is still close to zero (−0.008) and statistically nonsignificant. A second modification eliminates applicant profiles that included either a fictitious criminal background (n = 7) or a disability (n = 1). This limits the applicant profiles to those with more mainstream job backgrounds and credentials. The modified results show the trend line slanting slightly more upward, providing less evidence to support a downward trend than the results including a more heterogeneous set of applicant characteristics. A third modification uses only resume audit studies, discarding in-person audits. This results in an almost perfectly flat line (−0.002).

The next estimates are based on models that add controls for applicant attributes, region and area unemployment rates, and occupational categories to the baseline time-trend model. The "Applicant Attributes" model introduces covariates representing applicant characteristics (e.g., gender, education) and study design (resume or in-person audits). The "UE & Regions" model adds controls for the unemployment rate of the local metropolitan area and dummy variables for region. The "Occupations" model includes controls for occupational categories of blue collar, office-focused, and restaurant occupations. Finally, we present the coefficient from a trimmed model in which only the predictors with the largest t ratios from prior models are included. In each case, we see coefficients for the time trend that are close to zero—ranging from an estimated increase of 0.1% per year (0.001) to an increase of 1.3% per year (0.013)—suggesting little change in the level of discrimination facing African Americans over time. Notably, then, we find evidence of stability, not change, in hiring discrimination against African Americans.
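To make the exp(b) ≈ 1 + b rule of thumb concrete, the short check below converts logged-outcome coefficients into one-year percentage changes. The coefficient values are those quoted in the text (0.001 and 0.013, plus the 1989–2015 slope confidence bounds of −0.007 and 0.015); the snippet itself is purely illustrative and is not part of the study's code.

```python
import math

# Annual-change coefficients for the log discrimination ratio quoted in the text.
for b in (0.001, 0.013, -0.007, 0.015):
    exact = math.exp(b) - 1  # exact one-year proportional change implied by slope b
    print(f"b = {b:+.3f}  ->  exact change = {exact:+.4%}, approximation b = {b:+.2%}")
```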
Few of the measured covariates in our analysis (SI Appendix, Table S4) demonstrate a clear relationship to patterns of discrimination. This likely is in part due to the relatively small overall sample of studies (n = 21 for African Americans since 1990), which limits our ability to detect statistical significance. However, even looking at the point estimates, we find no large differences in magnitude across categories. This result is consistent with the findings within individual audit studies that suggest relative stability in measured discrimination across job types, applicant gender, and skill levels (e.g., refs. 19 and 20). (We also note that a meta-analysis designed to look specifically at the effects of many of these covariates would use within-study variability—such as contrasting male and female auditors in the same study—which could provide more power to discern effects. Within-study variability cannot be applied to understand change over time since studies are generally conducted over the span of just a few months.)

Fig. 1. No reduction in hiring discrimination facing African Americans over time.

As a final check on the influence of covariates, we tested for time trends among our study-level and individual-level characteristics, finding no evidence of systematic change (SI Appendix, section 9 and Table S10). This suggests that covariates are unlikely to influence the observed time trend for discrimination among either the African-American or Latino samples.

In relation to our estimate of changes in discrimination over time, the inclusion of study-level and applicant-level characteristics has little impact. In all models, we see little evidence of a reduction in hiring discrimination against African Americans over time.

Fig. 2. Modest evidence of a reduction in hiring discrimination facing Latinos over time.

Fig. 3. Alternative estimates of change in hiring discrimination against African Americans.

A potential concern of any meta-analysis is publication bias. In the present case, publication bias may entail studies that show no discrimination being less likely to be published and, thus, included in our study. We sought to address this issue by seeking out and including all nonpublished field experiments available (n = 11). Their inclusion did little to affect our estimates.
Finally, in SI Appendix, section 5 and Table S7, we show that studies in which racial discrimination was the focus of the analysis (and for which there may be more pressure to demonstrate a positive effect) show no more discrimination than studies in which other characteristics were the main focus (with race included as a secondary or incidental covariate), further reducing concerns over publication bias for our results.

Discussion

Contrary to widespread assumptions about the declining significance of race, the magnitude and consistency of discrimination we observe over time is a sobering counterpoint. We note that our results do not address the possibility that hiring discrimination may have substantially dropped in the 1960s or early 1970s, during the civil rights era when many forms of direct discrimination were outlawed, as some evidence suggests (1). Further, we note that our results pertain only to discrimination at the point of hire, not at later points in the employment relationship such as in wage setting or termination decisions. Social psychological theories would predict hiring to be most vulnerable to the influence of racial bias, given that objective information is limited or unreliable (23–25). Likewise, from an accountability standpoint, discrimination is less easily detected, and therefore less costly to employers, at the point of hire (26). It may be the case, then, that more meaningful reductions in discrimination have taken place at other points in the employment relationship not measured here. What our results point to, however, is that at the initial point of entry—hiring decisions—African Americans remain substantially disadvantaged relative to equally qualified whites, and we see little indication of progress over time.

These findings lead us to temper our optimism regarding racial progress in the United States. At one time it was assumed that the gradual fade-out of prejudiced beliefs, through cohort replacement and cultural change, would drive a steady reduction in discriminatory treatment (27). At least in the case of hiring discrimination against African Americans, this expectation does not appear borne out.

We find some evidence of a decline in discrimination against Latinos since 1989. The small number of audit studies including Latinos limits our ability to include controls and the precision of our estimates—the decline is marginally significant statistically (P = 0.099). More evidence is needed to establish the trend in hiring discrimination against Latinos with greater certainty.

Our results point toward the need for strong enforcement of antidiscrimination legislation and provide a rationale for continuing compensatory policies like affirmative action to improve equality of opportunity. Discrimination continues, and with regard to African Americans we find little evidence that it is disappearing or even gradually diminishing. Instead, we find the persistence of discrimination at a distressingly uniform rate.

Materials and Methods

Our procedure follows three basic stages: first, to identify all existing field experiments of hiring discrimination; second, to develop a coding rubric and to code studies to produce a database of their results; and third, to perform a statistical meta-analysis to draw conclusions from the combined results. We discuss each of these steps in turn.

Identifying Relevant Studies.
We aimed to include in our meta-analysis all existing studies, published or unpublished, that use a field experimental method and that provide contrasts in hiring-related outcomes between different race and ethnic groups in the United States. This includes both in-person audit studies and resume studies (or correspondence studies). We also required that contrasts of hiring outcomes between race or ethnic groups were made for groups that were on average equivalent in their labor market-relevant characteristics, since otherwise discrimination estimates are confounded with the difference in nonracial characteristics.

We used three methods to identify relevant field experiments: searches in bibliographic databases, citation searches, and an email request to corresponding authors of field experiments of race-ethnic discrimination in labor markets and other experts on field experiments and discrimination.

We began with a bibliographic search. Our search covered the following bibliographic databases and working paper repositories: Thomson's Web of Science (Social Science Citation Index), ProQuest Sociological Abstracts, ProQuest Dissertations and Theses, Lexis Nexis, Google Scholar, and NBER working papers. We searched for some combination of "field experiment" or "audit study" or "correspondence study" and sometimes included the term "discrimination," with some variation depending on the search functions of the database. We also searched two French-language indexes, Cairn and Persée, and two international sources, IZA discussion papers, a German working paper archive, and ILO International Migration Papers.

Our second technique for identifying relevant studies relied on citation search. Working from the initial set of studies located through bibliographic search, we examined the bibliographies of all review articles and eligible field studies to find additional field experiments of hiring discrimination.

The last technique was an email request to authors of existing field experiments of discrimination. From our list of audit studies identified by bibliographic and citation search, we compiled a list of email addresses of authors of existing field experiments of discrimination. To this we added the addresses of authors of literature review articles on field experiments. Our email request asked for citations or copies of field discrimination studies published, unpublished, or ongoing. We also asked that authors refer us to any other researchers who may have recent or ongoing field experiments. The email requests were conducted in two phases. In the initial wave, 131 apparently valid email addresses were contacted. We received 56 responses. We also sent out a second wave of 68 emails, which went to additional authors identified from the initial wave of surveys and to some corrected email addresses. We received 19 responses to this second wave of email surveys.

Overall, our search located 34 studies that were US-based field experiments of hiring and included contrasts between white and nonwhite applicant profiles that were on average equivalent in their labor market-relevant characteristics (e.g., education, experience level in the labor market). Six studies were excluded for various reasons, as explained in SI Appendix, section 6. Our remaining 28 studies yielded 24 estimates of discrimination against African Americans and 9 against Latinos relative to whites.

Coding and Selection of Analysis Period (1989–2015).
We coded key characteristics of the studies into a database for our analysis. Coding was based on a coding rubric, which listed each potentially relevant characteristic of the research and included coding instructions. To develop the rubric, we initially read several studies and, based on this, developed an initial coding rubric of factors we thought might influence measured rates of discrimination. The initial rubric was reviewed and updated by all authors of this study for completeness. It was subsequently refined as coding progressed. Each study was coded independently by two raters, with disagreement resolved by the first author. See SI Appendix, section 7 for more discussion of coding procedures. A list of coded characteristics for the 1989–2015 studies is shown in SI Appendix, Tables S1 and S2.

Studies have fieldwork periods ranging from 1972 to 2015 for African Americans and 1989 to 2015 for Latinos. For most analyses in this paper, we focus on the period 1989–2015. We focus on this period because the data are sparse before this period (only four studies before 1989) and because our reading of the early studies indicates key methodological differences among these early studies that may affect their results. Resume audits typically signal race by using race-typed names on resumes, but the pre-1989 studies either indicated race directly on the resume [McIntyre et al. (28) put "Race: BLACK" on the minority resumes and nothing about race on the "white" resumes] or attached photos to resumes (a procedure used by Newman; ref. 29). Excluding the early studies leaves us with 21 estimates of discrimination against African Americans and nine against Latinos from 24 studies (six studies include estimates of discrimination against both African Americans and Latinos).

The Meta-Analysis Model.

A meta-analysis aggregates information from across studies to produce an estimate of an effect of interest (30). In this study, our basic measure of discrimination is the discrimination ratio. This is the ratio of the percentage of callbacks for interviews received by white applicants to the percentage of callbacks or interviews received by African Americans or Latinos. Formally, if cw is the number of callbacks received by whites, cm is the number of callbacks received by African Americans or Latinos, nw is the number of applications submitted by white applicants, and nm is the number of applications submitted by African American or Latino applicants, then the discrimination ratio is (cw/nw)/(cm/nm). Ratios above 1 indicate whites received more positive responses than African Americans or Latinos, with the amount above 1 multiplied by 100 indicating the percentage higher callbacks for whites relative to the minority group. Because audit studies equate groups on their nonracial characteristics either through matching and assignment of characteristics (in-person audits) or through random assignment (most resume audits), no further within-study controls are required.
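A minimal sketch of this calculation in Python, using invented counts rather than numbers from any of the underlying studies:

```python
def discrimination_ratio(cw, nw, cm, nm):
    """Discrimination ratio: white callback proportion over minority callback proportion."""
    return (cw / nw) / (cm / nm)

# Hypothetical study: 1,000 applications per group, 310 white vs. 230 minority callbacks.
ratio = discrimination_ratio(cw=310, nw=1000, cm=230, nm=1000)
print(round(ratio, 2))  # 1.35, i.e., whites received roughly 35% more callbacks
```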
SI Appendix, section 8 discusses potential alternative measures of discrimination using the difference in proportions and the odds ratio, and presents alternative results using these measures. Our basic result—no decline in discrimination against African Americans over time—holds using both of the alternative measures, whereas evidence of a decline in discrimination for Latinos appears somewhat stronger with the difference in proportions or the odds ratio.

The goal of a meta-analysis is to combine information across studies. This requires measuring the information each study contains about discrimination against a group. The information each study provides is inversely proportional to the square of the SE of the discrimination ratio. We calculate the SE of the ratio from counts reported in each study, accounting for audit pairs in the design when possible. In cases where information on paired outcomes is available from the study (counts of pairs in which both the white and the nonwhite tester receive a callback, in which only the white tester does, in which only the nonwhite tester does, and in which neither does), we calculated SEs of discrimination ratios accounting for the pairing (see SI Appendix, section 9 for details and formulas). For studies that are not paired between whites and nonwhites, or where paired outcomes are not reported, we use formulas for the SE for unpaired groups. This formula will slightly overestimate the SE of the effect for studies that are paired but that we treat as unpaired due to lack of information about outcomes at the pair level, underweighting these studies a bit in computing the overall effect and slightly inflating the overall cross-study SE.

Of course, field experiments vary in their characteristics, such as the geographic area they cover, the exact job sectors covered, and details of their methodology. To account for this variability in understanding the time trend, we use two procedures. First, we include controls, discussed further below, for many study characteristics. Second, to capture sources of variability not covered by the covariates, we use a random effects specification (22). Random effects incorporate a variance component capturing variation in outcomes across studies that are due to unobserved study-level factors. Random effects are recommended whenever there is reason to believe that the effect in question is likely to vary as a function of design features of the study, rather than representing a single underlying effect that is constant over the whole population. This is surely the case in our analysis, as we expect that the level of racial discrimination may depend on the year of the study, the situation the study considers (e.g., the occupational categories), the skill level of the applicants, and so on. The random effect increases the SEs of estimates to correctly account for variability among studies in drawing inferences about the overall trend.

More formally, random-effects meta-analysis allows the true effect of race on the callback rate in the situation estimated by each study, θi, to vary between studies by assuming that the effects have a normal distribution around a mean effect, θ. If yi is the discrimination ratio in the ith study, then the meta-analysis model is as follows:

ln(yi) = θ + ui + ei,  where ui ∼ N(0, τ²) and ei ∼ N(0, σi²).

Here, τ² is the between-study variance, estimated as part of the meta-analysis model, while σi² is the variance of the log response ratio in the ith study, estimated from study counts as described above. Following standard practice in the meta-analysis literature, we log the response ratio to reduce the asymmetry of the ratio.
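The sketch below illustrates this random-effects setup with invented study inputs. For brevity it estimates τ² with the DerSimonian-Laird moment formula and pools the log ratios by inverse-variance weighting; the paper's own models (described next) are estimated by restricted maximum likelihood, so this is only a simplified stand-in.

```python
import numpy as np

def random_effects_pool(log_ratios, ses):
    """Pool study-level log discrimination ratios under a random-effects model.

    Uses the DerSimonian-Laird moment estimator of the between-study variance tau^2;
    the paper itself estimates its models by restricted maximum likelihood."""
    y = np.asarray(log_ratios, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2          # within-study variances sigma_i^2
    w = 1.0 / v                                    # inverse-variance (fixed-effect) weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)                # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # between-study variance estimate
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    theta = np.sum(w_re * y) / np.sum(w_re)        # pooled mean log discrimination ratio
    se_theta = np.sqrt(1.0 / np.sum(w_re))
    return theta, se_theta, tau2

# Invented log ratios and standard errors for four studies.
theta, se, tau2 = random_effects_pool([0.25, 0.35, 0.10, 0.45], [0.10, 0.08, 0.12, 0.15])
print(round(float(np.exp(theta)), 2))              # pooled discrimination ratio, about 1.33
```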
Meta-regression allows the rate of discrimination to be a function of a vector of k characteristics of the studies and effects, x, plus (in the random effects specification) residual study-level heterogeneity (between-study variance not explained by the covariates). The model assumes the study-level heterogeneity follows a normal distribution around the linear predictor:

ln(yi) = xiβ + ui + ei,  where ui ∼ N(0, τ²) and ei ∼ N(0, σi²),

where β is a k × 1 vector of coefficients (including a constant), and xi is a 1 × k vector of covariate values in study i (including a 1 for a constant). Estimation is by restricted maximum likelihood. For details, see SI Appendix, section 9.

To explore trends over time, we include covariates for the year of fieldwork of the study. In the simplest models, the only covariate is this time trend. In later models, we include a more extensive set of predictors to control for other factors that might confound the time trend. These additional controls include resume audit vs. field audit as the study method, gender and education level of the fictitious applicants, occupations tested, unemployment rates at the field sites used for testing, criminal background of some fictitious applicants, and region of the country. For discussions of why these controls were selected, see SI Appendix, section 4 (see SI Appendix, Tables S1 and S2 for descriptive statistics on the controls; for a discussion of trends in covariates, see SI Appendix, section 10).
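Continuing the same illustration, the sketch below fits the time-trend meta-regression by weighted least squares with random-effects weights 1/(σi² + τ²), using a plug-in τ² from the previous step. This two-step shortcut only approximates the joint REML estimation described above, and the study inputs remain invented; additional study-level controls would enter as extra columns of the design matrix X.

```python
import numpy as np

def meta_regression_trend(log_ratios, ses, years, tau2):
    """Weighted least-squares fit of ln(y_i) = b0 + b1 * year_i with weights 1/(sigma_i^2 + tau^2)."""
    y = np.asarray(log_ratios, dtype=float)
    w = 1.0 / (np.asarray(ses, dtype=float) ** 2 + tau2)
    X = np.column_stack([np.ones_like(y), np.asarray(years, dtype=float)])
    sw = np.sqrt(w)                                # scaling rows makes OLS on scaled data equal WLS
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta                                    # [intercept, annual change in the log ratio]

# Invented studies: log ratios, their SEs, and fieldwork years; tau2 from the pooling step.
beta = meta_regression_trend([0.25, 0.35, 0.10, 0.45], [0.10, 0.08, 0.12, 0.15],
                             [1990, 1999, 2005, 2012], tau2=0.005)
print(round(float(beta[1]), 4))                    # a slope near zero would indicate no time trend
```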
ACKNOWLEDGMENTS. We thank Anthony Heath, Fenella Fleischmann, Matthew Salganik, Frank Dobbin, András Tilcsik, Donald Green, David Neumark, Hedwig Lee, two anonymous PNAS reviewers, and the editor for comments; Larry Hedges for methodological advice; and Jim Cheng Chen and Joshua Aaron Klingenstein for excellent research assistance. We have received financial support for this project from the Russell Sage Foundation and the Institute for Policy Research at Northwestern University.

1. Donohue J-J, Heckman J (1991) Continuous versus episodic change: The impact of civil rights policy on the economic status of blacks. J Econ Lit 29:1603–1643.
2. Wilson W-J (1978) The Declining Significance of Race: Blacks and Changing American Institutions (Univ of Chicago Press, Chicago).
3. Heckman J (1998) Detecting discrimination. J Econ Perspect 12:101–116.
4. Schuman H, Steeh C, Bobo L, Krysan M, eds (1998) Racial Attitudes in America: Trends and Interpretations (Harvard Univ Press, Cambridge, MA), Rev ed.
5. Jencks C, Phillips M (2011) The Black-White Test Score Gap (Brookings Institution Press, Washington, DC), p 542.
6. Reardon S-F, Fahle E (2017) Education. State of the union: The poverty and inequality report (Stanford Center on Poverty and Inequality, Palo Alto, CA), special issue, Pathways Magazine.
7. Dobbin F (2009) Inventing Equal Opportunity (Princeton Univ Press, Princeton, NJ), p 321.
8. Tesler M, Sears D-O (2010) Obama's Race: The 2008 Election and the Dream of a Post-Racial America (Univ of Chicago Press, Chicago).
9. Austin A-A (2013) The unfinished march: An overview (Economic Policy Institute). Available at www.epi.org/publication/unfinished-march-overview/. Accessed May 25, 2016.
10. Cancio A-S, Evans D, Maume D-J (1996) Reconsidering the declining significance of race: Racial differences in early career wages. Am Sociol Rev 61:541–556.
11. Holzer H-J, Offner P, Sorensen E (2005) Declining employment among young black less-educated men: The role of incarceration and child support. J Policy Anal Manage 24:329–350.
12. Hajnal Z, Abrajano M (2016) Trump's all too familiar strategy and its future in the GOP. Forum 14:295–309.
13. Devine P-G, Elliot A-J (1995) Are racial stereotypes really fading? The Princeton trilogy revisited. Pers Soc Psychol Bull 21:1139–1150.
14. Bobo L-D, Charles C-Z, Krysan M, Simmons A-D (2012) The real record on racial attitudes. Social Trends in American Life: Findings from the General Social Survey Since 1972, ed Marsden P-V (Princeton Univ Press, Princeton), pp 38–83.
15. Dovidio J-F, Gaertner S-L (2010) Intergroup bias. Handbook of Social Psychology, eds Fiske S-T, Gilbert D-T, Lindzey G, Jongsma A-E (Wiley, Hoboken, NJ), 5th Ed.
16. Kinder D-R, Sears D-O (1981) Prejudice and politics: Symbolic racism versus racial threats to the good life. J Pers Soc Psychol 40:414–431.
17. McConahay J-B (1983) Modern racism and modern discrimination: The effects of race, racial attitudes, and context on simulated hiring decisions. Pers Soc Psychol Bull 9:551–558.
18. Bonilla-Silva E (2006) Racism Without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America (Rowman & Littlefield, Lanham, MD), 2nd Ed.
19. Bertrand M, Mullainathan S (2004) Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. Am Econ Rev 94:991–1013.
20. Pager D, Western B, Bonikowski B (2009) Discrimination in a low-wage labor market: A field experiment. Am Sociol Rev 74:777–799.
21. National Research Council (2004) Measuring Racial Discrimination, eds Blank RM, Dabady M, Citro CF (National Academies Press, Washington, DC).
22. Raudenbush S-W (2009) Analyzing effect sizes: Random effects coding. The Handbook of Research Synthesis and Meta-Analysis, eds Cooper HM, Hedges LV, Valentine JC (Russell Sage Foundation, New York), 2nd Ed, pp 295–316.
23. Oettinger G-S (1996) Statistical discrimination and the early career evolution of the black-white wage gap. J Labor Econ 14:52–78.
24. Fiske S (1998) Stereotyping, prejudice, and discrimination. The Handbook of Social Psychology, eds Gilbert D, Fiske S, Lindzey G (Wiley, New York), 4th Ed, pp 357–411.
25. Altonji J-G, Pierret C-R (2001) Employer learning and statistical discrimination. Q J Econ 116:313–350.
26. Wessel D (September 11, 2003) Fear of bias suits may be affecting hiring decisions. Wall Street Journal, section A, p 2.
27. Firebaugh G, Davis K-E (1988) Trends in antiblack prejudice, 1972–1984: Region and cohort effects. Am J Sociol 94:251–272.
28. McIntyre S, Moberg D-J, Posner B-Z (1980) Preferential treatment in preselection decisions according to sex and race. Acad Manage J 23:738–749.
29. Newman J-M (1978) Discrimination in recruitment: An empirical analysis. Ind Labor Relat Rev 32:15–23.
30. Borenstein M, Hedges L-V, Higgins J-P-T, Rothstein H-R (2009) Introduction to Meta-Analysis (John Wiley & Sons, Chichester, UK), p 421.