Reverse-Bayes methods for evidence assessment and research synthesis

Leonhard Held, Robert Matthews, Manuela Ott, Samuel Pawel

2021-02-26

Abstract. It is now widely accepted that the standard inferential toolkit used by the scientific research community -- null-hypothesis significance testing (NHST) -- is not fit for purpose. Yet despite the threat posed to the scientific enterprise, there is no agreement concerning alternative approaches for evidence assessment. This lack of consensus reflects long-standing issues concerning Bayesian methods, the principal alternative to NHST. We report on recent work that builds on an approach to inference put forward over 70 years ago to address the well-known "Problem of Priors" in Bayesian analysis, by reversing the conventional prior-likelihood-posterior ("forward") use of Bayes's Theorem. Such Reverse-Bayes analysis allows priors to be deduced from the likelihood by requiring that the posterior achieve a specified level of credibility. We summarise the technical underpinning of this approach, and show how it opens up new approaches to common inferential challenges, such as assessing the credibility of scientific findings, setting them in appropriate context, estimating the probability of successful replications, and extracting more insight from NHST while reducing the risk of misinterpretation. We argue that Reverse-Bayes methods have a key role to play in making Bayesian methods more accessible and attractive for evidence assessment and research synthesis. As a running example we consider a recently published meta-analysis from several randomized controlled clinical trials investigating the association between corticosteroids and mortality in hospitalized patients with COVID-19.

1 Introduction: the origin of Reverse-Bayes methods

"We can make judgments of initial probabilities and infer final ones, or we can equally make judgments of final ones and infer initial ones by Bayes's theorem in reverse." Good (1983, p. 29)

There is now a common consensus that the most widely-used methods of statistical inference have led to a crisis in both the interpretation of research findings and their replication (e.g. Gelman and Loken, 2014; Wasserstein and Lazar, 2016). At the same time, there is a lack of consensus on how to address the challenge (Matthews, 2017), as highlighted by the plethora of alternative techniques to null-hypothesis significance testing now being put forward (see e.g. Wasserstein et al., 2019, and references therein). Especially striking is the relative dearth of alternatives based on Bayesian concepts. Given their intuitive inferential basis and output (see e.g. Wagenmakers et al., 2008; McElreath, 2018, or some other textbook), these would seem obvious candidates to supplant the prevailing frequentist methodology. However, it is well-known that the adoption of Bayesian methods continues to be hampered by several factors, such as the belief that advanced computational tools are required to make Bayesian statistics practical (e.g. Green et al., 2015). The most persistent of these is that the full benefit of Bayesian methods demands specification of a prior level of belief, even in the absence of any appropriate insight. This "Problem of Priors" has cast a shadow over Bayesian methods since their emergence over 250 years ago (see e.g.
McGrayne, 2011), and has led to a variety of approaches, such as prior elicitation, prior sensitivity analysis, and objective Bayesian methodology; all have their supporters and critics. One of the least well-known was suggested over 70 years ago (Good, 1950) by one of the best-known proponents of Bayesian methods during the 20th century, I.J. Good. It involves reversing the conventional direction of Bayes's Theorem and determining the level of prior belief required to reach a specified level of posterior belief, given the evidence observed. This reversal of Bayes's Theorem allows the assessment of new findings on the basis of whether the resulting prior is reasonable in the light of existing knowledge. Whether a prior is plausible in the light of existing knowledge can be assessed informally or more formally using techniques for comparing priors with existing data as suggested by Box (1980) and further refined by Evans and Moshonov (2006). Good stressed that despite the routine use of the adjectives "prior" and "posterior" in applications of Bayes's Theorem, the validity of any resulting inference does not require a specific temporal ordering, as the theorem is simply a constraint ensuring consistency with the axioms of probability. While reversing Bayes's Theorem is still regarded as unacceptable by some on the grounds it allows "cheating" in the sense of choosing priors to achieve a desired posterior inference (e.g. O'Hagan and Forster, 2004, p. 143), others point out this is not an ineluctable consequence of the reversal (e.g. Cox, 2006, pp. 78-79). As we shall show, recent technical advances further weaken this criticism. Good's belief in the value of Reverse-Bayes methods won support from E.T. Jaynes in his well-known treatise on probability. Explaining a specific manifestation of the approach (to be discussed shortly) Jaynes remarked: "We shall find it helpful in many cases where our prior information seems at first too vague to lead to any definite prior probabilities; it stimulates our thinking and tells us how to assign them after all" (Jaynes, 2003, p. 126). Yet despite the advocacy of two leading figures in the foundations of Bayesian methodology, the potential of Reverse-Bayes methods has remained largely unexplored. Most published work has focused on their use in putting new research claims in context, with Reverse-Bayes methods being used to assess whether the prior evidence needed to make a claim credible is consistent with existing insight (Carlin and Louis, 1996; Matthews, 2001a,b; Spiegelhalter, 2004; Greenland, 2006, 2011; Held, 2013; Colquhoun, 2017, 2019; Held, 2019a; Pawel and Held, 2020). The purpose of this paper is to highlight recent technical developments of Good's basic idea which lead to inferential tools of practical value in the analysis of summary measures as reported in meta-analysis. As a running example we consider a recently published meta-analysis investigating the association between corticosteroids and mortality in hospitalized patients with COVID-19. Specifically, we show how Reverse-Bayes methods address the current concerns about the interpretation of new findings and their replication. We begin by illustrating the basics of the Reverse-Bayes approach for both hypothesis testing and parameter estimation. This is followed by a discussion of Reverse-Bayes methods for assessing effect estimates in Section 2.
These allow the credibility of both new and existing research findings reported in terms of NHST to be evaluated in the context of existing knowledge. This enables researchers to go beyond the standard dichotomy of statistical significance/non-significance, extracting further insight from their findings. We then discuss the use of the Reverse-Bayes approach in the most recalcitrant form of the Problem of Priors, involving the assessment of research findings which are unprecedented and thus lacking any clear source of prior support. We show how the concept of intrinsic credibility resolves this challenge, and puts recent calls to tighten p-value thresholds on a principled basis (Benjamin et al., 2017). In Section 3 we describe Reverse-Bayes methods with Bayes factors, the principled solution for Bayesian hypothesis testing. Finally, we describe in Section 4 Reverse-Bayes approaches to interpretational issues that arise in conventional statistical analysis based on p-values, and how they can be used to flag the risk of inferential fallacies. We close with some extensions and final conclusions.

The subjectivity involved in the specification of prior distributions is often seen as a weak point of Bayesian inference. The Reverse-Bayes approach can help to resolve this issue both in hypothesis testing and parameter estimation; we will start with the former. Consider a null hypothesis H0 with prior probability π = Pr(H0), so Pr(H1) = 1 − π is the prior probability of the alternative hypothesis H1. Computation of the posterior probability of H0 is routine with Bayes' theorem:

    Pr(H0 | data) = Pr(data | H0) Pr(H0) / Pr(data).

Bayes' theorem can be written in more compact form as

    Pr(H1 | data) / Pr(H0 | data) = [Pr(data | H1) / Pr(data | H0)] × [Pr(H1) / Pr(H0)],    (1)

i.e. the posterior odds are the likelihood ratio times the prior odds. The standard 'forward-Bayes' approach thus fixes the prior odds (or one of the underlying probabilities), determines the likelihood ratio for the available data, and takes the product to compute the posterior odds. Of course, the latter can be easily back-transformed to the posterior probability Pr(H1 | data), if required. The Problem of Priors is now apparent: in order for us to update the odds in favour of H1, we must first specify the prior odds. This can be problematic in situations where, for example, the evidence on which to base the prior odds is controversial or even non-existent. However, as Good emphasised it is entirely justifiable to "flip" Bayes's theorem around, allowing us to ask the question: Which prior, when combined with the data, leads to our specified posterior?

For illustration we re-visit an example put forward by Good (1950, p. 35), perhaps the first published Reverse-Bayes calculation. It centres on a question for which the setting of an initial prior is especially problematic: does an experiment provide convincing evidence for the existence of extra-sensory perception (ESP)? The substantive hypothesis H1 is that ESP exists, so that H0 asserts it does not exist. Imagine an experiment in which a person has to make n consecutive guesses of random digits (between 0 and 9) and all are correct. Under H0 each guess is correct with probability 1/10, while under H1 the guesses are assumed to be correct with certainty. The likelihood ratio is therefore

    Pr(data | H1) / Pr(data | H0) = 10^n.    (2)

It is unlikely that sceptics and advocates of the existence of ESP would ever agree on what constitutes reasonable priors from which to start a standard Bayesian analysis of the evidence. However, Good argued that Reverse-Bayes offers a way forward by using it to set bounds on the prior probabilities for H1 and H0.
This is achieved via the outcome of an imaginary (Gedanken) experiment capable of demonstrating H1 is more likely than H0, that is, of leading to posterior probabilities such that Pr(H1 | data) > Pr(H0 | data). Using this approach, which Good termed the Device of Imaginary Results, we see that if the ESP experiment produced 20 correct consecutive guesses, (2) implies that ESP may be deemed more likely than not to exist by anyone whose priors satisfy Pr(H1)/Pr(H0) > 10^{-20}. In contrast, if only n = 3 correct guesses emerged, then the existence of ESP could be rejected by anyone whose priors satisfy Pr(H1)/Pr(H0) < 10^{-3}. Using Bayes's Theorem in reverse has thus led to a quantitative statement of the prior beliefs that either advocates or sceptics of ESP must be able to justify in the face of results from a real experiment. The practical value of Good's approach was noted by Jaynes in his treatise: "[I]n the present state of development of probability theory, the device of imaginary results is usable and useful in a very wide variety of situations, where we might not at first think it applicable" (Jaynes, 2003, pp. 125-126).

It is straightforward to extend (1) and (2) to hypotheses that involve unknown parameters θ. The likelihood ratio Pr(data | H1)/Pr(data | H0) is then called a Bayes factor (Jeffreys, 1961; Kass and Raftery, 1995), where

    Pr(data | Hi) = ∫ f(data | θ, Hi) f(θ | Hi) dθ

is the marginal likelihood under hypothesis Hi, i = 0, 1, obtained by integration of the ordinary likelihood with respect to the prior distribution f(θ | Hi). We will apply the Reverse-Bayes approach to Bayes factors in Sections 3 and 4.

We can also apply the Reverse-Bayes idea to continuous prior and posterior distributions of a parameter of interest θ. Reversing Bayes' theorem then leads to

    f(θ) = f(data) f(θ | data) / f(data | θ),    (3)

so the prior is proportional to the posterior divided by the likelihood, with proportionality constant f(data). Consider Bayesian inference for the mean θ of a univariate normal distribution, assuming the variance σ² is known. Let x denote the observed value from that N(θ, σ²) distribution and suppose the prior for θ (and hence also the posterior) is normal. Each of them is determined by two parameters, usually the mean and the variance, but two distinct quantiles would also work. If we fix both parameters of the posterior, then the prior in (3) is, under a certain regularity condition, uniquely determined. For ease of presentation we work with the observational precision κ = 1/σ² and denote the prior and posterior precision by δ and δ', respectively. Finally let µ and µ' denote the prior and posterior mean, respectively. Forward-Bayesian updating tells us how to compute the posterior precision and mean:

    δ' = δ + κ    (4)
    µ' = (δ µ + κ x) / δ'.    (5)

For example, fixed-effect (FE) meta-analysis is based on iteratively applying (4) and (5) to the summary effect estimate x_i with standard error σ_i from the i-th study, i = 1, ..., n, starting with an initial precision of zero. Reverse-Bayes simply inverts these equations, which leads to the following:

    δ = δ' − κ    (6)
    µ = (δ' µ' − κ x) / (δ' − κ),    (7)

provided δ' > κ, i.e. the posterior precision must be larger than the observational precision.
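To make the updating rules concrete, the following minimal R sketch implements (4)-(7) directly; it is our own illustration, not code from the paper's repository:

    # Forward-Bayes updating of a normal prior N(mu, 1/delta) with an
    # observation x ~ N(theta, sigma^2), working on the precision scale
    forward_bayes <- function(mu, delta, x, sigma) {
      kappa <- 1 / sigma^2                               # observational precision
      delta_post <- delta + kappa                        # eq. (4)
      mu_post <- (delta * mu + kappa * x) / delta_post   # eq. (5)
      c(mean = mu_post, precision = delta_post)
    }

    # Reverse-Bayes: recover the prior from a fixed posterior, which is
    # only possible if the posterior precision exceeds the observational one
    reverse_bayes <- function(mu_post, delta_post, x, sigma) {
      kappa <- 1 / sigma^2
      stopifnot(delta_post > kappa)                      # condition delta' > kappa
      delta <- delta_post - kappa                        # eq. (6)
      mu <- (delta_post * mu_post - kappa * x) / delta   # eq. (7)
      c(mean = mu, precision = delta)
    }

Applying forward_bayes() study by study, starting from delta = 0, amounts to a fixed-effect meta-analysis; reverse_bayes() undoes one such step, as exploited next.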
We will illustrate the application of (6) and (7) as well as the methodology in the rest of this paper using a recent meta-analysis combining information from n = 7 randomized controlled clinical trials investigating the association between corticosteroids and mortality in hospitalized patients with COVID-19 (WHO REACT Working Group, 2020); its results are reproduced in Figure 1 (here and henceforth, odds ratios (ORs) are expressed as log odds ratios to transform the range from (0, ∞) to (−∞, +∞), consistent with the assumption of normality). Let x_i = θ̂_i denote the maximum likelihood estimate (MLE) of the log odds ratio θ in the i-th study with standard error σ_i. The meta-analytic odds ratio estimate under the fixed-effect model (the pre-specified primary analysis) is OR = 0.66 [95% CI, 0.53 to 0.82], respectively θ̂ = −0.42 [95% CI, −0.63 to −0.20] for the log odds ratio θ, indicating evidence for lower mortality of patients treated with corticosteroids compared to patients receiving usual care or placebo. The pooled effect estimate θ̂ represents a posterior mean µ' with posterior precision δ' = 83.8.

Figure 1: Forest plot of fixed-effect meta-analysis of randomized clinical trials investigating association between corticosteroids and mortality in hospitalized patients with COVID-19 (WHO REACT Working Group, 2020). Shown are number of deaths among total number of patients for treatment/control group, log odds ratio effect estimates with 95% confidence interval, two-sided p-values p, and prior-predictive tail probabilities p_Box with a meta-analytic estimate based on the remaining studies serving as the prior.

With a meta-analysis such as this, it is of interest to quantify potential conflict among the effect estimates from the different studies. To do this, we follow Presanis et al. (2013) and compute a prior-predictive tail probability (Box, 1980; Evans and Moshonov, 2006) for each study-specific estimate θ̂_i, with a meta-analytic estimate based on the remaining studies serving as the prior. As discussed above, fixed-effect meta-analysis is standard forward-Bayesian updating for normally distributed effect estimates with an initial flat prior. Hence, instead of fitting a reduced meta-analysis for each study, we can simply use the Reverse-Bayes equations (6) and (7) together with the overall estimate to compute the parameters of the prior in the absence of the i-th study (denoted by the index −i):

    δ_{−i} = δ' − κ_i  and  µ_{−i} = (δ' µ' − κ_i x_i) / (δ' − κ_i),  where κ_i = 1/σ_i².

For example, through omitting the RECOVERY Collaborative Group (2020) trial result θ̂_i = −0.53 with standard error σ_i = 0.145 we obtain δ_{−i} = 36.1 and µ_{−i} = −0.26. A prior-predictive tail probability using the approach from Box (1980) is then obtained by computing

    p_Box = Pr(χ²_1 ≥ (θ̂_i − µ_{−i})² / (σ_i² + 1/δ_{−i})).

This leads to p_Box = 0.22 for the RECOVERY trial, indicating very little prior-data conflict. The tail probabilities for the other studies are even larger, with the exception of the COVID STEROID trial (p_Box = 0.05), see Figure 1. The lack of strong conflict can be seen as an informal justification of the assumptions of the underlying fixed-effect meta-analysis (Presanis et al., 2013; Ferkingstad et al., 2017). A related method in network meta-analysis is to assess consistency via "node-splitting" (Dias et al., 2010).
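Both steps can be reproduced in a few lines of R; again a sketch of our own, where the rounded inputs give slightly different values than the unrounded ones behind Figure 1:

    # Leave-one-out prior for the RECOVERY trial via Reverse-Bayes (eq. 6, 7)
    mu_post <- -0.42; delta_post <- 83.8   # pooled posterior mean and precision
    x_i <- -0.53; sigma_i <- 0.145         # RECOVERY estimate and standard error
    kappa_i <- 1 / sigma_i^2
    delta_mi <- delta_post - kappa_i                            # approx. 36
    mu_mi <- (delta_post * mu_post - kappa_i * x_i) / delta_mi  # approx. -0.28

    # Prior-predictive check: x_i ~ N(mu_mi, sigma_i^2 + 1/delta_mi) under the prior
    stat <- (x_i - mu_mi)^2 / (sigma_i^2 + 1 / delta_mi)
    p_box <- pchisq(stat, df = 1, lower.tail = FALSE)  # approx. 0.25 (0.22 in Figure 1)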
Instead of determining the prior completely based on the posterior, one may also want to fix one parameter of the posterior and one parameter of the prior. This is of particular interest in order to challenge "significant" or "non-significant" findings through the Analysis of Credibility, as we will see in the following section.

A more general question amenable to Reverse-Bayes methods is the assessment of effect estimates and their statistical significance or non-significance. This issue has recently attracted intense interest following the public statement of the American Statistical Association about the misuse and misinterpretation of the NHST concepts of statistical significance and non-significance (Wasserstein and Lazar, 2016). First investigated 20 years ago in Matthews (2001a) with subsequent discussion in Matthews (2001b), Reverse-Bayes methods for assessing both statistically significant and non-significant findings have been termed the Analysis of Credibility (or AnCred, Matthews, 2018), whose principles and practice we now briefly review.

Suppose the study gives rise to a conventional confidence interval for the unknown effect size θ at level 1 − α with lower limit L and upper limit U. Assume that L and U are symmetric around the point estimate θ̂ (assumed to be normally distributed with standard error σ). AnCred then takes this likelihood and uses a Reverse-Bayes approach to deduce the prior required in order to generate evidence for the existence of an effect, in the form of a posterior that excludes no effect. As such, AnCred allows evidence deemed statistically significant/non-significant in the NHST framework to be assessed for its credibility in the Bayesian framework. As the latter is based on Pr(H0 | data), and thus conditions on the data rather than on the null hypothesis, it is inferentially directly relevant to researchers. After a suitable transformation AnCred can be applied to a large number of commonly used effect measures such as differences in means, odds ratios, relative risks and correlations (see the literature of meta-analysis for details about conversion among effect size scales, e.g. Cooper et al., 2019, Chapter 11.6). The inversion of Bayes's Theorem needed to assess credibility requires the form and location of the prior distribution to be specified. This in turn depends on whether the claim being assessed is statistically significant or non-significant; we consider each below.

Challenging statistically significant findings

A statistically significant finding at level α is characterized by both L and U being either positive or negative. Equivalently, z² > z²_{α/2} is required, where z = θ̂/σ denotes the corresponding test statistic and z_{α/2} the (1 − α/2)-quantile of the standard normal distribution. For significant findings, the idea is to ask how sceptical we would have to be not to find the apparent effect estimate convincing. To this end, a "critical prior interval" (Matthews, 2001b) with limits −S and S is derived such that the corresponding posterior credible interval just includes zero, the value of no effect. This critical prior interval can then be compared with internal or external evidence to assess if the finding is credible or not, despite being "statistically significant". More specifically, a Reverse-Bayes approach is applied to significant confidence intervals (at level α) based on a normally distributed effect estimate. The prior is a "sceptical" mean-zero normal distribution with variance τ² = g · σ², so the only free parameter is the relative prior variance g = τ²/σ².
The posterior is hence also normal and either its lower α/2-quantile (for positive θ̂) or its upper (1 − α/2)-quantile (for negative θ̂) is fixed to zero, so the posterior is just "non-credible". The sufficiently sceptical prior then has relative variance

    g = 1 / (z²/z²_{α/2} − 1),    (8)

see Held (2019a, Appendix) for a derivation. The corresponding scepticism limit is

    S = (U − L)² / (4 √(U L)),    (9)

which holds for any value of α provided the effect is significant at that level.

The left plot in Figure 2 illustrates the AnCred procedure for the finding from the RECOVERY trial (RECOVERY Collaborative Group, 2020), the only statistically significant result (at the conventional α = 0.05 level) shown in Figure 1. The trial found a decrease in COVID-19 mortality for patients treated with corticosteroids compared to usual care or placebo (θ̂ = −0.53 [95% CI, −0.82 to −0.25]). The sufficiently sceptical prior has relative variance g = 0.39, so the sufficiently sceptical prior variance needs to be roughly 2.5 times smaller than the variance of the estimate to make the result non-credible. The scepticism limit on the log odds ratio scale turns out to be S = 0.18, which corresponds to a critical prior interval with limits 0.84 and 1.19 on the odds ratio scale. Thus sceptics may still reject the RECOVERY trial finding as lacking credibility despite its statistical significance if external evidence suggests mortality reductions (in terms of odds) are unlikely to exceed 1 − 0.84 ≈ 16%. It is also possible to apply the approach to the meta-analytic log odds ratio estimate θ̂ = −0.42 [95% CI, −0.63 to −0.20] from all 7 studies combined. Then S = 0.13, so the meta-analytic estimate can be considered as non-credible if external evidence suggests that mortality reductions are unlikely to exceed 1 − exp(−S) = 1 − 0.88 ≈ 12%. This illustrates that the meta-analytic estimate has gained credibility compared to the result from the RECOVERY study alone, despite the reduction in the effect estimate (OR = exp(θ̂) = 0.66 vs. 0.59 in the RECOVERY study).

It is also possible to challenge "non-significant" findings (i.e. those for which the CI now includes zero, so z² < z²_{α/2}) using a prior that pushes the posterior towards being credible in the Bayesian sense, with a posterior credible interval no longer including zero, the value corresponding to no effect. Matthews (2018) proposed the "advocacy prior" for this purpose, a normal prior with positive mean µ and variance τ² chosen such that the α/2-quantile is fixed to zero (for positive effect estimates θ̂ > 0). He showed that the "advocacy limit" AL, the (1 − α/2)-quantile of the advocacy prior, is

    AL = −(U + L)(U − L)² / (2 U L)    (10)

in order for the corresponding posterior to just reach credibility at level α. We show in Appendix A that the corresponding relative prior mean m = µ/θ̂ is

    m = 2 / (1 − z²/z²_{α/2}).    (11)

There are two important properties of the advocacy prior. First, the coefficient of variation is CV = τ/µ = 1/z_{α/2}. The advocacy prior θ ∼ N(µ, τ² = µ² CV²) is hence characterized by a fixed coefficient of variation, so this prior has equal evidential weight (quantified in terms of µ/τ = z_{α/2}) as data which are "just significant" at level α. Second, the advocacy limit AL defines the family of normal priors capable of rendering a "non-significant" finding credible at the same level.
Such priors are summarized by the credible interval

    (L_o, U_o) = (0, AL).

Thus when confronted with a "non-significant" result, often, and wrongly, interpreted as indicating no effect, advocates of the existence of an effect may still claim the existence of the effect is credible to the same level if there exists prior evidence or insight compatible with the credible interval (L_o, U_o). If the evidence for an effect is weak (strong), the resulting advocacy prior will be broad (narrow), giving advocates of an effect more (less) latitude to make their case under terms of AnCred. Note that (10) and (11) also hold for negative effect estimates, where we fix the (1 − α/2)-quantile of the advocacy prior to zero and define the advocacy limit AL as the α/2-quantile of the advocacy prior.

Figure 2: Two examples of the Analysis of Credibility. Shown are point estimates within 95% confidence/credible intervals. The left plot illustrates how a sceptical prior is used to challenge the significant finding from the RECOVERY trial (RECOVERY Collaborative Group, 2020). The right plot illustrates how an advocacy prior is used to challenge a non-significant finding from the REMAP-CAP trial (REMAP-CAP Investigators, 2020). In both scenarios the posterior is fixed to be just credible/non-credible.

For illustration we consider the data from the REMAP-CAP trial (REMAP-CAP Investigators, 2020) that supported the RECOVERY trial finding of decreased COVID-19 mortality from corticosteroid use. However, this trial involved far fewer patients, and despite the point estimate showing efficacy, the relatively large uncertainty rendered the overall finding non-significant at the 5% level (θ̂ = −0.34 [95% CI, −0.96 to 0.29]). Such an outcome is frequently (and wrongly) taken to imply no effect. The use of AnCred leads to a more nuanced conclusion. The advocacy limit AL on the log odds ratio scale for REMAP-CAP is −1.89, i.e. 0.15 on the odds ratio scale, see also the right plot in Figure 2. Thus advocates of the effectiveness of corticosteroids can regard the trial as providing credible evidence of effectiveness despite its non-significance if external evidence supports mortality reductions (in terms of odds) in the range 0% to 85%. So broad an advocacy range reflects the fact that this relatively small trial provides only modest evidential weight, and thus little constraint on prior beliefs about the effectiveness of corticosteroids.

Assessing credibility via equivalent prior study sizes

Greenland (2006) showed that Reverse-Bayes credibility assessments can be formulated in terms of the size of a prior study capable of challenging a claim of statistical significance. This perspective helps to put the required weight of prior evidence needed for such a challenge in the context of the observed data. For normal priors with mean µ and variance τ² the equivalent data prior has 2/τ² cases in each arm of a trial with a sufficiently large number of patients in each arm. For example, the sceptical prior for the RECOVERY trial corresponds to 244 deaths in each of two arms with, say, 100'000 patients each. If we aim for a more realistic mortality rate in the hypothetical prior trial, for example the same as in the RECOVERY trial overall (37.5%), then the sceptical prior corresponds to 389 deaths out of 1038 patients in each arm.
This is considerably larger than the actual number of deaths in the control arm (283) and much larger than the number of deaths in the intervention arm (95), and underlines that the sufficiently sceptical prior needs to be chosen rather tightly to make the RECOVERY trial result no longer convincing. On the other hand, the advocacy prior for the REMAP-CAP result translates into only 9 deaths in each of the two arms of sufficiently large size, considerably smaller than the observed number of deaths in the two arms of the study (26 resp. 29). The non-zero prior mean µ = −0.94 can be incorporated with an allocation ratio of 1 : R with R = exp(µ), approximately 5 : 2, to shift the prior towards "advocacy", for example with 250'000 patients in the intervention arm and 100'000 in the control arm. If we aim for roughly the same control mortality rate as in the REMAP-CAP trial (32%), then the advocacy prior corresponds to 11 deaths in each arm out of 83 respectively 39 patients.

Another data representation of a sceptical prior forms the basis of the well-known "fail-safe N" method, sometimes also called "file-drawer analysis". This method, first introduced by Rosenthal (1979) and later refined by Rosenberg (2005), is commonly applied to the results from a meta-analysis and answers the question: "How many unpublished negative studies do we need to make the meta-analytic effect estimate non-significant?" A relatively large N of such unpublished studies suggests that the estimate is robust to potential null-findings, for example due to publication bias. Calculations are made under the assumption that the unpublished studies have an average effect of zero and a precision equal to the average precision of the published ones. While the method neither identifies nor adjusts for publication bias, it provides a quick way to assess how robust the meta-analytic effect estimate is. The method is available in common software packages such as metafor (Viechtbauer, 2010) and its simplicity and intuitive appeal have made it very popular among researchers.

AnCred and the fail-safe N are both based on the idea of challenging effect estimates such that they become "non-significant/not credible", and it is easy to show that the methods are under some circumstances also technically equivalent. To illustrate this, we consider again the meta-analysis on the association between corticosteroids and COVID-19 mortality (WHO REACT Working Group, 2020) which gave the pooled log odds ratio estimate θ̂ = −0.42 with standard error σ = 0.11, posterior precision δ' = 83.8 and test statistic z = θ̂/σ = −3.81. Using the Rosenberg (2005) approach (as implemented in the fsn() function from the metafor package) we find that at least N = 20 additional but unpublished non-significant findings are needed to make the published meta-analysis effect non-significant. If instead we challenge the overall estimate with AnCred, we obtain the relative prior variance g = 0.36 using equation (8), so τ² = 0.0043. Taking into account the average precision δ'/n = 11.98 of the different effect estimates in the meta-analysis leads to N = n/(δ' · τ²) = 19.5, which is equivalent to the fail-safe N result after rounding to the next larger integer.
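The AnCred quantities of this section and the fail-safe N equivalence can be checked with a short R sketch (our own code, using the rounded values quoted above):

    z_a <- qnorm(0.975)

    # Scepticism limit S (eq. 9) for the significant RECOVERY interval
    S <- function(L, U) (U - L)^2 / (4 * sqrt(U * L))
    S(-0.82, -0.25)              # approx. 0.18 on the log-OR scale

    # Advocacy limit AL (eq. 10) for the non-significant REMAP-CAP interval
    AL <- function(L, U) -(U + L) * (U - L)^2 / (2 * U * L)
    AL(-0.96, 0.29)              # approx. -1.88 (text: -1.89)

    # Fail-safe N via AnCred: sufficiently sceptical prior for the pooled estimate
    n <- 7; theta <- -0.42; sigma <- 0.11; delta_post <- 83.8
    z <- theta / sigma
    g <- 1 / (z^2 / z_a^2 - 1)   # eq. (8): approx. 0.36
    tau2 <- g * sigma^2          # approx. 0.0043
    n / (delta_post * tau2)      # approx. 19.3 (19.5 in the text), so N = 20

Running fsn(yi, vi, type = "Rosenberg") from metafor on the study-level data should return the same N = 20; the exact call depends on the metafor version and the data layout, and is not shown here.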
The Problem of Priors is at its most challenging in the context of entirely novel "out of the blue" effects for which no obviously relevant external evidence exists. By their nature, such findings often attract considerable interest both within and beyond the research community, making their reliability of particular importance. Given the absence of external sources of evidence, Matthews (2018) proposed the concept of intrinsic credibility. This requires that the evidential weight of an unprecedented finding is sufficient to put it in conflict with the sceptical prior rendering it non-credible. In the AnCred framework, this implies a finding possesses intrinsic credibility at level α if the estimate θ̂ is outside the corresponding sceptical prior interval [−S, S] extracted using Reverse-Bayes from the finding itself, i.e. θ̂² > S² with S given in (9). Matthews showed this implies an unprecedented finding is intrinsically credible at level α = 0.05 if its p-value does not exceed 0.013. Held (2019a) refined the concept by suggesting the use of a prior-predictive check (Box, 1980; Evans and Moshonov, 2006) to assess potential prior-data conflict. With this approach the uncertainty of the estimate θ̂ is also taken into account since it is based on the prior-predictive distribution, in this case θ̂ ∼ N(0, σ² + τ² = σ²(1 + g)) with g as given in (8). Intrinsic credibility is declared if the (two-sided) tail probability

    p_Box = Pr(χ²_1 ≥ θ̂²/(σ² + τ²)) = Pr(χ²_1 ≥ z²/(1 + g))

of θ̂ under the prior-predictive distribution is smaller than α. It turns out that the p-value associated with θ̂ needs to be as small as 0.0056 to obtain intrinsic credibility at level α = 0.05, providing another principled argument for the recent proposition to lower the p-value threshold for claims of new discoveries to 0.005 (Benjamin et al., 2017).

A simple check for intrinsic credibility is based on the credibility ratio, the ratio of the upper to the lower limit (or vice versa) of a confidence interval for a significant effect size estimate. If the credibility ratio is smaller than 5.8 then the result is intrinsically credible (Held, 2019a). This holds for confidence intervals at all possible values of α, not just for the 0.05 standard. For example, in the RECOVERY study the 95% confidence interval for the log odds ratio ranges from −0.82 to −0.25, so the credibility ratio is −0.82/−0.25 = 3.27 < 5.8 and the result is intrinsically credible at the standard 5% level.

Whether intrinsic credibility is assessed based on the prior or the prior-predictive distribution, it depends on the level α in both cases. To remove this dependence, Held (2019a) proposed to consider the smallest level at which intrinsic credibility can be established, defining the p-value for intrinsic credibility

    p_IC = 2 {1 − Φ(|z|/√2)},

see Held (2019a, Section 4) for the derivation. Now z = θ̂/σ, so compared to the standard p-value p = 2 {1 − Φ(|z|)}, the p-value for intrinsic credibility is based on twice the variance σ² of the estimate θ̂. Although motivated from a different perspective, inference based on intrinsic credibility thus mimics the doubling-the-variance rule advocated by Copas and Eguchi (2005) as a simple means of adjusting for model uncertainty. Moreover, Held (2019a) showed that p_IC is connected to p_rep (Killeen, 2005), the probability that a replication will result in an effect estimate θ̂_r in the same direction as the observed effect estimate θ̂, by p_rep = 1 − p_IC/2. Hence, an intrinsically credible estimate at a small level α will have a high chance of replicating since p_rep ≥ 1 − α/2. Note that p_rep lies between 0.5 and 1 with the extreme case p_rep = 0.5 if θ̂ = 0.
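Both quantities are one-liners in R; a minimal sketch of our own, anticipating the example that follows:

    # p-value for intrinsic credibility (doubling-the-variance rule) and p_rep
    p_ic <- function(theta_hat, sigma) {
      z <- theta_hat / sigma
      2 * pnorm(abs(z) / sqrt(2), lower.tail = FALSE)
    }
    p_ic(-0.53, 0.145)         # approx. 0.01 for the RECOVERY trial
    1 - p_ic(-0.53, 0.145) / 2 # p_rep, approx. 0.995

    # Credibility ratio check: intrinsically credible if the ratio is below 5.8
    -0.82 / -0.25              # 3.28 (3.27 with unrounded limits), below 5.8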
As an example, the p-value for intrinsic credibility for the RECOVERY trial finding (with p-value p = 0.0002) cited earlier is p_IC = 0.01 and thus the probability of the replication effect going in the same direction (i.e. reduced mortality in this case) is 0.995. In contrast, the finding from the smaller REMAP-CAP trial (with p = 0.29) leads to p_IC = 0.46, and the probability of effect direction replication is hence only 0.77.

The AnCred procedure as described above uses posterior credible intervals as a means of quantifying evidence. However, quantification of evidence with Bayes factors is a more principled solution for hypothesis testing in the Bayesian framework (Jeffreys, 1961; Kass and Raftery, 1995). Bayes factors enable direct probability statements about the null and alternative hypotheses and they can also quantify evidence for the null hypothesis; both are impossible with indirect measures of evidence such as p-values (Held and Ott, 2018). Reverse-Bayes approaches combined with Bayes factor methodology were pioneered in Carlin and Louis (1996) but then remained unexplored until Pawel and Held (2020) proposed an extension of AnCred where Bayes factors are used as a means of quantifying evidence. Rather than determining a prior such that a finding becomes "non-credible" in terms of a posterior credible interval, this approach determines a prior such that the finding becomes "non-compelling" in terms of a Bayes factor. In the second step of the procedure, the plausibility of this prior is quantified using external data from a replication study. Here, we will illustrate the methodology using only an original study; we mention extensions for replications in Section 5.1.

A standard hypothesis test compares the null hypothesis H0: θ = 0 to the alternative H1: θ ≠ 0. Bayesian hypothesis testing requires specification of a prior distribution of θ under H1. A typical choice is a local alternative, a unimodal symmetric prior distribution centred around the null value (Johnson and Rossell, 2010). We consider again the sceptical prior θ | H1 ∼ N(0, τ² = g · σ²) with relative prior variance g for this purpose. This leads to the Bayes factor comparing H0 to H1 being

    BF01 = √(1 + g) exp{− (z²/2) · g/(1 + g)}.

Yet again, the amount of evidence which the data provide against the null hypothesis depends on the prior parameter g: as g becomes smaller (g ↓ 0), the null and the alternative will become indistinguishable, so the data are equally likely under both (BF01 → 1). On the other hand, for increasingly diffuse priors (g → ∞), the null hypothesis will always prevail (BF01 → ∞) due to the Jeffreys-Lindley paradox (Robert, 2014). In between, BF01 reaches a minimum at g = max{z² − 1, 0}, leading to

    minBF01 = |z| exp(−z²/2) √e  if |z| > 1,  and 1 else,    (12)

which is an instance of a minimum Bayes factor, the smallest possible Bayes factor within a class of alternative hypotheses, in this case zero-mean normal alternatives (Edwards et al., 1963; Berger and Sellke, 1987; Sellke et al., 2001; Held and Ott, 2018). Reporting of minimum Bayes factors is one attempt at solving the problem of priors in Bayesian inference. However, this bound may be rather small and the corresponding prior unrealistic. In contrast, the Reverse-Bayes approach makes the choice of the prior explicit by determining the relative prior variance parameter g such that the finding is no longer compelling, followed by assessing the plausibility of this prior.
To do so, one first fixes BF01 = γ, where γ is a cut-off above which the result is no longer convincing, for example γ = 1/10, the level for strong evidence according to Jeffreys (1961). The sufficiently sceptical relative prior variance is then given by

    g = −z² / W(−z² e^{−z²} / γ²) − 1,    (13)

where W(·) is the Lambert W function (Corless et al., 1996), see Pawel and Held (2020, Appendix B) for a proof. The sufficiently sceptical relative prior variance g exists for a cut-off γ only if minBF01 ≤ γ, similar to standard AnCred where it exists only at level α if the original finding was significant at the same level. In contrast to standard AnCred, however, if the sufficiently sceptical relative prior variance g exists, there are always two solutions, a consequence of the Jeffreys-Lindley paradox: if BF01 decreases in g below the chosen cut-off γ, after attaining its minimum it will monotonically increase and intersect a second time with γ, admitting a second solution for the sufficiently sceptical prior.

Figure 3: Illustration of the AnCred with Bayes factors procedure using the findings from the meta-analysis on the association of COVID-19 mortality and corticosteroids. The left plot shows the Bayes factor BF01 as a function of the relative variance g of the sceptical prior. The result from the RECOVERY trial is challenged with a sceptical prior such that BF01 = 1/10; for the other trials such a prior does not exist. The right plot shows the Bayes factor BF01 as a function of the relative mean m = µ/θ̂ of the advocacy prior where the coefficient of variation of the prior is fixed to CV = τ/µ = 1/z(γ = 1/3) = 0.67, where z(γ) is given in (15). The RECOVERY and the CAPE COVID findings are challenged such that BF01 = 1/3; for the other trials such a prior does not exist.

We now revisit the meta-analysis example considered earlier: the left plot in Figure 3 shows the Bayes factor BF01 as a function of the relative prior variance g for each finding included in the meta-analysis. Most of them did not include a great number of participants and thus provide little evidence against the null for any value of the relative prior variance g. In contrast, the finding from the RECOVERY trial (RECOVERY Collaborative Group, 2020) provides more compelling evidence and can be challenged up to minBF01 = 1/148.9. For example, we see in Figure 3 that the sceptical prior variance needs to be g = 0.59, so 1.69 times smaller than the variance of the effect estimate, such that the finding is no longer compelling at level γ = 1/10. This translates to a 95% prior credible interval from 0.8 to 1.24 for the OR. Hence, a sceptic might still consider the RECOVERY finding to be unconvincing, despite its minimum BF being very compelling, if external evidence supports ORs in that range. Note that also g = 8190 gives a Bayes factor of BF01 = 1/10; however, such a large relative prior variance represents ignorance rather than scepticism and is less useful for Reverse-Bayes inference.
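The two solutions of (13) correspond to the two real branches of the Lambert W function; a hedged R sketch, assuming the form of (13) as reconstructed above and using the lamW package (functions lambertW0() and lambertWm1()):

    library(lamW)

    # Relative prior variances g solving BF01(g) = gamma; both W branches
    # yield a solution whenever minBF01 <= gamma
    sceptical_g <- function(z, gamma) {
      arg <- -z^2 * exp(-z^2) / gamma^2
      c(sceptical = -z^2 / lambertWm1(arg) - 1,  # small g: genuine scepticism
        ignorance = -z^2 / lambertW0(arg) - 1)   # large g: near-ignorance
    }
    sceptical_g(z = 3.69, gamma = 1/10)  # approx. 0.59 and 8200 (text: 0.59 and 8190)

The value z = 3.69 is back-calculated here from minBF01 = 1/148.9 and is therefore only approximate, which explains the small discrepancy in the larger root.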
The plausibility of the sufficiently sceptical prior can be evaluated in light of external evidence, but what should we do in the absence of such? We could again use the Box (1980) prior-predictive check; however, the resulting tail probability is difficult to compare to the Bayes-factor cut-off γ. When a specific alternative model to the null is in mind, Box also suggested using a Bayes factor contrasting the two models. Following this approach, Pawel and Held (2020) proposed to define a second Bayes factor contrasting the sufficiently sceptical prior to an optimistic prior, which they defined as θ | H2 ∼ N(θ̂, σ²), the posterior of θ based on the data and the reference prior f(θ) ∝ 1. We can then conclude that the effect estimate is intrinsically credible at level γ if the data favour the optimistic prior over the sufficiently sceptical prior at a higher level than 1/γ (i.e. if BF12 ≤ γ), analogously to intrinsic credibility based on significance. For example, we obtain BF12 = 1/64 for the finding from the RECOVERY trial, so it is intrinsically credible at γ = 1/10. To remove the dependence on the choice of γ, one can then determine the smallest cut-off γ where intrinsic credibility can be established, defining a Bayes factor for intrinsic credibility similar to the definition of the p-value for intrinsic credibility. For the RECOVERY finding, this turns out to be BF_IC = 1/25.

A natural question is whether we can also define an advocacy prior, a prior which renders an uncompelling finding compelling, in the AnCred framework with Bayes factors. In traditional AnCred, advocacy priors always exist since one can always find a prior that, when combined with the data, overrules them. This is fundamentally different to inference based on Bayes factors, where the prior is not synthesized with the data, but rather used to predict them. A classical result due to Edwards et al. (1963) states that if we consider the class of all possible priors under H1, the minimum Bayes factor is given by

    minBF01 = exp(−z²/2),    (14)

which is obtained for H1: θ = θ̂. This implies that a non-compelling finding cannot be "rescued" further than to this bound. For example, for the finding from the REMAP-CAP trial (REMAP-CAP Investigators, 2020) the bound is an unsatisfactory minBF01 = 1/1.7, so at most "worth a bare mention" according to Jeffreys (1961). Putting these considerations aside, we may still consider the class of N(µ, τ²) priors under the alternative H1. The Bayes factor contrasting H0 to H1 is then given by

    BF01 = √(1 + τ²/σ²) exp{− (1/2) [z² − (θ̂ − µ)²/(σ² + τ²)]}.

The Reverse-Bayes approach now determines the prior mean µ and variance τ² which lead to the Bayes factor BF01 being just at some cut-off γ. However, if both parameters are free, there are infinitely many solutions to BF01 = γ, if any exist at all. The traditional AnCred framework resolves this by restricting the class of possible priors to advocacy priors with fixed coefficient of variation CV = τ/µ = 1/z_{α/2}. We can translate this idea to the Bayes factor AnCred framework and fix the prior's coefficient of variation to CV = 1/z(γ), where

    z(γ) = √(−2 log γ),    (15)

obtained by solving (14) for z with minBF01 = γ. The advocacy prior thus carries the same evidential weight as data with minBF01 = γ. Moreover, the determination of the prior parameters becomes more feasible since there is only one free parameter left (either µ or τ²). The right plot in Figure 3 illustrates application of the procedure on data from the meta-analysis on the association between COVID-19 mortality and corticosteroids. The coefficient of variation of the advocacy prior is fixed to CV = 1/z(γ = 1/3) = 0.67 and thus the Bayes factor BF01 only depends on the relative mean m = µ/θ̂. Under the sceptical prior only the RECOVERY finding could be challenged at γ = 1/3 (where z(γ) = 1.5 corresponds to α = 13%).
With the advocacy prior this is now also possible for the CAPE COVID finding (Dequin et al., 2020), where a prior with mean µ = m · θ̂ = 0.37 · (−0.79) = −0.29 and standard deviation τ = CV · |µ| = 0.2 is able to make the finding compelling at γ = 1/3. The corresponding prior credible interval for the OR at level 1 − α ranges from 0.55 to 1, so advocates may still consider the "non-compelling" finding as providing moderate evidence in favour of a benefit, if external evidence supports mortality reductions in that range. Note that the advocacy prior may not be unique, e.g. for the CAPE COVID finding the prior with relative mean m = 1.26 and standard deviation τ = 0.67 also renders the data as just compelling at γ = 1/3. We recommend choosing the prior with m closer to zero, as it is the more conservative choice.

Application of the Analysis of Credibility with Bayes factors as described in Section 3 assumes some familiarity with Bayes factors as measures of evidence. Colquhoun (2019) argued that very few nonprofessional users of statistics are familiar with the notion of Bayes factors or likelihood ratios. He proposes to quantify evidence with the false positive risk, "if only because most users still think, mistakenly, that that is what the p-value tells them". More specifically, Colquhoun (2019) defines the false positive risk (FPR) as the posterior probability that the point null hypothesis H0 of no effect is true given the observed p-value p, i.e. FPR = Pr(H0 | p). As before, H0 corresponds to the point null hypothesis H0: θ = 0. Note also that we take the exact (two-sided) p-value p as the observed "data", regardless of whether or not it is significant at some pre-specified level, the so-called "p-equals" interpretation of NHST (Colquhoun, 2017).

The FPR can be calculated based on the Bayes factor associated with p. For ease of presentation we invert Bayes' theorem (1) and obtain

    FPR = Pr(H0 | p) = {1 + [Pr(H1)/Pr(H0)] · BF10}^{−1},    (16)

where BF01 = 1/BF10 is the Bayes factor for H0 against H1, computed directly from the observed p-value p. The common 'forward-Bayes' approach is to compute the FPR from the prior probability Pr(H0) and the Bayes factor with (16). However, the prior probability Pr(H0) is usually unknown in practice and often hard to assess. This can be resolved via the Reverse-Bayes approach (Colquhoun, 2017, 2019): given a p-value and a false positive risk value, calculate the corresponding prior probability Pr(H0) that is needed to achieve that false positive risk. Of specific interest is the value FPR = 5%, because many scientists believe that a Type-I error of 5% is equivalent to an FPR of 5% (Greenland et al., 2016). This is of course not true and we follow Berger and Sellke (1987, Example 1) and use the Reverse-Bayes approach to derive the necessary prior assumptions on Pr(H0) to achieve FPR = 5% with Equation (16):

    Pr(H0) = {1 + [(1 − FPR)/FPR] · BF01}^{−1}.    (17)

Colquhoun (2017, appendix A.2) uses a Bayes factor based on the t-test, but for compatibility with the previous sections we assume normality of the underlying test statistic. We consider Bayes factors under all simple alternatives, but also Bayes factors under local normal priors, see Held and Ott (2018) for a detailed comparison. Instead of working with a Bayes factor for a specific prior distribution, we prefer to work with the minimum Bayes factor minBF01 as introduced in Section 3. In what follows we will use the minimum Bayes factor based on the z-test (Held and Ott, 2018, Sections 2.1 and 2.2). The minimum Bayes factor based on the z-test among all possible priors can be computed using the function zCalibrate in the package pCalibrate. The option alternative = "local" gives the minBF (12) under local normal priors.
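A hand-rolled R version of this calculation (our own sketch; the pCalibrate functions mentioned above provide equivalent, better-tested implementations):

    # Minimum Bayes factor under local normal alternatives (eq. 12),
    # for a two-sided p-value
    minbf_local <- function(p) {
      z <- qnorm(p / 2, lower.tail = FALSE)
      ifelse(z > 1, z * exp(-z^2 / 2) * sqrt(exp(1)), 1)
    }

    # Prior probability Pr(H0) needed for a given FPR (eq. 17); replacing
    # BF01 by a minimum Bayes factor turns this into an upper bound
    prior_h0 <- function(p, fpr, minbf = minbf_local) {
      1 / (1 + (1 - fpr) / fpr * minbf(p))
    }
    prior_h0(p = 0.05, fpr = 0.05)    # approx. 0.10
    prior_h0(p = 0.0002, fpr = 0.05)  # approx. 0.9 (RECOVERY)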
Let minBF01 denote the minimum Bayes factor over a specific class of alternatives. From equation (17) we obtain the inequality

    Pr(H0) ≤ {1 + [(1 − FPR)/FPR] · minBF01}^{−1}.    (18)

The right-hand side is thus an upper bound on the prior probability Pr(H0) for a given p-value to achieve a pre-specified FPR value. There are also minBFs not based on the z-test statistic, but directly on the (two-sided) p-value p, the so-called "−e p log p" calibration (Sellke et al., 2001)

    minBF = −e p log p  for p < 1/e,  and 1 otherwise,    (19)

and the "−e q log q" calibration, where q = 1 − p (Held and Ott, 2018, Section 2.3):

    minBF = −e q log q.    (20)

For small p, equation (20) can be simplified to minBF ≈ e p, which mimics the Good (1958) transformation of p-values to Bayes factors (Held, 2019b). The two p-based calibrations are also available in the package pCalibrate. They carry fewer assumptions than the minimum Bayes factors based on the z-test under normality. The "−e p log p" calibration provides a general bound under all unimodal and symmetrical local priors for p-values from z-tests (Sellke et al., 2001, Section 3.2). The "−e q log q" calibration is more conservative and gives a smaller bound on the Bayes factor than the "−e p log p" calibration. It can be viewed as a general lower bound under simple alternatives where the direction of the effect is taken into account, see Held and Ott (2018, Sections 2.1 and 2.3).

The left plot in Figure 4 shows the resulting upper bound on the prior probability Pr(H0) as a function of the two-sided p-value if the FPR is fixed at 5%. For p = 0.05, the "−e p log p" bound is around 11%, and 28% for the "−e q log q" calibration. The corresponding values based on the z-test are slightly smaller (10% and 15%, respectively). All the probabilities are below the 50% value of equipoise, illustrating that borderline significant results with p ≈ 0.05 do not provide sufficient evidence to justify an FPR value of 5%. For p = 0.005, the upper bounds are closer to 50% (37% for local and 57% for simple alternatives). Turning again to the example from the RECOVERY trial (RECOVERY Collaborative Group, 2020), the p-value associated with the estimated treatment effect is p = 0.0002. The left plot in Figure 4 shows that the false positive risk can safely be assumed to be around 5% (or lower), since the upper bounds on Pr(H0) are all very large for such a small p-value.

Fixing FPR at the 5% level may be considered arbitrary. Another widespread misconception is the belief that the FPR is equal to the p-value. Held (2013) used a Reverse-Bayes approach to investigate which prior assumptions are required such that FPR = p holds. Combining (17) with the requirement FPR = p leads to the upper bound

    Pr(H0) ≤ {1 + [(1 − p)/p] · minBF01}^{−1},

which, under the "−e q log q" calibration (20), is approximately 1/(1 + e) = 26.9% for small p.

Figure 4: The left plot shows the upper bound on the prior probability Pr(H0) to achieve a false positive risk of 5% as a function of the p-value calibrated with either a z-test calibration (simple and local alternatives) or with the "−e p log p" or "−e q log q" calibrations, respectively. The right plot shows the upper bound on Pr(H0) as a function of the p-value using the same calibrations but assuming the p-value equals the FPR.

The right plot in Figure 4 compares the bounds based on these two calibrations with the ones obtained from simple respectively local alternatives.
We can see that strong assumptions on Pr(H0) are needed to justify the claim FPR = p: Pr(H0) cannot be larger than 15.2% if the p-value is conventionally significant (p < 0.05). For p < 0.005, the bound drops further to 11.4%. Even under the conservative "−e q log q" calibration, the upper bound on Pr(H0) is 26.9% for small p and increases only slightly for larger values of p. This illustrates that the misinterpretation FPR = p only holds if the prior probability of H0 is substantially smaller than 50%, an assumption which is questionable in the absence of strong external knowledge.

The Reverse-Bayes methods described above have focused on the comparison of the prior needed for credibility with findings from other studies and/or more general insights. However, replication studies provide an obvious additional source of external evidence, as these are typically conducted to confirm original findings by repeating their experiments as closely as possible. The question is then whether the original findings have been successfully "replicated", currently of considerable concern to the research community. To date, there remains no consensus on the precise meaning of replication in a statistical sense. The proposal of Held (2020) (see also Held et al., 2021) was to challenge the original finding using AnCred, as described in Section 2.1, and then evaluate the plausibility of the resulting prior using a prior-predictive check on the data from a replication study. A similar procedure but using AnCred based on Bayes factors as in Section 3 was proposed in Pawel and Held (2020). Reverse-Bayes inference seems to fit naturally into this setting as it provides a formal framework to challenge and substantiate scientific findings.

Apart from using data from a replication study, there are also other possible extensions of AnCred: we proposed either prior-predictive checks (Box, 1980; Evans and Moshonov, 2006) or Bayes factors (Jeffreys, 1961; Kass and Raftery, 1995) for the formal evaluation of the plausibility of the priors derived through Reverse-Bayes. Other methods could be used for this purpose, for example Bayesian measures of surprise (Bayarri and Morales, 2003). Furthermore, AnCred in its current state is derived assuming a normal likelihood for the effect estimate θ̂. This is the same framework as in standard meta-analysis and provides a good approximation for studies with reasonable sample size (Carlin, 1992). For the comparison of binomial outcomes with small counts, the normal approximation of the log odds ratio could be improved with a Yates continuity correction (Spiegelhalter, 2004, Sec. 2.4.1) or replaced with the exact profile likelihood of the log odds ratio (Held and Sabanés Bové, 2020, Sec. 5.3). Likewise, more robust prior distributions could be considered, such as double-exponential or Student t-distributions (Pericchi and Smith, 1992). For example, Fúquene et al. (2009) investigate the use of robust priors in an application to binomial data from a randomized clinical trial.

The inferential advantages of Bayesian methods are increasingly recognised within the statistical community. However, among the majority of working researchers they have failed to make any serious headway, and retain a reputation for being complex and "controversial". We have outlined how an idea that began with Jack Good's proposal for resolving the "Problem of priors" over 70 years ago (Good, 1950) has experienced a renaissance over recent years.
The basic idea is to invert Bayes' theorem: a specified posterior is combined with the data to obtain the Reverse-Bayes prior, which is then used for further inference. This approach is useful in situations where it is difficult to decide what constitutes a reasonable prior, but easy to specify the posterior which would lead to a particular decision. A subsequent prior-to-data conversion (Greenland, 2006) helps to assess the weight of the Reverse-Bayes prior in relation to the actual data. We have shown that the Reverse-Bayes methodology is useful to extract more insights from the results typically reported in a meta-analysis. It facilitates the computation of prior-predictive checks for conflict diagnostics (Presanis et al., 2013) and has been shown capable of addressing many common inferential challenges, including assessing the credibility of scientific findings (Spiegelhalter, 2004; Greenland, 2011), making sense of "out of the blue" discoveries with no prior support (Matthews, 2018; Held, 2019a), estimating the probability of successful replications (Held, 2019a), and extracting more insight from standard p-values while reducing the risk of misinterpretation (Held, 2013; Colquhoun, 2017, 2019). The appeal of Reverse-Bayes techniques has recently been widened by the development of inferential methods using both posterior probabilities and Bayes factors (Carlin and Louis, 1996; Pawel and Held, 2020). These developments come at a crucial time for the role of statistical methods in research. Despite the many serious, and now well-publicised, inadequacies of NHST (Wasserstein and Lazar, 2016), the research community has shown itself to be remarkably reluctant to abandon NHST. Techniques based on the Reverse-Bayes methodology of the kind described in this review could encourage the wider use of Bayesian inference by researchers. As such, we believe they can play a key role in the scientific enterprise of the 21st century.

All analyses were performed in the R programming language version 3.6.3 (R Core Team, 2017). Data and code to reproduce all analyses are available at https://gitlab.uzh.ch/samuel.pawel/Reverse-Bayes-Code.

References

Bayesian measures of surprise for outlier detection
Redefine statistical significance
Testing a point null hypothesis: Irreconcilability of P values and evidence (with discussion)
Sampling and Bayes' Inference in Scientific Modelling and Robustness (with discussion)
Identifying Prior Distributions That Produce Specific Decisions, With Application to Monitoring Clinical Trials
Meta-analysis for 2 × 2 tables: A Bayesian approach
The reproducibility of research and the misinterpretation of p-values
The False Positive Risk: A Proposal Concerning What to Do About p-Values
The Handbook of Research Synthesis and Meta-Analysis. Russell Sage Foundation
Local model uncertainty and incomplete-data bias (with discussion)
On the Lambert W function
Principles of Statistical Inference
Effect of Hydrocortisone on 21-Day Mortality or Respiratory Support Among Critically Ill Patients With COVID-19
Checking consistency in mixed treatment comparison meta-analysis
Bayesian Statistical Inference in Psychological Research
Checking for prior-data conflict
Fast and accurate Bayesian model criticism and conflict diagnostics using R-INLA
A case for robust Bayesian priors with applications to clinical trials
The statistical crisis in science
Probability and the Weighing of Evidence
Significance Tests in Parallel and in Series
Good Thinking: The Foundations of Probability and Its Applications
Bayesian computation: a summary of the current state, and samples backwards and forwards
Bayesian perspectives for epidemiological research: I. Foundations and basic methods
Null misinterpretation in statistical testing and its impact on health risk assessment
Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations
Reverse-Bayes analysis of two common misinterpretations of significance tests
The assessment of intrinsic credibility and a new argument for p < 0.005
On the Bayesian interpretation of the harmonic mean p-value
A new standard for the analysis and design of replication studies (with discussion)
The assessment of replication success based on relative effect size
On p-Values and Bayes Factors
Likelihood and Bayesian Inference - With Applications in Biology and Medicine
Probability Theory: The Logic of Science
Theory of Probability
On the use of non-local prior densities in Bayesian hypothesis tests
Bayes factors
An Alternative to Null-Hypothesis Significance Tests
Methods for assessing the credibility of clinical trial outcomes
Why should clinicians care about Bayesian methods? (with discussion)
The ASA's p-value statement, one year on
Beyond 'significance': principles and practice of the Analysis of Credibility
Statistical Rethinking
The Theory That Would Not Die
Kendall's Advanced Theory of Statistics, Vol. 2B
The sceptical Bayes factor for the assessment of replication success
Exact and Approximate Posterior Moments for a Normal Location Parameter
Conflict Diagnostics in Directed Acyclic Graphs, with Applications in Bayesian Evidence Synthesis
R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing
Dexamethasone in Hospitalized Patients with Covid-19 - Preliminary Report
Effect of Hydrocortisone on Mortality and Organ Support in Patients With Severe COVID-19
On the Jeffreys-Lindley Paradox
The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis
The file drawer problem and tolerance for null results
Calibration of p Values for Testing Precise Null Hypotheses
Incorporating Bayesian Ideas into Health-Care Evaluation
Conducting Meta-Analyses in R with the metafor Package
Bayesian Versus Frequentist Inference
The ASA's Statement on p-Values: Context, Process, and Purpose
Moving to a World Beyond "p < 0.05"
Association Between Administration of Systemic Corticosteroids and Mortality Among Critically Ill Patients With COVID-19: A Meta-analysis

Acknowledgments

Support by the Swiss National Science Foundation (Project # 189295) is gratefully acknowledged. We are grateful to Sander Greenland for helpful comments on a previous version of this article.
Reverse-Bayes methodology enables researchers to extract new insights from summary measures, to assess the credibility of scientific findings and to reduce the risk of misinterpretation.

A Proof of equation (11)

Suppose that the estimate θ̂ is not significant at level α, so z²/z²_{α/2} < 1. With U, L = θ̂ ± z_{α/2} σ we have U + L = 2 θ̂, U L = θ̂² − z²_{α/2} σ² and U − L = 2 z_{α/2} σ. Since the α/2-quantile of the advocacy prior is fixed to zero and its (1 − α/2)-quantile equals AL, the prior mean is µ = AL/2. We therefore obtain with (10):

    m = µ/θ̂ = AL/(U + L) = −(U − L)²/(2 U L) = −4 z²_{α/2} σ² / [2 (θ̂² − z²_{α/2} σ²)] = 2 / (1 − z²/z²_{α/2}),

which is (11). The advocacy standard deviation is τ = AL/(2 z_{α/2}) = µ/z_{α/2} and the coefficient of variation is therefore CV = τ/µ = 1/z_{α/2}.
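A quick numerical check of (10) and (11) in R (hedged, since both equations were reconstructed above), using the REMAP-CAP interval from Figure 1:

    # Advocacy prior for REMAP-CAP from its 95% confidence interval
    L <- -0.96; U <- 0.29
    z_a <- qnorm(0.975)
    AL <- -(U + L) * (U - L)^2 / (2 * U * L)  # eq. (10): approx. -1.88
    theta_hat <- (U + L) / 2
    sigma <- (U - L) / (2 * z_a)
    z <- theta_hat / sigma
    m <- 2 / (1 - z^2 / z_a^2)                # eq. (11): approx. 2.8
    mu <- m * theta_hat                       # approx. -0.94, as in the text
    tau <- abs(mu) / z_a                      # approx. 0.48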