key: cord-0212125-ikae2v6i authors: Martin, Gael M.; Loaiza-Maya, Rub'en; Frazier, David T.; Maneesoonthorn, Worapree; Hassan, Andr'es Ram'irez title: Optimal probabilistic forecasts: When do they work? date: 2020-09-21 journal: nan DOI: nan sha: 9d00a3c90ee99b429d40b3eeebd3f90cb7dfee09 doc_id: 212125 cord_uid: ikae2v6i Proper scoring rules are used to assess the out-of-sample accuracy of probabilistic forecasts, with different scoring rules rewarding distinct aspects of forecast performance. Herein, we re-investigate the practice of using proper scoring rules to produce probabilistic forecasts that are `optimal' according to a given score, and assess when their out-of-sample accuracy is superior to alternative forecasts, according to that score. Particular attention is paid to relative predictive performance under misspecification of the predictive model. Using numerical illustrations, we document several novel findings within this paradigm that highlight the important interplay between the true data generating process, the assumed predictive model and the scoring rule. Notably, we show that only when a predictive model is sufficiently compatible with the true process to allow a particular score criterion to reward what it is designed to reward, will this approach to forecasting reap benefits. Subject to this compatibility however, the superiority of the optimal forecast will be greater, the greater is the degree of misspecification. We explore these issues under a range of different scenarios, and using both artificially simulated and empirical data. forecasts that have theoretically equivalent out-of-sample performance according to any proper score, with numerical differences reflecting sampling variation only. That is, all such methods are coherent in the sense that, in the limit, no one forecast is out-performed by another. However, the concept of coherence really has most import in the empirically relevant case where a predictive model is misspecified. In this setting, one cannot presume that estimating the parameters of a predictive model by optimizing any proper criterion will reveal the true predictive model. Instead, one is forced to confront the fact that no such 'true model' will be revealed, and that the criterion should be defined by a score that rewards the type of predictive accuracy that matters for the problem at hand. It is in this misspecified setting that we would hope to see strict coherence on display; providing justification as it would for simply producing a forecast via the scoring rule that is pertinent to the problem at hand, and leaving matters at that. 1 The concept of 'coherence' is distinct from the concept of 'consistency' that is used in some of the literature cited above (e.g. Gneiting, 2011a , Holzmann and Eulert, 2014 , Ehm et al., 2016 , and Patton, 2019 . As pointed out by Patton, in the probabilistic forecasting setting a 'consistent' scoring function is analogous to a 'proper' scoring rule, which is 'consistent' for the true forecast distribution in the sense of being maximized (for positively-oriented scores) at that distribution. We restrict our attention only to proper (or 'consistent') scores. Within that set of scores, we then document when optimizing according to any one proper score produces out-of-sample performance -according to that score -that is superior to that of predictions deduced by optimizing alternative scores, and when it does not; i.e. 
when strict coherence between in-sample estimation and out-ofsample performance is in evidence and when it is not. What we illustrate is that the extent to which coherent forecasts arise in practice actually depends on the form, and degree of misspecification. First, if the interplay between the predictive model and the true data generating process is such that a particular score cannot reward the type of predictive accuracy it is designed to reward, then optimizing that model according to that score will not necessarily lead to a strictly coherent forecast. Second, if a misspecified model is sufficiently 'compatible' with the process generating the data, in so much as it allows a particular score criterion to reward what it was designed to, strict coherence will indeed result; with the superiority of the optimal forecast being more marked, the greater the degree of misspecification, subject to this basic compatibility. We demonstrate all such behaviours in the context of both probabilistic forecasts based on a single parametric model, and forecasts produced by a linear combination of predictive distributions. In the first case optimization is performed with respect to the parameters of the assumed model; in the second case optimization is with respect to both the weights of the linear combination and the parameters of the constituent predictives. To reflect our focus on model misspecification, at no point do we assume that the true model is spanned by the linear pool; that is, we adopt the so-called M-open view of the world (Bernardo and Smith 1994) . Our results con-tribute to the active literature on frequentist estimation of linear predictive combinations via predictive criteria (e.g. Hall and Mitchell, 2007 , Ranjan and Gneiting, 2010 , Clements and Harvey, 2011 , Geweke and Amisano, 2011 , Gneiting and Ranjan, 2013 , Kapetanios et al., 2015 , Opschoor et al., 2017 , Ganics, 2018 and Pauwels et al., 2020 . In particular, our results provide a possible explanation behind the often mixed out-of-sample performance of optimal weighting schemes. After introducing the concept of coherent predictions in Section 2, in Section 3 we conduct a set of simulation exercises, with a range of numerical and graphical results used to illustrate coherence (including strict incoherence) under various design scenarios. Attention is given to accurate prediction of extreme values of financial returns, by using -as the optimization criterion -a scoring rule that rewards accurate prediction in the tails of the predictive distribution. We provide a simple example that illustrates how easy it is to stumble upon a model that lacks sufficient compatibility with the true data generating process to prevent a nominally 'optimal' forecast from out-performing others in the manner expected. The illustration using linear pools highlights the fact that optimal pools can reap benefits relative to alternative approaches. However, the very use of a combination of predictives to provide a more flexible and, hence, less misspecified representation of the true model can in some cases mitigate against the benefits of optimization. Section 4 documents the results of an empirical exercise that focuses on accurate prediction of returns on the S&P500 index and the MSCI Emerging Market (MSCIEM) index. 
Once again we demonstrate that there are benefits in seeking a predictor that is optimal according to a particular scoring rule, with slightly more marked gains in evidence for the single predictive models than for the linear pool. The paper concludes in Section 5.

Let $(\Omega, \mathcal{F}, G_0)$ be a probability space, and let $Y_1^\infty := \{Y_1, \dots, Y_n, \dots\}$ be a sequence of random variables whose infinite-dimensional distribution is $G_0$. In general, $G_0$ is unknown, and so a hypothetical class of probability distributions is postulated for $G_0$. Let $\mathcal{P}$ be a convex class of probability distributions on $(\Omega, \mathcal{F})$ that represents our best approximation of $G_0$. Our goal is to analyze the ability of a distribution $P \in \mathcal{P}$ to generate accurate probabilistic forecasts.

The most common concept used to capture the accuracy of such forecasts is a scoring rule. A scoring rule is a function $S : \{\mathcal{P} \cup \{G_0\}\} \times \Omega \rightarrow \mathbb{R}$ whereby, if the forecaster quotes the distribution $P$ and the value $y$ eventuates, the reward (or 'score') is $S(P, y)$. As described earlier, a scoring rule rewards a forecast for assigning a high density ordinate (or high probability mass) to $y$, often subject to some shape, or sharpness, criterion, with higher scores denoting qualitatively better predictions than lower scores, assuming all scores are positively oriented. The result of a single score evaluation is, however, of little use by itself as a measure of predictive accuracy. To obtain a meaningful gauge of predictive accuracy, as measured by $S(\cdot, \cdot)$, we require some notion of regularity against which different predictions can be assessed. By far the most common such notion is the expected score: the expected score under the true measure $G_0$ of the probability forecast $P$ is given by
$$S(P, G_0) = \int_{y \in \Omega} S(P, y)\, dG_0(y).$$
A scoring rule $S(\cdot, \cdot)$ is 'proper' relative to $\mathcal{P}$ if, for all $P, G_0 \in \mathcal{P}$, $S(G_0, G_0) \geq S(P, G_0)$, and is strictly proper if $S(G_0, G_0) = S(P, G_0) \Leftrightarrow P = G_0$. That is, if the forecaster's best judgement is $G_0$, then a proper scoring rule rewards the forecaster for quoting $P = G_0$. The concept of a proper scoring rule is useful from a practical perspective since it guarantees that, if we knew that the true DGP was $G_0$, then according to the rule $S(\cdot, \cdot)$ the best forecast we could hope to obtain would, on average, result from choosing $G_0$. Note that this notion of 'average' is embedded in the very definition of a proper scoring rule, which itself relies on an expectation.

It is clear that, in practice, the expected score $S(\cdot, G_0)$ is unknown and cannot be calculated. However, if one believes that the true DGP is an element of $\mathcal{P}$, a sensible approach is to form an empirical version of $S(\cdot, G_0)$ and search $\mathcal{P}$ for the 'best' predictive over this class. More formally, for $\tau$ such that $T \geq \tau \geq 1$, let $\{y_t\}_{t=2}^{T-\tau}$ denote a series of size $T - (\tau + 1)$ over which we wish to search for the most accurate predictive, where $T$ is the total number of observations on $y_t$ and $\tau$ denotes the size of a hold-out sample. Assume that the class of models under analysis is indexed by a vector of unknown parameters $\theta \in \Theta \subset \mathbb{R}^{d_\theta}$, i.e., $\mathcal{P} \equiv \mathcal{P}(\Theta)$, where $\mathcal{P}(\Theta) := \{P_\theta : \theta \in \Theta\}$.
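Before formalizing the search over $\mathcal{P}(\Theta)$, the following minimal sketch illustrates how two commonly used positively-oriented proper scores, the log score and the continuous ranked probability score (CRPS), are evaluated for a Gaussian predictive and averaged over realized values. The standard-normal stand-in for $G_0$, the scipy-based implementation and all names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: sample averages of two proper scores (log score and CRPS) for a
# Gaussian predictive, as empirical analogues of the expected score S(P, G_0).
# The N(0,1) "true" DGP and all names are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def log_score(y, mu, sigma):
    """Positively-oriented log score: log predictive density at the realisation."""
    return norm.logpdf(y, loc=mu, scale=sigma)

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS for a N(mu, sigma^2) predictive, returned positively oriented."""
    z = (y - mu) / sigma
    crps = sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))
    return -crps  # negate so that higher = better, matching the paper's convention

rng = np.random.default_rng(0)
y = rng.standard_normal(5_000)             # stand-in draws from the (unknown) G_0
print(np.mean(log_score(y, 0.0, 1.0)))     # average log score of the N(0,1) predictive
print(np.mean(crps_gaussian(y, 0.0, 1.0))) # average CRPS of the same predictive
```

Each sample average approximates $S(P, G_0)$ for the quoted predictive $P$, and it is such sample analogues that are optimized in what follows.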
For $\mathcal{F}_{t-1}$ denoting the time $t-1$ information set, and for each $\theta \in \Theta$, we associate with the model $P_\theta$ the one-step-ahead predictive measure $P_\theta^{t-1}$, with density $p(y_t|\mathcal{F}_{t-1}, \theta)$, and define the score optimizer
$$\hat{\theta} := \arg\max_{\theta \in \Theta} S(\theta), \qquad (1)$$
where the criterion is the sample analogue of the expected score,
$$S(\theta) := \frac{1}{T - \tau - 1} \sum_{t=2}^{T-\tau} S\left(P_\theta^{t-1}, y_t\right). \qquad (2)$$
For two proper scoring rules $S_1$ and $S_2$, and for $\hat{\theta}_1$ and $\hat{\theta}_2$ the optimizers according to $S_1$, $S_2$, it should be the case that, for large enough $\tau$,
$$\frac{1}{\tau}\sum_{t=T-\tau+1}^{T} S_1\left(P_{\hat{\theta}_1}^{t-1}, y_t\right) \geq \frac{1}{\tau}\sum_{t=T-\tau+1}^{T} S_1\left(P_{\hat{\theta}_2}^{t-1}, y_t\right) \qquad (3)$$
and
$$\frac{1}{\tau}\sum_{t=T-\tau+1}^{T} S_2\left(P_{\hat{\theta}_2}^{t-1}, y_t\right) \geq \frac{1}{\tau}\sum_{t=T-\tau+1}^{T} S_2\left(P_{\hat{\theta}_1}^{t-1}, y_t\right). \qquad (4)$$
That is, coherent results are expected: the predictive that is optimal with respect to $S_1$ cannot be beaten out-of-sample (as assessed by that score) by a predictive that is optimal according to $S_2$, and vice-versa. As mentioned earlier, this definition of coherence subsumes the case where $G_0 \in \mathcal{P}$ and $\hat{\theta}_1$ and $\hat{\theta}_2$ are both consistent for the true (vector) value of $\theta$, $\theta_0$. Hence, in this special case, for $\tau \rightarrow \infty$, the expressions in (3) and (4) would collapse to equalities. What is of relevance empirically though, as already highlighted, is the case where $G_0 \notin \mathcal{P}$. Whether coherence holds in this setting depends on four things: the unknown true model, $G_0$; the assumed (but misspecified) model, $P_\theta$; and the two rules, $S_1$ and $S_2$, according to which we optimize to obtain predictive distributions. As we will illustrate with particular examples, it is this collection, $\{G_0, P_\theta, S_1, S_2\}$, that determines whether or not the above notion of coherence holds.

We begin this illustration with a series of simulation experiments in Section 3. We first specify a single predictive model $P_\theta$ to be, in order: correctly specified; misspecified, but suitably 'compatible' with $G_0$ to allow strict coherence to prevail; and misspecified in a way in which strict coherence does not hold; where by strict coherence we mean that strict inequalities hold in expressions like (3) and (4). The numerical results are presented in a variety of different ways, in order to shed light on this phenomenon of coherence and to help practitioners gain an appreciation of what they should be alert to. As part of this exercise, we make the link between our broadly descriptive analysis and the formal test of equal predictive ability of Giacomini and White (2006), in a manner to be described. Six different proper scoring rules are entertained, both in the production of the optimal predictions and in the out-of-sample evaluation. We then shift the focus to a linear predictive pool that does not span the true model and is, as a consequence, misspecified, documenting the nature of coherence in this context. In Section 4 the illustration, based on both single models and linear pools, proceeds with empirical returns data.

In this first set of simulation experiments the aim is to produce an optimal predictive distribution for a variable that possesses the stylized features of a financial return. With this in mind, we assume a predictive associated with an autoregressive conditional heteroscedastic model of order 1 (ARCH(1)) for the logarithmic return, $y_t$:
$$y_t = \theta_1 + \sigma_t \varepsilon_t, \qquad \sigma_t^2 = \theta_2 + \theta_3 (y_{t-1} - \theta_1)^2, \qquad \varepsilon_t \sim \text{i.i.d. } N(0, 1). \qquad (5)$$
Panels A, B and C of Table 1 then describe both the true data generating process (DGP) and the precise specification of the model in (5) for the three scenarios: correct specification in (i), and two different types of misspecification in (ii) and (iii). As is clear: in scenario (i), the assumed model matches the Gaussian ARCH(1) model that has generated the data; in scenario (ii), in which a generalized ARCH (GARCH) model with Student-t innovations generates $y_t$, it does not; whilst in scenario (iii) there is not only misspecification of the assumed model, but the marginal mean in that model is held fixed at zero.
Thus, in the third case, the predictive of the assumed model is unable to shift location and, hence, to 'move' to accommodate extreme observations in either tail. The consequences of this, in terms of relative predictive accuracy, are highlighted below.

The predictive $P_\theta^{t-1}$, with density $p(y_t|\mathcal{F}_{t-1}, \theta)$, is associated with the assumed Gaussian ARCH(1) model, where $\theta = (\theta_1, \theta_2, \theta_3)'$. We estimate $\theta$ as in (1) using the following three types of scoring rules:
$$S^{\text{LS}}\left(P_\theta^{t-1}, y_t\right) = \log p(y_t|\mathcal{F}_{t-1}, \theta), \qquad (6)$$
$$S^{\text{CRPS}}\left(P_\theta^{t-1}, y_t\right) = -\int_{-\infty}^{\infty} \left[P(z|\mathcal{F}_{t-1}, \theta) - I(y_t \leq z)\right]^2 dz, \qquad (7)$$
$$S^{\text{CLS}}\left(P_\theta^{t-1}, y_t\right) = I(y_t \in A) \log p(y_t|\mathcal{F}_{t-1}, \theta) + I(y_t \in A^c) \log \int_{A^c} p(y|\mathcal{F}_{t-1}, \theta)\, dy, \qquad (8)$$
for $I(y \in A)$ the indicator on the event $y \in A$, and where $P(\cdot|\mathcal{F}_{t-1}, \theta)$ in (7) denotes the predictive cumulative distribution function associated with $p(\cdot|\mathcal{F}_{t-1}, \theta)$. Use of the log-score (LS) in (6) yields the average log-likelihood function as the criterion in (2) and, under correct specification and appropriate regularity, the asymptotically efficient estimator of $\theta_0$. The score in (8) is the censored likelihood score (CLS) of Diks et al. (2011). This score rewards predictive accuracy over any region of interest $A$ ($A^c$ denoting the complement of this region). We report results for $A$ defining the lower and upper tails of the predictive distribution, as determined in turn by the 10%, 20%, 80% and 90% percentiles of the empirical distribution of $y_t$. The results based on the use of (8) in (2) are labelled hereafter as CLS 10%, CLS 20%, CLS 80% and CLS 90%. The continuous ranked probability score (CRPS) in (7) is sensitive to distance, and rewards the assignment of high predictive mass near to the realized value of $y_t$, rather than just at that value, as in the case of the log-score. It can be evaluated in closed form for the (conditionally) Gaussian predictive model assumed under all three scenarios described in Table 1. Similarly, in the case of the CLS in (8), all components, including the integral $\int_{A^c} p(y|\mathcal{F}_{t-1}, \theta)\, dy$, have closed-form representations for the Gaussian predictive model. Note that all scores are positively oriented; hence, higher values indicate greater predictive accuracy.

For each of the Monte Carlo designs, we conduct the following steps:
1. Generate $T$ observations of $y_t$ from the true DGP;
2. Use observations $t = 1, \dots, 1{,}000$ to compute $\hat{\theta}^{[i]}$ as in (1), for $S_i$, $i \in \{$LS, CRPS, CLS 10%, CLS 20%, CLS 80%, CLS 90%$\}$;
3. Construct the one-step-ahead predictive $P_{\hat{\theta}^{[i]}}^{t-1}$, and compute the score $S_j\big(P_{\hat{\theta}^{[i]}}^{t-1}, y_t\big)$, based on the 'observed' value $y_t$, using $S_j$, $j \in \{$LS, CRPS, CLS 10%, CLS 20%, CLS 80%, CLS 90%$\}$;
4. Expand the estimation sample by one observation and repeat Steps 2 and 3, retaining the notation $\hat{\theta}^{[i]}$ for the $S_i$-based estimator of $\theta$ constructed from each expanding sample. Do this $\tau = T - 1{,}000$ times, and compute the average out-of-sample score
$$S_j\big(\hat{\theta}^{[i]}\big) := \frac{1}{\tau} \sum_{t=T-\tau+1}^{T} S_j\big(P_{\hat{\theta}^{[i]}}^{t-1}, y_t\big)$$
for each $(i, j)$ combination.

The results are tabulated and discussed in Section 3.2. In Table 2, results are recorded for both $\tau = 5{,}000$ and $\tau = 10{,}000$, and under correct specification of the predictive model. The large value of $\tau = 5{,}000$ is adopted in order to minimize the effect of sampling error on the results; the even larger value of $\tau = 10{,}000$ is then adopted as a check that $\tau = 5{,}000$ is sufficiently large to be used in all subsequent experiments. Rows in the table correspond to the $i$th optimizing criterion (with $\hat{\theta}^{[i]}$ the corresponding 'optimizer'), and columns to the results based on the $j$th out-of-sample score, $S_j$. Using the definition of coherence in Section 2.2, and given the correct specification, we would expect any given diagonal element to be equivalent to all values in the column in which it appears, at least up to sampling error. (A code-level sketch of the expanding-window exercise in Steps 1 to 4 follows.)
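The sketch below illustrates Steps 1 to 4 for a single optimizer, CLS 10%, under Scenario (ii). The GARCH(1,1)-t simulator, the parameter values, sample sizes and all function names are illustrative assumptions rather than the authors' code; the closed-form CLS components for the Gaussian ARCH(1) predictive follow (8).

```python
# Sketch of the expanding-window exercise: estimate the Gaussian ARCH(1) predictive by
# maximising the CLS 10% criterion, then score it one step ahead and expand the sample.
# The GARCH(1,1)-t(3) "true" DGP, sample sizes and names are illustrative assumptions.
import numpy as np
from scipy.stats import norm, t
from scipy.optimize import minimize

def arch1_sigma(y_lag, theta):                        # theta = (theta1, theta2, theta3)
    return np.sqrt(theta[1] + theta[2] * (y_lag - theta[0]) ** 2)

def cls_lower_tail(theta, y, q):
    """Average censored likelihood score with A = (-inf, q] for the ARCH(1) predictive."""
    mu, sig = theta[0], arch1_sigma(y[:-1], theta)
    in_A = y[1:] <= q
    log_tail_mass = norm.logsf(q, loc=mu, scale=sig)  # log P(A^c | F_{t-1})
    return np.mean(np.where(in_A, norm.logpdf(y[1:], mu, sig), log_tail_mass))

def fit(score_fn, y, *args):
    """Maximise the sample score criterion (cf. (1)-(2)) over theta."""
    obj = lambda th: -score_fn(th, y, *args)
    return minimize(obj, x0=np.array([0.0, 0.5, 0.2]),
                    bounds=[(-1.0, 1.0), (1e-4, None), (0.0, 0.999)]).x

# stand-in 'true' DGP: GARCH(1,1) with standardised Student-t(3) errors (Scenario (ii))
rng = np.random.default_rng(1)
T = 1_500
y, s2 = np.zeros(T), 1.0
eps = t.rvs(df=3, size=T, random_state=rng) / np.sqrt(3.0)
for s in range(1, T):
    s2 = 0.05 + 0.10 * y[s - 1] ** 2 + 0.85 * s2
    y[s] = np.sqrt(s2) * eps[s]

q10 = np.quantile(y[:1_000], 0.10)                    # empirical 10% percentile
scores = []
for n in range(1_000, T - 1):                         # expanding estimation sample
    theta_hat = fit(cls_lower_tail, y[:n + 1], q10)   # Step 2: optimise CLS 10%
    mu, sig = theta_hat[0], arch1_sigma(y[n], theta_hat)
    in_A = y[n + 1] <= q10                            # Step 3: one-step-ahead CLS 10%
    scores.append(norm.logpdf(y[n + 1], mu, sig) if in_A
                  else norm.logsf(q10, loc=mu, scale=sig))
print(np.mean(scores))                                # Step 4: average out-of-sample CLS 10%
```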
As is clear, in Panel A of Table 2 the results based on $\tau = 5{,}000$ essentially bear out this expectation; in Panel B, in which $\tau = 10{,}000$, all but three results display the requisite equivalence, to two decimal places.

In Tables 3 and 4 we record results for the misspecified designs. Given the close correspondence between the $\tau = 5{,}000$ and $\tau = 10{,}000$ results in the correct specification case, we now record results based on $\tau = 5{,}000$ only. In Table 3, the degrees of freedom in the Student-t innovation of the true DGP moves from being very low ($\nu = 3$ in Panel A) to high ($\nu = 30$ in Panel C), thereby producing a spectrum of misspecification, at least in terms of the distributional form of the innovations, from very severe to less severe; and the results on relative out-of-sample accuracy change accordingly. In Panel A, a strict form of coherence is in evidence: each diagonal value exceeds all other values in its column (and is highlighted in bold accordingly). As $\nu$ increases, the diagonal values remain bold, although there is less difference between the numbers in any particular column. Hence, in this case, the advice to a practitioner would certainly be to optimize the score criterion that is relevant. In particular, given the importance of accurate estimation of extreme returns, the edict would indeed be: produce a predictive based on an estimate of $\theta$ that is optimal in terms of the relevant CLS-based criterion. Given the chosen predictive model, no other estimate of this model will produce better predictive accuracy in the relevant tail, and this specific estimate may well yield quite markedly superior results to any other choice, depending on the fatness of the tails in the true DGP.

In Table 4, however, the results tell a very different story. In particular, in Panel A, despite $\nu$ being very low, and with one exception (the optimizer based on CRPS), the predictive based on any given optimizer is never superior out-of-sample according to that same score criterion; i.e. the largest values in each column do not lie uniformly on the main diagonal. A similar comment applies to the results in Panels B and C. In other words, the assumed predictive model, in which the marginal mean is held fixed, is not flexible enough to allow any particular scoring rule to produce a point estimator that delivers good out-of-sample performance according to that rule. For example, the value of $\theta$ that optimizes the criterion in (2) based on CLS 10% does not correspond to an estimated predictive that gives a high score to extremely low values of $y_t$, as the predictive model cannot shift location and thereby assign high density ordinates to these values. The assumed model is, in this sense, incompatible with the true DGP, which will sometimes produce very low values of $y_t$.

Table 2: Average out-of-sample scores under a correctly specified Gaussian ARCH(1) model (Scenario (i) in Table 1). Panel A (B) reports the average scores based on $\tau = 5{,}000$ ($\tau = 10{,}000$) out-of-sample values; rows refer to the optimizer used, columns to the out-of-sample measure used to compute the average scores, and figures in bold are the largest average scores according to a given out-of-sample measure.

Table 3: Average out-of-sample scores under the misspecified Gaussian ARCH(1) model (Scenario (ii) in Table 1). All results are based on $\tau = 5{,}000$ out-of-sample values. Panels A, B and C, respectively, report the average scores when the true DGP is GARCH(1,1) with $t_{\nu=3}$, $t_{\nu=10}$ and $t_{\nu=30}$ errors; rows, columns and bold figures are defined as in Table 2.

Table 4: Average out-of-sample scores under the misspecified Gaussian ARCH(1) model with the marginal mean fixed at zero (Scenario (iii) in Table 1). All results are based on $\tau = 5{,}000$ out-of-sample values; panels, rows, columns and bold figures are defined as in Table 3.

We now provide further insights into the results in Tables 2-4, including the lack of strict coherence in Table 4, by outlining some useful approaches for visualizing strict coherence, and its absence. Reiterating: under correct specification of the predictive model, in the limit all predictives optimized according to criteria based on proper scoring rules will yield equivalent predictive performance out-of-sample. In contrast, under misspecification we expect that each score criterion will yield, in principle, a distinct optimizing predictive and, hence, that out-of-sample performance will differ across predictives, with an optimizing predictive expected to beat all others in terms of that criterion. Therefore, a lack of evidence in favour of strict coherence, in the presence of misspecification, implies that the conjunction of the model and scoring rule is unable to produce sufficiently distinct optimizers to, in turn, yield distinct out-of-sample performance.

It is possible to shed light on this phenomenon by considering the limiting behavior of the optimizers for the various scoring rules, across different model specification regimes (reflecting the scenarios given in Table 1). To this end, define
$$g_t(\theta^*) := \left.\frac{\partial S\big(P_\theta^{t-1}, y_t\big)}{\partial \theta}\right|_{\theta = \theta^*},$$
where $\theta^*$ denotes the maximum of the limiting criterion function to which $S(\theta)$ in (2) converges as $T$ diverges. Under regularity, the following limiting distribution holds:
$$\sqrt{T}\big(\hat{\theta} - \theta^*\big) \xrightarrow{d} N(0, V^*), \qquad V^* := H(\theta^*)^{-1} I(\theta^*) H(\theta^*)^{-1}, \qquad (10)$$
where $H(\theta^*)$ is the probability limit of the Hessian of $S(\theta)$ and $I(\theta^*)$ is the long-run variance of $T^{-1/2}\sum_t g_t(\theta^*)$.

Under correct specification, and for criteria defined by proper scoring rules, we expect that $\theta^* = \theta_0$ for all versions of $S(\theta)$. Given the efficiency of the maximum likelihood estimator in this scenario, we would expect the sampling distribution of the optimizer associated with the log-score to be more tightly concentrated around $\theta^*$ than those of the optimizers associated with the other rules. However, since all optimizers would be concentrating towards the same value, this difference would abate and ultimately lead to scoring performances that are quite similar; i.e., a form of strict coherence would not be in evidence, as is consistent with the results in Table 2. In contrast, under misspecification we expect that $\theta^* \neq \theta_0$, with different optimizers consistent for different values of $\theta^*$. While the sampling distributions of the different optimizers may differ substantially from each other, thereby leading to a form of strict coherence as in Table 3, this is not guaranteed to occur. Indeed, it remains entirely possible that the resulting optimizers, while distinct, have sampling distributions that are quite similar, even for very large values of $T$.
2 In this case, the sampling distribution of the out-of-sample "optimized" j-th scoring rule S j ( θ [i] ), evaluated at the i-th optimizer, will not vary significantly with i, and strict coherence will likely not be in evidence, even for large sample sizes, even though the model is misspecified (and the limit optimizers unique). This behavior can be illustrated graphically by simulating and analyzing (an approximation to) the sampling distribution of S j ( θ [i] ). We begin by generating T = 10, 000 observations from the three 'true' DGPs in Table 1 , and producing predictions from the corresponding assumed predictive in each of the three scenarios: (i) to (iii). Using the simulated observations, and for each scenario, we compute θ [i] in (1) . That is, we are interested in the density of the jth sample score criterion evaluated at the ith optimizer, where f (s j j ) denotes the density of the jth score evaluated at its own optimizer. To approximate this density we first simulate { θ [i] m } M m=1 from the corresponding sampling distribution of θ [i] : N( θ [i] , V * /T ), where V * is the usual finite sample estimator of V * in (10). 3 Given the simulated draws { θ m ) for m = 1, . . . , M. Under coherence, we do not expect any (estimated) density, f (s i j ), for i = j, to be located to the right of the (estimated) scorespecific density, f (s j j ) as, with positively-oriented scores, this would reflect an inferior performance of the optimal predictive. Under strict coherence, we expect f (s j j ) to lie to the right of all other densities, and for there to be little overlap in probability mass between f (s j j ) and any other density. The results are given in Figure 1 . In the name of brevity, we focus on Panels B and C of Figure 1 , which correspond respectively to Panels B and C of Table 1 Table 1 . In this case, the impact of the misspecification is stark. The score-specific (i = j) density in each case is far to the right of the densities based on the other optimizers, and markedly more concentrated. In Panels B.2 and B.3 we see that optimizing accordingly to some sort of left-tail criterion, even if not that which matches the criterion used to measure out-of-sample performance, produces densities that are further to the right than those based on the log-score optimizer. 4 In contrast, we note in Panel B.1 that when the log-score itself is the out-of-sample criterion of interest, it is preferable to use an optimizer that focuses on a larger part of the support (either θ [i] , i ∈ {CLS 20%} or θ [i] , i ∈ {CLS 80%}), rather than one that focuses on the more extreme tails. Moreover, due to the symmetry of both the true DGP and the assumed model, it makes no difference (in terms of performance in log-score) which tail optimizer (upper or lower) is used. Panels C.1 to C.3 correspond to Scenario (iii) in Panel C of Table 1 , with ν = 3 for the true DGP, and with predictions produced via the misspecified Gaussian ARCH(1) model with the marginal mean fixed at zero. The assumed model thus has no flexibility to shift location; this feature clearly limiting the ability of the estimated predictive to assign higher weight to the relevant part of the support when the realized out-of-sample value demands it. As a consequence, there is no measurable gain in using an optimizer that fits with the out-of-sample measure. 
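The device just described can be summarized compactly: draw parameter values from the estimated asymptotic distribution of a given optimizer, re-score the hold-out sample at each draw, and smooth the resulting scores into a density. The sketch below is illustrative only; the function and variable names, the multivariate-normal approximation with covariance $\hat{V}^*/T$, and the kernel-density step are assumptions consistent with the description above, not the authors' code.

```python
# Sketch of the device behind Figure 1: approximate the sampling density of the
# out-of-sample average score S_j evaluated at the optimiser theta_hat[i], by drawing
# theta_m ~ N(theta_hat[i], V_hat/T) and re-scoring the hold-out sample at each draw.
# V_hat, the score function and all names are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

def score_density(theta_hat, V_hat, T, score_j, y_holdout, M=1_000, seed=0):
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(theta_hat, V_hat / T, size=M)  # theta_m, m = 1..M
    s = np.array([score_j(th, y_holdout) for th in draws])         # s_j^i for each draw
    return gaussian_kde(s)                                         # estimate of f(s_j^i)

# usage (names hypothetical): compare f(s_j^j) with f(s_j^i) for i != j; under strict
# coherence the score-specific density should sit to the right of the others, e.g.
#   kde_own   = score_density(theta_hat_cls10, V_cls10, T, avg_cls10, y_out)
#   kde_other = score_density(theta_hat_ls,    V_ls,    T, avg_cls10, y_out)
```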
These observations are consistent with the marked similarity of the scores, within any given column, documented above. More generally, modifications to model specification may have a significant impact in terms of the occurrence, or otherwise, of strictly coherent predictions.

The distinction between coherence and strict coherence can be couched in terms of the distinction between the null hypothesis that two predictives, one 'optimal' and one not, have equal expected performance, and the alternative hypothesis that the optimal predictive has superior expected performance. The test of equal predictive ability of (any) two predictives was a focus of Giacomini and White (2006) (GW hereafter; see also the related references Diebold and Mariano, 1995, Hansen, 2005, and Corradi and Swanson, 2006); hence, accessing the asymptotic distribution of their test statistic enables us to shed some light on coherence. Specifically, what we do is solve the GW test decision rule for the (out-of-sample) sample size required to yield strict coherence, under misspecification. This enables us to gauge how large the sample size must be to differentiate between an optimal and a non-optimal prediction, in any particular misspecified scenario. In terms of the illustration in the previous section, this is equivalent to gauging how large the sample size needs to be to enable the relevant score-specific density in each figure in Panels B and C of Figure 1 to lie to the right of the others.

For $\Delta_t^{ji} := S_j\big(P_{\hat{\theta}^{[j]}}^{t-1}, y_t\big) - S_j\big(P_{\hat{\theta}^{[i]}}^{t-1}, y_t\big)$, define the average score differential $\Delta_\tau^{ji} := \frac{1}{\tau}\sum_{t=T-\tau+1}^{T} \Delta_t^{ji}$, where the subscript $\tau$ is used to make explicit the number of out-of-sample evaluations used to compute the difference in the two average scores. The GW test of equal predictive ability is based on the statistic $\tau\, (\Delta_\tau^{ji})^2 / \text{var}_\tau(\Delta_t^{ji})$, which has a limiting $\chi^2_{(1)}$ distribution under the null, where $\text{var}_\tau(\Delta_t^{ji})$ denotes the sample variance of $\Delta_t^{ji}$ computed over the evaluation period of size $\tau$. Hence, at the $\alpha \times 100\%$ significance level, the null will be rejected when, for given values of $(\Delta_\tau^{ji})^2$ and $\text{var}_\tau(\Delta_t^{ji})$,
$$\tau > \frac{\chi^2_{(1)}(1-\alpha)\, \text{var}_\tau(\Delta_t^{ji})}{(\Delta_\tau^{ji})^2}, \qquad (11)$$
where $\chi^2_{(1)}(1-\alpha)$ denotes the relevant critical value of the limiting $\chi^2_{(1)}$ distribution of the test statistic. The right-hand side of the inequality in (11), from now on denoted by $\tau^*$, indicates the minimum number of out-of-sample evaluations associated with detection of a significant difference between $S_j(\hat{\theta}^{[j]})$ and $S_j(\hat{\theta}^{[i]})$. For the purpose of this exercise, if $\Delta_\tau^{ji} < 0$, we set $\tau^* = \tau$, as no value of $\tau^*$ will induce rejection of the null hypothesis in favour of strict coherence, which is the outcome we are interested in.

The value of $\tau^*$ thus depends, for any given $\alpha$, on the relative magnitudes of the sample quantities $\text{var}_\tau(\Delta_t^{ji})$ and $(\Delta_\tau^{ji})^2$. At a heuristic level, if $(\Delta_\tau^{ji})^2$ and $\text{var}_\tau(\Delta_t^{ji})$ converge in probability to constants $c_1$ and $c_2$, at rates that are some function of $\tau$, then we are interested in plotting $\tau^*$ as a function of $\tau$, and discerning when (if) $\tau^*$ begins to stabilize at a particular value. It is this value that then serves as a measure of the 'ease' with which strict coherence is in evidence in any particular example.

In Figures 2 to 4 we plot $\tau^*$ as a function of $\tau$, for $\tau = 1, 2, \dots, 5{,}000$, and $\alpha = 0.05$, for the misspecification scenarios (ii) and (iii) in Table 1. In all figures, the diagonal panels simply plot a 45-degree line, as these plots correspond to the case where $j = i$ and $\Delta_\tau^{ji} = 0$ by construction. Again, for the purpose of the exercise, if $\Delta_\tau^{ji} < 0$ we set $\tau^* = \tau$, as no value of $\tau^*$ will induce support of strict coherence. Moreover, whenever $\Delta_\tau^{ji} > 0$ but $\tau^* > \tau$, we also set $\tau^* = \tau$. (A sketch of this $\tau^*$ calculation, including the capping at $\tau$, is given in code below.)
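The following minimal sketch computes $\tau^*$ from the GW decision rule in (11); the per-period score differences $\Delta_t^{ji}$ are taken as given, the capping conventions mirror those just described, and the function name and default $\alpha$ are illustrative assumptions.

```python
# Sketch of the tau* calculation implied by (11): the minimum number of out-of-sample
# evaluations needed for the GW test to reject equal predictive ability in favour of
# the 'optimal' predictive, with the capping conventions described in the text.
import numpy as np
from scipy.stats import chi2

def tau_star(delta, alpha=0.05):
    """delta: array of per-period score differences Delta_t^{ji} over tau evaluations."""
    tau = delta.size
    mean_d, var_d = delta.mean(), delta.var(ddof=1)
    if mean_d <= 0:                                         # optimal predictive not ahead
        return tau                                          # cap at tau
    ts = chi2.ppf(1 - alpha, df=1) * var_d / mean_d ** 2    # right-hand side of (11)
    return min(ts, tau)                                     # cap arbitrarily large values

# plotted against tau = 1,...,5000, a stabilising tau_star signals how 'easily'
# strict coherence can be detected for a given (model, score) pair.
```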
This allows us to avoid arbitrarily large values of τ * that cannot be easily visualized. These latter two cases are thus also associated with 45% lines. Figures 2 and 3 report results for Scenario (ii) with ν = 3 and ν = 30 respectively, whilst Figure 4 presents the results for Scenario (iii) with ν = 3. In each figure, sub-panels A.1 to A.3 record results for j ∈ {LS}, and i ∈ {LS, CLS 10% and CLS 90%}. Sub-panels B.1 to B.3 record the corresponding results for j ∈ {CLS 10%}, while sub-panels C.1 to C.3 record the results for j ∈ {CLS 90%}. First consider sub-panels B.3 and C.2 in Figure 2 . For τ > 1, 000 (approximately), τ * stabilizes at a value that is approximately 20 in both cases. Viewing this value of τ * as 'small', we conclude that it is 'easy' to discern the strict coherence of an upper tail optimizer relative to its lower tail counterpart, and vice versa, under this form of misspecification. In contrast, Panels A.2 and A.3 indicate that whilst strict coherence of the log-score optimizer is eventually discernible, the value at which τ * settles is larger (between about 100 and 200) than when the distinction is to be drawn between the two distinct tail optimizers. Panels B.1 and C.1 show that it takes an even larger number of out-of-sample observations (τ * exceeding 1, 500) to detect the strict coherence of a tail optimizer relative to the log-score optimizer. Indeed, from Panel C.1 it could be argued that the value of τ * required to detect strict coherence relative to the log-score in the case of CLS 90% has not settled to a finite value even by τ = 5, 000. A comparison of Figures 2 and 3 highlights the effect of a reduction in misspecification. In each off-diagonal sub-panel in Figure 3 , the value of τ * is markedly higher (i.e. more observations are required to detect strict coherence) than in the corresponding sub-panel in Figure 2 . Indeed, Panel C.1 in Figure 3 indicates that strict coherence in this particular case is, to all intents and purposes, unable to be discerned in any reasonable number of out-of-sample observations. The dissimilarity of the true DGP from the assumed model is simply not marked enough for the optimal version of the CLS 90% score to reap accuracy benefits relative to the version of this score based on the log-score optimizer. This particular scenario highlights the fact that, even if attainable, the pursuit of coherence may not always be a practical endeavour. For example, if the desired scoring rule is more computationally costly to evaluate than, say, the log-score, then the small improvement in predictive accuracy yielded by optimal prediction may not justify the added computational burden, in particular for real-time forecasting exercises. Finally, even more startling are the results in Figure 4 , which we have termed the 'incompatible' case. For all out-of-sample scores considered, and all pairs of optimizers, a diagonal line, τ * = τ , results, as either τ * exceeds τ (and, hence, τ * is set to τ ) for all values of τ , or ∆ ji τ < 0, in which case τ * is also set to τ. Due to the incompatibility of the assumed model with the true DGP strict coherence simply does not prevail in any sense. A common method of producing density forecasts from diverse models is to consider the 'optimal' combination of forecast (or predictive) densities defined by a linear pool. 
Consider the setting in which we entertain several possible models $M_k$, $k = 1, \dots, n$, all based on the same information set, and with associated predictive densities
$$m_k(y_t|\mathcal{F}_{t-1}) := p(y_t|\mathcal{F}_{t-1}, \theta_k, M_k), \qquad (12)$$
where the dependence of the $k$th model on a $d_k$-dimensional set of unknown parameters, $\theta_k = (\theta_{k,1}, \theta_{k,2}, \dots, \theta_{k,d_k})'$, is captured in the short-hand notation $m_k(\cdot|\cdot)$, and the manner in which $\theta_k$ is estimated is addressed below. The goal is to determine how to combine the $n$ predictives in (12) to produce an accurate forecast, in accordance with some measure of predictive accuracy. As highlighted in the Introduction, we do not assume that the true DGP coincides with any one of the constituent models in the model set. Herein, we follow McConway (1981) and focus on the class of linear combination processes only, i.e., the class of 'linear pools' (see also Genest, 1984, and Geweke and Amisano, 2011):
$$p(y_t|\mathcal{F}_{t-1}, w) = \sum_{k=1}^{n} w_k\, m_k(y_t|\mathcal{F}_{t-1}); \qquad \sum_{k=1}^{n} w_k = 1; \qquad w_k \geq 0 \ (k = 1, \dots, n). \qquad (13)$$
Following the notion of optimal predictive estimation, and building on the established literature cited earlier, we produce optimal weight estimates
$$\hat{w} := \arg\max_{w \in \Delta_n} S(w), \qquad \Delta_n := \Big\{ w_k \in [0,1] : \sum_{k=1}^{n} w_k = 1,\ w_k \geq 0 \ (k = 1, \dots, n) \Big\}, \qquad (14)$$
where $S(w)$ is a sample average of the chosen scoring rule, evaluated at the predictive distribution with density $p(y_t|\mathcal{F}_{t-1}, w)$, over a set of values defined below. The estimator $\hat{w}$ is referred to as the optimal score estimator (of $w$) and the density $p(y_t|\mathcal{F}_{t-1}, \hat{w})$ as the optimal linear pool. The same set of scoring rules as described in Section 3.1 is adopted herein.

We simulate observations of $y_t$ from an autoregressive moving average model of order (1,1) (ARMA(1,1)),
$$y_t = \phi_1 + \phi_2 y_{t-1} + \phi_3 \varepsilon_{t-1} + \varepsilon_t, \qquad (15)$$
where $\phi_1 = 0$, $\phi_2 = 0.95$ and $\phi_3 = -0.4$. We employ five different distributional assumptions for the true innovation $\varepsilon_t$: a standard normal; a mixture of normals that induces skewness; and three Student-t specifications with successively increasing degrees of freedom, the largest being $\nu = 30$. The predictive pool comprises three Gaussian component models, $M_1$, $M_2$ and $M_3$, the latter two specified as
$$M_2 : y_t = \theta_{2,1} + \theta_{2,2}\, y_{t-1} + \eta_t, \quad \eta_t \sim \text{i.i.d. } N(0, \theta_{2,3}), \qquad (17)$$
$$M_3 : y_t = \theta_{3,1} + \theta_{3,2}\, \eta_{t-1} + \eta_t, \quad \eta_t \sim \text{i.i.d. } N(0, \theta_{3,3}). \qquad (18)$$
All designs thus correspond to some degree of misspecification, with less misspecification occurring when the true error term is either normal or Student-t with a large value of $\nu$. Use of a skewed error term in (15) arguably produces the most extreme case of misspecification and, hence, is the case where we would expect strict coherence to be most evident.

For each design scenario, we take the following steps:
1. Generate $T$ observations of $y_t$ from the true DGP;
2. Use observations $t = 1, \dots, J$, where $J = 1{,}000$, to compute $\hat{\theta}_k$, $k = 1, 2, 3$, as in (1);
3. For each $k = 1, 2, 3$, construct the one-step-ahead predictive density $m_k(y_t|\mathcal{F}_{t-1}) = p(y_t|\mathcal{F}_{t-1}, \hat{\theta}_k, M_k)$, for $t = J+1, \dots, J+\zeta$, and compute $\hat{w} = (\hat{w}_1, \hat{w}_2, \hat{w}_3)'$ based on these $\zeta = 50$ sets of predictive densities as in (14), with $S(w) := \frac{1}{\zeta}\sum_{t=J+1}^{J+\zeta} S\big(P_{\hat{\theta},w}^{t-1}, y_t\big)$, where $\hat{\theta} = (\hat{\theta}_1', \hat{\theta}_2', \hat{\theta}_3')'$ and $P_{\hat{\theta},w}^{t-1}$ is the predictive distribution associated with the density $p(y_t|\mathcal{F}_{t-1}, w)$ in (13);
4. Use $\hat{w}$ to obtain the pooled predictive density for time point $t = J + \zeta + 1$, $p(y_t|\mathcal{F}_{t-1}, \hat{w}) = \sum_{k=1}^{n=3} \hat{w}_k\, m_k(y_t|\mathcal{F}_{t-1})$;
5. Roll the sample forward by one observation and repeat Steps 2 to 4, using the (non-subscripted) notation $\hat{\theta} = (\hat{\theta}_1', \hat{\theta}_2', \hat{\theta}_3')'$ for the estimator of $\theta = (\theta_1', \theta_2', \theta_3')'$ and $\hat{w}$ for the estimator of $w$ based on each rolling sample of size $J + \zeta$. Produce $\tau = T - (J + \zeta)$ pooled predictive densities, and compute the average out-of-sample score,
$$\frac{1}{\tau}\sum_{t=J+\zeta+1}^{T} S_j\big(P_{\hat{\theta},\hat{w}}^{t-1}, y_t\big), \qquad (19)$$
for each out-of-sample measure $S_j$.

The results are tabulated and discussed in Section 3.6. (A sketch of the weight-optimization step in (14) is given in code below.)
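The sketch below illustrates the weight-optimization step in (14), using the log score as the criterion for concreteness; the SLSQP optimizer, the small numerical floor inside the logarithm and all names are illustrative assumptions, and any of the other scores described in Section 3.1 would replace the objective in the same way.

```python
# Sketch of the optimal-pool step in (14): choose weights on the simplex to maximise
# the sample average log score of the pooled density over a window of zeta predictive
# evaluations. Component densities m_k(y_t | F_{t-1}) are taken as given; the log-score
# objective and all names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def optimal_pool_weights(dens):
    """dens: (zeta, n) array; dens[t, k] = m_k(y_t | F_{t-1}) evaluated at the realised y_t."""
    zeta, n = dens.shape
    neg_avg_log_score = lambda w: -np.mean(np.log(dens @ w + 1e-300))
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(neg_avg_log_score, x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method='SLSQP')
    return res.x

# e.g. with three components evaluated over a 50-observation window (names hypothetical):
#   w_hat = optimal_pool_weights(np.column_stack([m1_vals, m2_vals, m3_vals]))
```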
To keep the notation manageable, we have not made explicit the fact that θ and w are produced by a given choice of score criterion, which may or may not match the score used to construct (19). The notation P t−1 θ, w refers to the predictive distribution associated with the density p(y t |F t−1 , w). With reference to the results in Table 5 , our expectations are borne out to a large extent. The average out-of-sample scores in Panel B pertain to arguably the most misspecified case, with the mixture of normals inducing skewness in the true DGP, a feature that is not captured by any of the components of the predictive pool. Whilst not uniformly indicative of strict coherence, the results for this case are close to being so. In particular, the optimal pools based on the CLS 20%, CLS 80% and CLS 90% criteria always beat everything else out of sample, according to each of those same measures (i.e. the bold values appear on the diagonal in the last three columns in Panel B). To two decimal places, the bold value also appears on the diagonal in the column for the out-of-sample average of CLS 10%. Thus, the degree of misspecification of the model pool is sufficient to enable strict coherence to be in evidence -most notably when it come to accurate prediction in the tails. It can also be seen that log-score optimization reaps benefits out-of-sample in terms of the log-score measure itself; only the CRPS optimizer does not out-perform all others out-of-sample, in terms of the CRPS measure. In contrast to the results in Panel B, those in Panel A (for the normal error in the true DGP) are much more reminiscent of the 'correct specification' results in Table 2 , in that all numbers within a column are very similar, one to the other, and there is no marked diagonal pattern. Interestingly however, given the earlier comments in the single model context regarding the impact of the efficiency of the log-score optimizer under correct specification, we note that the log-score optimizer yields the smallest out-of-sample averages according to all measures in Panel A. This superiority of the log-score optimizer continues to be in evidence in all three panels in Table 6 , in which the degrees of freedom in the error term in the true DGP is successively increased, across the panels. Moreover, there is arguably no more uniformity within columns in Panel C of this table (in which the t 30 errors are a better match to the Gaussian errors assumed in each component model in the pool), than there is in Panel A. Clearly the use of the model pool is sufficient to pick up any degree of fatness in the tails in the true DGP, so that no one design scenario is any further from (or closer to) 'correct specification' than the other. Hence, what we observe in this table is simply a lack of strict coherence -i.e. the degree of misspecification is not marked enough for score-specific optimizers to reap benefits out-of-sample, and there is a good deal of similarity in the performance of all optimizers, in any particular setting. Reiterating the opening comment in this paragraph, in these settings of 'near' to correct specification, the efficiency of the log-score optimizer seems to be in evidence. It is, in these cases, the only optimizer that one need to entertain, no matter what the specific performance metric of interest! Table 5 : Average out-of-sample scores under two different specifications for the true innovation, ε t in (15). 
Panel A (B) reports the average scores based on ε t ∼ i.i.d.N(0, 1) (ε t ∼ i.i.d.Mixture of normals). The rows in each panel refer to the optimizer used. The columns refer to the out-of-sample measure used to compute the average scores. The figures in bold are the largest average scores according to a given out-of-sample measure. All results are based on τ = 5, 000 out-of-sample values. We now illustrate the performance of optimal prediction in a realistic empirical setting. We return to the earlier example of financial returns, but with a range of increasingly sophisticated models used to capture the features of observed data. Both single models and a linear pool are entertained. We consider returns on two indexes: S&P500 and MSCI Emerging Market (MSCIEM). The data for both series extend from January 3rd, 2000 to May 7th, 2020. All returns are continuously compounded in daily percentage units. For each time series, we reserve the first 1,500 observations for the initial parameter estimation, and conduct the predictive evaluation exercise for the period between March 16th, 2006 and May 7th, 2020, with the predictive evaluation period covering both the global financial crisis (GFC) and the recent downturn caused by the COVID19 pandemic. As is consistent with the typical features exhibited by financial returns, the descriptive statistics reported in Table 7 provide evidence of time-varying and autocorrelated volatility (significant serial correlation in squared returns) and marginal non-Gaussianity (significant non-normality in the level of returns) in both series, with evidence of slightly more negative skewness in the MSCIEM series. Treatment of the single predictive models proceeds following the steps outlined in Section 3.1, whilst the steps outlined in Section 3.5 are adopted for the linear predictive pool. However, due to the computational burden associated with the more complex models employed in this empirical setting, we update the model parameter estimates every 50 observations only. The predictive distributions are still updated daily with new data, with the model pool weights also updated daily using the window size ζ = 50. In the case of the S&P500 index, the out-of-sample predictive assessment is based on τ = 3, 560 observations, while for the MSCIEM index, the out-of-sample period comprises τ = 3, 683 observations. For both series, we employ three candidate predictive models of increasing complexity: i) a naïve Gaussian white noise model: M 1 : y t ∼ i.i.d.N(θ 1,1 , θ 1,2 ) ′ ; ii) a GARCH model, with Gaussian innovations: M 2 : y t = θ 2,1 + σ t ε t ; σ 2 t = θ 2,2 + θ 2,3 (y t − θ 2,1 ) 2 + θ 2,4 σ 2 t−1 ; ε t ∼ i.i.d.N(0, 1); and iii) a stochastic volatility with jumps (SVJ) model, with Gaussian innovations: M 3 : y t = θ 3,1 + exp (h t /2) ε t + ∆N t Z p t ; h t = θ 3,2 + θ 3,3 h t−1 + θ 3,4 η t ; (ε t , η t ) ′ ∼ i.i.d.N(0, I 2×2 ); P r(∆N t = 1) = θ 3,5 ; Z p t ∼ i.i.d.N(θ 3,6 , θ 3,7 ). The first model is obviously inappropriate for financial returns, but is included to capture misspecification and, potentially, incompatibility. Both M 2 and M 3 account for the stylized feature of time-varying and autocorrelated return volatility, but M 3 also captures the random price jumps that are observed in practice, and is the only model of the three that can account for skewness in the predictive distribution. The linear predictive pool is constructed from all three models, M 1 , M 2 and M 3 . 
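To make the mapping from model to predictive concrete, the sketch below forms the one-step-ahead Gaussian predictive implied by the GARCH(1,1) component $M_2$ of the pool; the variance initialization, the parameter values in the example and the function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch: one-step-ahead N(mu, sigma^2) predictive of the GARCH(1,1) component M_2,
# given a parameter vector theta_2 = (theta_21, theta_22, theta_23, theta_24)'.
# Initialisation of the variance recursion and all values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def garch_predictive(y, theta):
    """Return (mu, sigma) of the predictive for y_{T+1} given the sample y_1,...,y_T."""
    mu, omega, alpha, beta = theta
    sig2 = np.var(y)                      # initialise the variance recursion
    for s in range(1, len(y) + 1):
        sig2 = omega + alpha * (y[s - 1] - mu) ** 2 + beta * sig2
    return mu, np.sqrt(sig2)

# runnable example with a placeholder return series and hypothetical parameter values
rng = np.random.default_rng(2)
returns = rng.standard_normal(1_000)
mu, sigma = garch_predictive(returns, (0.05, 0.05, 0.08, 0.90))
print(norm.logpdf(-2.0, loc=mu, scale=sigma))   # log-score contribution of a -2% return
```

The resulting density, norm.pdf(y, mu, sigma), is what enters the pool as $m_2(y|\mathcal{F}_t)$ and what each scoring rule is evaluated on out-of-sample.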
For this empirical exercise we consider seven scoring rules: the log-score in (6); four versions of the CLS in (8), for the 10%, 20%, 80% and 90% percentiles; and two quantile scores (QS) evaluated at the 5th and 10th percentiles (denoted by QS 5% and QS 10% respectively). The QS at the $p$th percentile is defined as
$$\text{QS } p\% = (y_t - q_t)\left[ 1(y_t \leq q_t) - p \right],$$
with $q_t$ denoting the predictive quantile satisfying $\Pr(y_t \leq q_t | y_{1:t-1}) = p$. (See the literature on quantile-based scoring for a discussion of the properties of the QS as a proper scoring rule.) Use of the QS (in addition to the CLS) enables some conclusions to be drawn regarding the relevance of targeting tail accuracy per se in the production of optimal predictions, as opposed to the importance of the score itself.

Tables 8 and 9 report the results for the S&P500 and MSCIEM indexes respectively, with the format of both tables mimicking that used in the simulation exercises. In particular, we continue to use bold font to indicate the largest average score according to a given out-of-sample measure, but now supplement this with the use of italics to indicate the second largest value in any column. We make three comments regarding the empirical results. First, for both data sets, and for all three single models, strict coherence is close to holding uniformly, with most of the diagonal elements in all panels being either the highest (bold) or the second highest (italicized) values in their respective columns. This suggests that each individual model, whilst inevitably a misspecified version of the true unknown DGP, is compatible enough with the true process to enable score-specific optimization to reap benefits. Second, we remark that all three individual models are quite distinct, and are likely to be associated with quite different degrees of estimation error. Hence, while the naïve model is no doubt the most misspecified, given the documented features of both return series, it is also the most parsimonious and, hence, likely to produce estimated scores with small sampling variation. Thus, it is difficult to assess which model has the best predictive performance overall, due to the interplay between sampling variation and model misspecification (see Patton, 2019, for an extensive investigation of this issue). While the matter of model selection per se is not the focus of the paper, we do note that, of the single models, the Gaussian GARCH(1,1) model estimated using the relevant score-specific optimizer is the best performer out-of-sample overall, according to all measures. Third, we note that the pooled forecasts exhibit close to uniform strict coherence, for both series, highlighting that the degree of misspecification in the pool is still sufficient for benefits to be had via score-specific optimization. However, the numerical gains reaped by score-specific optimization in the case of the pool are typically not as large as in the single-model cases. That is, and as is consistent with earlier discussion, the additional flexibility produced by the pooling can reduce the ability of score-specific optimization to produce marked predictive improvements.

This paper contributes to a growing literature in which the role of scoring rules in the production of bespoke forecasts, i.e. forecasts designed to be optimal according to a particular measure of forecast accuracy, is given attention. With our focus solely on probabilistic forecasts, our results highlight the care that needs to be taken in the production and interpretation of such forecasts.
It is not assured that optimization according to a problem-specific scoring rule will yield benefits: the relative performance of so-called 'optimal' forecasts depends on the nature of, and interplay between, the true model, the assumed model and the score. If the predictive model simply does not allow a given score to reward the type of accuracy it should, optimization with respect to that score criterion comes to naught; one may as well use the simplest optimizer for the problem at hand, and leave it at that. However, subject to a basic match, or compatibility, between the true process and the assumed predictive model, it is certainly the case that optimization can produce accuracy gains in the manner intended, with the gains being more marked the greater the degree of misspecification. Knowing when optimization will yield benefits in any particular empirical scenario is difficult, but the use of a plausible predictive model that captures the key features of the true data generating process is obviously key. The results in the paper also highlight the fact that use of score-specific optimization in the linear pool context is likely to reap fewer benefits than in the context of a single misspecified model. Theoretical exploration and characterization of all of these matters is likely to prove difficult, given the number of aspects at play; however, such work, even if confined to very specific combinations of generating process, model and scoring rule, would be of value. We leave such explorations for future work.

References:
The evolution of forecast density combinations in economics
Bayesian Theory. Wiley Series in Probability & Statistics
Combining probability forecasts
Predictive density and conditional confidence interval accuracy tests
Comparing predictive accuracy
Likelihood-based scoring rules for comparing density forecasts in tails
Of quantiles and expectiles: consistent scoring functions, Choquet representations and forecast rankings
Economic forecasting
Higher order elicitability and Osband's principle
Optimal density forecast combinations
Pooling operators with the marginalization property
Optimal prediction pools
Tests of conditional predictive ability
Making and evaluating point forecasts
Quantiles as optimal point forecasts
Probabilistic forecasts, calibration and sharpness
Strictly proper scoring rules, prediction, and estimation
Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation
Combining predictive distributions
Combining density forecasts
A test for superior predictive ability
The role of the information set for forecasting, with applications to risk management
Generalised density forecast combinations
Generic conditions for forecast dominance
Focused Bayesian prediction
Marginalization and linear opinion pools
Combining density forecasts using focused scoring rules
Comparing possibly misspecified forecasts
Higher moment constraints for predictive density combination
Combining probability forecasts
Density forecasting: a survey
Robust forecast evaluation of expected shortfall