Efficient Estimation of State-Space Mixed-Frequency VARs: A Precision-Based Approach

Joshua C. C. Chan, Aubrey Poon and Dan Zhu

December 21, 2021

Abstract: State-space mixed-frequency vector autoregressions are now widely used for nowcasting. Despite their popularity, estimating such models can be computationally intensive, especially for large systems with stochastic volatility. To tackle the computational challenges, we propose two novel precision-based samplers to draw the missing observations of the low-frequency variables in these models, building on recent advances in band and sparse matrix algorithms for state-space models. We show via a simulation study that the proposed methods are more numerically accurate and computationally efficient than standard Kalman-filter based methods. We demonstrate how the proposed methods can be applied in two empirical macroeconomic applications: estimating the monthly output gap and studying the response of GDP to a monetary policy shock at the monthly frequency. Results from these two empirical applications highlight the importance of incorporating high-frequency indicators in macroeconomic models.

This paper develops a precision-based approach for drawing the missing observations of the low-frequency variables in the state-space MF-VAR model. Precision-based sampling approaches for state-space models were first considered in Chan and Jeliazkov (2009) and McCausland, Miller, and Pelletier (2011), building upon earlier work by Rue (2001) on Gaussian Markov random fields. Due to their ease of implementation and computational efficiency, these precision-based samplers are increasingly used in a wide range of empirical applications.
Recent examples include modeling trend inflation (Chan, Koop, and Potter, 2013, 2016; Chan, 2017; Hou, 2020), estimating the output gap (Grant and Chan, 2017a,b), macroeconomic forecasting (Cross and Poon, 2016; Cross, Hou, and Poon, 2019), modeling the time-varying Phillips curve (Fu, 2020), and fitting various moving average models (Chan, 2013; Chan, Eisenstat, and Koop, 2016; Dimitrakopoulos and Kolossiatis, 2020; Zhang, Chan, and Cross, 2020) and dynamic factor models (Kaufmann and Schumacher, 2019; Beyeler and Kaufmann, 2021). Our paper extends the precision-based sampling approach to state-space models with missing observations. More specifically, we derive the joint distribution of the missing observations (of the low-frequency variables) conditional on the high-frequency data and the model parameters. Under the standard assumption of Gaussian errors, we show that the conditional distribution of the missing observations is also Gaussian. A key feature of this conditional distribution is that its precision matrix is block-banded, i.e., it is sparse and its non-zero elements are arranged along a diagonal band. As such, the precision-based sampler of Chan and Jeliazkov (2009) can be applied to draw the missing observations. In particular, this novel approach allows us to draw all the missing observations in one step, and is especially efficient compared to standard filtering methods when the observation equation has a complex lag structure. In addition, we allow the user to impose any linear constraints, both exactly and approximately, when sampling the missing observations. This feature is crucial in mixed-frequency applications, as linear inter-temporal restrictions are typically imposed to map the high-frequency missing observations to the observed values of the low-frequency variables.
Our paper is related to recent work by Eckert, Kronenberg, Mikosch, and Neuwirth (2020) and Hauber and Schumacher (2021), who also consider a precision-based sampling approach for settings with missing observations. However, they focus on dynamic factor models, and the latter does not consider imposing linear constraints. As such, their methods are not directly applicable to state-space mixed-frequency VARs. We conduct a series of simulation experiments to illustrate the numerical accuracy and computational speed of the proposed precision-based approach. In particular, we estimate state-space MF-VARs using the proposed samplers and standard filtering methods under a variety of settings. We show that the proposed precision-based approach has two key advantages. First, it is more computationally efficient than standard Kalman-filter based methods and it scales well to high-dimensional settings. Second, it often delivers superior accuracy in estimating the missing observations of the low-frequency variables compared to standard filtering methods. As the number of low-frequency variables increases relative to high-frequency variables, the accuracy of standard filtering methods in estimating the missing observations deteriorates significantly due to numerical issues. We demonstrate the proposed precision-based approach using two empirical macroeconomic applications. In the first application, we consider a large mixed-frequency Bayesian VAR with stochastic volatility and adaptive hierarchical priors to generate latent monthly estimates of real GDP and the corresponding output gap estimates using the framework of Morley and Wong (2020). More specifically, we estimate a 22-variable mixed-frequency Bayesian VAR consisting of 21 monthly macroeconomic and financial indicators and a quarterly real GDP measure using the proposed precision-based approach.
We find that the monthly estimates of real GDP track all NBER recession dates well and give plausible values during the COVID-19 pandemic. Furthermore, the monthly output gap estimates can, in specific periods, differ from the Congressional Budget Office (CBO) quarterly output gap measure. A potential explanation for this difference could be the additional information extracted from the higher frequency monthly financial indicators. This highlights the importance of incorporating higher frequency indicators when estimating the output gap. In the second empirical application, we extend the Bayesian Proxy VAR in Caldara and Herbst (2019) to a mixed-frequency setting. More specifically, we expand their VAR with only monthly variables to include a quarterly real GDP measure. We find that all the impulse responses of the monthly endogenous variables to a monetary policy shock, identified via a proxy variable, display precisely the same dynamics as presented in Caldara and Herbst (2019). However, the key difference is that the response of real GDP to a monetary policy shock appears to be more subdued than the response of industrial production. For example, the posterior median response of industrial production falls to a low of −0.5%, while the posterior median response of real GDP never falls below −0.2%. This result therefore suggests that the response of industrial production to a monetary policy shock might not be a good proxy for the response of the real economy. This again highlights the value of mixed-frequency VARs. The remainder of the paper is organised as follows. Section 2 discusses the precision-based sampling approach for drawing the missing observations in a state-space MF-VAR model. Section 3 presents the results from a simulation study comparing the proposed mixed-frequency precision-based samplers against standard Kalman-filter based techniques.
Section 4 illustrates how the proposed mixed-frequency precision-based approach can be applied to two popular empirical macroeconomic applications. Finally, Section 5 concludes.

This section introduces the proposed precision-based samplers for drawing the missing observations of the low-frequency variables within a state-space MF-VAR. More specifically, in the first subsection, we derive the conditional distribution of the missing observations given the observed data and provide an efficient algorithm to draw from this conditional distribution. Next, in the second subsection, we show how the draws of the conditional distribution of the missing observations can be constrained to incorporate inter-temporal restrictions.

Following Schorfheide and Song (2015), we express the autoregression at the highest observed frequency. More specifically, let $y^o_t$ denote the $n^o \times 1$ vector of high-frequency variables that are observed and let $y^u_t$ represent the $n^u \times 1$ vector of low-frequency variables that are unobserved or only partially observed. A standard example frequently employed in the literature is as follows: $y^o_t$ consists of $n^o$ monthly variables that are observed every month $t$, whereas $y^u_t$ consists of $n^u$ quarterly variables at the monthly frequency that are only observed every 3 months (or only a linear combination of the 3 monthly values is observed). Then, a mixed-frequency vector autoregression (MF-VAR) with $p$ lags for $y_t = (y^{o\prime}_t, y^{u\prime}_t)^\prime$ of dimension $n = n^o + n^u$ can be written as:

$$y_t = b_0 + B_1 y_{t-1} + \cdots + B_p y_{t-p} + \varepsilon_t, \quad \varepsilon_t \sim N(0, \Sigma), \qquad (1)$$

where $t = p+1, \ldots, T$, $b_0$ is an $n \times 1$ vector of intercepts, $B_1, \ldots, B_p$ are the $n \times n$ VAR coefficient matrices and $\Sigma$ is the error covariance matrix. In what follows, the analysis is based on the joint distribution of $y_{p+1}, \ldots, y_T$ conditional on the initial conditions $y_1, \ldots, y_p$. Below we derive the joint distribution of the unobserved (low-frequency) variables conditional on the observed (high-frequency) variables.

Stacking $Y = (y^\prime_1, \ldots, y^\prime_T)^\prime$, we can rewrite (1) as a standard linear regression in matrix form:

$$H Y = 1_{T-p} \otimes b_0 + \varepsilon, \quad \varepsilon \sim N(0, \Xi), \quad \Xi = I_{T-p} \otimes \Sigma, \qquad (2)$$

where

$$H = \begin{pmatrix} -B_p & \cdots & -B_1 & I_n & O_n & \cdots & O_n \\ O_n & -B_p & \cdots & -B_1 & I_n & \cdots & O_n \\ \vdots & & \ddots & & \ddots & \ddots & \vdots \\ O_n & \cdots & O_n & -B_p & \cdots & -B_1 & I_n \end{pmatrix}.$$

In the above expression, $1_{T-p}$ is a $(T-p) \times 1$ column vector of ones, $I_{T-p}$ is the identity matrix of dimension $T-p$, and $O_n$ is the $n \times n$ zero matrix. Note that $H$ is of dimension $(T-p)n \times Tn$ and is banded, i.e., it is a sparse matrix whose non-zero elements are arranged along a diagonal band. Furthermore, one can write $Y$ as a linear combination of the observed (high-frequency) and unobserved (low-frequency) variables as:

$$Y = M^o Y^o + M^u Y^u, \qquad (3)$$

where $Y^o = (y^{o\prime}_1, \ldots, y^{o\prime}_T)^\prime$ is a $Tn^o \times 1$ vector of the observed variables, $Y^u = (y^{u\prime}_1, \ldots, y^{u\prime}_T)^\prime$ is a $Tn^u \times 1$ vector of the unobserved variables, and $M^o$ and $M^u$ are, respectively, $Tn \times Tn^o$ and $Tn \times Tn^u$ selection matrices that have full column rank. Substituting (3) into (2), we have

$$H M^u Y^u = 1_{T-p} \otimes b_0 - H M^o Y^o + \varepsilon, \quad \varepsilon \sim N(0, \Xi). \qquad (4)$$

Now, conditional on $Y^o$ (and the model parameters $B = (b_0, B_1, \ldots, B_p)^\prime$ and $\Sigma$), the joint density of $Y^u$ can be expressed as

$$p(Y^u \mid Y^o, B, \Sigma) \propto \exp\left(-\tfrac{1}{2}\left(H M^u Y^u - d\right)^\prime \Xi^{-1} \left(H M^u Y^u - d\right)\right), \quad d = 1_{T-p} \otimes b_0 - H M^o Y^o.$$

Define $K_{Y^u} = (H M^u)^\prime \Xi^{-1} (H M^u)$, which is a $Tn^u \times Tn^u$ non-singular matrix (as $HM^u$ has full column rank). Furthermore, let $\mu_{Y^u} = K_{Y^u}^{-1} (H M^u)^\prime \Xi^{-1} d$. Then, by completing the square in $Y^u$, one can write the conditional density of $Y^u$ as

$$p(Y^u \mid Y^o, B, \Sigma) \propto \exp\left(-\tfrac{1}{2}(Y^u - \mu_{Y^u})^\prime K_{Y^u} (Y^u - \mu_{Y^u})\right).$$

Thus, we have shown that the conditional distribution of the missing observations given the observed data is Gaussian:

$$(Y^u \mid Y^o, B, \Sigma) \sim N\left(\mu_{Y^u}, K_{Y^u}^{-1}\right). \qquad (5)$$

Since $H$, $\Xi$ and $M^u$ are all band matrices, so is the precision matrix $K_{Y^u}$. Therefore, we can use the precision-based sampler of Chan and Jeliazkov (2009) to draw $Y^u$ efficiently. We also note that the conditional distribution derived in (5) has the same structure even when we allow for time-varying covariance matrices in the state-space MF-VAR model. The only minor change in the expression is that the block-diagonal matrix $\Xi$ now depends on the time-varying covariance matrices $\Sigma_t$, $t = p+1, \ldots, T$. So far the vector of missing observations $Y^u$ is unrestricted. In practice, however, inter-temporal constraints on $Y^u$ are often imposed to map the missing values to the observed values of the low-frequency variables.
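To make the derivation concrete, the following is an illustrative Python sketch (our code, not the authors' MATLAB implementation): it builds $H$, forms the precision $K_{Y^u} = (HM^u)^\prime \Xi^{-1} (HM^u)$ for a toy bivariate VAR(1), and draws all missing values in one step via a Cholesky factorization. Dense matrices are used for clarity; in practice $H$, $\Xi^{-1}$ and $K_{Y^u}$ are stored as sparse/banded matrices and a band Cholesky factorization is used.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Toy setting: 2 variables, VAR(1), 30 periods; variable 2 is the
# "quarterly" series, observed only every third month.
rng = np.random.default_rng(0)
n, p, T = 2, 1, 30
b0 = np.array([0.1, 0.2])
B1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

# Simulate a path, then mark the unobserved entries of the stacked vector.
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = b0 + B1 @ Y[t - 1] + rng.multivariate_normal(np.zeros(n), Sigma)
y = Y.reshape(-1)                          # stacked Tn x 1 vector
miss = np.zeros(T * n, dtype=bool)
for t in range(p, T):
    if (t + 1) % 3 != 0:
        miss[t * n + 1] = True             # variable 2 unobserved this month

# H: (T-p)n x Tn banded matrix with I_n and -B_1 blocks; Xi = I kron Sigma.
H = np.zeros(((T - p) * n, T * n))
for t in range(p, T):
    r = (t - p) * n
    H[r:r + n, t * n:(t + 1) * n] = np.eye(n)
    H[r:r + n, (t - 1) * n:t * n] = -B1
Xi_inv = np.kron(np.eye(T - p), np.linalg.inv(Sigma))
c = np.tile(b0, T - p)                     # 1_{T-p} kron b0

Hu, Ho = H[:, miss], H[:, ~miss]           # H M^u and H M^o
K = Hu.T @ Xi_inv @ Hu                     # precision of Y^u given Y^o
mu = np.linalg.solve(K, Hu.T @ Xi_inv @ (c - Ho @ y[~miss]))

# One draw in a single step: K = C C', then Y^u = mu + (C')^{-1} z.
C = cholesky(K, lower=True)
z = rng.standard_normal(K.shape[0])
yu_draw = mu + solve_triangular(C.T, z, lower=False)
```

The single back-substitution after the Cholesky step is what replaces the forward-filtering backward-sampling recursions of Kalman-filter based smoothers.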
For example, a commonly employed inter-temporal constraint for log-differenced variables is the log-linear approximation of Mariano and Murasawa (2003, 2010). More specifically, suppose $y^u_{i,t}$ is the missing monthly value of the $i$-th variable at month $t$, and let $\tilde{y}^u_{i,t}$ denote the corresponding observed quarterly value (note that $\tilde{y}^u_{i,t}$ is only observed every third month). Then, a standard log-linear approximation to an arithmetic average of the quarterly variable can be expressed as:

$$\tilde{y}^u_{i,t} = \tfrac{1}{3} y^u_{i,t} + \tfrac{2}{3} y^u_{i,t-1} + y^u_{i,t-2} + \tfrac{2}{3} y^u_{i,t-3} + \tfrac{1}{3} y^u_{i,t-4}. \qquad (6)$$

Stacking the inter-temporal constraints in (6) over time, we obtain

$$M_a Y^u = \tilde{Y}^u, \qquad (7)$$

where $M_a$ is the $k \times Tn^u$ matrix specifying the $k$ linear restrictions in (6), and $\tilde{Y}^u$ contains the observed values of the low-frequency variables. As an example, for balanced monthly and quarterly variables, $k = Tn^u/3$. Since for many commonly employed inter-temporal restrictions the relationships between the missing and observed data are approximate rather than exact, we also consider a version with measurement or approximation errors:

$$\tilde{Y}^u = M_a Y^u + \eta, \quad \eta \sim N(0, O), \qquad (8)$$

where $O$ is a fixed diagonal covariance matrix that encodes the magnitude of the measurement errors. Next, we discuss how one can impose the hard and soft inter-temporal restrictions in (7) and (8), respectively. First, to incorporate the information encoded in the hard inter-temporal restrictions, we aim to sample $Y^u$ from the Gaussian distribution in (5) subject to the linear constraint given in (7). An efficient way to do so is given in Algorithm 2.6 in Rue and Held (2005) and Algorithm 2 in Cong, Chen, and Zhou (2017). More specifically, we first draw $Z$ from the unconstrained distribution $Z \sim N(\mu_{Y^u}, K_{Y^u}^{-1})$.
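For concreteness, a small sketch (our illustration; the function name and indexing convention are ours) of how the constraint matrix $M_a$ can be assembled from the Mariano-Murasawa weights for a single log-differenced quarterly variable:

```python
import numpy as np
import scipy.sparse as sp

# Each quarterly observation ties together five adjacent monthly values
# with weights (1/3, 2/3, 1, 2/3, 1/3), so every row of M_a is banded.
def mm_constraint_matrix(T_m):
    """k x T_m sparse matrix mapping monthly growth to quarterly growth."""
    w = [1 / 3, 2 / 3, 1.0, 2 / 3, 1 / 3]
    rows, cols, vals = [], [], []
    k = 0
    for t in range(4, T_m):            # need months t, t-1, ..., t-4
        if (t + 1) % 3 == 0:           # quarter ends in months 3, 6, 9, ...
            for j, wj in enumerate(w):
                rows.append(k)
                cols.append(t - j)
                vals.append(wj)
            k += 1
    return sp.csr_matrix((vals, (rows, cols)), shape=(k, T_m))

Ma = mm_constraint_matrix(24)          # two years of months
```

Because each row involves only five adjacent months, $M_a$ is itself banded, which preserves the banded structure that the precision-based sampler exploits.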
We then correct for the constraint by computing

$$Y^u = Z + K_{Y^u}^{-1} M_a^\prime \left(M_a K_{Y^u}^{-1} M_a^\prime\right)^{-1} \left(\tilde{Y}^u - M_a Z\right).$$

It can be shown that $Y^u$ has the correct distribution, i.e., it follows the $N(\mu_{Y^u}, K_{Y^u}^{-1})$ distribution conditional on the constraint $M_a Y^u = \tilde{Y}^u$. Algorithm 1 describes an efficient implementation in Rue and Held (2005) that avoids explicitly computing the inverse of $K_{Y^u}$. Using this implementation, the additional computational cost of correcting for the constraint is relatively low for $k \ll Tn^u$. For large $k$, this algorithm would involve a few large, dense matrices, and the computations could be more intensive.

Algorithm 1 (precision-based sampler with hard inter-temporal constraints): given the parameters $\mu_{Y^u}$ and $K_{Y^u}$, complete the steps of Algorithm 2.6 in Rue and Held (2005).

Next, to incorporate the soft inter-temporal restrictions, one can view (8) as a new measurement equation and the Gaussian distribution of $Y^u$ in (5) as the "prior distribution". Then, by standard linear regression results, we obtain

$$(Y^u \mid Y^o, \tilde{Y}^u, B, \Sigma) \sim N\left(\hat{\mu}_{Y^u}, \hat{K}_{Y^u}^{-1}\right), \qquad (9)$$

where

$$\hat{K}_{Y^u} = K_{Y^u} + M_a^\prime O^{-1} M_a, \quad \hat{\mu}_{Y^u} = \hat{K}_{Y^u}^{-1}\left(K_{Y^u} \mu_{Y^u} + M_a^\prime O^{-1} \tilde{Y}^u\right).$$

Since the matrices $M_a$, $O$ and $K_{Y^u}$ are all banded, so is $\hat{K}_{Y^u}$. Hence, the precision-based sampler of Chan and Jeliazkov (2009) can be directly applied to sample $Y^u$ efficiently. Compared to Algorithm 1 for the hard inter-temporal constraints, sampling from (9) is much faster and scales well to high-dimensional settings. For approximate inter-temporal restrictions such as Mariano and Murasawa (2003, 2010), the latter sampler is naturally preferable. For other, exact inter-temporal restrictions, one can empirically approximate these hard restrictions by setting the diagonal elements of $O$ to be very small (e.g., $10^{-8}$). Therefore, we use the sampling scheme in (9) as the baseline.

We conduct a simulation study to assess the speed and accuracy of the proposed precision-based methods for drawing the latent missing observations of the low-frequency variables relative to Kalman-filter based methods. In what follows, all data-generating processes (DGPs) assume the following VAR structure with $p = 4$ lags:

$$y_t = b_0 + B_1 y_{t-1} + \cdots + B_4 y_{t-4} + \varepsilon_t, \quad \varepsilon_t \sim N(0, \Sigma),$$

where $y_t = (y^{o\prime}_t, y^{u\prime}_t)^\prime$ is an $n \times 1$ vector of mixed-frequency data, $y^o_t$ is the $n^o \times 1$ vector of the high-frequency variables and $y^u_t$ is the $n^u \times 1$ vector of the low-frequency variables.
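A minimal numerical sketch (ours) of the hard-constraint correction, i.e., conditioning by kriging as in Rue and Held (2005, Algorithm 2.6): an unconstrained draw $Z \sim N(\mu_{Y^u}, K_{Y^u}^{-1})$ is corrected so that $M_a Y^u$ matches the observed quarterly values exactly. A toy banded precision matrix stands in for $K_{Y^u}$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 9                                            # dimension of Y^u
K = (np.diag(np.full(m, 2.0))
     + np.diag(np.full(m - 1, -0.9), 1)
     + np.diag(np.full(m - 1, -0.9), -1))        # banded, positive definite
mu = np.zeros(m)
Ma = np.repeat(np.eye(3), 3, axis=1) / 3.0       # three averaging constraints
yq = np.array([0.5, -0.2, 0.1])                  # "observed" quarterly values

# Unconstrained draw via the Cholesky factor of K.
C = np.linalg.cholesky(K)
Z = mu + np.linalg.solve(C.T, rng.standard_normal(m))

# Correction: Yu = Z + K^{-1} Ma' (Ma K^{-1} Ma')^{-1} (yq - Ma Z).
V = np.linalg.solve(K, Ma.T)                     # K^{-1} Ma' via solves
W = Ma @ V                                       # small k x k matrix
Yu = Z + V @ np.linalg.solve(W, yq - Ma @ Z)
```

Note that only the small $k \times k$ matrix $W$ is ever inverted; the solves against $K$ are cheap when $K$ is banded, which is why the extra cost is low for $k \ll Tn^u$.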
We consider DGPs of different dimensions with $T = 500$: small ($n = 5$, $n^o = 4$, $n^u = 1$), medium ($n = 11$, $n^o = 10$, $n^u = 1$) and large ($n = 21$, $n^o = 20$, $n^u = 1$). We also investigate settings with a larger number of unobserved variables ($n^u = 5$). For each simulated dataset $r = 1, \ldots, R$, we estimate the missing observations $y^{u,r}_t$ using three methods: the precision-based sampler with hard inter-temporal constraints in (7), the precision-based sampler with soft constraints in (8), and the simulation smoother of Carter and Kohn (1994) as implemented in the code provided by Schorfheide and Song (2015). Lastly, for all VARs we assume the standard normal-inverse-Wishart prior with non-informative hyperparameters.

To assess the accuracy of the proposed methods, we compute the mean squared errors (MSEs) of the estimated missing observations against the actual simulated values. The MSEs of all methods are computed using $R = 10$ simulations, and the results are displayed in Table 1. We also report the computation times, based on 20,000 MCMC draws with a burn-in period of 10,000 draws. (The computation times are obtained on a standard desktop with an Intel Core i7-7700 @ 3.6GHz processor and 16 GB of RAM; the code is implemented in MATLAB.) Since the three methods aim to draw from the same distribution, namely, the conditional distribution of the missing observations given the observed data and model parameters, in principle they should give the same MSEs (modulo Monte Carlo errors). Indeed, they tend to give very similar MSEs when the ratio $n^o/n^u$ is sufficiently large. However, when $n^u$, the number of variables with missing observations, is large relative to $n^o$, the number of fully observed variables, the accuracy of the Kalman-filter based method appears to deteriorate significantly relative to the proposed methods, possibly due to numerical errors. In terms of runtime, it is clear that the proposed precision-based methods are more computationally efficient than the Kalman-filter based method across a range of settings.

Table 1: Mean squared errors of the estimated missing observations and computation times using three methods: the proposed precision-based method with hard inter-temporal constraints (Precision-hard), the precision-based method with soft inter-temporal constraints (Precision-soft) and the simulation smoother of Carter and Kohn (1994) implemented in Schorfheide and Song (2015).

Table 1 reports the runtimes of full MCMC estimation. When the dimension of the VAR increases, the part of the posterior sampler that simulates the VAR coefficients dominates, and this gives the impression that the runtimes of the three methods converge. To better understand how the proposed methods perform across a wider range of settings, we next compare only the runtimes of sampling the missing observations. First, Figure 1 reports the runtimes of sampling 10 draws of the missing observations using the three methods for a range of $n^o$ and $n^u$. It is clear that both precision-based methods compare favorably to the Kalman-filter based method, and both scale well to high-dimensional settings. In addition, the variant with soft constraints is especially efficient when there is a large number of variables with missing observations. Next, Figure 2 reports the runtimes of sampling ten draws of the missing observations for a range of sample sizes $T$ and lag lengths $p$. While both precision-based methods perform well, the version with soft constraints does substantially better and scales well to very large $T$ and $p$. It is also worth mentioning that to apply the Kalman filter, one needs to redefine the states so that the observation equation depends only on the current (redefined) state. When $p$ is large, the dimension of this new state vector is large, and that is why the Kalman-filter based method becomes very computationally intensive when $p$ is large.
In contrast, the computational costs of the precision-based methods remain low even for long lag lengths.

We demonstrate the proposed mixed-frequency precision-based samplers via two popular empirical macroeconomic applications. First, we show that the proposed samplers can be incorporated efficiently within a state-of-the-art large Bayesian VAR with stochastic volatility and global-local priors. Furthermore, we also show that we can derive monthly estimates of the US output gap using the framework of Morley and Wong (2020). In the second application, we extend the methodology in Caldara and Herbst (2019) by proposing a novel mixed-frequency Bayesian Proxy VAR estimated using the proposed samplers.

Since the seminal works by Bańbura, Giannone, and Reichlin (2010) and Koop (2013), large Bayesian VARs have become increasingly popular in empirical macroeconomics. However, large Bayesian VARs tend to be over-parameterised, and a key way to overcome this problem is to implement shrinkage priors, such as the Minnesota prior (Doan, Litterman, and Sims, 1984; Litterman, 1986) and the more recent global-local priors (see Polson and Scott, 2010; Huber and Feldkircher, 2019; Cross, Hou, and Poon, 2019). Another popular feature researchers and practitioners incorporate within a large Bayesian VAR is stochastic volatility (SV). A large number of studies document the importance of allowing for SV (or a time-varying covariance structure) when modelling macroeconomic data (see, e.g., Clark, 2011; Clark and Ravazzolo, 2015; Carriero, Clark, and Marcellino, 2019). Recently, Chan (2021) introduced a class of Minnesota-type adaptive hierarchical priors for large Bayesian VARs with SV. In a nutshell, these new priors combine the advantages of both the Minnesota prior (e.g., rich prior beliefs such as cross-variable shrinkage) and global-local priors (e.g., heavy tails and substantial mass around 0).
These new priors are shown to provide superior forecasts compared to both the Minnesota prior and conventional global-local priors. In this first application, we extend the large Bayesian VAR in Chan (2021) to a mixed-frequency state-space setting and estimate the model using the proposed precision-based samplers. We mostly follow the model assumptions specified in Chan (2021) and estimate a 22-variable mixed-frequency VAR with SV (denoted here as MF-BVAR-SV) with $p = 4$ lags. The variables consist of 21 monthly macroeconomic indicators and a quarterly real GDP measure. This exercise is motivated by recent interest in more timely monthly estimates of real GDP (see, e.g., Brave, Butters, and Kelley, 2019). A more timely measure of GDP allows policymakers to react to economic shocks more rapidly. Therefore, our primary focus in this application is on generating the latent monthly estimates of real GDP and the corresponding output gap. The complete list of the 22 variables and their transformations is presented in Table 3 in the appendix. The sample covers 1960M1-2021M3 (and 1960Q1-2021Q1), which includes the pandemic period. The 21 monthly macroeconomic indicators are selected so that they are broadly similar to the dataset considered in Morley and Wong (2020). The majority of the variables are log-differenced to ensure stationarity. In addition, we impose the standard inter-temporal constraint of Mariano and Murasawa (2003, 2010) when sampling the latent monthly real GDP values.

Another major advantage of producing monthly real GDP estimates from the MF-BVAR-SV model is that we can derive the corresponding monthly output gap via the multivariate Beveridge-Nelson (BN) trend-cycle decomposition of Morley and Wong (2020). Recently, Berger, Morley, and Wong (2021) applied this BN trend-cycle decomposition to a mixed-frequency VAR to nowcast the US output gap. However, their mixed-frequency VAR follows the stacked approach, where the model is expressed at the lowest observed frequency.
More specifically, their mixed-frequency VAR is essentially a multi-equation U-MIDAS model. Therefore, it can only produce a quarterly measure of the output gap. In contrast, our MF-BVAR-SV model is a state-space mixed-frequency model from which we can directly produce a higher-frequency monthly estimate of the output gap. To the best of our knowledge, this is the first study to apply the multivariate BN trend-cycle decomposition within a state-space mixed-frequency VAR framework. Following Morley and Wong (2020), a finite-order VAR(p) can be represented in the following companion form:

$$X_t - \mu = F(X_{t-1} - \mu) + G e_t,$$

where $X_t$ is a vector containing the $n$ stationary variables and their lags, $\mu$ is the vector of unconditional means, $F$ is the companion matrix, $G$ maps the VAR forecast errors to the companion form, and $e_t$ is a vector of forecast errors. Note that the unconditional means of the VAR variables can be written as $\mu = (I_n - B_1 - \cdots - B_p)^{-1} b_0$.

[Figure: monthly output gap estimates. The pink line depicts an output gap measure derived by applying the HP filter to monthly industrial production; the grey shaded areas denote NBER recession dates.]

It is clear from the figure that the measure derived using the HP filter tends to produce large positive output gap estimates at the start of recessions. In contrast, both our monthly output gap estimates and the CBO measure tend to track the NBER recession dates well. However, there are clear differences between them in specific periods. For instance, from mid-2010 to 2019, our output gap estimates tend to be lower than the CBO measure. A potential explanation for these differences could be the inclusion of financial variables in constructing our output gap measure. According to Coibion, Gorodnichenko, and Ulate (2018), the CBO uses the production function approach to estimate potential output. In their modelling approach, they only consider five sectors of the economy, and the financial sector is not included.
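A minimal sketch (our illustration, with made-up coefficient values rather than estimates) of the BN cycle implied by the companion form: under the standard BN logic, the trend is the long-horizon conditional forecast, so the cycle of the first variable's level is $-e_1^\prime F (I - F)^{-1} \tilde{X}_t$, where $\tilde{X}_t$ is the demeaned companion state.

```python
import numpy as np

n, p = 3, 2
B1, B2 = 0.3 * np.eye(n), 0.1 * np.eye(n)   # stable, illustrative VAR(2)
mu = np.array([0.2, 0.1, 0.0])              # unconditional means

# Companion matrix F (np x np): coefficient blocks on top, shifted identity below.
F = np.zeros((n * p, n * p))
F[:n, :n], F[:n, n:] = B1, B2
F[n:, :-n] = np.eye(n * (p - 1))

# A demeaned companion state at some date t (random stand-in values).
rng = np.random.default_rng(2)
x_t, x_tm1 = rng.standard_normal(n), rng.standard_normal(n)
X_tilde = np.concatenate([x_t - mu, x_tm1 - mu])

# BN cycle of variable 1: minus the sum of expected future demeaned growth.
gap = -(F @ np.linalg.solve(np.eye(n * p) - F, X_tilde))[0]
```

Applied to the posterior draws of the latent monthly states, this calculation yields a monthly output gap path for each MCMC draw.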
Therefore, the CBO could potentially underestimate the output gap relative to our estimates after the Great Recession, since they do not consider financial variables in their model. In addition, our monthly measure provides more timely output gap estimates for policymakers to monitor the economy in real time. This highlights the importance of incorporating higher frequency indicators in the model when deriving an output gap measure.

In this second application, we show how the proposed mixed-frequency precision-based samplers can be used for structural analysis. Specifically, we extend the Bayesian Proxy (BP)-VAR in Caldara and Herbst (2019) to a mixed-frequency setting, which we denote as MF-BP-VAR. While Caldara and Herbst (2019) estimate both a 4- and a 5-equation BP-VAR, here we only consider the endogenous variables from the 5-equation model, as that is their preferred model. The 5-equation BP-VAR model consists of the federal funds rate (FFR), the log of manufacturing industrial production (IP), the unemployment rate (UE), the log of the producer price index (PPI), and a measure of a corporate spread (the Baa corporate bond yield relative to the ten-year treasury constant maturity), which they denote as the BAA spread. All five of these variables are observed at the monthly frequency. We extend this model to a state-space mixed-frequency VAR by including the log of real GDP, observed at the quarterly frequency. Intuitively, real GDP should be a better measure of the real economy than IP. In fact, the share of industrial production in real GDP has been declining since the early 2000s. For instance, the share of industrial production in real GDP was about 71 per cent at the end of 2000, and by the end of 2007, this share had fallen to about 65 per cent. We consider two MF-BP-VARs with different dimensions. In the first case, we estimate a 5-equation MF-BP-VAR without IP; it contains the FFR, UE, PPI, BAA spread, and real GDP.
For the second case, we consider all 6 variables, including IP. We preserve all the model assumptions as specified in Caldara and Herbst (2019). We estimate the MF-BP-VARs with $p = 12$ lags using data from 1994M1-2007M6 (and 1994Q1-2007Q2 for real GDP). Given that the quarterly real GDP variable enters the model in log-level, we impose an inter-temporal constraint similar to that in Schorfheide and Song (2015):

$$y^{GDP,Q}_t = \tfrac{1}{3}\left(y^{GDP,m}_t + y^{GDP,m}_{t-1} + y^{GDP,m}_{t-2}\right),$$

where $y^{GDP,Q}_t$ and $y^{GDP,m}_t$ are the observed log quarterly and latent monthly real GDP variables at time $t$, respectively.

Panel (a) of Figure 6 displays the impulse responses of the endogenous variables to a one-standard-deviation monetary policy shock identified using the 5-equation MF-BP-VAR, and panel (b) displays the corresponding impulse responses from the 6-equation MF-BP-VAR, which includes IP. The proxy variable used to identify the monetary policy shock in both models is precisely the same as that specified in Caldara and Herbst (2019). All impulse responses of the monthly endogenous variables from both models display exactly the same dynamics as presented in Caldara and Herbst (2019). This implies that adding the quarterly real GDP variable does not change the dynamics of the responses of the monthly endogenous variables to a monetary policy shock. However, in both models, the response of real GDP to a monetary policy shock appears to be muted or subdued compared to the response of IP. For example, in panel (a), the posterior median response of IP falls to a low of −0.5 per cent. In contrast, the posterior median response of real GDP tends to be higher than −0.2 per cent on average. Therefore, this suggests that the response of IP may be overstating the negative impact of a monetary policy shock on the real economy. The main results we would like to highlight are the contemporaneous and cumulative responses to real GDP. The cumulative response of real GDP for both models is zero.
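This averaging constraint is easy to state in code; a small hypothetical helper (ours, not part of the authors' implementation) maps latent monthly log levels to the quarterly observations they must reproduce:

```python
import numpy as np

# Log-level constraint in the style of Schorfheide and Song (2015):
# observed quarterly log GDP equals the average of the three latent
# monthly log-GDP values within the quarter.
def quarterly_from_monthly(y_monthly):
    """Implied quarterly observations from latent monthly log levels."""
    y = np.asarray(y_monthly, dtype=float)
    T = len(y) - len(y) % 3               # drop an incomplete final quarter
    return y[:T].reshape(-1, 3).mean(axis=1)

q = quarterly_from_monthly([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
# q -> array([2., 5.])
```

Stacking these averaging rows yields the constraint matrix $M_a$ for the level case, in place of the Mariano-Murasawa weights used for growth rates.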
This is consistent with the classical dichotomy, whereby nominal shocks do not affect the real economy in the long run. The contemporaneous responses to real GDP are −0.04 and −0.05 for the 5- and 6-equation MF-BP-VARs, respectively. This suggests that a one-standard-deviation surprise in real GDP, holding all other things constant, will elicit an immediate monetary policy response of about five basis points. However, this same interpretation cannot be made for IP, as the estimates imply an inverse relationship between the FFR and IP. Furthermore, this highlights the importance of including real GDP rather than IP as a measure of the real economy.

We have introduced two precision-based samplers, with hard and soft inter-temporal constraints, to draw the missing values of the low-frequency variables within a state-space MF-VAR. The simulation study shows that the proposed methods are more accurate and computationally efficient in estimating the missing values of the low-frequency variables than standard Kalman-filter based methods. We also show how the mixed-frequency precision-based samplers can be applied to two popular empirical macroeconomic applications. Both empirical applications illustrate the importance of incorporating high-frequency indicators in macroeconomic analysis. For future research, it would be useful to extend the proposed methods to handle real-time datasets with ragged edges. In addition, developing precision-based samplers for dynamic factor models with complex missing data patterns would be another interesting research direction.

This appendix provides details of the 22-variable dataset in the first empirical application. Specifically, Table 3 describes the 22 variables and their transformations. The dataset is sourced from the US FRED database, and the sample covers 1960M1-2021M3 (and 1960Q1-2021Q1). All the data are transformed to stationarity.
References

Large Bayesian vector auto regressions
Nowcasting the output gap
Reduced-form factor augmented VAR - Exploiting sparsity to include meaningful factors
Forecasting economic activity with mixed frequency BVARs
A new 'big data' index of US economic activity
Monetary policy, real activity, and credit spreads: Evidence from Bayesian proxy SVARs
Large Bayesian vector autoregressions with stochastic volatility and non-conjugate priors
On Gibbs sampling for state space models
The stochastic volatility in mean model with time-varying parameters: An application to inflation modeling
Large Bayesian VARMAs
Efficient simulation and integrated likelihood estimation in state space models
A bounded model of time variation in trend inflation, NAIRU and the Phillips curve
Real-time density forecasts from Bayesian vector autoregressions with stochastic volatility
Macroeconomic forecasting performance under alternative specifications of time-varying volatility
The cyclical sensitivity in estimates of potential output
Fast simulation of hyperplane-truncated multivariate normal distributions
Macroeconomic forecasting with large Bayesian VARs: Global-local priors and the illusion of sparsity
Forecasting structural change and fat-tailed events in Australian macroeconomic variables
Bayesian analysis of moving average stochastic volatility models: Modeling in-mean effects and leverage for financial time series
Forecasting and conditional projection using realistic prior distributions
Tracking economic activity with alternative high-frequency data
Is the slope of the Phillips curve time-varying? Evidence from unobserved components models
Macroeconomics and the reality of mixed frequency data
Reconciling output gaps: Unobserved components model and Hodrick-Prescott filter
Precision-based sampling with missing observations: A factor model application
Time-varying relationship between inflation and inflation uncertainty
Adaptive shrinkage in Bayesian vector autoregressive models
Forecasting with Bayesian vector autoregressions
Bayesian estimation of sparse dynamic factor models with order-independent and ex-post mode identification
Forecasting with medium and large Bayesian VARs
Bayesian multivariate time series methods for empirical macroeconomics
UK regional nowcasting using a mixed frequency vector auto-regressive model with entropic tilting
Regional output growth in the United Kingdom: More timely and higher frequency estimates from 1970
Forecasting with Bayesian vector autoregressions - Five years of experience
A new coincident index of business cycles based on monthly and quarterly series
Simulation smoothing for state-space models: A computational efficiency analysis
Real-time forecasting and scenario analysis using a large mixed-frequency Bayesian VAR
Estimating and accounting for the output gap with large Bayesian vector autoregressions
Shrink globally, act locally: Sparse Bayesian regularization and prediction
Fast sampling of Gaussian Markov random fields with applications
Gaussian Markov Random Fields: Theory and Applications
Real-time forecasting with a mixed-frequency VAR
Stochastic volatility models with ARMA innovations: An application to G7 inflation forecasts