S4 Conference

Confirmed Speakers

Investigating the impact of residualized likelihoods in Bayesian multilevel models with normal residuals – Jonathan Templin

Multilevel models (i.e., mixed-effects models) are used to predict outcomes with one or more sources of dependency, such as in clustered observations or repeated measures. In frequentist settings, the dominant estimation method for multilevel models with normally distributed residuals at each level (i.e., general linear mixed-effects models) is residual maximum likelihood (REML), which provides unbiased estimates of variance components. Use of non-residualized normal distributions (i.e., maximum likelihood, or ML) results in negatively biased estimates of the variance components, with the size of the bias related to the sample size and the number of fixed effects in the model. In REML-estimated models, however, the benefit of unbiasedness extends beyond the variance components themselves. Because the standard errors of the fixed effects depend on the variance components, the negatively biased variance estimates produced by ML yield standard errors that are also negatively biased. Critically, these REML-related advantages are most pronounced in smaller higher-level samples, in which the use of ML can result in too-small variance estimates and, consequently, too-small standard error estimates, leading to greater rates of Type I error for the corresponding fixed effects.
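
To make the source of this bias concrete, consider the simplest special case: a single-level normal model whose only fixed effect is the grand mean. The ML variance estimator divides by n and is biased downward by exactly the factor lost to estimating that one fixed effect, while the REML estimator divides by the residual degrees of freedom and is unbiased:

\[
\hat{\sigma}^2_{\mathrm{ML}} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \bar{y})^2, \qquad
\mathbb{E}\big[\hat{\sigma}^2_{\mathrm{ML}}\big] = \frac{n-1}{n}\,\sigma^2, \qquad
\hat{\sigma}^2_{\mathrm{REML}} = \frac{1}{n-1} \sum_{i=1}^{n} (y_i - \bar{y})^2 .
\]

With p fixed effects, ML divides by n where REML divides by n - p, which is why the bias grows with the number of fixed effects and fades as the sample size increases.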

In Bayesian multilevel models, the data likelihood most commonly used is the non-residualized normal distribution (the same as in standard frequentist ML), but with two notable procedural differences. First, prior distributions can be used to reduce the influence of the likelihood, which can be particularly advantageous in small higher-level samples. Second, common Bayesian estimation programs display posterior distribution summaries using an expected a posteriori (EAP) estimate. Given that random-effect variances are likely to have positively skewed posterior distributions, the use of an EAP estimate (i.e., a mean) instead of a maximum a posteriori (MAP) estimate (i.e., a mode; the analog of a frequentist maximum likelihood estimate) can obscure the negative bias in the variance components. However, the incremental benefits of using a residualized likelihood function in Bayesian multilevel models have not yet been explored. The purpose of this study is to fill this gap and demonstrate the effects of doing so in small higher-level samples.
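
As a minimal sketch of the second difference (the inverse-gamma posterior and its shape and scale values below are assumed purely for illustration, not taken from the study), the following Python snippet shows how the EAP of a positively skewed variance posterior sits well above its MAP:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical posterior for a random-effect variance with few clusters:
    # an inverse gamma, a common conjugate form for a normal variance.
    # A small shape parameter gives the strong positive skew described above.
    shape, scale = 3.0, 2.0  # assumed values, for illustration only
    draws = stats.invgamma.rvs(shape, scale=scale, size=50_000, random_state=rng)

    eap = draws.mean()  # posterior mean (EAP)

    # Approximate the posterior mode (MAP) from a kernel density estimate.
    kde = stats.gaussian_kde(draws)
    grid = np.linspace(draws.min(), np.percentile(draws, 99), 2_000)
    map_est = grid[np.argmax(kde(grid))]

    print(f"EAP (mean) = {eap:.3f}")       # approx. scale/(shape-1) = 1.0
    print(f"MAP (mode) = {map_est:.3f}")   # approx. scale/(shape+1) = 0.5

Because the mean is pulled into the right tail of the skewed posterior, an EAP summary can look unbiased even when the mode, the analog of the ML point estimate, is biased downward.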

In this presentation, we show preliminary results for our attempt to bring an analog of the residualized likelihood of REML into the Bayesian setting: the development of a residualized likelihood function within a Markov chain Monte Carlo algorithm for Bayesian multilevel models. We first show that, similar to ML-based frequentist estimation results, the use of a traditional ML-inspired non-residualized likelihood leads to posterior distributions of the variance components with negative bias in both their posterior mode (i.e., the MAP estimate of central tendency) and their posterior variance. These same problems then propagate to the posterior variances of the analogs of the fixed-effect parameters: as expected, the extent of downward bias in their posterior variances is most pronounced in smaller higher-level samples. We conclude by demonstrating how Bayesian multilevel models with residualized likelihoods may be useful in research and practice with small sample sizes.
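
The exact form of the residualized likelihood is not specified in this abstract; one natural candidate is the restricted log-likelihood that frequentist REML maximizes, which, for y ~ N(Xβ, V(θ)), integrates the fixed effects out of the normal likelihood:

\[
\ell_{\mathrm{REML}}(\theta) = -\frac{1}{2} \Big[ \log\lvert V(\theta) \rvert + \log\big\lvert X^{\top} V(\theta)^{-1} X \big\rvert + \big(y - X\hat{\beta}\big)^{\top} V(\theta)^{-1} \big(y - X\hat{\beta}\big) \Big] + \mathrm{const}, \qquad
\hat{\beta} = \big(X^{\top} V(\theta)^{-1} X\big)^{-1} X^{\top} V(\theta)^{-1} y .
\]

The second determinant term, \(\log\lvert X^{\top} V(\theta)^{-1} X \rvert\), is what removes the variance-component bias in frequentist REML; evaluating this expression in place of the ordinary normal log-likelihood at each Markov chain Monte Carlo step would be one way to realize its Bayesian analog.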

Jonathan Templin
University of Iowa