Dr. Joop Hox
Small Data At Any Level? Problems and Solutions
The estimation methods commonly employed in multilevel analysis assume large sample sizes. Precisely what constitutes a large enough sample size is unknown, but simulation studies suggest that fewer than twenty groups definitely is a small sample of groups. Nevertheless, researchers regularly encounter situations where they need to analyze data from small samples.
In multilevel analysis, small samples can also be the result of having small groups. This is the case, for example, in dyadic research, where the unit of analysis is a couple. In such research, if there are missing data, the average group size can actually drop below two. Similarly, in longitudinal research it is common to have only a small number of measurement occasions; having only three is not uncommon.
This presentation reports the available evidence on sufficient sample sizes, and discusses analysis strategies that can mitigate the problems that occur with small sample sizes, such as inaccurate estimation and low statistical power.
Dr. Todd D. Little
Texas Tech University
On the Merits of Parceling
Parceling is a data pre-processing strategy in which two or more items are averaged to create a new aggregate indicator for use in both exploratory and confirmatory factor models (a.k.a. latent variable modeling, structural equation modeling). First introduced by Cattell over half a century ago, parceling has been hotly debated and has even earned the moniker “the items versus parcels controversy.” In this lecture, I will outline the arguments both pro and con in the items versus parcels controversy. I will conclude with why the controversy needn’t be one, and provide compelling reasons why parcels are highly preferred.
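The averaging step behind parceling is simple to state in code. The sketch below is only an illustration of that step, not any particular parceling scheme from the literature; the item responses and the three-parcel assignment are hypothetical.

```python
from statistics import mean

def make_parcels(items, assignment):
    """Average subsets of items into parcel scores.

    items: list of per-respondent item-response lists.
    assignment: list of index lists; each inner list names the item
    columns averaged into one parcel. Both inputs are illustrative.
    """
    return [[mean(row[i] for i in idx) for idx in assignment] for row in items]

# Six hypothetical Likert-type items per respondent,
# combined into three two-item parcels.
items = [
    [1, 2, 3, 4, 5, 4],
    [2, 2, 4, 4, 4, 4],
    [5, 4, 3, 2, 1, 2],
]
parcels = make_parcels(items, [[0, 1], [2, 3], [4, 5]])
# First respondent's parcel scores: [1.5, 3.5, 4.5]
```

The resulting parcel scores would then replace the raw items as indicators in the factor model; how items are assigned to parcels is itself one of the debated choices.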
Dr. Marija Maric
University of Amsterdam
When less is more: Single-case research in youth clinical practice
Single-case experimental designs (SCEDs) are increasingly recognized as a valuable alternative to randomized controlled trials (RCTs) for testing intervention effects in youth populations. Given the heterogeneous nature of youth and family problems, SCEDs may be the optimal way to investigate intervention outcomes, either because the condition is rare (e.g., a certain comorbidity) or because group-level analyses would imply a loss of information (i.e., finding no intervention effect while an effect is present in a certain subgroup). This presentation will provide an overview of the single-case method as a way to investigate the effectiveness of youth interventions, discuss the challenges related to assessment and data analysis, and offer solutions and illustrations from single-case research with youth and families suffering from anxiety disorders, negative self-esteem, comorbid ADHD and anxiety disorders, and child abuse.
Dr. Patrick Onghena
One by one: The design and analysis of replicated randomized single-case experiments
Single-case experiments are "true" experiments on single cases. In a single-case experiment, the experimental manipulation is introduced within-case by using several experimental phases or rapid alternation of experimental conditions, and the outcome is assessed with repeated measurements. Randomization and replication can be included to strengthen the internal and external validity, respectively. Data analysis proceeds by paying attention to the specific characteristics of the single-case setup and the specific design that is used. A combination of structured visual analysis, calculation of effect size measures, causal inference, and meta-analysis seems most promising.
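To make the role of randomization concrete, the sketch below implements one common variant of this idea: a randomization test for an AB phase design in which the intervention start point was chosen at random from a set of admissible points. The data, the minimum phase length, and the mean-difference test statistic are all illustrative assumptions, not taken from the talk.

```python
from statistics import mean

def ab_randomization_test(scores, actual_start, min_phase=3):
    """Randomization test for an AB single-case design with a randomly
    chosen intervention start point (a minimal sketch of the idea).

    scores: repeated outcome measurements over time.
    actual_start: index at which phase B actually began.
    min_phase: minimum observations required per phase; this defines
    the set of admissible start points (an illustrative choice here).
    """
    def stat(start):
        # Test statistic: phase B mean minus phase A mean.
        return mean(scores[start:]) - mean(scores[:start])

    admissible = range(min_phase, len(scores) - min_phase + 1)
    observed = stat(actual_start)
    # One-sided p-value: proportion of admissible start points whose
    # statistic is at least as large as the observed one.
    as_extreme = sum(1 for s in admissible if stat(s) >= observed)
    return observed, as_extreme / len(admissible)

# Hypothetical data: baseline (A) for occasions 0-5,
# intervention (B) starting at occasion 6.
scores = [3, 4, 3, 5, 4, 4, 7, 8, 7, 9, 8, 8]
obs, p = ab_randomization_test(scores, actual_start=6)
# obs = 4.0; p = 1/7, since 7 start points are admissible and the
# actual one yields the largest mean difference.
```

Because the reference distribution is generated by the design's own randomization scheme, the resulting p-value is valid without distributional assumptions, which is one reason randomization strengthens internal validity in these designs.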