To the editor: We appreciate the insightful comments made by Drs. Potkin and Siu in their letter regarding our article. As the editorial accompanying our article observed, the proportion of missing data in our study was small (<20%). The likelihood-based mixed-model repeated-measures analysis is generally robust to departures from the missing at random assumption when the sample size is large and the number of dropouts is small (1). In our study, results for the primary efficacy endpoint (day 14) were consistent between the last observation carried forward and mixed-model repeated-measures analyses based on the intent-to-treat data set. The mixed-model repeated-measures analyses used PANSS score data obtained at all visits, including the dropout visit.
Missing data arise under different conditions. Missing completely at random means that the observed data are observed at random and the missing data are missing at random, whereas missing at random requires only that the missing data are missing at random (2). The missing at random assumption cannot be tested, since by definition it involves data we do not have (i.e., the missing data). It is possible, however, to examine whether the data are consistent with the missing completely at random assumption, because that definition involves the observed data; in particular, we can examine whether the observed data are observed at random. Although dropout plots are helpful, they can be misleading when few dropouts occur. We tested the missing completely at random assumption for the dropouts during the monotherapy period using a nonparametric method suggested by Diggle et al. (3, 4) and found no evidence to reject the null hypothesis of completely random dropouts during the monotherapy treatment period (p=0.52).
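For readers who wish to experiment with this kind of diagnostic, the rank-based idea behind Diggle's (1989) test can be sketched as follows. This is a simplified illustration under stated assumptions, not the exact procedure used in our analysis; the function names and the data layout are ours, and the normal approximation here stands in for Diggle's exact treatment of the ranks.

```python
import math

def dropout_ranks(data):
    """For each subject who drops out, compute the fractional rank of
    that subject's last observed score among all subjects still under
    observation at the same visit.  Under completely random dropout,
    these ranks should look roughly Uniform(0, 1).
    `data` maps subject id -> list of scores; a shorter list means the
    subject dropped out after the last listed visit."""
    n_visits = max(len(scores) for scores in data.values())
    ranks = []
    for scores in data.values():
        t = len(scores)
        if t < n_visits:  # subject dropped out after visit t
            risk_set = [s[t - 1] for s in data.values() if len(s) >= t]
            below = sum(1 for x in risk_set if x <= scores[t - 1])
            ranks.append(below / len(risk_set))
    return ranks

def mcar_rank_test(data):
    """Two-sided normal-approximation test that the mean rank is 0.5.
    The mean of n Uniform(0, 1) variables has standard error
    1 / sqrt(12 n); a mean rank far from 0.5 suggests that dropouts
    differ systematically from completers (evidence against MCAR)."""
    ranks = dropout_ranks(data)
    n = len(ranks)
    z = (sum(ranks) / n - 0.5) * math.sqrt(12 * n)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)
```

A large p-value, as in our data (p=0.52), is consistent with, but of course does not prove, completely random dropout.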
In general, mixed-model repeated-measures analyses are better suited than last observation carried forward analyses to handle longitudinal data with the dropouts that commonly arise in psychiatric clinical trials, in which the missing at random assumption appears reasonably plausible. However, we acknowledge that missing data in psychiatric clinical trials pose a serious threat to valid statistical inference when such data are not missing at random. Because the validity of the missing at random assumption cannot be assessed from the data, a sensible approach to inference is to perform sensitivity analyses that account for varying degrees of selection bias (5).
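One simple form of such a sensitivity analysis is delta adjustment: shift the imputed scores of dropouts by a range of offsets representing increasing departures from missing at random, and observe how far the treatment-arm summary moves. The sketch below is purely illustrative; the function name, data, and scale are hypothetical and are not taken from our analysis.

```python
def delta_adjusted_means(completer_scores, dropout_locf_scores, deltas):
    """Delta-adjustment sensitivity sketch: for each offset delta,
    worsen every dropout's carried-forward score by delta (delta = 0
    reproduces the unadjusted analysis) and recompute the arm mean.
    Plotting the mean against delta shows how strong a departure from
    missing at random would be needed to change the study's conclusion
    (a "tipping point")."""
    results = {}
    for delta in deltas:
        adjusted = completer_scores + [s + delta for s in dropout_locf_scores]
        results[delta] = sum(adjusted) / len(adjusted)
    return results
```

In practice the same shift would be applied within a full mixed-model or multiple-imputation analysis rather than to raw arm means, but the logic of varying the assumed selection bias is the same.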
Finally, we thank Drs. Potkin, Siu, and Hamer for their thoughtful comments and the Journal for providing a forum for this discussion. We hope that such exchanges will help researchers as they explore methodologies to best interpret the complexities of clinical trial data.
1. Mallinckrodt CH, Sanger TM, Dube S, DeBrota DJ, Molenberghs G, Carroll RJ, Potter WZ, Tollefson GD: Assessing and interpreting treatment effects in longitudinal clinical trials with missing data. Biol Psychiatry 2003; 53:754–760
2. Rubin DB: Inference and missing data. Biometrika 1976; 63:581–592
3. Diggle PJ: Testing for random dropouts in repeated measurement data. Biometrics 1989; 45:1255–1258
4. Diggle PJ, Heagerty P, Liang K-Y, Zeger SL: Analysis of Longitudinal Data, 2nd ed. New York, Oxford University Press, 2002
5. Scharfstein DO, Rotnitzky A, Robins JM: Adjusting for non-ignorable drop-out using semiparametric non-response models (with discussion). J Am Stat Assoc 1999; 94:1096–1146
The authors’ disclosures accompany the original article.
This letter (doi: 10.1176/appi.ajp.2009.09070959rr) was accepted for publication in August 2009.