
OBJECTIVE: The goal of this study was to review the reported methods of rater training, assessment of interrater reliability, and rater drift in clinical trials of treatments for depressive disorders.

METHOD: Two psychiatrists independently identified all original reports of clinical trials relevant to depressive disorders published between 1996 and 2000 in the American Journal of Psychiatry and the Archives of General Psychiatry. Reported methods of rater training, assessment of interrater reliability, and rater drift were systematically summarized.

RESULTS: Sixty-three original papers met the criteria for inclusion. Only 11 (17%) of the studies reported the number of raters. Only two (9%) of the 22 multicenter and four (10%) of the 41 single-center trials documented rater training. Only nine (22%) of the single-center trials and three (14%) of the multicenter trials reported interrater reliability, despite a median of five raters (range=2–20). Only three (5%) of the 63 articles reported rater drift.

CONCLUSIONS: Few published reports of clinical trials of treatments for depressive disorders adequately document the number of raters, rater training, assessment of interrater reliability, and rater drift.