Book Forum: Textbooks

Handbook of Psychiatric Measures

Reviewing a comprehensive book of psychiatric rating instruments provides an opportunity to comment about the pros and cons of rating scales and their impact on the practice of psychiatry. Before 1960, differences in clinical judgment among psychiatrists, especially in research protocols, created the need for reliable and valid correlates of psychiatric states. With the development of potent psychotropic drugs, the need for measures of baseline psychiatric states and their change following treatment grew rapidly. Researchers needed standardized instruments for inclusion criteria (i.e., diagnosis) as well as a means of assessing the degree to which the criteria changed following treatment (i.e., outcome).

Although standardized psychiatric assessments bring a measure of reliability and validity to clinical judgment, there are problems in their interpretation. First, some scales have been created on an ad hoc basis and have assumed scientific validity simply because there is nothing better to be used in their place. Not all rating scales are reliable or valid. Second, the instrument may not accurately reflect the clinical condition of the patient. For example, the Hamilton Depression Rating Scale, perhaps the most widely used and best known of all rating scales, includes only one question out of 17 (or more) that assesses mood. A patient can have a very high Hamilton depression scale score by answering yes to questions about sleep and anxiety and yet not be depressed. Third, comparison of patients based on rating scale scores may not be accurate because definitions of the rating points on a scale (e.g., 0–4, 1–7) may have been arbitrarily created without clinical or statistical evidence that the interval between one set of rating points (say, between 1 and 2) is the same as the interval between another set of rating points (say, between 6 and 7). In other words, a patient whose score is twice as high as another’s on the same scale is not necessarily twice as ill.

The fourth and perhaps most unfortunate consequence of our rating scale era is the assumption that a reduction in a rating scale score is equivalent to a clinically meaningful therapeutic response. For example, until very recently, the test of response to antidepressant treatment in a clinical trial was a 50% reduction from the baseline rating scale score, although the patient might still be clinically depressed. The discrepancy between response in a clinical trial and failure to respond in a practitioner’s office has led psychiatrists to complain that research results do not necessarily correspond to clinical experience (1).

Despite these problems, the use of psychiatric instruments can have a positive effect on training young psychiatrists. Learning to use rating scales can sharpen diagnostic skills and enable trainees to evaluate treatment outcome better, but rating scales may also interfere with psychiatric education. I have often heard residents (and Board examination candidates) describe a psychiatric diagnosis and treatment as scores on a rating scale. Complex human experience, for some young clinicians, has been reduced to a score on a paper-and-pencil test.

Despite these failings, rating scales and other psychiatric assessment measures are here to stay, and probably for the better, as long as their use is tempered with clinical wisdom and experience. This is certainly illustrated by the Handbook of Psychiatric Measures, a splendid, encyclopedic volume covering all forms of psychiatric measures, produced by an APA task force. I wish I had had this handbook when I was beginning my academic career, and I strongly recommend it to all mental health researchers, as well as to clinicians who wish for a more complete understanding of the current basis for diagnosis and outcome in evidence-based psychiatry.

The handbook is large, weighty, and comprehensive. It is divided into three sections and 32 chapters. Each psychiatric assessment measure (usually a rating scale) receives two extremely well-organized pages, and each individual measure is discussed under six headings: goals, description, practical issues, psychometric properties, clinical utility, and references and suggested readings. There are chapters on diagnostic measures such as the Structured Clinical Interview for DSM-IV, side effect measures, and quality-of-life measures, as well as chapters discussing typical outcome measures for different psychiatric conditions. Diagnostic and treatment assessments for children and adolescents, as well as for the elderly, are described, and guidelines on the use and interpretation of psychiatric rating scales are included. A CD-ROM is included with the text.

All in all, this is a splendid accomplishment, and kudos go to the Task Force for the hard work that is evident in this volume. Let us hope that psychiatric assessment measures may become useful to clinicians as well as researchers. Let us further hope that researchers will not treat these instruments as a substitute for understanding complex human experience.

By the American Psychiatric Association Task Force for the Handbook of Psychiatric Measures. Washington, D.C., American Psychiatric Association, 2000, 820 pp., $79.95.

Reference

1. Salzman C: Why don’t clinical trial results always correspond to clinical experience? Neuropsychopharmacology 1991; 42:265–267