
To the Editor:

Dr. Perkins raises an important issue regarding the prediction of low-prevalence events. Since there is no biological marker specific to schizophrenia and the prevalence of the disease is low, any screening tool for schizophrenia must have extremely high specificity to be applicable to an entire population. Our proposed screening tool does have the requisite specificity (99.7%), but, as Dr. Perkins states, this value may vary in other populations. We would therefore like to underscore that until this tool is validated in other populations, it should be considered a screening tool rather than a diagnostic marker. Individuals with scores indicating increased risk for future psychosis should be rescreened periodically, as preliminary data indicate that the predictive value of this tool increases as the time between testing and the first psychotic episode diminishes (1). More importantly, the values of the screening tool can be confirmed unequivocally only in a true prospective study, which is currently planned.
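
As a purely numerical illustration of why low prevalence forces such a high specificity requirement, the positive predictive value implied by a given specificity can be worked out with Bayes' rule. In the sketch below, only the 99.7% specificity comes from the letter; the prevalence and sensitivity figures are assumptions chosen for the example and are not taken from the article.

```python
# Illustrative only: positive predictive value (PPV) of a screening test
# for a low-prevalence disorder, via Bayes' rule.
# Only the 99.7% specificity is taken from the letter; the prevalence and
# sensitivity values below are assumptions for this sketch.

def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(disorder | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    prevalence = 0.01    # assumed ~1% lifetime prevalence
    sensitivity = 0.75   # assumed, for illustration only
    for specificity in (0.95, 0.99, 0.997):
        print(f"specificity={specificity:.3f}  "
              f"PPV={ppv(prevalence, sensitivity, specificity):.2f}")
```

Under these assumed figures, even a specificity of 99.7% leaves roughly a quarter of positive screens as false positives, whereas lower specificities leave the large majority false; this is one way to see why the instrument should be treated as a screen to be followed up, rather than as a diagnostic marker.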

Mr. Storch and colleagues indicate that there is a high degree of overlap between patients and nonpatients that might reduce the sensitivity of the model in predicting schizophrenia. The distributional overlap reduces not only sensitivity but also specificity, which, as already discussed, is a major problem for any screening tool. Matching patients to their nonpatient schoolmates attenuated both shortcomings. Table 2 in the article presented the overall distribution of the patients and matched nonpatients and therefore did not fully demonstrate the ability of the matching procedure to discern between patients and nonpatients. The power of the matching procedure is better exemplified in Table 1, where, for example, 24% of the patients had scores on intellectual functioning falling below the lowest range of their matched nonpatients. Mr. Storch et al. are also concerned about the validity of the intellectual and behavioral measures. The intellectual measures were all revised Hebrew versions of common measures of verbal and nonverbal intelligence (i.e., shorter versions, as in the case of Raven's Progressive Matrices-R, the Otis test of mental ability, or similar tests in a pen-and-paper format [the Arithmetic-R and Similarities-R tests]) (2), and scores on these tests have been shown to be equivalent to scores on IQ tests (Gal, 1986). The behavioral measures have been described in more detail in a recent article by our group (1).
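
The point that overlap limits both sensitivity and specificity at any single cutoff can be seen in a small numerical sketch. The distributions and cutoff below are assumed for illustration and are not those of the article; the sketch simply shows how two overlapping score distributions trade sensitivity against specificity as the cutoff moves.

```python
# Illustrative only: how overlap between patient and nonpatient score
# distributions limits sensitivity and specificity at a single cutoff.
# The means, SDs, and cutoff are assumptions for this sketch, not values
# from the article.
from statistics import NormalDist

nonpatients = NormalDist(mu=100, sigma=15)  # assumed nonpatient scores
patients = NormalDist(mu=90, sigma=15)      # assumed patient scores (lower mean)

cutoff = 85.0  # scores below the cutoff count as a positive screen
sensitivity = patients.cdf(cutoff)            # patients correctly flagged
specificity = 1.0 - nonpatients.cdf(cutoff)   # nonpatients correctly passed

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
# With heavily overlapping distributions, raising the cutoff buys
# sensitivity only at the cost of specificity, and vice versa; matching
# each patient to schoolmates is one way to tighten the comparison.
```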

Regarding the cutoff values, the behavioral measures had a normal distribution in the general population, and the lowest two quintiles therefore represented performance at 1 SD below the mean, a value that seems to be a reasonable cutoff point between normal and subnormal performance (e.g., in intellectual performance) and can be easily applied in clinical settings.

References

1. Rabinowitz J, Reichenberg A, Weiser M, Mark M, Kaplan Z, Davidson M: Cognitive and behavioural functioning in men with schizophrenia both before and shortly after first admission to hospital: cross-sectional analysis. Br J Psychiatry 2000; 177:26–32

2. Lezak MD: Neuropsychological Assessment, 3rd ed. New York, Oxford University Press, 1995