Research in recent decades has shown that neuropsychological deficits in schizophrenia are highly prevalent. Deficits in attention, processing speed, memory, working memory, and executive function have all been consistently reported in patients with schizophrenia (1).
Meta-analytic techniques have been instrumental in the synthesis of the enormous literature on cognitive functioning in schizophrenia. A meta-analytic approach has several advantages (2). First, by quantitatively combining the results of a number of studies, the power of statistical testing is increased substantially. Second, studies are differentially weighted by sample size. Finally, by extracting information quantitatively from existing studies, meta-analysis allows us to examine more precisely the influence of potential moderators on effect size.
Recent meta-analyses of specific cognitive domains and measures in schizophrenia have highlighted a processing speed deficit as central to cognitive impairment in schizophrenia (3, 4). Processing speed refers to the number of correct responses an individual is able to make during a task within a given amount of time. Digit symbol coding tasks, in which participants must correctly substitute symbols for digits (or digits for symbols) using a key under timed conditions, are ideally suited to measuring this ability (5). Because of their ease of use, and because data from recent meta-analyses suggest that they might be useful in clinical settings for screening or for assessing treatment effects in schizophrenia patients (3), tests of this type have been used increasingly in research. This practice has produced additional information that is eligible for inclusion in meta-analyses focusing on processing speed.
The goals of this study were twofold. The first goal was to extend the findings of Dickinson and colleagues (3) and of other meta-analyses that have shown a pronounced impairment in processing speed by incorporating recent reports of studies that used coding tasks. The second goal was to closely examine the role of study characteristics as potential moderators of coding task performance. Cognitive performance is susceptible to the influence of factors such as medication effects, illness severity, and research design. Consequently, it is important to carefully consider the role of these factors in moderating any apparent impairment in performance.
Data Sources and Study Selection
We first located the studies identified by Dickinson et al. (6-41). We omitted one of these studies because it did not report descriptive statistics in sufficient detail for an effect size to be calculated (42). We then located subsequently published studies, using the same methods as Dickinson et al. (3). We conducted a systematic search of MEDLINE and PsycINFO for the period directly following that covered in the original article (May 2006 to January 2009). Based on titles and abstracts, we identified 30 articles for potential inclusion. The inclusion criteria specified that the study must have included a coding task plus measures of at least two other cognitive domains; the diagnosis of schizophrenia must have been made using contemporary diagnostic criteria; the results must have been reported in enough detail for an effect size to be calculated; and the study must have been reported in English. Seven of the 30 articles met the inclusion criteria (43-49). In addition to these, we included one paper that was in press at the time (one of the authors [A.R.] had access to the original data) (50) and the three studies that Dickinson et al. (3) had omitted in an effort to reduce heterogeneity (51-53). Thus, we included a total of 11 new studies.
Data Extraction and Synthesis
For each study, we extracted the sample size of each group and the mean and standard deviation for six tests of cognitive function. Following Dickinson et al., we combined the digit symbol substitution test with variant coding tasks, such as the symbol digit modalities test, which represented the main measure of processing speed. Additional tests of processing speed that we included were the Trail Making Test, Part A, and the letter and category verbal fluency tests. In addition to these tests, we included the Trail Making Test, Part B, and the Wisconsin Card Sorting Test categories completed and perseverative errors subtests as measures of executive function. We did not incorporate any other measure used by Dickinson et al. (3) because the newer studies did not include them.
Where available, we extracted information on a number of study characteristics. These variables were identified as potential sources of heterogeneity and were collected for use as covariates in a metaregression analysis. They included publication year; chronicity of illness (first episode compared with chronic illness); sample size; mean age; IQ (seven studies used the full-scale WAIS-R [7, 8, 13, 20, 24, 29, 45], 12 used the short-form WAIS-R [9, 10, 12, 14, 15, 22, 23, 32, 34, 38, 40, 47], four used estimated IQ scores based on reading measures [19, 43, 44, 50], and one used Raven's IQ); and the mean daily dose of antipsychotic medication in chlorpromazine equivalents for both first- and second-generation antipsychotics (both of which were used in all studies). Studies used different methods to calculate chlorpromazine equivalent doses (54, 55). Because the reports did not provide sufficient information to allow independent calculation of chlorpromazine equivalents, we used the reported values.
Meta-analyses were conducted using the Comprehensive Meta-Analysis software package (56). Hedges' g was calculated for all measures; this method estimates an effect size based on the standardized difference between group means, weighted by the inverse within-study variance, and corrects for sample size bias (57). A random-effects model was implemented; this model estimates an error term for between-study variance on the assumption that effect size magnitude varies between studies, yields a more conservative effect size estimate, provides more accurate confidence intervals, and substantially reduces the risk of type I error (57, 58).
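The two computations above can be illustrated concretely. The following Python sketch is not the Comprehensive Meta-Analysis package itself but a minimal illustration using the standard Hedges' g formulas and the DerSimonian-Laird estimator, one common implementation of a random-effects model:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample bias correction
    g = j * d
    # approximate within-study variance of g
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

def random_effects_pool(effects):
    """DerSimonian-Laird random-effects summary for (g, variance) pairs."""
    w = [1 / v for _, v in effects]          # fixed-effect (inverse-variance) weights
    g = [e for e, _ in effects]
    fixed = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, g))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_star = [1 / (v + tau2) for _, v in effects]   # random-effects weights
    pooled = sum(wi * gi for wi, gi in zip(w_star, g)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

Negative pooled values correspond to poorer patient performance, matching the sign convention of the effect sizes reported here; adding tau2 to each study's variance is what makes the random-effects confidence interval wider, and hence more conservative, than its fixed-effect counterpart.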
Next, a meta-influence analysis was conducted. The meta-influence analysis is a form of sensitivity analysis that allows examination of the possible influence of individual studies on the overall meta-analysis summary estimate. Using this method, an average effect size is calculated while leaving out one study at a time from the set of studies available for the meta-analysis.
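In outline, a leave-one-out computation of this kind might look as follows (an illustrative Python sketch; for brevity it pools with a simple inverse-variance weighted mean rather than the full random-effects model):

```python
def leave_one_out(effects):
    """Meta-influence (leave-one-out) analysis: recompute the pooled
    effect with each study omitted in turn. `effects` is a list of
    (effect_size, variance) pairs; pooling here is an inverse-variance
    weighted mean, kept simple for illustration."""
    def pool(subset):
        weights = [1 / var for _, var in subset]
        weighted = sum(w * g for w, (g, _) in zip(weights, subset))
        return weighted / sum(weights)
    return [pool(effects[:i] + effects[i + 1:]) for i in range(len(effects))]
```

Scanning the returned list for estimates that differ markedly from the full-sample summary flags studies exerting undue influence.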
We also applied a homogeneity analysis and examined potential publication bias.
It is important to investigate homogeneity because low homogeneity, or high interstudy heterogeneity, could bias the overall effect size estimate. Heterogeneity can be investigated using a metaregression analysis, which examines potential moderators of the effect size (59). A significant moderator effect identifies the variable in question as a source of heterogeneity; the absence of such an effect suggests that the variable does not influence the average effect size estimate. We computed the χ2 statistic Q and the I2 statistic in order to assess the homogeneity of effect sizes. Moderator analyses were conducted using random-effects metaregression with the study characteristics as covariates, using the METAN and METAREG commands in Stata, version 10 (Stata Corp., College Station, Tex.). Because of the increased risk of type I error when making multiple comparisons, we used a permutation test approach (with 1,000 Monte Carlo simulations) to calculate p values (60).
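The homogeneity statistics and the permutation approach can both be sketched briefly in Python (an illustration only; the permutation step uses an unweighted regression slope for simplicity, whereas the Stata METAREG implementation is weighted):

```python
import random

def heterogeneity(effects):
    """Cochran's Q and I^2 for a list of (effect_size, variance) pairs."""
    w = [1 / v for _, v in effects]
    g = [e for e, _ in effects]
    pooled = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
    q = sum(wi * (gi - pooled) ** 2 for wi, gi in zip(w, g))
    df = len(effects) - 1
    # I^2: percentage of variation attributable to heterogeneity rather than chance
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return q, i2

def permutation_p(x, y, n_perm=1000, seed=0):
    """Permutation p-value for the slope of effect sizes y on moderator x:
    shuffle y, refit the slope, and count fits at least as extreme as the
    observed one (with an add-one correction)."""
    def slope(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
                / sum((a - mx) ** 2 for a in xs))
    observed = abs(slope(x, y))
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_perm)
               if abs(slope(x, rng.sample(y, len(y)))) >= observed)
    return (hits + 1) / (n_perm + 1)
```

The permutation loop preserves the observed distributions of both variables while breaking any association between them, which is what makes the resulting p value robust to multiple testing concerns.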
Publication bias is the tendency on the part of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings (61). In order to explore publication bias, we created a funnel plot for each measure used in the study; this represents the relationship between the effect size estimate for each study and the corresponding standard error, which gives a measure of precision. If there is no evidence of publication bias, the points should be symmetrically distributed within the funnel.
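Funnel-plot asymmetry is usually judged visually, but a common numerical companion (not used in this study; included here purely as an illustration) is Egger's regression test, which regresses each study's standardized effect on its precision:

```python
import math

def egger_intercept(effects):
    """Egger's regression test for funnel-plot asymmetry: regress each
    study's standardized effect (g/se) on its precision (1/se). An
    intercept far from zero indicates asymmetry (e.g., small-study
    effects); an intercept near zero is consistent with a symmetric
    funnel."""
    se = [math.sqrt(v) for _, v in effects]
    y = [g / s for (g, _), s in zip(effects, se)]  # standardized effects
    x = [1 / s for s in se]                        # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx
```

Intuitively, if small (imprecise) studies report systematically larger effects than large ones, the regression line no longer passes through the origin, mirroring the visual asymmetry of the funnel.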
Analyses were first conducted on those studies identified by Dickinson et al. (3) and then on the extended set of studies, both with and without those identified by Dickinson et al.
Thirty-six studies with 1,915 schizophrenia patients and 1,416 healthy comparison subjects were identified in the original meta-analysis by Dickinson et al. (3). Table 1 lists the effect sizes calculated for each cognitive measure alongside those reported by Dickinson et al. The effect size calculated for coding tasks was large, at -1.57 (Q=38.85, df=35, p=0.30), precisely the value reported by Dickinson et al. The effect sizes calculated for the other cognitive measures are in line with those reported by Dickinson et al. in that their rank order is identical, with category fluency the second largest, at -1.37 (Q=7.85, df=7, p=0.35).
Table 1. Meta-Analysis of Differences in Digit Symbol Coding and Other Cognitive Measures Between Schizophrenia Patients and Healthy Comparison Subjectsa
Forty-seven studies with 4,135 schizophrenia patients and 2,292 healthy comparison subjects were entered into the extended meta-analysis. Table 1 lists the effect sizes calculated for each cognitive measure, along with the percentage change from the replication study results caused by including the 11 new studies, as well as effect sizes and percentage change for all 47 studies. For all 47 studies, the coding task effect size was -1.50 (Q=205.67, df=46, p<0.001), a decrease of 5% from the replication study. The effect sizes for other measures were in line with those reported by Dickinson et al. (3) in that their rank order was the same, with category fluency again the second largest, at -1.31 (Q=16.10, df=8, p=0.03). The sensitivity analysis did not indicate that any one study exerted particular influence on the average effect size. Compared with the replication meta-analysis, there was a substantial increase in the degree of heterogeneity for coding. Heterogeneity for the other measures remained relatively unchanged from the replication meta-analysis.
For coding tasks, variation in effect size that was attributable to heterogeneity was high (I2=77.64%). Metaregression analyses were therefore conducted in order to investigate possible sources of heterogeneity and their influence on coding task findings. Of the six potential moderator variables examined, three were significantly associated with heterogeneity: publication year, IQ difference between case and comparison subjects, and chlorpromazine equivalent daily dose.
Publication year
Forty-seven studies with 4,135 schizophrenia patients and 2,292 healthy comparison subjects were entered into the metaregression. Coding task effect size was significantly related to year of publication (metaregression β coefficient=0.04, 95% confidence interval [CI]=0.01 to 0.06, p=0.007). This relationship, represented in Figure 1, shows that the more recently an article was published, the smaller the effect size was for coding tasks. When we stratified the analysis using studies published between 1992 and 1996 compared with those published between 2006 and 2009, the effect sizes were -1.60 (Q=4.45, df=8, p=0.814) and -1.19 (Q=57.60, df=8, p<0.001), respectively.
Figure 1. Association Between Publication Year and Symbol Coding Task Effect Size
IQ difference between case and comparison subjects. Twenty-four studies with 1,487 schizophrenia patients and 1,274 healthy comparison subjects were entered into the metaregression. Coding task effect size was significantly related to difference in IQ between comparison and schizophrenia samples (metaregression β coefficient=-0.032, 95% CI=-0.055 to -0.01, p=0.006). As shown in Figure 2, this relationship is such that the better matched comparison subjects and patients were on IQ score, the smaller the effect size was for coding tasks. The effect size for articles with an IQ difference of more than 10 points was -1.68 (Q=19.90, df=11, p=0.047), and for those with an IQ difference of less than 10 points, -1.18 (Q=39.69, df=11, p=0.001). The intercept value was -1.00, implying that if there were no IQ difference between comparison and schizophrenia groups, there would still be a sizable effect size for the coding tasks. Excluding studies that used non-Wechsler-based IQ measures from the metaregression analysis did not change the moderating effect of IQ.
Figure 2. Association Between IQ Difference and Symbol Coding Task Effect Sizea
aThe position of the data point at the top left suggests that it might be an outlier; in that particular study, the schizophrenia sample had a marginally higher IQ than the comparison sample (a difference of 0.35 IQ points). However, the relationship between IQ difference and effect size remains significant when this study is removed from the analysis (metaregression β coefficient=-0.024, 95% CI=-0.047 to -0.001, p=0.037).
Chlorpromazine equivalent daily dose
Fifteen studies with 865 schizophrenia patients and 565 comparison subjects were entered into the metaregression analysis. Coding task effect size was significantly related to chlorpromazine equivalent daily dose (metaregression β coefficient=-0.001, 95% CI=-0.002 to -0.0003, p=0.007). This association, represented in Figure 3, shows a strong relationship between medication and symbol coding performance such that the smaller the chlorpromazine equivalent daily dose, the smaller the coding task effect size. The overall coding task effect size for the studies included in this analysis was -1.66 (Q=26.82, df=14, p=0.020), which does not differ substantially from the overall coding task effect size for the full set of studies. Stratifying the analysis by the studies with the four highest compared with the four lowest drug doses resulted in effect sizes of -2.04 (Q=3.63, df=3, p=0.304) and -1.24 (Q=0.44, df=3, p=0.933), respectively.
Figure 3. Association Between Mean Chlorpromazine Equivalent Daily Dose and Coding Task Effect Sizea
aThe position of the point in the lower right-hand corner of the figure suggests that it might be an outlier; however, as indicated by the size of this data point, its contribution to the metaregression analysis was relatively small. When it is removed, the relationship remains significant (metaregression β coefficient=-0.001, 95% CI=-0.002 to -0.00001, p=0.031).
Metaregression analyses were applied to all other cognitive measures using the same moderator variables. This was done despite the relatively low heterogeneity for these tasks because, given the low power of heterogeneity tests, it is advisable to conduct metaregression analyses even in the absence of significant Q and I2 statistics (56). No significant relationships were found between any of the cognitive tasks (including the Trail Making Test, Part A; the Trail Making Test, Part B; the Wisconsin Card Sorting Test subscales; and the verbal fluency tasks) and any of the moderator variables.
In the analysis of publication bias, coding tasks were the only measure to show any asymmetry. However, asymmetry in funnel plots can arise as a consequence of heterogeneity, of which there is a substantial amount for coding tasks (62).
A number of meta-analyses have found that the effect size of digit symbol coding tasks in schizophrenia is significantly larger than the effects of other cognitive measures (3, 4). This study replicated that finding, supporting the assertion by Dickinson et al. (3) that a processing speed impairment is a "central feature of the cognitive deficit in schizophrenia" (p. 1). However, the addition of 11 new studies to the meta-analysis suggested an alternative interpretation of this result. The studies included in the analysis varied considerably in design, and by exploiting this variation we were able to closely examine factors that might modify the relationship between schizophrenia and coding task performance. We found that coding task effect size varied as a function of publication year, IQ difference between comparison and schizophrenia samples, and chlorpromazine equivalent daily dose. Because of the confounding influence of these moderator variables, it is possible that previous meta-analyses overestimated the effect size for coding tasks. This implies that if the moderator variables were controlled for, coding task performance might not stand out as being particularly impaired.
The strongest moderator variable relationship observed in our study was between chlorpromazine equivalent daily dose and coding task effect size, such that the higher the daily dose, the greater the impairment. Dickinson et al. (3) found that the coding task effect size remained unchanged when they limited the analysis to neuroleptic-naive patients. However, as they acknowledged, that analysis consisted of only three studies, and the use of neuroleptics was limited in two of them. Stratifying our analysis by low compared with high neuroleptic daily dose produced a difference in effect size of 0.8, corresponding to a large effect (63). These results indicate that antipsychotic dosage has a sizable impact on processing speed. It should be noted, however, that the moderator analysis suggests that even if medication effects were excluded, the remaining deficit in coding tasks in schizophrenia patients would still be substantial.
IQ was the other important moderating variable. While the moderating effect of IQ may seem relatively unsurprising given that subtest and full-scale IQ scores are correlated (64), it is also an important finding. If substantial differences in IQ between comparison and schizophrenia samples are ignored, researchers may be at risk of interpreting a large effect size for a specific cognitive ability as a selective impairment when in fact it is artificially inflated by general impairment in IQ. However, our results indicate that a large deficit in coding task performance in patients with schizophrenia remains even if IQ is accounted for.
It is unclear why coding task effect sizes should be smaller the more recently an article has been published. Several factors could underlie this relationship. First, it might reflect improvements in study design over time; for example, IQ matching between schizophrenia and comparison samples significantly predicts coding task effect size. Second, the effect of publication year could be explained by the moderating effect of medication, such that the two confounding factors interact. Conventional antipsychotic medications, whose extrapyramidal side effects are likely to impair psychomotor performance, are more likely to have been used in the earlier studies. Conversely, the newer atypical antipsychotic medications, likely to have been used in a higher proportion of patients in the more recent studies, are generally thought to induce fewer adverse psychomotor side effects (65), although there is some evidence to suggest that atypical antipsychotics, in particular risperidone, may adversely affect working memory (66-68). Alternatively, different dosages of these medications may have been used over different periods. Unfortunately, detailed information on antipsychotic type was rarely reported.
It is possible that all these moderating variables overlap or are confounded in some way. For example, patients with lower IQ may also be on higher doses of medication because they have a more severe form of the disorder, or medication may adversely affect IQ (and digit symbol coding). Unfortunately, statistical methods for dealing with confounding in meta-analyses apart from stratification are not, to our knowledge, readily applicable.
The results of the moderator analysis conducted by Dickinson et al. (3) differ slightly from our own, as we did not find moderating effects of age and chronicity. This could be a result of the disparate methods of analysis. There are two types of moderator analysis: subgroup analysis and metaregression. Dickinson et al. used the former, creating groups with a median split; we applied the latter, thus using the full range of the data. Furthermore, in our metaregression analysis, a random-effects approach was implemented whereby the contribution of each study was weighted. Another reason for differences in results between the two studies is the addition of a substantial amount of data in our study, which might have altered the relationships between variables.
Impaired performance on category fluency measures was second to coding tasks and was followed by letter fluency; this pattern of results is also evident in other meta-analyses using these measures (3, 4). These results complement other studies that show greater impairment for category fluency than for letter fluency (69). Fluency is a complex measure of both processing speed and other distinct cognitive abilities (70, 71). It is thought that impairment in executive control underlies poor performance in general on verbal fluency tests but that a specific compromise to the semantic store hinders performance on category fluency in schizophrenia (69). Notably, the metaregression analyses did not reveal any significant moderator variables for either of the fluency tasks, or indeed for any of the other cognitive measures. This suggests that the effects for these measures are not subject to distortion by confounding factors.
These results should be viewed with the caveat that metaregression is a form of observational association and therefore cannot be used to make causal inferences about the data (60). There may be confounding factors that underlie the relationships reported here. Moreover, given the sparse reporting of study characteristics, the metaregression analyses captured only some of the studies included in the meta-analyses. Despite these shortcomings, our results suggest that research investigating processing speed in schizophrenia should consider the effect of potential moderating factors. Another potential issue for this article is that the largest study included in the meta-analysis is the CATIE trial (46). Inclusion of this study could be considered problematic as it used normative data rather than a comparison group in order to calculate effect sizes. The use of norms as opposed to comparison subjects is an ecologically valid approach and is used both in standard neuropsychological assessment and in research (1). However, our sensitivity analysis demonstrated that inclusion of the CATIE study did not bias the results of the meta-analysis or the moderator analysis. Removing the CATIE study from the analysis did not substantially change the average effect size estimate of the meta-analysis, and the effect of year of publication remained statistically significant without CATIE.