
Abstract

Objective:

Reducing unsuccessful treatment trials could improve depression treatment. Quantitative EEG (QEEG) may predict treatment response and is being commercially marketed for this purpose. The authors sought to quantify the reliability of QEEG for response prediction in depressive illness and to identify methodological limitations of the available evidence.

Method:

The authors conducted a meta-analysis of diagnostic accuracy for QEEG in depressive illness, based on articles published between January 2000 and November 2017. The review included all articles that used QEEG to predict response during a major depressive episode, regardless of patient population, treatment, or QEEG marker. The primary meta-analytic outcome was the accuracy for predicting response to depression treatment, expressed as sensitivity, specificity, and the logarithm of the diagnostic odds ratio. Raters also judged each article on indicators of good research practice.

Results:

In 76 articles reporting 81 biomarkers, the meta-analytic estimates showed a sensitivity of 0.72 (95% CI=0.67–0.76) and a specificity of 0.68 (95% CI=0.63–0.73). The logarithm of the diagnostic odds ratio was 1.89 (95% CI=1.56–2.21), and the area under the receiver operator curve was 0.76 (95% CI=0.71–0.80). No specific QEEG biomarker or specific treatment showed greater predictive power than the all-studies estimate in a meta-regression. Funnel plot analysis suggested substantial publication bias. Most studies did not use ideal practices.

Conclusions:

QEEG does not appear to be clinically reliable for predicting depression treatment response, as the literature is limited by underreporting of negative results, a lack of out-of-sample validation, and insufficient direct replication of previous findings. Until these limitations are remedied, QEEG is not recommended for guiding selection of psychiatric treatment.

Major depressive illness remains a leading worldwide contributor to disability despite the growing availability of medications and psychotherapies (1). The persistent morbidity is partly due to the difficulty of treatment selection. An adequate “dose” of cognitive-behavioral therapy for depression is 10–12 weeks (2). An antidepressant or augmentation medication trial requires at least 4 weeks at an adequate dosage (2). Patients may spend months to years searching through options before responding to treatment (3). Knowing sooner whether a treatment will be effective could increase the speed and possibly the rate of overall treatment response. The high potential value of treatment prediction biomarkers has spurred extensive research. Unfortunately, it has also encouraged commercial ventures that market predictive tests to both patients and physicians, often without the support of evidence of clinical efficacy (4). Inappropriate use of invalid “predictive” tests could easily increase health care costs without benefiting patients (5).

Predictive biomarker research emphasizes pretreatment and “treatment-emergent” biomarkers. Treatment-emergent markers are physiologic changes that precede and predict the response to effective treatment. They may represent physiologic processes mediating the clinical response, whereas pretreatment markers may represent moderating factors. If we could confidently predict a treatment’s efficacy or nonefficacy 1–2 weeks into a treatment trial, we could move much more quickly through clinical decision trees. For novel therapies such as brain stimulation, treatment-emergent markers could also guide “closed-loop” treatment, where an aspect of the stimulation is titrated in direct response to the physiologic marker (6, 7).

Electroencephalography (EEG) is a promising source of psychiatric biomarkers. Unlike serum chemistry or genetic variation, EEG directly measures brain activity. EEG is potentially more cost-effective than neuroimaging techniques, such as functional MRI (fMRI) and nuclear medicine computed tomography (PET/SPECT), which have also been proposed as biomarkers (8–10). EEG recordings can be implemented more feasibly in a wide variety of clinical settings, and EEG has essentially no safety concerns, whereas PET involves radiation and MRI cannot be used in the presence of metal foreign bodies.

Psychiatric biomarker studies have emphasized quantitative EEG, or QEEG (see the text box). Baseline and treatment-emergent biomarkers, as qualitatively reviewed in recent years (11–13), include simple measures such as loudness dependence of auditory evoked potentials (LDAEP) (14–22), oscillatory power in the theta and alpha ranges (see the text box) (14, 23–39), and the distribution of those low-frequency oscillations over the scalp (35, 37, 40–45). With the increasing power of modern computers, biomarkers involving multiple mathematical transformations of the EEG signal became available. These include a metric called cordance (23, 26, 46–57) and a proprietary formulation termed the Antidepressant Treatment Response (ATR) index (57–61). Each is based on both serendipitous observations and physiologic hypotheses of depressive illness (11, 12). LDAEP is believed to measure serotonergic function, oscillations are linked to top-down executive functions (62, 63), and cordance may reflect cerebral perfusion changes related to fMRI signals. ATR and related multivariate markers (64–66) merge these lines of thought to increase predictive power. Recent studies (including the Canadian Biomarker Integration Network in Depression [CAN-BIND], the International Study to Predict Optimized Treatment–Depression [iSPOT-D], and the Establishing Moderators and Biosignatures of Antidepressant Response for Clinical Care study [EMBARC]) have sought to create large multicenter data sets that may allow more robust biomarker identification (40, 67–72).

Basics of EEG Terminology and Biomarkers

  • Montage: placement of individual sensors (electrodes) on a patient’s scalp. The most common is the international 10-20 system, but many alternatives exist, particularly as the number of sensors increases above 64.

  • Quantitative EEG (QEEG): analysis of EEG through standardized and reproducible mathematical algorithms, as opposed to the visual inspection more common in neurologic diagnosis.

  • Alpha, theta, beta, gamma: patterns of rhythmic (sine-wave-like) electrical activity believed to be important for cognition and brain network coordination. Each occurs at a specific frequency (cycles per second, or Hz): 5–8 Hz for theta, 8–15 Hz for alpha, 15–30 Hz for beta, and above 30 Hz for gamma. The definitions are not exact and the boundaries of each band vary between authors.

  • Evoked potential: the average brain response to a repeated stimulus, e.g., a pure tone played 100 times. Averaging across the individual presentations (trials) removes background noise, identifying the common/repeatable component.

  • Source localization: applying mathematical transformations that estimate which brain regions likely gave rise to the electrical activity recorded at the scalp. This “inverse problem” has infinite solutions, and many algorithms have been proposed to narrow this to a single best answer.

  • Cordance: a measure combining multiple mathematical transforms of EEG power across electrodes, often in the prefrontal cortex. Theorized to measure activity related to cerebral perfusion.
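As a concrete illustration of the band-power markers defined in this text box, the sketch below computes theta- and alpha-band power for a synthetic signal using a naive discrete Fourier transform. This is a minimal, self-contained demonstration with an assumed synthetic input; real QEEG pipelines use Welch-averaged FFTs over many artifact-rejected epochs, and the helper name `band_power` is ours, not from any cited study.

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed DFT power of a real signal in the frequency band [f_lo, f_hi) Hz.

    Naive DFT, adequate for a short illustrative segment; production QEEG
    analysis would use Welch-averaged FFTs over artifact-rejected epochs.
    """
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n  # frequency (Hz) of DFT bin k
        if f_lo <= freq < f_hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power

# One second of a synthetic 10 Hz "alpha" oscillation, sampled at 256 Hz
fs = 256
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]

theta = band_power(x, fs, 5, 8)   # theta band per the text-box definition
alpha = band_power(x, fs, 8, 15)  # alpha band per the text-box definition
print(alpha > theta)              # the 10 Hz signal lands in alpha → True
```

The band edges follow the text-box definitions; as noted there, exact boundaries vary between authors, so published markers must state theirs explicitly.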

Despite the rich literature, the value of QEEG as a treatment response predictor in depressive illness remains unclear. This is in part because there has been no recent meta-analysis aimed at the general psychiatrist or primary care practitioner. The last formal American Psychiatric Association position statement on EEG was issued in 1991 (73), at which time personal computers had a fraction of the computing power of today’s computers. A 1997 American Academy of Neurology report (74) focused on QEEG in epilepsy and traumatic brain injury. The most recent report, from the American Neuropsychiatric Association, was similarly cognition oriented (75). All of these reports are over a decade old. More recent reviews have delved into the neurobiology of QEEG but have not quantitatively assessed its predictive power (11–13). The closest was a 2011 meta-analysis that combined imaging and EEG to assess the role of the rostral cingulate cortex in major depression (76).

To fill this gap in clinical guidance, we performed a meta-analysis of QEEG as a predictor of treatment response in depression. We cast a broad net, considering all articles on adults with any type of major depressive episode, receiving any intervention, and with any study design or outcome scale. This approach broadly evaluated QEEG’s utility without being constrained to specific theories of depression or specific markers. We complemented that coarse-grained approach with a meta-regression investigating specific biomarkers to ensure that inconsistent results across the entire QEEG field would not mask a single effective marker.

Method

Our review focused on two primary questions: What is the overall evidence base for QEEG techniques in predicting response or nonresponse in the treatment of depressive episodes? Given recent concerns about reliability in neuroimaging (10, 77), how well did published studies implement practices that support reproducibility and reliability?

We searched PubMed for articles related to EEG, major depression, and response prediction (see the online supplement). We considered articles published in any indexed year. From these, we kept all that reported prediction of treatment response, to any treatment, in any type of depressive illness, using any EEG metric. Our prospective hypothesis was that EEG cannot reliably predict treatment response. We chose broad inclusion criteria to maximize the chance of a signal detection that falsified our hypothesis. That is, we sought to determine whether there is sufficient evidence to recommend the routine use of any QEEG approach to inform psychiatric treatment. This is an important clinical question, given the commercial availability and promotion of psychiatric QEEG. We did not include studies that attempted to directly select patients’ medication based on an EEG evaluation, an approach sometimes termed “referenced EEG” (78). Referenced EEG is not a diagnostic test, and as such does not permit the same form of meta-analysis.

The meta-analysis of diagnostic markers depends on 2×2 tables summarizing correct and incorrect responder and nonresponder predictions (79). Two trained raters extracted these from each article, with discrepancies resolved by discussion and final arbitration by the first author. Where necessary, table values were imputed from other data provided in the article (see the online supplement). For articles that examined more than one marker or treatment (19, 29, 52, 57, 60, 67, 80), we considered them as separate studies. We reasoned that treatments with different mechanisms of action (e.g., repetitive transcranial magnetic stimulation [rTMS] versus medication) may have different effects on reported biomarkers, even if studied by a single investigator. For studies that reported more than one method of analyzing the same biomarker (23, 34, 57), we used the predictor with the highest positive predictive value. This further increased the sensitivity and the chance of a positive meta-analytic result. Articles that did not report sufficient information to reconstruct a 2×2 table (14, 15, 17, 21, 25, 28, 32, 33, 42, 43, 8192) were included in descriptive and study quality reporting but not in the main meta-analysis.
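The accuracy quantities pooled in this meta-analysis all derive from each study's 2×2 table. A minimal sketch of the arithmetic, using hypothetical counts not drawn from any study in the review (the helper name `diagnostic_accuracy` is ours):

```python
import math

def diagnostic_accuracy(tp, fn, fp, tn):
    """Sensitivity, specificity, and log diagnostic odds ratio from a
    2x2 table of predicted vs. observed treatment response."""
    sens = tp / (tp + fn)  # responders correctly predicted responders
    spec = tn / (tn + fp)  # nonresponders correctly predicted nonresponders
    log_dor = math.log((tp * tn) / (fp * fn))  # log diagnostic odds ratio
    return sens, spec, log_dor

# Hypothetical study: 20 responders correctly flagged, 8 missed,
# 6 false alarms, 16 nonresponders correctly flagged
sens, spec, log_dor = diagnostic_accuracy(tp=20, fn=8, fp=6, tn=16)
print(round(sens, 2), round(spec, 2), round(log_dor, 2))  # → 0.71 0.73 1.9
```

Studies that report only continuous outcomes or omit one cell of this table cannot contribute to a diagnostic-accuracy pooling, which is why imputation from other reported data was sometimes required.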

For quality reporting, we focused on whether the study used analytic methods that increase the reliability of conclusions. Chief among these is independent sample verification or cross-validation—reporting the algorithm’s predictive performance on a sample of patients separate from those originally used to develop it. Cross-validation has repeatedly been highlighted as essential in the development of a valid biomarker (10, 11, 61, 74, 93). Our two other markers of study quality were total sample size and correction for multiple hypothesis testing. Small sample sizes falsely inflate effect sizes (93), and correction for multiple testing is a foundation of good statistical practice.

We conducted univariate and bivariate meta-analyses using R’s mada package for analysis and metafor for visualizations (94–96). The univariate analysis summarized each study as the natural logarithm of its diagnostic odds ratio, using a random-effects estimator (79). Bivariate analysis used sensitivity and specificity following the approach of Reitsma et al. (97). From the bivariate analysis, we derived the area under the summary receiver operator curve and computed an area-under-the-curve confidence interval by 500 iterations of bootstrap resampling with replacement. For the univariate analysis, we report I² as a measure of study heterogeneity. As secondary analyses, we separated studies by biomarker type (LDAEP, power features, ATR, cordance, and multivariate) and by treatment type (medication, rTMS, or other). These were then entered as predictor variables in bivariate meta-regressions. Finally, to assess the influence of publication bias, we plotted log(diagnostic odds ratio) against its precision, expressed as both the standard error (funnel plot) and the effective sample size (98). We tested funnel plot asymmetry with the arcsine method described in Rücker et al. (99), as implemented in the meta package (100). This test has been suggested to be robust in the presence of heterogeneity and is the recommended choice of a recent working group (101). All of the above were preplanned analyses. Our analysis and reporting conform with the PRISMA guidelines (102); the checklist is included in the online supplement. The supplement also reports an alternative approach using standardized mean differences.
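The univariate random-effects step can be sketched with a DerSimonian-Laird estimator. This is a simplified Python illustration of the general technique on toy data, not the mada implementation the article actually used, and the function name and inputs are ours:

```python
import math

def dersimonian_laird(log_dors, variances):
    """Random-effects pooled estimate of per-study log diagnostic odds
    ratios via the DerSimonian-Laird method (a sketch, not R's mada)."""
    w = [1 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fe = sum(wi * y for wi, y in zip(w, log_dors)) / sum(w)
    # Cochran's Q measures dispersion of studies around the fixed effect
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_dors))
    df = len(log_dors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, floored at 0
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # heterogeneity fraction
    w_re = [1 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_dors)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, i2

# Toy data: three hypothetical studies' log(DOR) and within-study variances
pooled, se, i2 = dersimonian_laird([1.2, 1.9, 2.3], [0.30, 0.15, 0.45])
print(pooled > 0, 0.0 <= i2 <= 1.0)
```

The bivariate Reitsma model additionally models the correlation between sensitivity and specificity, which a univariate pooling of log(DOR) ignores.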

Results

Descriptive Study Characteristics

Our initial search produced 995 articles, to which we added 28 articles from other sources (see Figure S1 in the online supplement). Ninety of these appeared to discuss response prediction, and 76 articles, covering 81 biomarkers, were eligible for descriptive analysis. Of these, 53 articles, discussing 57 biomarkers, included sufficient information for meta-analysis. The majority of articles that did not include sufficient 2×2 table information still reported a statistically significant result (22/24, 91.7%).

Studies varied in the degree of treatment resistance, included and excluded diagnoses, details of EEG recording, and analytic and statistical approach (see Table S1 in the online supplement). Seventy percent (57/81) were studies of response to medication, with most of the remaining (17%, 14/81) predicting response to rTMS. Citalopram/escitalopram and venlafaxine were the most commonly studied medications, representing 23% (13/57) and 19% (11/57) of medication studies, respectively. Most reported markers were from resting-state EEG (70%, 57/81) and did not source-localize the EEG data (79%, 64/81). The most heavily represented biomarkers were low-frequency EEG power (31%, 25/81) and cordance (19%, 15/81).

No study was a preplanned independent-sample replication of a previous investigation with identical medication regimens and outcome measures. A few markers, however, were studied repeatedly with similar designs. Three LDAEP studies attempted to predict response to citalopram (19, 21, 103). They had inconsistent results that appeared to be dependent on source-localization technique. Olbrich et al. (67) used a vigilance marker that had previously been validated (using different recording/analysis methods) in smaller data sets. Cook et al. used cordance to predict the response to varying medication protocols, and a series of their studies (26, 54, 55) found better-than-chance prediction using the same equipment, outcome measures, and decision rule (cordance decrease at 1 week of treatment). Bares et al. used different patient populations (bipolar depression and major depressive disorder), treatments, and response definitions but also repeatedly reported successful response prediction with a 1-week cordance decrease (47, 50, 51). A pair of studies with a relatively large sample size found that the ATR predicted response to different medications (59, 60). These studies were based on earlier reports of cordance and power biomarkers by the same researchers using different medication regimens (104). The larger ATR studies reported a slight modification of a previously unpublished version of ATR (version 4.1 in the study reports, compared with version 4.0 in the trial protocol and previous poster presentations). Widge et al. (61) reported that the same version of ATR did not predict response to rTMS. Reports from the iSPOT-D study were hypothesis driven and meant to test biomarkers that had previously been reported, although they used a different medication protocol (25, 40, 69). Finally, theta power source-localized to the anterior cingulate cortex was reported by multiple laboratories as a predictor of response to different monoaminergic medications (14, 29, 31). A recent report from the EMBARC study (105) (which did not report information necessary for meta-analysis) also found cingulate theta to predict antidepressant response, although theta changes did not differ between patients receiving sertraline and those receiving placebo.

Study Quality

Study sizes were generally small, with a median N of 25. The distribution was trimodal (see Figure S2 in the online supplement), with peaks at approximately N=20, N=85, and N=660. The latter reflects reports from the recently concluded iSPOT-D study (40, 67, 69).

Most studies did not meet the quality metrics. Forty studies reported testing only a single EEG feature or finding no significant results and thus did not require correction for multiple comparisons. Of the 36 studies that tested multiple features, 67% (24/36) did not report use of a statistical correction. Of 71 markers reported to have significant predictive validity, only six (8%) were studied with cross-validation or another out-of-sample verification. Three of these were from the same first author (106108). One article reported using cross-validation but did not include cross-validated algorithm performance in its main text or abstract (60).

Overall Efficacy

For all biomarkers taken together, the meta-analysis suggested predictive power above chance (Figures 1–4). The meta-analytic estimate of sensitivity was 0.72 (95% CI=0.67–0.76), specificity was 0.68 (95% CI=0.63–0.73), and log(diagnostic odds ratio) was 1.89 (95% CI=1.56–2.21). These correspond to an area under the curve of 0.76 (95% CI=0.71–0.80). The univariate analysis did not suggest study heterogeneity as a driver of results (I²=0%; Q=55.9, p=0.48). This implies that, in general, QEEG may have predictive power for treatment response in depressive illness. No biomarker or treatment type showed significantly greater predictive power than another. In bivariate meta-regressions (see Tables S3 and S4 in the online supplement), the Akaike information criterion increased from its omnibus value of −115.7 to −104.1 for a model split by biomarker type and to −107.7 for a model split by treatment type. Increases in Akaike information criterion imply that model terms have no true explanatory power (109). This is further supported by most meta-regression model coefficients failing to reach significance. We considered the possibility that these results reflect older studies identifying incorrect candidates, with newer studies homing in on true effects. A bivariate meta-regression of diagnostic accuracy against publication year showed no effect (p>0.27, Z-test on regression coefficients).
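The reported I²=0% follows directly from the standard Higgins formula, I² = max(0, (Q − df)/Q), assuming the 57 meta-analyzed biomarkers each contributed one estimate (df = 56). A quick check:

```python
def i_squared(q, n_studies):
    """Higgins I^2 heterogeneity statistic (percent) from Cochran's Q,
    truncated at zero when Q falls below its degrees of freedom."""
    df = n_studies - 1
    return max(0.0, (q - df) / q) * 100

# Reported values: Q = 55.9 across 57 biomarkers. Q is below df = 56,
# so the statistic truncates to 0%, matching the text.
print(i_squared(55.9, 57))  # → 0.0
```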

FIGURE 1.

FIGURE 1. Results of a Meta-Analysis of Quantitative EEG (QEEG) Biomarkers in Depression Treatment: Sensitivitya

a The figure shows results, presented as forest plots, for sensitivity for prediction of clinical antidepressant response based on QEEG biomarkers. Meta-analytic estimates show modest predictive power for clinical response. (For more information about the studies listed, see Tables S1 and S2 in the online supplement.) Markers are indicated as follows: TMS=transcranial magnetic stimulation; Vfx=venlafaxine; ATR=Antidepressant Treatment Response index; Theta=theta power; Rbx=reboxetine; ACC=anterior cingulate cortex; OFC=orbitofrontal cortex; Bup=bupropion; Esc=escitalopram; Clo=clomipramine; Map=maprotiline.

FIGURE 2.

FIGURE 2. Results of a Meta-Analysis of Quantitative EEG (QEEG) Biomarkers in Depression Treatment: Specificitya

a The figure shows results, presented as forest plots, for specificity for prediction of clinical antidepressant response based on QEEG biomarkers. Meta-analytic estimates show modest predictive power for clinical response. (For more information about the studies listed, see Tables S1 and S2 in the online supplement.) Markers are indicated as follows: TMS=transcranial magnetic stimulation; Vfx=venlafaxine; ATR=Antidepressant Treatment Response index; Theta=theta power; Rbx=reboxetine; ACC=anterior cingulate cortex; OFC=orbitofrontal cortex; Bup=bupropion; Esc=escitalopram; Clo=clomipramine; Map=maprotiline.

FIGURE 3.

FIGURE 3. Results of a Meta-Analysis of Quantitative EEG (QEEG) Biomarkers in Depression Treatment: Log(Diagnostic Odds Ratio)a

a The figure shows results, presented as forest plots, for the log(diagnostic odds ratio) for prediction of clinical antidepressant response based on QEEG biomarkers. Meta-analytic estimates show modest predictive power for clinical response. (For more information about the studies listed, see Tables S1 and S2 in the online supplement.) Markers are indicated as follows: TMS=transcranial magnetic stimulation; Vfx=venlafaxine; ATR=Antidepressant Treatment Response index; Theta=theta power; Rbx=reboxetine; ACC=anterior cingulate cortex; OFC=orbitofrontal cortex; Bup=bupropion; Esc=escitalopram; Clo=clomipramine; Map=maprotiline.

FIGURE 4.

FIGURE 4. Results of a Meta-Analysis of Quantitative EEG (QEEG) Biomarkers in Depression Treatment: Summary Receiver Operator Curvea

a The figure shows results for the summary receiver operator curve for sensitivity of prediction of clinical antidepressant response based on QEEG biomarkers. The modest predictive power for clinical response shown in Figures 1–3 is also visible in the summary receiver operator curve, where the area under the curve is estimated at 0.76. rTMS=repetitive transcranial magnetic stimulation.

Funnel-plot analysis suggested that QEEG’s apparent predictive power is driven by small studies with strong positive results. The plot was specifically depleted in studies with smaller effect sizes that may not have reached prespecified significance thresholds (Figure 5A), and there was a tight correlation between effect size and the reciprocal of effective sample size (Figure 5B). The arcsine test for funnel plot asymmetry rejected the null hypothesis (t=6.33, p=4.64×10⁻⁸).

FIGURE 5.

FIGURE 5. Influence of Publication Bias in Results of a Meta-Analysis of Quantitative EEG (QEEG) Biomarkers in Depression Treatmenta

a Panel A is the funnel plot of study effect size (log of diagnostic odds ratio [DOR]) against the standard error of that effect size. Dashed lines represent the meta-analytic estimate and its 95% confidence interval. Small studies with effect sizes between 0 (no effect) and approximately 2 (modest effect) are underrepresented. Panel B is a scatterplot of effect size (log of diagnostic odds ratio) against the reciprocal of effective sample size, showing that the two are linearly related. The overlaid line is a robust linear regression fit. The association of effect size with study size holds across biomarker types, reflected here by different marker shapes. rTMS=repetitive transcranial magnetic stimulation.
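The general logic of a funnel-asymmetry test can be illustrated with a simple Egger-style regression on toy data. This is a generic sketch of the idea, not the Rücker arcsine test the article actually applied, and the helper name and data are ours: when small (high standard error) studies show systematically inflated effects, the regression intercept departs from zero.

```python
def egger_test(effects, std_errors):
    """Egger-style asymmetry check: regress standardized effect (z = effect/SE)
    on precision (1/SE) by ordinary least squares. A nonzero intercept
    suggests small-study effects such as publication bias. Illustrative
    sketch only; the article used the Rucker arcsine variant instead."""
    z = [e / s for e, s in zip(effects, std_errors)]
    prec = [1 / s for s in std_errors]
    n = len(z)
    mx, my = sum(prec) / n, sum(z) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(prec, z))
             / sum((x - mx) ** 2 for x in prec))
    return my - slope * mx  # regression intercept

# Toy data mimicking publication bias: the least precise (smallest) studies
# report the largest effects, so the intercept is pushed above zero
effects = [2.5, 2.2, 1.8, 1.2, 1.0]
ses = [0.9, 0.7, 0.5, 0.3, 0.2]
print(egger_test(effects, ses) > 0)  # → True
```

With effects independent of study size the intercept stays near zero; the asymmetry observed here is what that null case does not produce.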

Discussion

QEEG is commercially promoted to psychiatrists and our patients as a “brain map” for customizing patients’ depression treatment. Our findings indicate that QEEG, as studied and published to date, is not well supported as a predictive biomarker for treatment response in depression. Use of commercial or research-grade QEEG methods in routine clinical practice would not be a wise use of health care dollars. This conclusion is likely not surprising to experts in QEEG, who are familiar with the limitations of this literature. It is important, however, for practicing psychiatrists to understand the limitations, given the availability of QEEG as a diagnostic test. At present, marketed approaches do not represent evidence-based care. This mirrors other biomarker fields, such as pharmacogenomics and neuroimaging, for which recent reviews (4, 110) suggest that industry claims substantially exceed the evidence base. Like those markers, QEEG may become clinically useful, but only with further and more rigorous study.

We showed that the QEEG literature generally describes tests with reasonable predictive power for antidepressant response (sensitivity, 0.72; specificity, 0.68). This apparent utility, however, may be an artifact of study design and selective publication. We observed a strong funnel plot asymmetry, indicating that many negative or weak studies are not in the published literature. Of those that were published, many have small sample sizes. Small samples inflate effect sizes, which may give a false impression of efficacy (111). This is doubly true given the wide range of options available to EEG data analysts, which can lead to inadvertent multiple hypothesis testing (93). We also identified a common methodological deficit in the lack of cross-validation, which could overestimate predictive capabilities. Taken together, the findings suggest that community standards in this area of psychiatric research do not yet enforce robust and rigorous practices, despite recent calls for improvement (11, 77, 93). Our results indicate that QEEG is not ready for widespread use. Cordance and cingulate theta power are closest to proof of concept, with studies reporting successful treatment prediction across different medication classes and study designs (14, 29, 31, 4749, 51, 105). ATR has been successful across medication classes, but only when tested by its original developers (58, 59). A direct and identical replication of at least some of those findings is still necessary. These design and reporting limitations suggest that QEEG has not yet been studied or validated to a level that would make it reliable for regular clinical use.

We designed this meta-analysis for maximum sensitivity, because we sought to demonstrate QEEG’s lack of maturity as a biomarker. This makes our omnibus meta-analytic results overly optimistic and obscures three further limitations of QEEG as a response predictor. First, we accepted each individual study’s definition of the relevant marker without enforcing consistent definitions within or between studies. For example, alpha EEG has been defined differently for different sets of sensors within the same patient (34) and at different measurement time points (60). Enforcing consistent definitions would attenuate the predictive signal, because it reduces “researcher degrees of freedom” (77). On the other hand, an important limitation of our meta-analysis is that it could not identify a narrow biomarker. If QEEG can predict response to a single specific treatment or response in a biologically well-defined subpopulation, that finding would be obscured by our omnibus treatment. Marker-specific meta-analysis (as in reference 76) would be necessary to answer that question.

Second, we did not consider studies as negative if they found significant change in the “wrong” direction. For instance, theta cordance decline during the first week of treatment is believed to predict medication response (26, 47, 48, 51, 52, 55). Two studies reported instead that a cordance increase predicted treatment response (46, 53). LDAEP studies have reported responders to have both higher (17, 19, 20) and lower (15) loudness dependence compared with nonresponders. This could be explained by differences in collection technique, or in the biological basis of the interventions (e.g., the inconsistent study used noradrenergic medication, whereas LDAEP is thought to assess serotonergic tone). It could also be explained by true effect sizes of zero, and modeling these discrepancies differently would reduce our estimates of QEEG’s efficacy.

Third, and arguably most important, depression itself is heterogeneous (6, 112). Defining and subtyping it is one of the major challenges of modern psychiatry, and there have been many proposals for possible endophenotypes (6, 9, 12, 113, 114). When we consider that each primary study effectively lumped together many different neurobiological entities, the rationale for QEEG-based prediction is less clear. As an example, a recent attempt to validate an obsessive-compulsive disorder biomarker, using the originating group’s own software, showed a significant signal in the opposite direction from the original study (115). Furthermore, studies often predict antidepressant response for patients receiving medications with diverse mechanisms of action. Considering that patients who do not respond to one medication class (e.g., serotonergic) often respond to another (e.g., noradrenergic or multireceptor), it does not make sense for any single EEG measure to predict response to multiple drug types. Similarly, although the goal of many recent studies is to explicitly select medication on the basis of a single EEG recording (40, 70, 72, 78), this may not be possible given the many ways in which neurotransmitter biology could affect the EEG. Reliable electrophysiologic biomarkers may require “purification” of patient samples to those with identifiable circuit or objective behavioral deficits (6, 116) or use of medications with simple receptor profiles. It may also be helpful to shift from resting-state markers to activity recorded during standardized tasks (6) as a way of increasing the signal from a target cortical region. Task-related EEG activity has good test-retest reliability, potentially improving its utility as a biomarker (71).

We stress that our meta-analysis means that QEEG as currently known is not ready for routine clinical use. It does not mean QEEG research should be stopped or slowed. Many popular QEEG markers have meaningful biological rationales. LDAEP is strongly linked to serotonergic function in animals and humans (117). Cordance was originally derived from hemodynamic measures (11, 54). Neither cordance nor ATR changed substantially in placebo responders, even though both changed in medication responders (26, 59, 117). The theta and alpha oscillations emphasized in modern QEEG markers are strongly linked to cognition and executive function (62, 118). Our results do not imply that QEEG findings are not real; they call into question the robustness and reliability of links between symptom checklists and specific aspects of resting-state brain activity. If future studies can be conducted with an emphasis on rigorous methods and reporting, and with specific attempts to replicate prior results, QEEG still has much potential.

From the Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis; the Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston; the Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Mass.; the Department of Psychiatry, Stanford University School of Medicine, Palo Alto, Calif.; Butler Hospital and Warren Alpert Medical School of Brown University, Providence, R.I.; the Department of Psychiatry, University of Wisconsin School of Medicine and Public Health, Madison; and the Department of Psychiatry, University of Miami Miller School of Medicine, Miami.
Address correspondence to Dr. Widge.

Drs. Widge and Deckersbach have pending patent applications related to the use of electrographic markers to characterize patients and select neuromodulation therapies. Dr. Widge has received device donations and consulting income from Medtronic. Dr. Rodriguez has served as a consultant for Allergan, BlackThorn Therapeutics, and Rugen Therapeutics. Dr. Carpenter has served as a consultant for Magstim and has received research clinical trial support from Cervel, Janssen, NeoSync, and Neuronetics. Dr. Kalin has received research support from NIMH; he has served as a consultant for CME Outfitters, the Pritzker Neuropsychiatric Disorders Research Consortium, the Skyland Trail Advisory Board, and TC MSO (parent company of Actify Neurotherapies); and he receives remuneration from Elsevier as co-editor of the journal Psychoneuroendocrinology. Dr. Nemeroff has received grants or research support from NIH and the Stanley Medical Research Institute; he has served as a consultant for Bracket (Clintara), Dainippon Pharma, Fortress Biotech, Intra-Cellular Therapies, Janssen Research and Development, Magstim, Prismic Pharmaceuticals, Sumitomo Navitor Pharmaceuticals, Sunovion, Taisho Pharmaceutical, Takeda, TC MSO, and Xhale; he has served on scientific advisory boards for the American Foundation for Suicide Prevention (AFSP), the Anxiety Disorders Association of America (ADAA), Bracket (Clintara), the Brain and Behavior Research Foundation, the Laureate Institute for Brain Research, Skyland Trail, and Xhale and on directorial boards for ADAA, AFSP, and Gratitude America; he is a stockholder in AbbVie, Antares, BI Gen Holdings, Celgene, Corcept Therapeutics, OPKO Health, Seattle Genetics, and Xhale; he receives income or has equity of $10,000 or more from American Psychiatric Publishing, Bracket (Clintara), CME Outfitters, Intra-Cellular Therapies, Magstim, Takeda, and Xhale; and he holds patents on a method and devices for transdermal delivery of lithium (patent 6,375,990B1) and 
a method of assessing antidepressant drug therapy via transport inhibition of monoamine neurotransmitters by ex vivo assay (patent 7,148,027B2). The other authors report no financial relationships with commercial interests.

Preparation of this work was supported in part by grants from the Brain and Behavior Research Foundation, the Harvard Brain Science Initiative, and NIH (MH109722, NS100548) to Dr. Widge. The authors further thank Farifteh F. Duffy, Ph.D., and Diana Clarke, Ph.D., of the American Psychiatric Association, for critical administrative and technical assistance throughout preparation.

This article is derived from work done on behalf of the American Psychiatric Association (APA) and remains the property of APA. It has been altered only in response to the requirements of peer review. Copyright © 2017 American Psychiatric Association. Published with permission.

References

1 Roehrig C: Mental disorders top the list of the most costly conditions in the United States: $201 billion. Health Aff (Millwood) 2016; 35:1130–1135

2 American Psychiatric Association: Practice Guideline for the Treatment of Patients With Major Depressive Disorder, Third Edition. Washington, DC, American Psychiatric Association, 2010. https://psychiatryonline.org/pb/assets/raw/sitewide/practice_guidelines/guidelines/mdd.pdf

3 Trivedi MH, Rush AJ, Wisniewski SR, et al.: Evaluation of outcomes with citalopram for depression using measurement-based care in STAR*D: implications for clinical practice. Am J Psychiatry 2006; 163:28–40

4 Rosenblat JD, Lee Y, McIntyre RS: Does pharmacogenomic testing improve clinical outcomes for major depressive disorder? A systematic review of clinical trials and cost-effectiveness studies. J Clin Psychiatry 2017; 78:720–729

5 Cassel CK, Guest JA: Choosing wisely: helping physicians and patients make smart decisions about their care. JAMA 2012; 307:1801–1802

6 Widge AS, Ellard KK, Paulk AC, et al.: Treating refractory mental illness with closed-loop brain stimulation: progress towards a patient-specific transdiagnostic approach. Exp Neurol 2017; 287:461–472

7 Lo M-C, Widge AS: Closed-loop neuromodulation systems: next-generation treatments for psychiatric illness. Int Rev Psychiatry 2017; 29:191–204

8 Kambeitz J, Cabral C, Sacchet MD, et al.: Detecting neuroimaging biomarkers for depression: a meta-analysis of multivariate pattern recognition studies. Biol Psychiatry 2017; 82:330–338

9 Drysdale AT, Grosenick L, Downar J, et al.: Resting-state connectivity biomarkers define neurophysiological subtypes of depression. Nat Med 2017; 23:28–38

10 Woo C-W, Chang LJ, Lindquist MA, et al.: Building better biomarkers: brain models in translational neuroimaging. Nat Neurosci 2017; 20:365–377

11 Wade EC, Iosifescu DV: Using electroencephalography for treatment guidance in major depressive disorder. Biol Psychiatry Cogn Neurosci Neuroimaging 2016; 1:411–422

12 Olbrich S, Arns M: EEG biomarkers in major depressive disorder: discriminative power and prediction of treatment response. Int Rev Psychiatry 2013; 25:604–618

13 Olbrich S, van Dinteren R, Arns M: Personalized medicine: review and perspectives of promising baseline EEG biomarkers in major depressive disorder and attention deficit hyperactivity disorder. Neuropsychobiology 2015; 72:229–240

14 Mulert C, Juckel G, Brunnmeier M, et al.: Rostral anterior cingulate cortex activity in the theta band predicts response to antidepressive medication. Clin EEG Neurosci 2007; 38:78–81

15 Linka T, Müller BW, Bender S, et al.: The intensity dependence of auditory evoked ERP components predicts responsiveness to reboxetine treatment in major depression. Pharmacopsychiatry 2005; 38:139–143

16 Gallinat J, Bottlender R, Juckel G, et al.: The loudness dependency of the auditory evoked N1/P2-component as a predictor of the acute SSRI response in depression. Psychopharmacology (Berl) 2000; 148:404–411

17 Jaworska N, Blondeau C, Tessier P, et al.: Response prediction to antidepressants using scalp and source-localized loudness dependence of auditory evoked potential (LDAEP) slopes. Prog Neuropsychopharmacol Biol Psychiatry 2013; 44:100–107

18 Jaworska N, De Somma E, Blondeau C, et al.: Auditory P3 in antidepressant pharmacotherapy treatment responders, non-responders, and controls. Eur Neuropsychopharmacol 2013; 23:1561–1569

19 Juckel G, Pogarell O, Augustin H, et al.: Differential prediction of first clinical response to serotonergic and noradrenergic antidepressants using the loudness dependence of auditory evoked potentials in patients with major depressive disorder. J Clin Psychiatry 2007; 68:1206–1212

20 Lee B-H, Park Y-M, Lee S-H, et al.: Prediction of long-term treatment response to selective serotonin reuptake inhibitors (SSRIs) using scalp and source loudness dependence of auditory evoked potentials (LDAEP) analysis in patients with major depressive disorder. Int J Mol Sci 2015; 16:6251–6265

21 Linka T, Müller BW, Bender S, et al.: The intensity dependence of the auditory evoked N1 component as a predictor of response to citalopram treatment in patients with major depression. Neurosci Lett 2004; 367:375–378

22 Linka T, Sartory G, Gastpar M, et al.: Clinical symptoms of major depression are associated with the intensity dependence of auditory event-related potential components. Psychiatry Res 2009; 169:139–143

23 Arns M, Drinkenburg WH, Fitzgerald PB, et al.: Neurophysiological predictors of non-response to rTMS in depression. Brain Stimul 2012; 5:569–576

24 Knott V, Mahoney C, Kennedy S, et al.: Pre-treatment EEG and its relationship to depression severity and paroxetine treatment outcome. Pharmacopsychiatry 2000; 33:201–205

25 Arns M, Etkin A, Hegerl U, et al.: Frontal and rostral anterior cingulate (rACC) theta EEG in depression: implications for treatment outcome? Eur Neuropsychopharmacol 2015; 25:1190–1200

26 Cook IA, Hunter AM, Abrams M, et al.: Midline and right frontal brain function as a physiologic biomarker of remission in major depression. Psychiatry Res 2009; 174:152–157

27 Heikman P, Salmelin R, Mäkelä JP, et al.: Relation between frontal 3–7 Hz MEG activity and the efficacy of ECT in major depression. J ECT 2001; 17:136–140

28 Hunter AM, Korb AS, Cook IA, et al.: Rostral anterior cingulate activity in major depressive disorder: state or trait marker of responsiveness to medication? J Neuropsychiatry Clin Neurosci 2013; 25:126–133

29 Korb AS, Hunter AM, Cook IA, et al.: Rostral anterior cingulate cortex theta current density and response to antidepressants and placebo in major depression. Clin Neurophysiol 2009; 120:1313–1319

30 Narushima K, McCormick LM, Yamada T, et al.: Subgenual cingulate theta activity predicts treatment response of repetitive transcranial magnetic stimulation in participants with vascular depression. J Neuropsychiatry Clin Neurosci 2010; 22:75–84

31 Pizzagalli D, Pascual-Marqui RD, Nitschke JB, et al.: Anterior cingulate activity as a predictor of degree of treatment response in major depression: evidence from brain electrical tomography analysis. Am J Psychiatry 2001; 158:405–415

32 Spronk D, Arns M, Barnett KJ, et al.: An investigation of EEG, genetic, and cognitive markers of treatment response to antidepressant medication in patients with major depressive disorder: a pilot study. J Affect Disord 2011; 128:41–48

33 Woźniak-Kwaśniewska A, Szekely D, Harquel S, et al.: Resting electroencephalographic correlates of the clinical response to repetitive transcranial magnetic stimulation: a preliminary comparison between unipolar and bipolar depression. J Affect Disord 2015; 183:15–21

34 Arns M, Cerquera A, Gutiérrez RM, et al.: Non-linear EEG analyses predict non-response to rTMS treatment in major depressive disorder. Clin Neurophysiol 2014; 125:1392–1399

35 Bruder GE, Sedoruk JP, Stewart JW, et al.: Electroencephalographic alpha measures predict therapeutic response to a selective serotonin reuptake inhibitor antidepressant: pre- and post-treatment findings. Biol Psychiatry 2008; 63:1171–1177

36 Micoulaud-Franchi J-A, Richieri R, Cermolacce M, et al.: Parieto-temporal alpha EEG band power at baseline as a predictor of antidepressant treatment response with repetitive transcranial magnetic stimulation: a preliminary study. J Affect Disord 2012; 137:156–160

37 Price GW, Lee JW, Garvey C, et al.: Appraisal of sessional EEG features as a correlate of clinical changes in an rTMS treatment of depression. Clin EEG Neurosci 2008; 39:131–138

38 Tenke CE, Kayser J, Manna CG, et al.: Current source density measures of electroencephalographic alpha predict antidepressant treatment response. Biol Psychiatry 2011; 70:388–394

39 Li C-T, Hsieh J-C, Huang H-H, et al.: Cognition-modulated frontal activity in prediction and augmentation of antidepressant efficacy: a randomized controlled pilot study. Cereb Cortex 2016; 26:202–210

40 Arns M, Bruder G, Hegerl U, et al.: EEG alpha asymmetry as a gender-specific predictor of outcome to acute treatment with different antidepressant medications in the randomized iSPOT-D study. Clin Neurophysiol 2016; 127:509–519

41 Bruder GE, Stewart JW, Tenke CE, et al.: Electroencephalographic and perceptual asymmetry differences between responders and nonresponders to an SSRI antidepressant. Biol Psychiatry 2001; 49:416–425

42 Spronk D, Arns M, Bootsma A, et al.: Long-term effects of left frontal rTMS on EEG and ERPs in patients with depression. Clin EEG Neurosci 2008; 39:118–124

43 Quraan MA, Protzner AB, Daskalakis ZJ, et al.: EEG power asymmetry and functional connectivity as a marker of treatment effectiveness in DBS surgery for depression. Neuropsychopharmacology 2014; 39:1270–1281

44 Pathak Y, Salami O, Baillet S, et al.: Longitudinal changes in depressive circuitry in response to neuromodulation therapy. Front Neural Circuits (eCollection), Jul 29, 2016. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4965463/

45 Noda Y, Zomorrodi R, Saeki T, et al.: Resting-state EEG gamma power and theta-gamma coupling enhancement following high-frequency left dorsolateral prefrontal rTMS in patients with depression. Clin Neurophysiol 2017; 128:424–432

46 Adamczyk M, Gazea M, Wollweber B, et al.: Cordance derived from REM sleep EEG as a biomarker for treatment response in depression: a naturalistic study after antidepressant medication. J Psychiatr Res 2015; 63:97–104

47 Bares M, Brunovsky M, Kopecek M, et al.: Changes in QEEG prefrontal cordance as a predictor of response to antidepressants in patients with treatment resistant depressive disorder: a pilot study. J Psychiatr Res 2007; 41:319–325

48 Bares M, Brunovsky M, Kopecek M, et al.: Early reduction in prefrontal theta QEEG cordance value predicts response to venlafaxine treatment in patients with resistant depressive disorder. Eur Psychiatry 2008; 23:350–355

49 Bares M, Brunovsky M, Novak T, et al.: The change of prefrontal QEEG theta cordance as a predictor of response to bupropion treatment in patients who had failed to respond to previous antidepressant treatments. Eur Neuropsychopharmacol 2010; 20:459–466

50 Bares M, Novak T, Brunovsky M, et al.: The change of QEEG prefrontal cordance as a response predictor to antidepressive intervention in bipolar depression: a pilot study. J Psychiatr Res 2012; 46:219–225

51 Bares M, Novak T, Kopecek M, et al.: The effectiveness of prefrontal theta cordance and early reduction of depressive symptoms in the prediction of antidepressant treatment outcome in patients with resistant depression: analysis of naturalistic data. Eur Arch Psychiatry Clin Neurosci 2015; 265:73–82

52 Bares M, Brunovsky M, Novak T, et al.: QEEG theta cordance in the prediction of treatment outcome to prefrontal repetitive transcranial magnetic stimulation or venlafaxine ER in patients with major depressive disorder. Clin EEG Neurosci 2015; 46:73–80

53 Broadway JM, Holtzheimer PE, Hilimire MR, et al.: Frontal theta cordance predicts 6-month antidepressant response to subcallosal cingulate deep brain stimulation for treatment-resistant depression: a pilot study. Neuropsychopharmacology 2012; 37:1764–1772

54 Cook I, Leuchter A: Prefrontal changes and treatment response prediction in depression. Semin Clin Neuropsychiatry 2001; 6:113–120

55 Cook IA, Leuchter AF, Morgan ML, et al.: Changes in prefrontal activity characterize clinical response in SSRI nonresponders: a pilot study. J Psychiatr Res 2005; 39:461–466

56 Erguzel TT, Ozekes S, Gultekin S, et al.: Neural network based response prediction of rTMS in major depressive disorder using QEEG cordance. Psychiatry Investig 2015; 12:61–65

57 Iosifescu DV, Greenwald S, Devlin P, et al.: Frontal EEG predictors of treatment outcome in major depressive disorder. Eur Neuropsychopharmacol 2009; 19:772–777

58 Caudill MM, Hunter AM, Cook IA, et al.: The antidepressant treatment response index as a predictor of reboxetine treatment outcome in major depressive disorder. Clin EEG Neurosci 2015; 46:277–284

59 Hunter AM, Cook IA, Greenwald SD, et al.: The antidepressant treatment response index and treatment outcomes in a placebo-controlled trial of fluoxetine. J Clin Neurophysiol 2011; 28:478–482

60 Leuchter AF, Cook IA, Gilmer WS, et al.: Effectiveness of a quantitative electroencephalographic biomarker for predicting differential response or remission with escitalopram and bupropion in major depressive disorder. Psychiatry Res 2009; 169:132–138

61 Widge AS, Avery DH, Zarkowski P: Baseline and treatment-emergent EEG biomarkers of antidepressant medication response do not predict response to repetitive transcranial magnetic stimulation. Brain Stimul 2013; 6:929–931

62 Cavanagh JF, Frank MJ: Frontal theta as a mechanism for cognitive control. Trends Cogn Sci 2014; 18:414–421

63 Fries P: Rhythms for cognition: communication through coherence. Neuron 2015; 88:220–235

64 Mumtaz W, Xia L, Mohd Yasin MA, et al.: A wavelet-based technique to predict treatment outcome for major depressive disorder. PLoS One 2017; 12:e0171409. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5289714/

65 Schmidt FM, Sander C, Dietz M-E, et al.: Brain arousal regulation as response predictor for antidepressant therapy in major depression. Sci Rep 2017; 7:45187

66 Al-Kaysi AM, Al-Ani A, Loo CK, et al.: Predicting tDCS treatment outcomes of patients with major depressive disorder using automated EEG classification. J Affect Disord 2017; 208(suppl C):597–603

67 Olbrich S, Tränkner A, Surova G, et al.: CNS- and ANS-arousal predict response to antidepressant medication: findings from the randomized iSPOT-D study. J Psychiatr Res 2016; 73:108–115

68 Arns M, Gordon E, Boutros NN: EEG abnormalities are associated with poorer depressive symptom outcomes with escitalopram and venlafaxine-XR, but not sertraline: results from the multicenter randomized iSPOT-D study. Clin EEG Neurosci 2017; 48:33–40

69 van Dinteren R, Arns M, Kenemans L, et al.: Utility of event-related potentials in predicting antidepressant treatment response: an iSPOT-D report. Eur Neuropsychopharmacol 2015; 25:1981–1990

70 Trivedi MH, McGrath PJ, Fava M, et al.: Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care (EMBARC): rationale and design. J Psychiatr Res 2016; 78(suppl C):11–23

71 Tenke CE, Kayser J, Pechtel P, et al.: Demonstrating test-retest reliability of electrophysiological measures for healthy adults in a multisite study of biomarkers of antidepressant treatment response. Psychophysiology 2017; 54:34–50

72 Lam RW, Milev R, Rotzinger S, et al.: Discovering biomarkers for antidepressant response: protocol from the Canadian Biomarker Integration Network in Depression (CAN-BIND) and clinical characteristics of the first patient cohort. BMC Psychiatry 2016; 16:105

73 American Psychiatric Association Task Force on Quantitative Electrophysiological Assessment: Quantitative electroencephalography: a report on the present state of computerized EEG techniques. Am J Psychiatry 1991; 148:961–964

74 Nuwer M: Assessment of digital EEG, quantitative EEG, and EEG brain mapping: report of the American Academy of Neurology and the American Clinical Neurophysiology Society. Neurology 1997; 49:277–292

75 Coburn KL, Lauterbach EC, Boutros NN, et al.: The value of quantitative electroencephalography in clinical psychiatry: a report by the Committee on Research of the American Neuropsychiatric Association. J Neuropsychiatry Clin Neurosci 2006; 18:460–500

76 Pizzagalli DA: Frontocingulate dysfunction in depression: toward biomarkers of treatment response. Neuropsychopharmacology 2011; 36:183–206

77 Poldrack RA, Baker CI, Durnez J, et al.: Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci 2017; 18:115–126

78 DeBattista C, Kinrys G, Hoffman D, et al.: The use of referenced-EEG (rEEG) in assisting medication selection for the treatment of depression. J Psychiatr Res 2011; 45:64–75

79 Gatsonis C, Paliwal P: Meta-analysis of diagnostic and screening test accuracy evaluations: methodologic primer. AJR Am J Roentgenol 2006; 187:271–281

80 Ulrich G, Haug H-J, Fähndrich E: Acute vs chronic EEG effects in maprotiline- and in clomipramine-treated depressive inpatients and the prediction of therapeutic outcome. J Affect Disord 1994; 32:213–217

81 Canali P, Sferrazza Papa G, Casali AG, et al.: Changes of cortical excitability as markers of antidepressant response in bipolar depression: preliminary data obtained by combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG). Bipolar Disord 2014; 16:809–819

82 Kalayam B, Alexopoulos GS: A preliminary study of left frontal region error negativity and symptom improvement in geriatric depression. Am J Psychiatry 2003; 160:2054–2056

83 Salvadore G, Cornwell BR, Sambataro F, et al.: Anterior cingulate desynchronization and functional connectivity with the amygdala during a working memory task predict rapid antidepressant response to ketamine. Neuropsychopharmacology 2010; 35:1415–1422

84 Wang Y, Fang Y, Chen X, et al.: A follow-up study on features of sensory gating P50 in treatment-resistant depression patients. Chin Med J (Engl) 2009; 122:2956–2960

85 Staedt J, Hünerjäger H, Rüther E, et al.: Sleep cluster arousal analysis and treatment response to heterocyclic antidepressants in patients with major depression. J Affect Disord 1998; 49:221–227

86 Knott VJ, Telner JI, Lapierre YD, et al.: Quantitative EEG in the prediction of antidepressant response to imipramine. J Affect Disord 1996; 39:175–184

87 Luthringer R, Minot R, Toussaint M, et al.: All-night EEG spectral analysis as a tool for the prediction of clinical response to antidepressant treatment. Biol Psychiatry 1995; 38:98–104

88 Kupfer DJ, Ehlers CL, Pollock BG, et al.: Clomipramine and EEG sleep in depression. Psychiatry Res 1989; 30:165–180

89 Kasper S, Katzinski L, Lenarz T, et al.: Auditory evoked potentials and total sleep deprivation in depressed patients. Psychiatry Res 1988; 25:91–100

90 Frank E, Jarrett DB, Kupfer DJ, et al.: Biological and clinical predictors of response in recurrent depression: a preliminary report. Psychiatry Res 1984; 13:315–324

91 Kupfer DJ, Spiker DG, Coble PA, et al.: Sleep and treatment prediction in endogenous depression. Am J Psychiatry 1981; 138:429–434

92 Paige SR, Hendricks SE, Fitzpatrick DF, et al.: Amplitude/intensity functions of auditory event-related potentials predict responsiveness to bupropion in major depressive disorder. Psychopharmacol Bull 1995; 31:243–248

93 Blackford JU: Leveraging statistical methods to improve validity and reproducibility of research findings. JAMA Psychiatry 2017; 74:119–120

94 R Core Team: R: A Language and Environment for Statistical Computing. Vienna, R Foundation for Statistical Computing, 2014. http://www.R-project.org/

95 Doebler P: mada: Meta-Analysis of Diagnostic Accuracy. Vienna, R Foundation for Statistical Computing, 2015. http://CRAN.R-project.org/package=mada

96 Viechtbauer W: Conducting meta-analyses in R with the metafor package. J Stat Softw 2010; 36:1–48

97 Reitsma JB, Glas AS, Rutjes AWS, et al.: Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol 2005; 58:982–990

98 Deeks JJ, Macaskill P, Irwig L: The performance of tests of publication bias and other sample size effects in systematic reviews of diagnostic test accuracy was assessed. J Clin Epidemiol 2005; 58:882–893

99 Rücker G, Schwarzer G, Carpenter J: Arcsine test for publication bias in meta-analyses with binary outcomes. Stat Med 2008; 27:746–763

100 Schwarzer G: meta: General Package for Meta-Analysis. Vienna, R Foundation for Statistical Computing, 2015. http://CRAN.R-project.org/package=meta

101 Sterne JAC, Sutton AJ, Ioannidis JPA, et al.: Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011; 343:d4002

102 Moher D, Liberati A, Tetzlaff J, et al.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6:e1000097

103 Mulert C, Juckel G, Augustin H, et al.: Comparison between the analysis of the loudness dependency of the auditory N1/P2 component with LORETA and dipole source analysis in the prediction of treatment response to the selective serotonin reuptake inhibitor citalopram in major depression. Clin Neurophysiol 2002; 113:1566–1572

104 Cook IA, Leuchter AF, Morgan M, et al.: Early changes in prefrontal activity characterize clinical responders to antidepressants. Neuropsychopharmacology 2002; 27:120–131

105 Pizzagalli DA, Webb CA, Dillon DG, et al.: Pretreatment rostral anterior cingulate cortex theta activity in relation to symptom improvement in depression: a randomized clinical trial. JAMA Psychiatry 2018; 75:547–554

106 Khodayari-Rostamabad A, Reilly JP, Hasey GM, et al.: Using pre-treatment electroencephalography data to predict response to transcranial magnetic stimulation therapy for major depression, in Engineering in Medicine and Biology Society, 2011 Annual International Conference of the IEEE. IEEE, 2011. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6091584

107 Khodayari-Rostamabad A, Reilly JP, Hasey G, et al.: Using pre-treatment EEG data to predict response to SSRI treatment for MDD, in Engineering in Medicine and Biology Society, 2010 Annual International Conference of the IEEE. IEEE, 2010. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5627823

108 Khodayari-Rostamabad A, Reilly JP, Hasey GM, et al.: A machine learning approach using EEG data to predict response to SSRI treatment for major depressive disorder. Clin Neurophysiol 2013; 124:1975–1985

109 Akaike H: A new look at the statistical model identification. IEEE Trans Automat Contr 1974; 19:716–723

110 Botteron K, Carter C, Castellanos FX, et al.: Consensus Report of the APA Work Group on Neuroimaging Markers of Psychiatric Disorders. Washington, DC, American Psychiatric Association, 2012

111 Button KS, Ioannidis JPA, Mokrysz C, et al.: Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci 2013; 14:365–376

112 Insel TR, Wang PS: Rethinking mental illness. JAMA 2010; 303:1970–1971

113 Williams LM: Defining biotypes for depression and anxiety based on large-scale circuit dysfunction: a theoretical review of the evidence and future directions for clinical translation. Depress Anxiety 2017; 34:9–24

114 Webb CA, Dillon DG, Pechtel P, et al.: Neural correlates of three promising endophenotypes of depression: evidence from the EMBARC study. Neuropsychopharmacology 2016; 41:454–463

115 Widge AS, Zorowitz S, Link K, et al.: Ventral capsule/ventral striatum deep brain stimulation does not consistently diminish occipital cross-frequency coupling. Biol Psychiatry 2016; 80:e59–e60

116 Widge AS, Arulpragasam AR, Deckersbach T, et al.: Deep brain stimulation for psychiatric disorders, in Emerging Trends in the Social and Behavioral Sciences. Edited by Scott RA, Kosslyn SM. New York, John Wiley & Sons, 2015

117 Juckel G: Serotonin: from sensory processing to schizophrenia using an electrophysiological method. Behav Brain Res 2015; 277:121–124

118 Mueller EM, Panitz C, Hermann C, et al.: Prefrontal oscillations during recall of conditioned and extinguished fear in humans. J Neurosci 2014; 34:7059–7066