Editorial

Demonstrating Drug Action

Informative clinical trials are designed to test a hypothesis about treatment efficacy by contrasting two or more treatments in a well-defined disease population with validated outcome criteria and analyzing the results using prospectively defined statistical approaches. Ideally, the study is driven by an important clinical question, which is clearly asked and designed to generate a parsimonious answer. Judging the validity of a study involves evaluating several methodological questions: Is the study based on a clear question? Are the study design and population appropriate for the question asked? Do the outcome measures match the goals of the study? Is there evidence that the data collection is rigorous? Is the data analysis appropriate? Last, is the interpretation of the data forthright? Several additional issues contribute to the relevance of the study results, including the composition of the study population (therefore, the generalizability of the results) and the magnitude of the study effect (suggesting whether the treatment will make a clinical difference). Clinical trials characteristically attempt to show a difference between two or more treatments for a condition (one frequently being placebo), run in distinct but well-matched populations (i.e., parallel groups). There are other evaluation methodologies, but these are more specialized and harder to use reliably.
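(As a purely illustrative aside for readers who want to see the mechanics, the minimal sketch below shows the kind of prospectively defined two-arm comparison on which a parallel-group superiority trial rests; the rating scale, sample sizes, and all numbers are invented for illustration and do not come from any study discussed here.)

```python
# Minimal sketch of a two-arm, parallel-group superiority analysis on
# hypothetical change-from-baseline scores (all numbers invented for
# illustration; not from any study discussed in this editorial).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical change scores on a validated symptom rating scale.
drug = rng.normal(loc=-12.0, scale=8.0, size=120)     # active treatment arm
placebo = rng.normal(loc=-8.0, scale=8.0, size=120)   # placebo arm

# Prospectively defined primary analysis: two-sided two-sample t-test.
t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```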

Some studies, especially industry-sponsored registration trials, ask a simple question about the superiority of one treatment over another on prospectively identified criteria. All answers are informative if the study is rigorously conducted, even negative ones (where the standard treatment separates from placebo but the experimental treatment fails to do so). Failed studies (where both the standard and the experimental treatment fail to differentiate from placebo) are technically uninformative. The more complex the question, the more difficult the design. Many complex questions need several sequential experiments or a well-thought-out complex design. This issue of the Journal publishes three highly informative clinical efficacy studies that illustrate many of these points.

The study by Hirschfeld et al. tests the hypothesis that risperidone monotherapy for the treatment of mania produces a greater clinical response than placebo in well-defined acutely manic patients, using a priori outcome criteria. The importance of this study lies in demonstrating a new monotherapy for mania that might have a lower side effect burden. The study meticulously defines its patient groups, clearly reports its data, and discusses the statistics used for analysis. The report clearly distinguishes a priori outcomes from exploratory analyses, thus separating study conclusions from exploratory suggestions. The authors have also compared the effect sizes for their primary outcome with those of comparable studies in the literature. While these comparisons do not come from concurrently run studies, the effect sizes provide an estimate of outcomes across different studies. One interesting and always controversial aspect of the design is its obvious compromise between “need to know” and feasibility, represented by its short 3-week treatment duration. The outcome with this compromise design verifies a significant signal against placebo at 3 weeks but does not purport to show the full magnitude of drug action. A next clinically informative study might contrast two active treatments with each other, without placebo, making a longer study design feasible. Additional questions are not answered by these data and require future study. Would response be enhanced by concomitant mood stabilizers? Would outcome be maintained with a longer course of treatment? Are there important clinical subgroups who do best with monotherapy?
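(As another illustrative aside, the kind of standardized effect size that permits this sort of cross-study comparison can be computed from summary statistics alone; the sketch below uses Cohen's d with invented numbers, not values from the Hirschfeld et al. study.)

```python
# Illustrative calculation of a standardized effect size (Cohen's d) from
# summary statistics, the kind of quantity that allows outcomes to be
# compared across separately run studies. All values are hypothetical.
import math

def cohens_d(mean_drug, mean_placebo, sd_drug, sd_placebo, n_drug, n_placebo):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_drug - 1) * sd_drug**2 + (n_placebo - 1) * sd_placebo**2)
        / (n_drug + n_placebo - 2)
    )
    return (mean_drug - mean_placebo) / pooled_sd

# Hypothetical mania-rating-scale change scores (invented numbers).
print(round(cohens_d(-14.0, -9.0, 9.5, 9.8, 130, 125), 2))
```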

The study in this issue by Meltzer et al., which tests four novel putative antipsychotic compounds in the treatment of schizophrenia, represents an innovative trial design meant to carry out concept testing with antipsychotics efficiently. There are four hypotheses in this trial, one for each of the four compounds. Because the hypotheses are novel, the supporting data are not robust, but they are still strong enough to create a rationale for human testing. It is especially important to pursue novel schizophrenia hypotheses in human studies, since the molecular pathophysiology is unknown and animal model data are remote from clinical phenomenology. The design includes four distinct protocols, one for each novel drug, but with a shared placebo group and a shared active comparator group. The design, implementation, and analysis of this complex study all support the utility of the approach. The high dropout rate and the variability were likely costs of this difficult design, but these factors did not obscure informative outcomes. Two of the four compounds show a signal for antipsychotic action and therefore compel further study. This kind of trial design is not expected to reflect the magnitude of clinical effect or to answer subtle questions of subgroup response or optimal dose; it is meant merely to provide a signal of antipsychotic efficacy. It is of note that both the positive data with the neurokinin-3 and serotonin 2A/2C receptor antagonists and the negative data with the central cannabinoid and neurotensin receptor antagonists are of great value in guiding future investigations into the neural systems involved in schizophrenia therapeutics.
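(For readers interested in the shared-control logic of such a design, the sketch below compares each of four hypothetical compounds against a single common placebo arm, with a simple Bonferroni adjustment for the four comparisons; the compound names, sample sizes, and correction method are assumptions for illustration, not the study's actual analysis plan.)

```python
# Sketch of a multi-arm design with a shared placebo group: each of four
# hypothetical novel compounds is compared against the same placebo arm,
# with a Bonferroni adjustment for the four comparisons. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
placebo = rng.normal(-5.0, 10.0, size=60)          # shared placebo arm
compounds = {
    "compound_A": rng.normal(-11.0, 10.0, size=60),
    "compound_B": rng.normal(-10.0, 10.0, size=60),
    "compound_C": rng.normal(-6.0, 10.0, size=60),
    "compound_D": rng.normal(-5.5, 10.0, size=60),
}

n_tests = len(compounds)
for name, scores in compounds.items():
    t, p = stats.ttest_ind(scores, placebo)
    adjusted = min(1.0, p * n_tests)               # Bonferroni correction
    print(f"{name}: p = {p:.4f}, adjusted p = {adjusted:.4f}")
```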

The questions that have arisen lately in the public and scientific literature about the use of SSRIs in children and adolescents are addressed for one of the currently available SSRIs by Wagner et al. The issue of whether SSRIs are effective in childhood and adolescent depression has been raised repeatedly in recent years, in the context of our field's failure to produce clear efficacy answers in children. Depressed children are being treated with SSRIs in greater and greater numbers, without demonstrated efficacy in this age group. The difficulty of demonstrating efficacy with tricyclic antidepressants in children has fueled suspicions that there may be an age-dependent resistance to treatment. The importance of this well-designed large study for therapeutic strategies in children and adolescents cannot be overstated. It is important that the methodology of this study is solid and the numbers are adequate to test the efficacy question asked. The result that citalopram reduced depression more than placebo in this child and adolescent population provides a clear answer for physicians that will (in combination with results from additional studies) guide treatment decisions. It is especially gratifying to see an early onset of action at 1 week of treatment, suggesting an advantage that can be followed up in future studies. This study also sets a high methodologic standard for psychiatric diagnosis in pediatric studies. It would be an understatement to say that more such studies are needed.

One would always wish for more information from drug trials in psychiatric diseases. A common physician complaint about these trials is that they fail to sufficiently inform clinical practice because of restricted entry criteria, fixed-dose designs, and limited duration of treatment. It is true that initial registration trials have the goal of demonstrating superiority over placebo in order to gain approval for the market. But this does not rule out additional Phase 4 studies conducted in patient cohorts large enough to fully inform pressing clinical issues. How do comorbid conditions alter drug response? What treatments are effective in medication nonresponders? What kinds of actions can be expected with long-term treatment? It will be important for industry to address these kinds of Phase 4 questions for clinical use, in addition to conducting registration trials.

One issue in our field makes informative clinical trials particularly difficult: we do not know exactly what we are treating in terms of its biology. Psychiatric diagnoses are not based on molecular pathology (but rather on phenomenology), and new drugs are not directed toward known, disease-related molecules (but rather toward hypotheses). Therefore, we may not be recruiting the correct patient populations for a particular treatment, nor may we have a drug directed toward the disease pathophysiology. Moreover, we may not be using anywhere near the optimal outcome measures in our trials (e.g., consider the constraint of measuring only “fatigue” in the treatment of anemia, without having an RBC count). Nonetheless, even though we do not yet have our molecular targets, we cannot give up on drug development. Indeed, we already have treatments for our diseases, and these may be better treatments than we deserve, given the state of our knowledge. We now need to hone the treatments that we have and to develop the clinical trial methodologies that will carry us into the future. Meanwhile, we need to translate the rich basic knowledge accumulating in neuroscience into advances for therapeutics.

Address reprint requests to Dr. Tamminga, UT Southwestern Medical School, Dallas, TX 75390; (e-mail).