One of the achievements of modern psychiatry has been the adoption of rigorous scientific methods in the investigation of somatic and psychosocial treatments for mental illness. The institution of reliable diagnostic methods, the evolution of strategies such as fixed-dose medication administration, the use of well-specified treatment manuals, and the elaboration of therapist training and certification procedures have permitted sufficient experimental control to enable researchers to identify efficacious treatments. However, this control, while important for the initial testing of a treatment approach, limits the degree to which results can be generalized to the population of people in need of treatment. It also does not provide the information necessary for clinicians to tailor treatment strategies to the social and cultural needs of their individual patients, who live in a wide variety of circumstances. Although psychiatric research has been moving to a more inclusive, community-based approach, further progress is needed. This article suggests ways in which clinical investigators might conceptualize the differences between clinic-based and community-based interventions and provides practical steps so that investigators can begin to make the transition in their own research.
The ultimate goal of clinical intervention research—whether based in academic medical centers or in real-world community settings—is to find a way to improve the care and lives of people suffering from specific illnesses, disorders, and/or disabilities. Although the goal is the same in each setting and the methods are in many cases similar, the questions guiding research are different. Neither type of research, by itself, is sufficient to reach the goal of improving patients’ care and lives; both are necessary.
The research process begins when a promising treatment approach is identified and/or developed and the decision is made to test it. Traditional, academically based, randomized clinical trials, sometimes called "efficacy" research, test the experimental intervention—whether biological, pharmacological, and/or psychosocial—against a placebo or alternate control treatment, focusing on a single specific main outcome. This approach follows a regulatory or Food and Drug Administration model (1) and is designed to test whether the experimental intervention has a statistically significant effect greater than that of placebo or an alternative treatment. To make this determination, investigators must control, as much as possible, all the variables extraneous to the intervention. They must ensure the internal validity of the study. As a result, subject inclusion criteria are defined as narrowly as possible, procedures are standardized, a single main outcome is defined, and subjects are randomly assigned to receive an active or an inactive, alternate, or control treatment. These procedures reduce the error variance, maximize the signal-to-noise ratio, and decrease the chances of type I and II errors. The data from such a trial are used to determine whether the experimental treatment is active—that is, whether there are statistically and clinically significant differences between the treatment and control conditions.
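The logic of controlling error variance to preserve statistical power can be illustrated with a small Monte Carlo sketch (pure Python; the effect size, noise levels, and sample size are arbitrary illustrative assumptions, not values from any actual trial):

```python
import math
import random
import statistics

def simulated_power(effect=0.5, noise_sd=1.0, n=50, trials=2000, seed=1):
    """Estimate the power of a two-arm trial by simulation.

    Larger noise_sd (uncontrolled error variance) lowers the
    signal-to-noise ratio and hence the chance of detecting a real
    treatment effect (a type II error becomes more likely).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, noise_sd) for _ in range(n)]
        treated = [rng.gauss(effect, noise_sd) for _ in range(n)]
        # Simple large-sample z test at alpha = .05
        se = math.sqrt(statistics.variance(control) / n
                       + statistics.variance(treated) / n)
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(z) > 1.96:
            hits += 1
    return hits / trials

# The same treatment effect under tight versus loose control of
# extraneous variance: power drops sharply as the noise grows.
print(simulated_power(noise_sd=1.0))
print(simulated_power(noise_sd=2.0))
```

Doubling the error standard deviation halves the standardized effect size, so a trial powered for a tightly controlled setting can be badly underpowered in a noisier one.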
Community-based intervention trials, sometimes called "effectiveness" research, also test a treatment intervention but occur within the context of the community environment. Conducting a trial in a community setting requires a shift in focus from the mindset of an academically based, regulatory-model, randomized clinical trial. A community-based trial is not a simple replication of a randomized clinical trial with more subjects and more diverse outcome measures conducted in a naturalistic setting. It is not a regulatory-model trial with special attention to power and intervention fidelity. To conduct a community-based trial and to achieve external validity, the investigator designing the trial must consider a whole new set of issues, including
The feasibility of conducting research within the chosen setting.
The generalizability of the study setting to the settings in which community care is likely to be provided.
The acceptability of the chosen intervention and its potential consequences for patients, family members, providers, and potential payers (such as insurance companies).
The similarity of the study group to those most in need of treatment.
The relevance of the outcome assessments to the goals of those being treated and those involved in their care.
The mechanisms necessary to ensure that the intervention can continue, if successful, after the experimental trial is complete.
A community-based intervention study typically begins with a treatment or components of treatments that have already been shown to be active in a randomized clinical trial, such as medication plus cognitive behavior therapy for treatment of depression. In developing the experimental design and choosing the variables to be measured, an investigator needs to consider how this treatment might be affected by the various contexts (social, cultural, economic, and psychological) in which the trial is conducted. For example, how will people who screen positive for the disorder in question be convinced that they can benefit from treatment and should join the study? Will it matter to recruitment, retention, or study results if some of the people receiving this treatment live in substandard housing and do not have access to transportation? If some of the people who are participating in the study refuse medication but are willing to accept psychotherapy, can the treatment protocol be modified? How will co-occurring substance abuse affect treatment compliance? How will community clinicians be convinced to follow the treatment protocol? In other words, in community-based research, investigators must somehow cope with the "noise" introduced by real life without compromising their ability to draw conclusions about the effect of treatment.
For someone accustomed to traditional, academically based, randomized clinical trials, it might be helpful to think of community-based intervention trials as experiments designed to understand how the noise or error variance that exists in community settings (and that would be controlled in a randomized clinical trial) affects the intervention, those providing the intervention, and the potency of its effect. The variables to be considered in a community-based trial are many: additional clinical illnesses and disabilities; the treatment setting; the attitudes and training of community clinicians; the system of financing and the organization of care; family and friends; and patients’ and clinicians’ life stresses, personalities, motivations, perceptions, culture or ethnicity, gender, and socioeconomic status.
Both academically based and community-based randomized clinical trials are critically important in reaching the goal of improving mental health care and the lives of those suffering from mental disorders. However, making the transition from conducting research in an academic setting to designing and conducting a community-based trial is a difficult task. What follows is one way to think about community-based intervention research that could facilitate the transition from the academic setting to the community.
The initial goal in moving beyond clinic-based trials is to improve the generalizability or external validity of the study design and to maximize the public health significance of the intervention (see, for example, the web site of the Association of Schools of Public Health for an explanation of the distinction between a public health approach and a medical approach: http://www.asph.org/aa_section.cfm/3/87). However, this entails a large number of choices that the investigator must make. Each choice can have a direct effect on both the study’s total cost and the degree of external validity. Many challenging questions arise in adapting a randomized clinical trial to a community setting. For example,
Who should be in the study? Which groups and subgroups? How many? How much diversity is enough? Which population characteristics are important: severity of illness, comorbidities, socioeconomic status, gender, age, race or ethnicity, or something else?
Which and what type of sites or communities should be chosen? How many? How much diversity in a site is important?
Which outcome assessments should be chosen? Why?
There is no one way to proceed. But a public health perspective is a useful guide in making choices. It encourages investigators to look beyond the DSM disorder categories to issues of community health and the social context of illness and functioning. It also provides a framework in which to think through the answers to the questions. What follows are public-health-oriented questions related to the sample, the site/community, and the outcome measures that can help frame the decision-making process.
Questions related to the clinical epidemiology and cultural manifestations of the disorder should be considered when planning an expansion into a community setting. For example,
Who is most likely to be debilitated by the disorder? Can those individuals be accessed by the research team? Are there culturally based social conventions or beliefs (e.g., "Men don’t get depressed") or power relationships (e.g., women do not seek care without the permission of their fathers or husbands) that serve as barriers to access to care?
Are there racial or cultural differences in the prevalence or manifestations of the disorder? If yes, can the design of the study take this into account?
Are those most affected by the disorder currently being treated? Is there a group that particularly needs to be reached? Where can it be reached?
Are there additional or co-occurring medical or psychiatric conditions that are particularly prevalent among certain groups with the target disorder?
Are there cultural factors that could affect the acceptability of treatments or treatment providers?
Before planning an expansion of an intervention, investigators need to know about community settings, the people who live there, their perspectives on mental illness and the associated stigma, their feelings about the utility and acceptability of mental health treatment and those who provide it, and the prevalence of the disorder within the community. From a public health perspective, we would like to reach those with the greatest burden of disease and/or with the least access to care.
Investigators should be aware of the type of care that is available in the community, how it is used, and what practitioners (social workers, psychologists, nurses, and psychiatrists) are accustomed to and willing to do. The investigators should consider questions such as
Are there existing, culturally sanctioned sources of care available in the community (e.g., primary care providers or clergy)? Are they used? How do they interact with other sources? For providers who are not acceptable to the target population, what are the factors that make them unacceptable (e.g., race, social class, lack of personal connection to the community, or type of training)?
What services and drugs are insurance companies or government programs currently willing to pay for? Whom will they reimburse?
Is the treatment intervention site one that is likely to be generalizable? Will major modifications need to be made at the research site that are not likely to be made in other locations?
If the site is currently providing treatment, how is staffing organized, what is the organizational culture and climate, what is the patient flow like, what types of patients come in, and which individuals in the community is the site missing?
Are there time, cost, or other difficulties (such as professional guild conflicts) that make implementation of the treatment difficult or impossible in a particular community setting?
Investigators moving into a community setting need to take into account the feasibility both of the research design and of sustaining the treatment after the research is over. Expensive, complex, and/or time-consuming interventions, even if shown to be highly efficacious, have little chance of being implemented in community settings or paid for by insurance or government programs. While cost of treatment should not be the first consideration, for interventions designed for community settings, a consideration of feasibility issues such as cost, cultural "fit" with the population, and ease of implementation is key.
Finally, investigators need to consider the assessment of outcomes. From a public health perspective, DSM diagnostic categories and symptom counts are not particularly useful in and of themselves. While it is important to track symptoms and diagnostic categories for comparative purposes and for an overview of the intervention, a larger perspective is necessary. Attention needs to be given to determining the most salient outcomes for the target population.
Each person seeking treatment or being treated in the mental health system holds his or her own set of preferences regarding what he or she wants or needs from treatment. These preferences are likely to be strongly influenced by gender, age, length and severity of illness, diagnosis or diagnoses, cognitive functioning, current levels of self-esteem or self-efficacy, size and extent of social networks, cultural or subcultural norms, as well as location of residence (e.g., urban versus rural). Presence of symptoms may not be the most salient feature of the illness to the patient, his or her family, or friends. Being able to function in everyday activities of life is likely the most important issue. What those activities are is defined by the context in which a person lives. For example, the daily expectations of functioning for those suffering from a mental disorder will vary depending on the community they live in, their family circumstances, the nature of their job, and the like. Investigators need to explore what outcomes are relevant for those targeted for the intervention before they settle on an outcomes assessment battery (2).
Most of the studies in the literature that are characterized as clinical effectiveness research are expanded versions of efficacy research. Typically, these studies have a large sample, the site is in the community, and the outcomes assessment includes nonsymptom variables such as quality of life. While the generalizability afforded by the inclusion of more diverse samples, settings, and measures does expand the knowledge necessary for moving efficacious treatments into community settings and addressing the problems of individuals, more complex issues need to be considered.
For example, in every clinical trial, particularly in community settings, the results can be somewhat ambiguous and perhaps confusing, particularly when there is little difference between outcomes in the control and experimental groups. Some in the treatment group will fail to improve or will even get worse. Some in the control group will improve dramatically, while others will seem to stabilize. This was the case in the Fort Bragg study, which was designed to test whether comprehensive mental health services for children would improve outcomes (3, 4). Even after 5 years, there were no differences between the children receiving a substantial amount of care and those who received little. However, because of the way the study was designed, the investigators did not know what accounted for the lack of difference.
To be able to make sense of ambiguous outcomes, investigators need to design studies that can answer more than, "Does treatment A work better than treatment B?" More complex questions and more diverse methods are necessary. Community-based studies need to be designed to help us understand how the treatment works or does not work for this particular population and under these circumstances and why.
Another way to think about the design of the study is to ask, "What will clinicians in practice settings like this need to know?" The question for a clinician in a practice setting is not, "What on average works best for this disorder?" The question for each clinician is, "What should I do for this specific patient given what I know about the person, his or her life, background, and preferences, the disorder, and the known treatments, and what are the likely outcomes?"
Addressing these clinical-practice questions requires methods and perspectives, drawn from the social and behavioral sciences, that have not typically been part of clinical treatment trials. Applying them requires a change of focus: from the experimental treatment itself to the situation in which the treatment is being administered. The act of providing treatment is a behavior. It takes place in a social, cultural, economic, and political context. This context affects the behaviors of both the person providing the treatment and the one receiving it. Treatment process research, which aims to identify mechanisms of action in behavioral treatments, is one step in this direction. But to be able to provide the information the clinician in practice needs, a much deeper linkage and collaboration with the social and behavioral sciences is necessary.
The theory, methods, and empirical findings of the social and behavioral science literatures have not typically been integrated into clinical research (5). But they have the potential to help us understand many important aspects of the treatment situation. For example, these literatures can help inform our understanding of the relationship between the patient and the clinician and how individual psychological characteristics might affect that relationship and treatment outcomes (6, 7); how people outside the treatment relationship might affect the treatment approach, patient participation in treatment, and treatment outcomes (8); how organizational rules, incentive structures, and morale can affect how clinicians feel about their jobs and treat their patients (9); and how ethnic, gender, age cohort, and religious cultures affect a whole host of human attitudes and behaviors (10, 11), including whether a treatment or clinician will be accepted. However, thus far, little of the information from these and other studies has been incorporated into clinical trials. It is our hope that a new coalition of scientists can change that.
To understand how, why, when, and for whom treatments work and to provide information for practicing clinicians, the teams designing and conducting research need to be more diverse than in the past. A collaborating team requires a partnership among clinical treatment researchers, mental health services researchers, and social and behavioral scientists. Each has an important role.
Clinical treatment researchers have unique experience that nonclinician researchers do not have. This experience is critical to the design of a community-based trial. Their understanding of the disorders and the treatment process make them essential 1) for identifying key places where the behavioral sciences might improve our understanding, 2) for ensuring the integrity of clinical intervention protocols, and 3) for ensuring the safety of human subjects participating in the research.
Mental health services researchers frequently lack sufficient clinical expertise and/or the theoretical background to do this type of research on their own. What they do bring is experience in service and community settings. They know how to implement a study in the field, retain a focus on public health problems, and analyze and interpret complex data from these community settings. They understand the larger context in which treatments are provided, including issues related to the financing of care, the organizational structure of practice settings, and community politics.
Social and behavioral scientists have typically not been included in either clinical or services research. But they have a powerful collection of perspectives, theories, and methodological tools for understanding behavior in social and cultural contexts. However, they do not understand clinical issues and frequently have a difficult time understanding how their research could be applicable to clinical research or issues of public health (5).
Collaboration among these groups of researchers needs to begin even before a design or the choice of a community site is finalized. To be able to design and test meaningful interventions in community settings that are likely to succeed, investigators need to understand the sociocultural context of the people and their community. In order to achieve this understanding, much more than a pilot test of the intervention is necessary. Social scientists who understand in-depth qualitative methods, such as cultural anthropologists, need to be brought into teams. While conducting informal focus groups is a way to begin this process, experts in qualitative methods are critical for identifying elements of organizational, social, and cultural contexts that need to be considered in the design of the study.
In a recent article in Psychiatric Services, Ware et al. (12) provided a succinct summary of the qualitative approach that is needed:
Ethnography is a research method traditionally used by anthropologists to investigate unfamiliar cultures.… Typically ethnographic data are collected through participant observation and open-ended interviewing.… Both techniques are characterized by an interactive process: repeated reformulation and investigation of new research questions that arise from the answers to previous questions.
Among other things, ethnography elicits and represents "insider points of view."…Representation takes the form of explaining the meanings that insiders ascribe to their experiences—the ways they make sense of the world.…
Analysis in ethnography therefore consists of the interpretation and articulation of insider meanings.… This practice reflects the fact that in ethnography, research "subjects" are defined as expert informants.
Examples make clear why ethnography is a critical first step in designing successful intervention trials.
Consider first the case of an ongoing study funded by the National Institute of Mental Health (MH-56864), in which the investigators wanted to treat depression among poor women. The investigators planned to use antidepressant medication and cognitive behavior therapy. However, they found that many of the African American women in their sample perceived their depressive symptoms as inevitable consequences of their social condition (poverty, crime, and abuse), and they refused to take medication. It was ineffective to approach them with a medicalized conceptualization of depression. They did, however, agree to group cognitive behavior therapy. Other women in the study were ashamed of their illness and did not want anyone in their families to find out. They did not want to come to a mental health clinic but agreed to treatment in other settings. Only through qualitative work can investigators understand how the target population perceives and responds to the symptoms they are experiencing and tailor the intervention to fit their needs. This understanding is critical to subject recruitment, retention, and cooperation with the study protocol.
Compliance problems can also occur for investigators who want to change the practice behavior of community-based clinicians. Lin et al. (13) discovered this in a study designed to test whether providing in-depth knowledge about the treatment of depression to primary care physicians would improve compliance with treatment guidelines. The investigators discovered that primary care physicians would adhere to guideline-based treatment only with continuing on-site support from mental health personnel. The investigators had designed the intervention without taking into account that primary care physicians have little time and little incentive to correctly diagnose and treat depression (14). Had they begun the study by conducting qualitative research to understand the practice and reimbursement constraints that exist for primary care physicians, they could have designed a very different intervention, one that addressed the constraints under which primary care physicians must work.
Many other factors in the community have the potential to "sabotage" a carefully conceptualized study. There can be personnel chaos within the clinic or center that has been chosen as the intervention site. Subjects may not have the time or resources to commute to the assessment site. Subjects may not trust interviewers from another cultural background. The predominant culture of the community may make it difficult for family members to acknowledge illness in their family and to participate in multi-family group therapy. All of these contextual factors can be explored and understood with ethnographic research and then can be used to inform the design of the study.
While this approach to study design is new to mental health research in general, there is an area of intervention research that has embraced this approach. Research studies funded by the National Institutes of Health (NIH) on AIDS prevention and treatment are currently using qualitative approaches to adapt and develop interventions. For example, Gorman (DA-10879) is using ethnography to learn how to adapt standard drug abuse interventions for men who have sex with men and are intravenous drug abusers. McGrath (NR-04377) is using ethnography to gain the information necessary to design an HIV/AIDS peer-group prevention program for adolescents of Pacific island descent. Rotheram-Borus and colleagues (MH-61513) are using an ethnographic study to inform an HIV preventive intervention they are developing for migrant workers in Anhui province, China. Coates (MH-42459) is using ethnography to gain access to the social networks of intravenous drug users in San Francisco, so that researchers can find ways to mobilize those networks to disseminate HIV prevention messages and to change network norms and behaviors regarding needle exchange and safe sex. Abstracts of these ongoing grants can be accessed through the NIH CRISP system (http://commons.cit.nih.gov/crisp3/CRISP.Generate_Ticket).
While the target populations for mental health interventions can rarely be considered quite as exotic as the populations typically studied by anthropologists or those in ethnographies of HIV/AIDS patients, they are not typically as much like the clinical researchers designing the study as we would like them to be. They likely have different conceptions about mental illness and the degree of associated stigma, about the mental health system and its utility or acceptability, about what makes seeking treatment worthwhile, and about what outcomes are most important. So to design the most acceptable intervention and research protocol for a particular time, place, and population, mental health researchers need to work collaboratively with qualitative experts and statisticians familiar with longitudinal intervention designs.
In considering the design options in community settings, the noise and threats to the power of the results are likely to appear in a number of places. With all of the sources of potential noise, it is difficult to know where to begin. However, it is important for investigators to keep in mind that no study can control or assess all of the elements of noise. The investigators need to see this decision process as one akin to triage. They should consider which issues are most likely to threaten the power of the study and thus their ability to develop meaningful conclusions about what works for whom. This should be done on the basis of the evidence from the ethnography, from their understanding of the intervention itself, and from the services researchers’ experience in similar settings and in consultation with social and behavioral scientists who can provide insight into how and when sociocultural and behavioral processes operate.
For example, if the clinicians carrying out the study work in very different clinical settings with an array of insurers, then the nature of the organizational culture and climate they experience needs to be controlled or assessed (9). If an investigator thinks that the intervention itself might be sensitive to the emotional styles of the clinician and patient, the investigator will want to be able to control or assess that aspect of variability (15). If an ethnography shows that there are very different social support networks available for people in the study who have different types of housing (e.g., live in group homes or single-room-occupancy dwellings), the investigator might want to assess the functioning of the networks (16).
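As a sketch of why it pays to identify and adjust for the dominant sources of noise, consider a hypothetical multisite trial in which sites differ substantially in baseline outcome levels. Centering outcomes within each site stands in here, as a deliberate simplification, for the covariate adjustment or mixed-effects modeling a real analysis would use; all numbers are arbitrary assumptions for illustration:

```python
import math
import random
import statistics

def trial_power(adjust_for_site=False, effect=0.4, site_sd=1.5,
                noise_sd=1.0, n_sites=10, per_site=10, trials=1000, seed=7):
    """Monte Carlo sketch: does adjusting for a known noise source
    (here, between-site variation in baseline outcomes) recover power?
    All parameter values are hypothetical."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control, treated = [], []
        for _ in range(n_sites):
            base = rng.gauss(0.0, site_sd)  # site-level "noise"
            c = [base + rng.gauss(0.0, noise_sd) for _ in range(per_site)]
            t = [base + rng.gauss(effect, noise_sd) for _ in range(per_site)]
            if adjust_for_site:
                m = statistics.mean(c + t)   # center within site
                c = [x - m for x in c]
                t = [x - m for x in t]
            control += c
            treated += t
        # Simple large-sample z test at alpha = .05
        se = math.sqrt(statistics.variance(control) / len(control)
                       + statistics.variance(treated) / len(treated))
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(z) > 1.96:
            hits += 1
    return hits / trials

# Ignoring site variation inflates the apparent error variance and
# masks the treatment effect; adjusting for it restores power.
print(trial_power(adjust_for_site=False))
print(trial_power(adjust_for_site=True))
```

The point of the triage metaphor is visible in the parameters: a noise source as large as `site_sd` here is worth measuring and modeling, while a source much smaller than `noise_sd` could be left uncontrolled with little cost.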
There is no right or simple answer about which factors need to be taken into account. Because there is no simple answer, investigators should allow themselves the time to explore these issues with every part of the team: the services researchers, ethnographers, statisticians, and other social and behavioral scientists who are familiar with the identified behavioral and social processes, whether organizational, cultural, relational, or individual.
The approach described here is obviously a more time- and resource-intensive process than simply transporting an efficacious treatment to a community setting. It includes
Developing a public health goal and research question.
Using qualitative methods to understand the target population and its sociocultural context and to identify the most significant noise in that context.
Developing the intervention, research design, and analysis approach in collaboration with a multidisciplinary team, using the information from the ethnography as the fundamental touchstone or reality check.
But if each of these steps is followed, we can perhaps avoid or at least reduce the number of trials that show no effect. This must be our goal if we hope to answer the fundamental questions faced by clinicians in everyday practice: what works for whom, under what circumstances, and why?
Received Nov. 20, 2000; revision received Aug. 21, 2001; accepted Aug. 29, 2001. From the Division of Services and Intervention Research, NIMH; and the Department of Psychiatry, Western Psychiatric Institute and Clinics, University of Pittsburgh, Pittsburgh. Address reprint requests to Dr. Hohmann, Services Research and Clinical Epidemiology Branch, Division of Services and Intervention Research, National Institute of Mental Health, 6001 Executive Blvd., Rm. 7135, MSC 9631, Bethesda, MD 20892-9631; email@example.com (e-mail). The views expressed in this article are those of the authors and should not be construed as the official position of NIMH.