Privacy of the doctor-patient relationship is under assault from many directions: insurers, managed care reviewers, employers, pharmacy benefits managers, and the government, to name just a few. This diminution in the protections afforded patients’ confidences ought to be of concern to all physicians (1). But it is of special concern to psychiatrists, since the threat that patients’ communications to psychiatrists could be disclosed outside the treatment setting may discourage patients from seeking badly needed psychiatric care. Privacy is a foundational requirement for psychiatric treatment.
Crafting safeguards for medical privacy, though, is complicated by the necessity of taking into account legitimate and critical uses for medical record information. Research using large medical databases, discussed in this issue by Simon and colleagues, is one such example. Simon and his collaborators document the unquestioned importance of research goals that can be pursued only by using large repositories of medical information, including demonstration of the effects of current policies constricting mental health services in both the public and private sectors. Thus, the very real imperative to protect patients’ privacy must somehow accommodate the need to use medical databases to improve psychiatric care for all patients.
Among the solutions suggested by privacy advocates and researchers, some are unlikely to resolve this dilemma. It is worth taking a minute to peer down these blind alleys. Persons seeking maximal protections for patients’ privacy sometimes urge that no researcher be allowed access to identifiable medical information without patients’ explicit informed consent. This is the historic rule that has governed disclosure of medical record data, and it is a good one when research uses of data can be specified at the time the information is collected. But forbidding researchers access to existing records, accumulated without the specific intent to use them for research purposes, unless they first obtain the permission of the hundreds, thousands, or tens of thousands of persons whose data are involved, in practice would preclude most studies of this sort. Not only is the cost of contacting patients and former patients often prohibitive, but response rates are disappointing and introduce biases of unknown direction and magnitude (2).
An alternative sometimes offered by both sides in this debate is to ask patients at the inception of their care—perhaps when they enter a new health plan, first meet their physician, or are admitted to a hospital—whether they would grant a blanket consent to the use of their medical record information in future research projects. This approach, however, preserves the semblance of informed consent while stripping it of its content. Without knowing the purpose of the study, the data required, who will be collecting the data, the safeguards that will be in place, or what items might, by then, be recorded in their medical records, patients would be given no meaningful choice. They are equally likely to err in the direction of releasing information that they would want withheld if they were aware of all the details, and of refusing to disclose data for studies that they would likely support if only they had known more about them.
Members of the research community, in turn, may suggest that researchers simply be permitted access to confidential information without elaborate privacy protections, on the grounds that they can be trusted to protect subjects’ rights. The federal Office for Protection from Research Risks, however, reported 10 cases over an 8-year period in which allegations were made of improper disclosure of confidential information, and some of these complaints were substantiated (3). Nor does a frequently heard contention, that removing identifying information from data solves all privacy problems, hold water. Residual information in "deidentified" databases, including birth date, zip code, gender, and the like, when matched with publicly available data sets such as voter registration lists, can permit the accurate linkage of medical record information with particular persons (4). The more data sets to which a researcher has access, the easier such reidentification becomes.
If these approaches leave much to be desired, what will work? I think something very much like the thoughtful approach suggested by Simon et al. represents the best means of reconciling the competing interests:
1. Where there is a clear intent at the time that medical information is being collected to use it for research purposes, patients’ informed consent should be a prerequisite to access to those data.
2. In cases in which researchers seek access to existing databases of information not collected for research purposes, review by an institutional review board (IRB) should be required to determine whether criteria for waiving patients’ consent have been met. Proposed U.S. Department of Health and Human Services regulations on confidentiality of electronic medical data (although problematic in other respects) augment existing criteria in a useful way (5). In essence, investigators would need to demonstrate the importance of the research, the minimal nature of risk to subjects whose data would be used, the infeasibility of conducting the study without those data and without waiving consent, and the presence of protections for patients’ privacy, including removal of identifiers and access to the fewest data necessary. Studies occurring outside the IRB system would no longer be able to evade review.
3. Stringent penalties should be created for misuse of research data. This is one of the areas where the current regulatory system is deficient. Penalties for misuse—including attempts to reidentify information provided in "deidentified" form—place no burden on the legitimate research community and are likely to reassure the public regarding the safety of data in researchers’ hands. Federal regulatory agencies may need broader authority to investigate breaches of privacy (3). Contractual mechanisms, in which researchers agree to stipulations on the use of the information they are receiving, may also be helpful (4).
As thought is being given to how best to protect patients’ privacy while facilitating essential research, two other issues—one bureaucratic, the other technological—should be considered. IRBs currently bear much of the burden for protecting patients whose data are acquired by researchers. However, there is reason to believe that they may not do a very good job, in part because of a lack of guidance and in part because of the many other responsibilities they carry (3). Ways of bolstering IRB function and of focusing members’ attention on privacy issues need to be found; some modification of the federal regulations that govern IRBs may be required. On the technological side, impressive algorithms are being developed to allow computerized data sets to be altered to protect privacy while minimizing the distortion to the analyses performed with those data (4). This work is intriguing, promising, and worthy of further support.
Responsible research and strong privacy protections are not incompatible. Neither should be sacrificed as we fashion policy in this area.
Address reprint requests to Dr. Appelbaum, Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA 01655; firstname.lastname@example.org (e-mail).