These are the times that try psychotherapists’ souls. Federal and state mental-health budget cuts have reduced the number of people who can afford one-on-one psychotherapy sessions to address their problems. Managed care companies demand to see proof that various psychological treatments work, and even then, they reimburse the cost of 2 or 3 months of psychotherapy at most. Meanwhile, in slick television commercials and in the pages of magazines, pharmaceutical firms tout pills for depression and other mental ailments as superior to old-fashioned talk therapy. Although psychiatrists, who are physicians, can prescribe these drugs, most psychotherapists—including psychologists, social workers, and clergy—cannot.
Today, the financial survival of any medical treatment or procedure rests on published evidence for its effectiveness. In that environment, the science of psychotherapy has assumed special urgency. Psychologists with backgrounds in both research and treatment stand at ground zero of efforts to conduct psychotherapy studies and then integrate the findings into clinical practice.
In August, the American Psychological Association (APA) in Washington, D.C., approved a policy statement on “evidence-based practice in psychology.” An 18-member committee of researchers and clinicians concluded that psychotherapists should monitor scientific evidence of the effectiveness of treatments for specific mental conditions. Therapists could then weigh those data against their own judgment about how to treat particular patients. Treatment needs to address such vital issues as a patient’s personality traits, ethnic background, and religious beliefs, the committee added.
APA President Ronald F. Levant, a psychologist at the University of Akron in Ohio, sponsored the initiative as a way to broaden the definition of scientifically grounded psychotherapy. The impetus was largely economic. To the chagrin of many clinicians, various funding agencies and insurers are beginning to restrict reimbursements so that the only treatments eligible are the 2 dozen or so deemed “empirically supported” by APA’s Society of Clinical Psychology in 1998.
The list of science-backed psychotherapies emphasizes a handful of approaches grounded in concrete procedures that are described in training manuals. For instance, in cognitive therapy for depression, a therapist assists patients in identifying and correcting faulty beliefs, such as a tendency to regard any setback as confirmation of one’s failure as a person. Cognitive therapy includes homework assignments, for instance, a patient trying out a challenging new hobby and monitoring negative thoughts as they crop up.
In practice, clinicians usually encounter patients who display multiple problems and therefore need more than the “brand-name” therapy options on the 1998 list, remarks Carol D. Goodheart, a clinical psychologist in Princeton, N.J., who headed APA’s latest evidence-based-practice committee. The new APA statement represents a step toward much-needed explorations both of what makes for effective psychotherapy “in the field” and of how much practitioners know about science-endorsed approaches, Goodheart says.
The sticky issue of how to define evidence-based practice is far from settled. Psychologist Larry E. Beutler of the Pacific Graduate School of Psychology in Palo Alto, Calif., who helped assemble the 1998 list of approved treatments, argues that APA’s new policy statement mistakenly gives clinical judgment equal status with scientific findings. He suggests that it’s a “political concession” to the organization’s predominant membership of psychotherapy practitioners rather than researchers. However, even members of Goodheart’s committee disagree sharply about the direction psychotherapy research should take.
“Evidence-based practice is the most consequential, incendiary topic in mental health in recent years,” says psychologist John C. Norcross of the University of Scranton in Pennsylvania. A nagging question with a long history lies at the heart of the debate, notes Norcross, a member of Goodheart’s committee: Do treatments cure disorders, or do relationships heal people?
Much psychotherapy research relies on a method that boasts impeccable credentials among scientists who test new medications and medical procedures. In what are called randomized controlled trials, investigators identify a group of people with a common diagnosis, such as depression, and assign them at random to receive one of two or more treatments. These treatments typically include psychotherapy, a drug, or what amounts to a placebo, such as supportive counseling or no counseling during a waiting period.
In these investigations, weekly psychotherapy sessions usually extend for no more than 3 months. Researchers examine whether patients who complete psychotherapy fare better than others on a variety of measures, such as mood and social functioning, during treatment and for 6 months to 2 years afterwards.
Randomized controlled trials probe for beneficial effects of specific treatments. They don’t investigate improvement due to factors that occur in any form of therapy, such as a good working relationship between a therapist and a patient.
Many such trials indicate that certain psychotherapy techniques for alleviating depression, anxiety, eating disorders, and other ailments yield symptom improvement “substantially over and above” that produced by the therapy relationship and other general influences, remarks Boston University psychologist David H. Barlow, a member of APA’s evidence-based-practice committee.
Over the past 40 years, cognitive therapy for depression has earned its stripes in randomized controlled trials, concludes psychiatrist Aaron T. Beck of the University of Pennsylvania in Philadelphia in the September Archives of General Psychiatry. Beck is the founding father of cognitive therapy.
Consider a study published in the same journal in April. Two months of cognitive therapy worked as well as 2 months of antidepressant medication in lessening hopelessness and other symptoms of major depression, concluded a team led by two psychologists, the University of Pennsylvania’s Robert J. DeRubeis and Steven D. Hollon of Vanderbilt University in Nashville. Their study included 240 people with depression.
About half of the patients receiving either therapy or drug treatment improved markedly, compared with one-quarter of those given inert pills for 2 months.
Moreover, one-third of cognitive therapy patients given one to three “booster sessions” of the psychotherapy over the next year sustained their improvement, a figure similar to that for patients who continued to receive their antidepressants for 1 year.
“Randomized controlled trials are far from perfect, but they’re the best method we have to detect causal influence in psychotherapy,” says Vanderbilt’s Hollon, a member of APA’s evidence-based-practice committee. In his opinion, such studies represent the gold standard for establishing whether various types of psychotherapy deliver emotional relief.
Still, few training programs for clinicians include cognitive therapy, and only a minority of practicing therapists use the technique. Beck suspects that cognitive therapy will become more widely used as more professionals become qualified to teach novices about it.
Out of control
Not everyone is brimming with optimism for psychotherapies bearing scientific seals of approval. According to psychologist Drew Westen of Emory University in Atlanta, many clinicians correctly view evidence-based psychotherapies as having limited relevance to real-life psychotherapy.
Treatments studied to date in randomized controlled trials are a far cry from the practices of most psychotherapists in the community, regardless of their training or theoretical orientation, according to Westen, a member of APA’s evidence-based-practice committee. Researchers need to use clinical practice as a natural laboratory to learn about psychological interventions worth testing more rigorously, he contends.
In a review of psychotherapy research, published in the July 2004 Psychological Bulletin, Westen described national surveys of clinicians indicating that psychological treatments in the community last considerably longer than do those studied in randomized controlled trials. Even cognitive therapists treat depressed patients for an average of nearly 1 year.
Moreover, most patients display not a single psychological condition but a mix of symptoms caused or intensified by troubling personality patterns, Westen asserts. The manual-based procedures used in experiments are, in his opinion, too narrow to provide adequate assistance in those cases.
For instance, a person’s anxiety, depression, and alcohol abuse could reflect an extreme sensitivity to rejection by others or, instead, a tendency to become immersed in negative feelings. In either case, as weeks or months of treatment pass and sources of distress become apparent to the patient, the symptoms being experienced often change, as do compelling circumstances in his or her life. The therapist modifies the treatment accordingly.
In Westen’s view, current psychotherapy research also suffers from a poor choice of control treatments. He argues that studies typically pit a therapy designed to work, such as cognitive therapy administered by well-trained therapists convinced of its effectiveness, against a treatment designed to fail, such as supportive counseling provided by individuals who know that the researchers view their approach as minimally effective at best.
Rare comparisons of patients receiving either of two forms of genuine psychotherapy have yielded no clear winners, Westen notes. In head-to-head comparisons, for example, a few months of cognitive therapy for depression works about as well as the same amount of interpersonal therapy does. The latter form of one-on-one talk therapy, which is also outlined in a training manual, focuses on helping the patient find ways to resolve conflicts with others, to adjust to new roles in life, and to foster better relationships.
Psychologist Bruce E. Wampold of the University of Wisconsin–Madison has combed through data from psychotherapy studies and concludes that a good working relationship between therapist and patient plays a larger role in sparking psychological progress than any particular treatment technique does.
If that’s correct, then practical insights will be difficult to glean from the current crop of randomized controlled trials of psychotherapy. “The methodological tail is wagging the therapeutic dog,” Westen says.
Wampold’s emphasis on the messy collaboration between patient and therapist raises a particularly thorny problem for standard psychotherapy studies, says psychologist J. Stuart Ablon of Massachusetts General Hospital in Newton. “Randomized controlled trials come from the study of medicines, not human interactions,” Ablon says. “You can’t fully control how psychotherapy is administered.”
Ablon first explored the inner workings of psychotherapy encounters in a series of studies directed by psychologist Enrico E. Jones of the University of California, Berkeley. Jones, who died in 2003, pioneered the use of a 100-item rating instrument, called the Process Q-set, to describe and classify videotaped psychotherapy sessions.
After watching a recorded session, observers rank the 100 items according to how well each describes what happened, from those most characteristic to least characteristic of the therapy hour. Q-set items address three facets of a therapy session: the patient’s behavior and attitudes, such as achieving a new understanding or linking a feeling to past behavior; the therapist’s actions and attitudes, such as communicating clearly; and the nature of the interaction, such as whether patient and therapist jointly explore self-image or sexual feelings.
An item, for instance, might read, “Therapist adopts supportive stance” or “Patient’s treatment goals are discussed.”
In a 1998 study, experienced psychotherapists with a range of backgrounds used the Q-set to describe sessions in which either cognitive or psychodynamic therapists treated depressed patients. Psychodynamic clinicians try to bring a patient’s unconscious conflicts to light, examine how the patient’s relationship with the therapist mirrors other relationships, and strive for personality change as well as symptom improvement. The study also included therapists practicing psychoanalysis, which is similar to psychodynamic therapy but typically longer and more intense.
Ablon and Jones’ analysis of the Q-set data indicated that cognitive therapists usually blended psychodynamic techniques into their treatment, while psychodynamic therapists often examined faulty thinking and irrational beliefs just as cognitive therapists did. Moreover, therapists who relied most on psychodynamic techniques, regardless of how they labeled themselves, displayed the greatest success in alleviating depression.
In a 2002 paper that Ablon calls “a shocker,” clinicians and psychology graduate students rated videotaped sessions of therapists practicing what they considered either cognitive or interpersonal therapy. The researchers found that, at least in the sessions with depressed patients, both treatments fit the definition of cognitive therapy, suggesting that a single therapy had been compared with itself. The two sets of therapists were similarly effective.
Q-set studies of other therapy sessions, including psychoanalytic treatment, typically reveal recurring themes in patient-therapist interactions, Ablon says. Therapists who recognize these themes and find ways to deal with them enjoy the most success, he adds.
For instance, in a psychoanalytic case videotaped over a 6-year period, the analyst gradually realized that his patient regularly became confused when the conversation shifted to her disturbing sexual feelings and relationships. At first, the analyst offered explanations in an attempt to clarify matters. After recognizing that pattern, he had more success by helping the woman confront her tendency to “play dumb” to avoid thinking about her sexuality.
Many proponents of randomized controlled trials regard Q-set studies as a swamp of correlations that can’t establish what actually helps a patient. Moreover, many psychoanalysts frown on what they consider to be superficial attempts to measure what they do.
Ablon sees a future for his approach, though, as a guide for mainstream research. “We need to study treatments in randomized trials that resemble what clinicians do in the real world,” he says.
Brent D. Slife stood before an audience at the annual APA meeting held in Washington, D.C., in August, and filed the equivalent of a philosophical antitrust suit against psychotherapy researchers.
Slife, a psychologist at Brigham Young University in Provo, Utah, bemoaned what he called “the almost dogmatic status” of the philosophy of empiricism in guiding examinations of psychotherapy. The conviction that scientific knowledge comes only from observable experiences is a value, not a “fact of the world,” Slife said. It is as if scientists studying reading examined only how well individual readers identify the words they see while ignoring how readers get meaning out of a sentence.
Scientists should explore psychotherapy through qualitative methods that don’t involve statistical analyses, Slife suggested. One type of qualitative study might use transcripts of psychotherapy sessions to track changes in a patient’s conflicts and concerns.
Qualitative research provides an opening to explore approaches that have been ignored in randomized controlled trials, such as humanistic and existential psychotherapy, Slife says.
Proponents of evidence-based psychotherapy see little value in qualitative research. “If we took Slife’s approach, we’d quickly get booted out of the health care system,” says Barlow.
In the quarrelsome world of psychotherapy studies, there’s one issue that everyone agrees on: Psychotherapists are fighting an uphill battle to procure more than minimal health insurance coverage for their services. Norcross remarks, “The sad reality is that insurance companies largely respond to financial considerations, not psychotherapy research.”