by University of Kansas
The history of psychology is littered with unfortunate examples of treatments that caused more harm than benefit to patients. For instance, in the mid-20th century lobotomies were a common practice to treat mental illness, with poor results. More recently, so-called conversion therapy was targeted at the LGBTQ community in an attempt to change their sexual orientation—a practice that according to the Human Rights Campaign “can lead to depression, anxiety, drug use, homelessness, and suicide.”
In 2007, the late Scott Lilienfeld, a prominent researcher in psychology, published work outlining other contemporary treatments in psychology that evidence suggested could be potentially harmful to patients. But how thorough was that evidence of harm? What do people seeking treatment need to know?
Now, a team of psychologists at the University of Kansas has parsed the data behind these potentially harmful treatments to determine as clearly as possible whether the evidence is convincing. Among them were the DARE program, designed to urge young people to abstain from drug use; “Scared Straight” programs, meant to show young people the consequences of crime; boot camps for conduct disorder; and “critical incident stress debriefing” (also known as psychological debriefing), commonly conducted after first responders deal with a violent incident.
Their research has just been published in the peer-reviewed journal Clinical Psychology: Science and Practice.
“Psychologists have traditionally been concerned about whether some therapies are better than others, but we haven’t considered as much as we should whether some therapies hurt people unintentionally,” said lead author Alex Williams, program director of psychology at the KU Edwards Campus. “In 2007 the famous psychologist Scott Lilienfeld published a paper in which he put together a list of therapies that were intended to help people but seemed from the scientific evidence to actually cause harm. We were curious—using new metascientific, statistical metrics, how credible is evidence these therapies hurt people? If therapies hurt people, we want to stop using them. But maybe they do actually help people, and that’s worth knowing, too.”
Williams’ co-authors are Yevgeny Botanov of Pennsylvania State University, Robyn Kilshaw of the University of Utah, Ryan Wong of the University of Victoria, and John Kitchener Sakaluk of Western University. The team—several of whom once were doctoral students in psychology at KU together—looked at the data underpinning more than 70 different research studies on these interventions and extracted from them over 560 statistical evaluations of the interventions.
“There’s an appreciable difference in terms of methodology—different approaches to reviews and secondary data analysis,” Sakaluk said. “Lilienfeld’s paper falls into what’s described as a narrative review where a researcher sits down and makes their own subjective read of the literature. Although these forms of reviews can be informative, there are other forms of reviews that are more formalized and rely on certain kinds of systematic procedures to say, ‘Here’s how I arrived at this conclusion—it’s not just based on the power of my own critical thinking ability.’ In our case, we’re going in and repurposing that statistical information to compute and aggregate these metrics in different clusters for each therapy. Lilienfeld observed some very interesting and important patterns, but it’s mostly from the armchair, whereas we get into the muck of the reported summary data and then we’re abstracting from that summary data.”
The team found many of the treatments cited by Lilienfeld haven’t produced enough data to evaluate their potential harm—for example, boot camp interventions. Meanwhile, other potentially harmful treatments showed ambiguous results.
“Harm for both DARE and grief counseling appeared unlikely,” the authors wrote. They noted, though, that while their evaluation suggested grief counseling may benefit patients, DARE lacks evidence it benefits students.
However, the researchers affirmed two treatments as likely to be harmful, as Lilienfeld had asserted.
The authors found, “The plausible extent of harm for CISD [Critical Incident Stress Debriefing] and Scared Straight interventions … appeared more consistent, and under pessimistic effect selection specifically, could be substantial.”
According to the researchers, therapists should use only psychological interventions backed by credible evidence of benefit, and they should immediately stop using treatments with real potential to harm patients.
“If nobody was doing these treatments or these interventions, this paper would be less worthwhile,” Botanov said. “The thing about it, in 2007 when Lilienfeld wrote about this, and when we started this project, these were still things that were going on. I mean, Scared Straight, for example, you know, there was a TV show called ‘Scared Straight.’ These treatments are promoted by people on podcasts and on TV shows. You can find them in the world easily. People seeking help are being given these treatments. Also importantly, agencies, funders, and governments are paying for the treatments. So, if you ask why we look again at the data and why we do it in a different way, it’s because there’s a need out in the world to show evidence that perhaps we shouldn’t do some of this stuff. Perhaps therapists should just stop using this, and perhaps we could have more informed consumers. That’s one of the main things that drove us.”