Monday, September 24, 2007
The Empathy Pill
I am tempted to join Kirschenbaum & Jourdan (2005) and Castonguay, Constantino & Holtforth (2006) and throw the rest of psychotherapy research out the EST window, screaming, “Empathy is the answer! All hail unconditional positive regard!”
But how? I found no answers here, for, as they said, it’s not what you do but what the client perceives you to be doing that makes the difference between a good therapeutic outcome and a bad one. Where’s the part that says “this is how you do it”? I want examples.
However, I found the evidence regarding the impact of the client’s view quite striking: it is what’s perceived that matters. If you change your perceptions to let in only good things, then your life will be bright and merry, and if therapy consists of a blindingly beautiful bright star of good things (the person-centered therapist), then there will be a good outcome. Are we to conclude that this type of interaction fills a giant hole in the client’s life? That nothing matters in life except what we let ourselves believe? If so, can we bottle this “core conditions” thing and market it as an emotional-perception-manipulation pill that will make you emotionally invincible? I would buy it. Seriously.
Posted by Thrasher at 6:55 PM
Monday, September 17, 2007
That's so meta.
Is it me, or did we just read an article that began “Once upon a time,” only to find out that it was a 32-page-long thick-tongued meta-analysis of the depressing state of psychological science? Talk about false advertising.
There are two things I walked away from the Westen, Novotny & Thompson-Brenner (2004) article thinking: the DSM is worthless, and so are psychology articles written by biased researchers (and by biased researchers, I mean all researchers).
I propose three solutions:
1) A DSM Congress.
2) Data in place of language.
3) Open-source data sharing.
Westen et al. outline many concrete reasons why the current DSM hinders the validity of research on ESTs: it doesn’t apply to most people, and it leaves out entire conditions, which have come to be ignored simply because they go unmentioned (p. 634), among many others. Many of the underlying assumptions of EST research rest on the premise that we must draw lines between disorders, and those lines are based on DSM categorizations, which are bogus. The majority of Westen et al.’s problems with the assumptions in EST research would be solved by an overhaul of our diagnostic system. Based on this article and the class discussion on September 5th, it seems the system should be dynamic, so that it adjusts to the changing literature, and public, so that researchers always know the rationale behind every decision.
A DSM Congress, modeled after the current structure of the US Congress, seems to make sense. With knowledgeable representatives, a system for proposing bills and amendments, a public record, and the ability to constantly change the system, we would not have to worry about rewriting the constitution every twenty years, or about giving researchers reasons to write novella after meta-analytic novella on the problems contained in an unchangeable manual written two decades ago. A DSM Congress would allow the diagnostic system to actually benefit from up-to-the-minute research discoveries and would create an open forum for debate. It would allow growth in places where, under the current DSM-IV-TR state of affairs, we are crying out for change into an abyss.
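To make the idea concrete, here is a minimal sketch, in Python, of what one entry in such a living manual might look like. Everything in it (the class names, the fields, the idea of storing each amendment alongside its rationale) is my own invention for illustration, not anything Westen et al. propose.

from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of a "living" diagnostic entry: every change to the
# criteria is recorded as an amendment with its rationale, so researchers
# can always see why the current definition looks the way it does.

@dataclass
class Amendment:
    proposed_on: date
    rationale: str          # the public record: why the change was made
    old_criteria: list
    new_criteria: list

@dataclass
class DiagnosticEntry:
    name: str
    criteria: list
    history: list = field(default_factory=list)

    def amend(self, new_criteria, rationale):
        """Apply an amendment that passed the congress, keeping the old
        version and its rationale on the public record."""
        self.history.append(
            Amendment(date.today(), rationale, self.criteria, new_criteria))
        self.criteria = new_criteria

The current criteria and the public record of how they got that way live in the same object, which is the whole point.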
In their section on “Maximizing the Efficacy of Clinical Trials,” Westen et al. elaborate on what scientists should be reporting in their publications, repeatedly pointing to cases in which researchers clearly misinformed the reader by wrapping their information up to “tell the best story” (p. 653). For example, they point out that when setting criteria for counting a participant as having completed the therapy, “many of the reports…used different definitions in different analyses. The only reason we even noticed this problem was that we were meta-analyzing data that required us to record Ns, and noticed different Ns in different tables” (p. 654).
In general, it seems that this is a problem with language: articles are written for people to read. But it is silly to read data in the form of words and expect it to lack bias. How many times have you looked at the procedure and results sections of a psychology article and scanned for vitals? We don’t need to read these sections; we need a list. What’s more, lists can be standardized. I say throw out everything between the intro and the discussion and replace it with a standardized chart, created by the journal that is publishing the study before the paper is even submitted. The procedure and results sections would then contain all the information we actually need, much the way the little box for carbon on the periodic table tells us that it has 6 protons and weighs 12.01 atomic mass units. Let the reader know everything, save time, and conserve space. [edit] Meanwhile, slanted opinions and biases can be shared in the introduction and discussion sections. This might even lead discussion sections to grow in length, since the reader, no longer forced to wade through statistics in the form of words, has the energy to absorb a well-synthesized argument for why the data matter. [end edit]
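For illustration, here is a minimal sketch of what the little box for a clinical trial might look like. The field names are hypothetical and the example numbers are made up; the point is that every study fills in the same slots, so a “completer” cannot quietly mean different things in different tables.

from dataclasses import dataclass

# A hypothetical "periodic-table box" for a clinical trial: one standardized
# record per study, with one definition per field, filled in before the
# paper is even submitted.

@dataclass(frozen=True)
class TrialReport:
    treatment: str
    n_randomized: int       # everyone who entered the trial
    n_completed: int        # one definition of "completer," used everywhere
    n_improved: int         # responders by the pre-registered criterion
    effect_size: float      # e.g., Cohen's d against the comparison group
    followup_months: int

    @property
    def dropout_rate(self):
        return 1 - self.n_completed / self.n_randomized

# e.g., TrialReport("CBT for panic", 120, 96, 60, 0.68, 24).dropout_rate
# evaluates to 0.2: no narrative required.

Everything a reader scans for sits in one place, and nothing can be “wrapped up” in prose.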
Regardless of whether the above proposal makes any sense, if research is to move forward, it is clear that we need to change our approach to data sharing. The more people who have access to a given data set, the better the research will be and the faster we will find the answers we are looking for. Sharing data is not unlike sharing the source code of software programs, the practice known as “open-source software.” In the past, hoarding source code as a company secret has made software weak: take Microsoft’s Windows and pretty much everything associated with it (have you ever tried to successfully use Windows Media Player? I mean, come on). Compare that with the success of Mozilla Firefox, a highly functional and secure internet browser built on open-source code. Firefox blows Internet Explorer out of the water with its ability to block pop-ups, its easy-to-use interface, and its overall top-notch security. And it’s free. Amazing!
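As for what open data would actually buy us, here is a minimal sketch, assuming a hypothetical shared file with invented column names: anyone could pull the raw per-participant data and recompute the headline result, instead of trusting the authors’ write-up.

import pandas as pd
from scipy import stats

# Hypothetical URL and column names, for illustration only: a shared,
# per-participant outcome file posted alongside the published article.
df = pd.read_csv("https://example.org/trial_outcomes.csv")

treated = df.loc[df["group"] == "treatment", "symptom_change"]
control = df.loc[df["group"] == "control", "symptom_change"]

# Re-run the comparison rather than trusting the printed p-value.
t, p = stats.ttest_ind(treated, control, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.4f}")

Ten lines, and every reader becomes a replication team.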
All of the complaints about money and grant-writing skills guiding the research agenda might be appeased if the field used the internet the way the tech boom has. Unfortunately, we are still using clunky programs like PsycINFO and then manually downloading articles as PDF files, which are often little more than pictures of text set in a certain order. It’s 2007, and psychological science is using pictures of text as its main source of information. I look for the technological progress in this state of affairs and find none.
So, I say, democratize the DSM, force researchers to show all their cards if they are going to get published, and put aside pride for the sake of progress by creating an open-source data sharing medium that will utilize research to its fullest extent.
Or maybe we're doomed. Maybe, in the words of Cory Doctorow in his essay Metacrap, "It's wishful thinking to believe that a group of people competing to advance their agendas will be universally pleased with any hierarchy of knowledge. The best that we can hope for is a detente in which everyone is equally miserable."
Posted by Thrasher at 4:24 PM
Tuesday, September 11, 2007
Dodo? Doe!
Presented to a field in which a slew of people strive to quantify actions, predict behaviors, and measure emotions, Chambless & Hollon (1998) and Hunsley & Di Giulio (2002) do a decent job of outlining the need to be critical in assessing which therapies are effective. However, while well-tested empirical support for therapies is imperative, both for the sake of patients and for the reputation of psychology as a respected science, the two articles do not really resolve anything. Hunsley & Di Giulio’s statistical reassessment of review articles does give weight to the claim that not all psychotherapies are equivalent, yet their fighting fire with fire leaves me skeptical of their argument in just the way they ask us to be skeptical of the articles they review. If they use statistics to beat the statistics of others, how are they gaining any ground, other than stomping their feet and claiming that their analysis is better? It’s like claiming blue is just better than red because, well, I said so. Additionally, while the Chambless & Hollon article is useful as a reference for someone designing an empirically supported treatment study, the only concrete guideline they give is that a treatment must succeed in two different studies by independent research teams. Their argument drifts toward abstraction, with phrases like “evaluators are urged to carefully examine data graphs” (p. 13), and the arbitrary requirement that there be three of everything (e.g., three participants should benefit from treatment, p. 13). That said, Chambless & Hollon’s call for intense self-critique is vital if the field is to gain and retain any sort of scientific respect.
Posted by Thrasher at 6:15 AM
Sunday, September 2, 2007
The Dot-Matrix Star Galaxy of the Mind
Upon reading Persons (1986) and Widiger & Clark (2000), I was thoroughly disgruntled and disappointed by the misdirection and falsehoods that seem to result from using “clinical expertise” and intuition to diagnose the clinical disorders in the DSM-IV-TR. Persons (1986) is right: we need to focus on individual features and phenomena, and not let language, and thus laziness, define how we approach research. Yes, it is convenient to save money by using schizophrenia to shed light on aspects of thought disorder, but it is also just plain misguided. Widiger & Clark (2000) also seem to hit the mark when they propose that a future DSM-V “consist of an ordered matrix of symptom-cluster dimensions, a diagnostic table of the elements that are used in combinations to describe the rich variety of human psychopathology.” Upon reading this, I pictured myself floating through a three-dimensional dot-matrix star galaxy of human psychological traits, a diagnostic utopia where I can hover in a certain dimension, put my hand out, and touch the cloud-like region where people lean toward borderline traits or generalized anxiety.
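To make the fantasy slightly less fanciful, here is a toy sketch of the idea: each person becomes a point in a space of symptom-cluster dimensions, and hovering near the same cloud becomes a distance between profiles. The dimension names and scores are invented for illustration.

import numpy as np

# A toy version of the "ordered matrix": instead of a single categorical
# label, each person is a point in a space of symptom-cluster dimensions.
DIMENSIONS = ["anhedonia", "anxious_arousal", "thought_disorder",
              "emotional_dysregulation", "low_fear"]

patient_a = np.array([0.2, 0.8, 0.1, 0.7, 0.1])  # leans anxious/borderline
patient_b = np.array([0.3, 0.7, 0.2, 0.6, 0.2])

# "Hovering near the same cloud" becomes a simple distance check.
distance = np.linalg.norm(patient_a - patient_b)
print(f"profile distance: {distance:.2f}")  # small means similar presentations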
Regardless of how we come to conclusions about diagnoses, it seems clear that the entire concept of mental disorder is fuzzy at best. Even the craziest of psychopaths might fail to qualify as disordered at all, if one really takes the DSM-IV’s classification seriously, since they are neither distressed nor impaired. The diagnosis of psychopathy, judging from the Psychopathy Checklist-Revised, bases half of its diagnostic criteria on a “conflict between that individual and society,” despite the DSM-IV’s goal to “prevent the misuse of diagnostic labels for the purpose of social control” (Allen, 1998). Truthfully, though, there is evidence that psychopaths have it good: unemotional traits found in psychopaths low in fear correlate negatively with internalizing psychopathology (Blonigen, Hicks, Krueger, Patrick, & Iacono, 2005), and some say these unemotional personality traits may point to more positive outcomes in cognitive and psychosocial functioning (Hall et al., 2006) and to resilience against depression and anxiety (Benning, Patrick, Hicks, Blonigen, & Krueger, 2003). This is an argument for a more symptom-oriented approach: such interesting correlates of low-fear personalities are probably not unique to people like Jeffrey Dahmer. They might reside in “normal” individuals as protective factors, and they would be overlooked by a diagnostic system that only looks at conditions arbitrarily labeled “disorders.”
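Here is a toy simulation of that last point, with data invented purely for illustration: a trait that protects against internalizing symptoms shows up clearly in the full population, but the relationship is attenuated if you only look at the “diagnosed” extreme of the distribution.

import numpy as np

# Simulated data: a low-fear trait that protects against internalizing
# symptoms (the -0.4 is an invented effect size, not an estimate from
# the literature).
rng = np.random.default_rng(0)
low_fear = rng.normal(size=5000)
internalizing = -0.4 * low_fear + rng.normal(size=5000)

full_r = np.corrcoef(low_fear, internalizing)[0, 1]

# Restrict the sample to the top 5% of internalizing scores, a stand-in
# for studying only people who carry a diagnosis.
diagnosed = internalizing > np.quantile(internalizing, 0.95)
sub_r = np.corrcoef(low_fear[diagnosed], internalizing[diagnosed])[0, 1]

print(f"full sample r = {full_r:.2f}, diagnosed-only r = {sub_r:.2f}")
# The protective correlation visible in the full sample is attenuated in
# the diagnosed-only subsample: restriction of range hides the factor.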
Posted by Thrasher at 11:03 PM