to be sure you aren't missing something relevant and valuable. Things can get pretty complicated very quickly as you try searching different combinations of terms, synonyms, and databases. Some databases track the search terms that you have used, while others do not. If the database you are using doesn't automatically track them, it's worth the effort to record the terms you try. Because searching can be a bit of an art form, and you will likely try several combinations of words and phrases, it's easy to forget which terms you have already used. Keeping a sheet of paper nearby to jot down your search terms along the way can help you avoid repeating your efforts and wondering, “Did I remember to try searching for the terms family violence, domestic violence, and interpersonal violence?” Also, you'll probably have to wade through many irrelevant references to be sure to find the real gems.
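      A simple search log, for instance, might list each database alongside the term combinations you have already tried in it – say, family violence AND treatment in one database, then domestic violence AND intervention, then interpersonal violence AND therapy – so that a glance at the list tells you which combinations remain untried.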

      When working with similar clients, you may not need to repeat this process each time. What you find the first time might apply again and again. This is especially true if many of your clients experience similar problems and needs. However, keep in mind that the evidence might change over time. Therefore, if several months or more elapse after your search, you might want to repeat it to see if any new studies have emerged supporting different interventions. Moreover, some newer studies might be more applicable to your client's unique characteristics or your unique practice situation.

      When conducting your own search, you don't have to read every study that you find. You can examine their titles and abstracts to ascertain which ones are worth reading.

      For example, many years ago, when Rubin conducted a review of the effectiveness of EMDR versus exposure therapy in treating PTSD, he encountered an abstract describing a study that concluded that EMDR helps “bereaved individuals experience what they believe is actual spiritual contact with the deceased” (Botkin, 2000, p. 181). He could tell from the title of the study that it was not relevant to his review regarding PTSD. (But given its bizarre claim, he read it anyway!)

      As we've already intimated, the individual studies and reviews that you'll find in your search might vary greatly in regard to their objectivity and rigor. The journal peer review process offers a level of quality assurance. In the peer review process, typically two or three other researchers offer critical feedback and help journals decide whether an article is appropriate for publication. Therefore, published articles in peer-reviewed journals have at least been exposed to some process of review and critique. However, the rigor of the review process varies greatly from journal to journal. Some very strong research studies never appear in journal articles, while some relatively weak studies do get published. Some studies and reviews, whether in journal articles or other sources, will be conducted and reported by individuals or groups with vested interests. But reviews and studies can be flawed even when no vested interests are involved. Even objective investigators may be doing the best they can with limited resources and practical obstacles that keep them from implementing their study in a more ideal manner. A while back, for example, Rubin conducted an experiment evaluating the effectiveness of EMDR in a child guidance center (Rubin et al., 2001). He had no funding for the study and conducted it simply because – as a professor – he was expected to do research and was quite curious about whether EMDR was really as effective with children as its proponents were touting it to be. The administrative and clinical leaders in the center projected that in a year's time over 100 clients would participate in his study. They were wrong. It took three years for them to refer 39 clients.

      Some flaws are egregious and fatal. That is, they destroy the credibility of the study's findings. To illustrate a fatally flawed fictitious study, suppose Joe Schmo invents a new therapy for treating anxiety disorders. He calls it psyschmotherapy. If it is effective, he will be rich and famous. To evaluate its effectiveness, he uses his own clinical judgment to rate client anxiety levels – on a scale from 0 to 100 – before and after he provides psyschmotherapy to 10 of his clients. His average before rating is 95, indicating extremely high anxiety. His average after rating is 10, indicating extremely low anxiety. He concludes that psyschmotherapy is the most effective intervention available for treating anxiety disorders – a miracle cure, so to speak. You probably can easily recognize the egregious bias and utter lack of trustworthiness evident in Joe Schmo's study: the man with a fortune riding on the outcome supplied the only ratings, and there was no comparison group of clients who did not receive psyschmotherapy.

      Other flaws, while not fatal, are nonetheless important. Suppose, for example, that a study comparing two treatment approaches has flaws of that less egregious sort. If you can find studies less flawed than that one, you'd probably want to put more stock in their findings. But if that study is the best one you can find, you might want to be guided by its findings. That is, it would offer somewhat credible – albeit quite tentative – evidence about the comparative effectiveness of the two treatment approaches. Lacking any better evidence, you might want – for the time being – to employ the seemingly more effective approach until better evidence supporting a different approach emerges or until you see for yourself that it is not helping your particular client(s).

      Unlike these fictitious examples, it is not always so easy to differentiate between reasonable “limitations and fatal flaws; that is, to judge whether the problems are serious enough to jeopardize the results or should simply be interpreted with a modicum of caution” (Mullen & Streiner, 2004, p. 118). What you learn in the rest of this book, however, will help you make that differentiation, and thus help you judge the degree of caution warranted in considering whether the conclusions of an individual study or a review of studies merit guiding your practice decisions.

      As discussed earlier in this and the preceding chapter, a common misinterpretation of EIP is that you should automatically select and implement the intervention that is supported by the best research evidence, regardless of your practice expertise, your knowledge of idiosyncratic client circumstances and preferences, and your own practice context. No matter how scientifically rigorous a study might be and no matter how dramatic its findings might be in supporting a particular intervention, there always will be some clients for whom the intervention is ineffective or inapplicable. When studies declare a particular intervention a “success,” this is most often determined by group-level statistics. In other words, the group of clients who received the successful intervention had better outcomes, on average, than those who did not receive it.
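      Suppose, for instance, that clients receiving an intervention improve by an average of 20 points on some outcome measure, while a comparison group improves by an average of only 5 points. The intervention would be deemed effective at the group level even if a handful of the treated clients improved not at all – or got worse.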