Figure 1.2 presents a newer, more sophisticated diagram of the EIP model (Haynes et al., 2002). In this diagram, practitioner expertise no longer appears as a separate entity. Instead, it is based on and combines knowledge of the client's clinical state and circumstances, the client's preferences and actions, and the research evidence applicable to the client. As in the original model, the practitioner skillfully blends all of these elements at the intersection of the circles, and practice decisions are made in collaboration with the client based on that intersection.
Figure 1.3 is a multidisciplinary iteration of the three-circle model called the Transdisciplinary Model of EIP. This model was developed in a collaborative effort across allied health disciplines, including social work, psychology, medicine, nursing, and public health (Satterfield et al., 2009). Figure 1.3 retains elements of earlier EIP models; however, it also includes several changes that reflect the perspectives of the varied disciplines and practice contexts within which the EIP process is used. Practice decision making is placed at the center, rather than practitioner expertise, recognizing that decision making is a collaboration that could involve a team of practitioners as well as clients, whereby an individual practitioner's skills and knowledge inform but do not wholly determine the central decision-making process. Practitioner expertise is instead moved to one of the three circles and is conceptualized as resources. These resources include competence in executing interventions, conducting assessments, facilitating communication, and engaging in collaboration with clients and colleagues. Client-related factors, including characteristics, state, need, and preferences, are combined into one circle. The concept of a “client” is explicitly expanded to highlight communities, reflecting the multiple levels of practice – from micro to macro levels and from individuals to large groups and systems – represented across the participating disciplines. Finally, an additional circle is added around the outside of the interlocking circles to represent the context within which services are delivered, in recognition of how the environment can impact the feasibility, acceptability, fidelity, and adaptation of practices.
FIGURE 1.2 Newer EIP model.
Modified from Haynes et al. (2002).
FIGURE 1.3 The transdisciplinary model of evidence-informed practice.
From “Toward a Transdisciplinary Model of Evidence-Based Practice,” by Satterfield et al. (2009). Reprinted with permission of John Wiley & Sons, Inc.
The cyclical process of EIP can be conceptualized as involving the following five steps: (a) formulating a question, (b) searching for the best evidence to answer the question, (c) critically appraising the evidence, (d) selecting an intervention based on that critical appraisal, integrated with practitioner expertise and awareness of the client's preferences and clinical state and circumstances, and (e) monitoring client progress. Depending on the outcome observed in the fifth step, the cycle may need to return to an earlier step to seek an intervention that might work better for the particular client – perhaps one that has less evidence to support it, but which might nevertheless prove more effective in light of the client's needs, strengths, values, and circumstances. Chapter 2 examines each of these five steps in more detail.
1.3.5 What Are the Costs of Interventions, Policies, and Tools?
When asking which approach has the best effects, we implicitly acknowledge that for some target problems there is more than one effective approach. For example, the book Programs and Interventions for Maltreated Children and Families (Rubin, 2012) contains 20 chapters on 20 different approaches whose effectiveness with maltreated children and their families has been empirically supported. Some of these programs and interventions are more costly than others. Varying costs are connected to factors such as the minimum degree level and amount of experience required in staffing, the extent and costs of practitioner training, caseload maximums, the number of treatment sessions required, materials and equipment, and so on. The child welfare field is not the only one in which more than one empirically supported approach can be found. And it is not the only one in which agency administrators or direct service practitioners are apt to deem some of these approaches to be unaffordable. An important part of practitioner expertise includes knowledge about the resources available to you in your practice context. Consequently, when searching for and finding programs or interventions that have the best effects, you should also ask about their costs. You may not be able to afford the approach with the best effects, and instead may have to settle for one with less extensive or less conclusive empirical support.
But affordability is not the only issue when asking about costs. Another pertains to the ratio of costs to benefits. For example, imagine that you were to find two empirically supported programs for reducing dropout rates in schools with high dropout rates. Suppose that providing the program with the best empirical support – let's call it Program A – costs $200,000 per school and that it is likely to reduce the number of dropouts per school by 100. That comes to $2,000 per prevented dropout. In contrast, suppose that providing the program with the second-best empirical support – let's call it Program B – costs $50,000 per school and that it is likely to reduce the number of dropouts per school by 50. That comes to $1,000 per prevented dropout – half the cost per dropout of Program A.
Next, suppose that you administer the dropout prevention effort for an entire school district that contains 20 high schools, and that your total budget for dropout prevention programming is $1 million. If you choose to adopt Program A, you will be able to provide it in five high schools (because 5 × $200,000 = $1 million). Thus, you would be likely to reduce the number of dropouts by 500 (i.e., 100 in each of five schools). In contrast, if you choose to adopt Program B, you will be able to provide it in all 20 high schools (because 20 × $50,000 = $1 million). Thus, you would be likely to reduce the number of dropouts by 1,000 (i.e., 50 in each of 20 schools). Opting for Program B instead of Program A, therefore, would double the number of dropouts prevented district-wide, from 500 to 1,000. But does that imply that opting for Program B is the best choice? Not necessarily. It depends, in part, on just how wide the gap is between the strength of evidence supporting each approach. If you deem the evidence supporting Program B to be quite skimpy and unconvincing despite the fact that it has the second-best level of empirical support, while deeming the evidence supporting Program A to be quite strong and conclusive, you might opt to go with the more costly option (Program A) that is likely to prevent fewer dropouts, but which you are more convinced will deliver on that promise in light of its far superior empirical support. (In fact, if you can show funders that Program A reduces dropouts by 100 per school, you'd have a decent chance of getting future funding enabling you to provide it in more than five schools, and perhaps all 20.)
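For readers who want to see the arithmetic behind this comparison laid out explicitly, the short Python sketch below restates it. All of the figures (the per-school costs, the dropouts prevented per school, and the $1 million district budget) are the hypothetical numbers from the example above, not empirical estimates, and the function names are illustrative only.

```python
# Illustrative cost-effectiveness arithmetic for the hypothetical dropout
# prevention example above. All figures come from the text's example and
# are not empirical estimates.

def cost_per_prevented_dropout(cost_per_school, prevented_per_school):
    """Cost of preventing one dropout in a single school."""
    return cost_per_school / prevented_per_school

def district_reach(cost_per_school, prevented_per_school, budget):
    """Schools fundable and total dropouts prevented under a fixed budget."""
    schools = budget // cost_per_school
    return schools, schools * prevented_per_school

BUDGET = 1_000_000  # district-wide dropout prevention budget

for name, cost, prevented in (("Program A", 200_000, 100),
                              ("Program B", 50_000, 50)):
    schools, total = district_reach(cost, prevented, BUDGET)
    print(f"{name}: ${cost_per_prevented_dropout(cost, prevented):,.0f} "
          f"per prevented dropout; {schools} schools; {total} dropouts prevented")

# Prints:
# Program A: $2,000 per prevented dropout; 5 schools; 500 dropouts prevented
# Program B: $1,000 per prevented dropout; 20 schools; 1000 dropouts prevented
```

As the output shows, the cost-per-outcome calculation and the budget-constrained reach calculation answer different questions, which is why the two programs can rank differently depending on which question you ask.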
Depending on such factors as your budget and your assessment of the quality and amount of empirical support each approach has, in some situations you might opt for a less costly program with less empirical support, whereas in other situations you might opt for a more costly program with better empirical support.