Thematic and episodic frames are categorized both as generic frames (able to generalize across issues; Scheufele, 2004) and as emphasis frames (drawing attention to a subset of relevant considerations; Druckman, 2001). We are interested in how researchers define thematic and episodic frames in their investigations. Researchers may include a definition in the literature review, alongside other definitions, to ground readers in framing theory, or when operationalizing these frames for measurement and testing.
In terms of discourse units (units of analysis), some scholars identify the news item or article as the discourse unit (Husselbee & Elliot, 2002), some use the proposition (Pan & Kosicki, 1993), and some focus on visual features (King & Lester, 2005). Three roles have been identified for visual elements: 1) only the text is coded and visuals are ignored (Matthes & Kohring, 2008); 2) visuals are coded directly as a component of a frame, that is, as a unit of analysis (Esser & D’Angelo, 2003); 3) visual elements are not coded as a component of a frame but are discussed when interpreting the frame (Parmelee, 2002). In terms of identifying frames, some scholars identify multiple frames per discourse unit while others extract a single frame (Kerbel, Apee, & Ross, 2000).
Matthes (2009) found in his content analysis of framing research that the majority of studies did not test hypotheses—most were descriptive. For years scholars have maintained that framing research is mostly descriptive and largely atheoretical (Roskos-Ewoldsen, 2003). While descriptive research is highly useful to the field, a less descriptive approach is necessary to advance and build our understanding of thematic and episodic framing theory. Theory testing involves deriving hypotheses about the nature and structure of frames, including episodic and thematic frames. Research methods textbooks describe this statistical hypothesis-testing strategy as the hypothetico-deductive method: a researcher deduces one statement (or a few statements) from the theory and compares that statement with many observations. If the observations tend to match the statement (e.g., the match is unlikely to be due to chance, p < .05), then the hypothesis is considered confirmed and confidence in the statement is substantially increased (Stiles, 2009).
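As a toy illustration of the hypothetico-deductive logic described above, the following sketch tests a hypothetical deduced statement—say, that episodic frames outnumber thematic frames in a sample of health news stories—with an exact binomial test against chance. The counts and the p = .5 null are invented for illustration, not drawn from any study discussed here.

```python
from math import comb

def binomial_p_value(successes, n, p0=0.5):
    """Two-sided exact binomial test: probability of a result at least
    as extreme as `successes` under the null hypothesis H0: p = p0."""
    def prob(k):
        return comb(n, k) * p0**k * (1 - p0)**(n - k)
    observed = prob(successes)
    # sum the probabilities of all outcomes no more likely than the observed one
    return sum(prob(k) for k in range(n + 1) if prob(k) <= observed + 1e-12)

# Hypothetical data: of 40 sampled stories, 29 are episodic.
# H1 (deduced from theory): episodic frames outnumber thematic ones.
p = binomial_p_value(29, 40, p0=0.5)
print(round(p, 4))  # p < .05, so the hypothesis would be considered supported
```

In the hypothetico-deductive strategy, a small p-value does not prove the theory; it only means the observations are unlikely under chance, so confidence in the deduced statement increases.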
We included the following theoretical linkage variables: presence of hypothesis testing (or research questions); linkage to theory through antecedents (e.g. conditions of production) or consequences (e.g. potential effects); and use of other research methods (e.g. survey, experiment, interview) or extra-media data (e.g. Census, CDC reports) (Matthes, 2009; Riffe & Freitag, 1997).
We assert that one reason health news dominates thematic and episodic framing research is the connection these frames have to attribution of responsibility—an outcome directly related to possible policy solutions. Iyengar (1991) demonstrated that thematically framed news stories about social issues/problems lead people to hold society responsible for the problem/issue, while episodically framed stories lead people to hold individuals responsible. We coded whether and how researchers measured or tested attribution of responsibility for health problems, including the references cited in relation to that testing or measurement: for example, Nathanson’s (1999) four factors bearing on whether public policy solutions will be sought for health issues, Stone’s (2002) work on public policy, Wallack, Dorfman, Jernigan, and Themba’s (1993) media advocacy strategy, or other references to reframing issues in terms of causal and treatment responsibility attribution. If researchers are examining thematic and episodic frames in health news in this context, we gain knowledge about how the framing of issues shifts from individual to societal responsibility and what this means for public support of policy solutions.
We assessed the following methodology variables in an effort to provide as much information and clarity as possible about the research conducted using thematic and episodic frames: a) inductive, deductive, or both; b) quantitative, qualitative, or mixed; c) data-gathering method; d) thematic/episodic frame cited or operationalized; e) other frames operationalized and, if so, whether generic or issue-specific; and f) sample (random, purposive, census). Variables specific to each data-gathering method were coded as well.
Our definitions of “inductive” and “deductive” refer to the methodology used in the research we examined. In other words, we did not use these terms to refer to broad epistemological orientations (Matthes, 2009). Iyengar’s episodic and thematic frames are examples of deductive, quantitative frames, but many researchers examine thematic and episodic frames in conjunction with other frames (e.g. generic frames or issue-specific frames) using mixed methods. For example, researchers might use the inductive method of an exploratory analysis of content to identify specific issue frames in a small sample; those frames would then be defined in a codebook and coded in a quantitative content analysis (Simon & Xenos, 2000; Husselbee & Elliot, 2002). Other researchers might mix a qualitative method with a quantitative method, combining inductive with deductive (e.g. focus groups with a series of experiments using thematic and episodic frames).
Coding Instrument. Descriptive variables included journal title, publication year, country(ies) where the research was conducted, broad subject of research, subtopics of research, medium investigated, and type of content investigated. Method variables coded included: quantitative, qualitative, or mixed methods; data-gathering method; inductive, deductive, or mixed approach; and sampling. Specific variables were coded based on the data-gathering method. For content analyses, the following method variables were coded: unit of analysis, visual unit of analysis, coding of frames, intercoder reliability, coding based on numbers or text, data reduction techniques, and manual or computer-assisted coding. For experiments, the following method variables were coded: number of factors, names of factors, dependent variables, mediating variables, and moderating variables. For surveys, the following method variables were coded: number of variables and names of variables.
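The coding instrument above can be sketched as a per-study record. This is an illustrative data structure only; the field names, value sets, and the example values are hypothetical, not the authors’ actual codebook.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedStudy:
    # descriptive variables (field names are illustrative)
    journal_title: str
    publication_year: int
    countries: list[str]
    subject: str
    medium: str
    # method variables
    approach: str              # "quantitative" | "qualitative" | "mixed"
    data_gathering: str        # e.g. "content analysis", "experiment", "survey"
    logic: str                 # "inductive" | "deductive" | "mixed"
    sampling: str              # "random" | "purposive" | "census"
    # content-analysis-specific variables, left empty for other methods
    unit_of_analysis: Optional[str] = None
    intercoder_reliability: Optional[float] = None

# A hypothetical coded study, with invented values:
study = CodedStudy(
    journal_title="(hypothetical journal)", publication_year=2010,
    countries=["US"], subject="health", medium="newspaper",
    approach="quantitative", data_gathering="content analysis",
    logic="deductive", sampling="random",
    unit_of_analysis="article", intercoder_reliability=0.82)
print(study.sampling)  # prints "random"
```

Keeping the method-specific variables optional mirrors the scheme described above, where experiment- and survey-specific variables are coded only when that data-gathering method applies.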
In terms of conceptual variables, coding determined whether the definitions of thematic and episodic frames were explicitly translated for operationalization or merely cited in the literature review to ground the reader. It is not uncommon for scholars to present multiple definitions of frames in the literature reviews of framing studies, including research involving thematic and episodic frames and attribution of responsibility. If other framing definitions were included in the research, we distinguished between them in the following way: we coded for definitions presented in the literature (up to five), including the scholar(s) and the name of the frame(s). If another frame was operationalized along with thematic and episodic frames, it was also coded: first, we identified the frame(s) that was operationalized, and second, we determined whether the frame(s) was generic or issue-specific. We coded for the main findings of each study (up to four); the main findings had to be identified by the author(s) of the study.
Theory variables included the use of hypotheses, research questions, and descriptive results. We did not distinguish whether a study reported both hypotheses and research questions: if hypotheses were reported, we coded the study as hypotheses “present = 1” regardless of whether research questions were also present. We also coded whether antecedents or consequences of other frames and/or theories were present. The following levels were used to code antecedents: a) merely discussed (without data presented), b) interview data, c) factual data presented (survey, experiment), and d) content analysis (press releases, documents, government reports, or other content). We coded the same levels for the presence of consequences of other frames and/or theoretical data.
The first author and a second coder conducted the first full round of coding on a 10% random sample, yielding insufficient reliability (Hayes & Krippendorff, 2007). The codebook was carefully refined based on the results of the first round of coding. A second full round of coding on a 10% random sample with the revised codebook and the same coders resulted in sufficient reliabilities (see appendix B). The first author performed the coding on
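The reliability standard cited above (Hayes & Krippendorff, 2007) rests on Krippendorff’s alpha. A minimal sketch of the nominal-data version follows, assuming every unit is coded by all coders with no missing values and at least two categories observed; the example data are invented, not the chapter’s actual reliability sample.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data, no missing values.
    `units` is a list of tuples, one per coded unit, each holding the
    category assigned by each coder (two or more coders per unit)."""
    o = Counter()                          # coincidence matrix (ordered pairs)
    for values in units:
        m = len(values)
        for a, b in permutations(range(m), 2):
            o[(values[a], values[b])] += 1 / (m - 1)
    n_c = Counter()                        # marginal total per category
    for (a, _b), w in o.items():
        n_c[a] += w
    n = sum(n_c.values())                  # total pairable values
    d_o = sum(w for (a, b), w in o.items() if a != b)   # observed disagreement
    d_e = sum(n_c[a] * n_c[b]                           # expected disagreement
              for a, b in permutations(n_c, 2)) / (n - 1)
    return 1 - d_o / d_e

# Two hypothetical coders, eight units: agreement on six, disagreement on two.
units = [("thematic", "thematic")] * 3 + [("episodic", "episodic")] * 3 + \
        [("thematic", "episodic"), ("episodic", "thematic")]
print(round(krippendorff_alpha_nominal(units), 3))  # → 0.531
```

Values near 1 indicate agreement beyond chance; in the example, 75% raw agreement yields an alpha of only about .53, which is why a first round can come back "insufficient" despite seemingly high percent agreement.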