Riege also introduces the notion of different forms or measures of validity, identifying three types (other authors identify more or different types, and/or give them different names):
construct (whether the constructs which are being used to measure concepts of interest are appropriate)
internal (the quality of the explanation of the phenomena examined)
external (whether the findings can be extrapolated beyond the case studied; the equivalent of generalisation).
Most interestingly, however, he introduces four alternative, or parallel, ways of judging the quality of a piece of case study research: credibility, trustworthiness (transferability), confirmability and dependability (see also Lee, Mishna and Brennenstuhl 2010). These also have the benefit of being phrased in more common-sense language.
Such alternative criteria for judging the quality or worth of research have been taken up quite widely by qualitative researchers. Box 3.3 gives four recent formulations, showing the alternative terms used by – or which may be applied to – positivist/post-positivist, interpretivist/constructivist, and quantitative or qualitative forms of research. There are clearly many overlaps between these formulations.
Box 3.3 Alternative Criteria for Judging the Quality of Research
Denzin and Lincoln (2005b, p. 24)
positivist/post-positivist paradigms – internal and external validity
constructivist paradigm – trustworthiness, credibility, transferability, confirmability
Guba and Lincoln (2005, p. 196)
positivism/post-positivism – conventional benchmarks of ‘rigor’: internal and external validity, reliability and objectivity
constructivism – trustworthiness and authenticity, including catalyst for action
Farquhar (2012, pp. 100–110)
classical approaches – construct validity, internal validity, reliability, generalizability
interpretivist views – credibility, transferability, dependability, confirmability
an ethnographic contribution – authenticity, plausibility, criticality
Denscombe (2014, pp. 297–300)
quantitative research – validity, reliability, generalizability, objectivity
qualitative research – credibility, dependability, transferability, confirmability
Taking Denscombe’s formulation as an example, credibility is ‘the extent to which qualitative researchers can demonstrate that their data are accurate and appropriate’ (p. 297), perhaps established through the use of techniques like respondent validation (asking your respondents to comment on and confirm your findings), grounded data (provided through extensive fieldwork) and triangulation. Dependability involves the researcher demonstrating that ‘their research reflects procedures and decisions that other researchers can “see” and evaluate in terms of how far they constitute reputable procedures and reasonable decisions’ (p. 298, emphasis in original).
Transferability has to do with the researcher supplying ‘information enabling others to infer the relevance and applicability of the findings (to other people, settings, case studies, organizations, etc.)’ (p. 299, emphasis in original). And confirmability involves recognising the role of the self in qualitative research and keeping an open mind, by, for example, not neglecting data that do not fit the preferred analysis and checking rival explanations.
Interestingly, Farquhar also brings in an ethnographic contribution, which she derives from Golden-Biddle and Locke (1993). Their concern was with what makes ethnographic writing convincing (or not), and they identified three elements of convincingness: authenticity, plausibility and criticality. These elements could, of course, be seen as analogues for credibility, dependability and transferability.
There are, then, other languages available to case study researchers – particularly, perhaps, those approaching their case studies from a qualitative perspective – with which to evaluate and justify the quality of their research and findings. Most researchers, though, have sought to remain true to the older, more conventional ideas, derived from quantitative/positivist research, of validity and reliability when assessing the results of case study (and other forms of) research.
Thus, Gibbert, Ruigrok and Wicki (2008) offer a meta-analysis of 159 articles based on case studies published during the period 1995–2000 in ten management journals, focusing on their methodological sophistication. They conclude that researchers have placed too much emphasis on external validity and need to pay more attention to internal and construct validity.
Diefenbach (2009), in an article pejoratively titled ‘Are Case Studies More Than Sophisticated Storytelling?’, identifies 16 criticisms of case study research, particularly when it is based on interviews. These criticisms relate to all aspects of research design, data collection and analysis, but focus in particular on issues of validity and reliability. He concludes that ‘many qualitative case studies either do not go far beyond a mere description of particular aspects or the generalisations provided are not based on a very sound methodological basis’ (p. 892).
One of the strongest contemporary advocates of case study, Yin (2013), offers rather more hope in this respect. He discusses a range of approaches that have been taken to addressing validity and generalisation in case study evaluations: for validity, alternative explanations, triangulation and logic models (which represent ‘the key steps or events within an intervention and then between the intervention and its outcomes’, p. 324); for generalisation, analytic generalisation and theory. In the particular context of case study evaluations, he recommends paying more attention to the questions posed for the case study, being clearer about what it is that makes the case study complex, and focusing carefully on the methods used.
As with generalisation, then, there is a need for case study researchers to be aware of, and to address, issues of validity and reliability posed by their research. You may choose to do this in a conventional positivist/post-positivist fashion, using the language of construct, external and internal validity and reliability. You may choose to locate your case study in a constructivist/interpretivist paradigm, and use the language of trustworthiness, credibility, transferability and confirmability. Or you can adopt the procedures suggested by other case study researchers, such as Yin.
Other Issues
Other authors have raised somewhat different issues regarding the perceived weaknesses of case study, though they could also be seen as the same issues approached in different ways, or concerns faced by specialised forms of case study.
Mahoney (2000), a political scientist, focuses on the issue of causal inference, i.e. how we infer what is causing something to happen. He discusses three strategies of causal inference in what he calls small-n analysis (i.e. studies of small numbers of cases):
nominal comparison in cross-case analysis (which ‘entails the use of categories that are mutually exclusive and collectively exhaustive’, p. 390)
ordinal comparison in cross-case analysis (which ‘entails rank ordering cases into three or more categories based on the degree to which a given phenomenon is present’, p. 399)
within-case analysis.
The third of these strategies – which, unlike the first two, can be applied