Greene and David (1984) took a similar stance, stating that ‘generalizing from multiple case studies – within the structure provided by a multiple case study design – has a sound basis in (inductive) logic’ (p. 82). More recently, Jensen and Rodgers (2001) also appear to agree, arguing that meta-analysis – the collection and reanalysis of sets of case studies addressing the same topic – may be used to cumulate what they refer to as ‘the intellectual gold’ of case study research (meta-analysis is discussed further in Chapter 7, where a range of existing meta-analyses of case studies is identified and discussed).
So the multiple case study design offers one obvious and acceptable strategy for generalising from case study findings. But what if you do not have the results of multiple case studies available, and it would be impractical for you to go beyond a single case study design (or just a limited number)? Is it still possible, then, to argue for generalisability?
Evers and Wu (2006) argue that it is possible to generalise from single cases, but suggest that this is not easy or straightforward:
Being able to generalise reasonably from a single case is a complex and difficult matter. But… the task is abetted by three important factors. First, cases possess considerably more structure than is commonly supposed… Second, researchers bring to a case much more knowledge than is often supposed… Finally, an ongoing trajectory of inquiry through time and changing circumstances makes it less likely that a stable match between patterns of researcher expectations and what is observed is sheer coincidence. (p. 524)
What they seem to be arguing here is that the researcher’s experience is a key factor: knowing how typical or not the case being studied is, having carried out other, perhaps similar, case studies before, and having an informed awareness of other relevant research. It would not, by comparison, be advisable for a young, inexperienced honours or postgraduate student, carrying out a case study for perhaps the first time, to seek to generalise from their findings.
Thomas (2011b) reminds us that generalisation is an issue throughout the social sciences, by no means confined to case study research. He reasons that ‘to argue that to seek generalizable knowledge, in whatever form – everyday or special – is to miss the point about what may be offered by certain kinds of inquiry, which is exemplary knowledge’ (p. 33, emphasis in original). We study particular cases for their interest and what we can learn from them. Whether these findings can be applied to other cases may be beyond the scope of the study, and is at least partly the business of other researchers to determine.
Mjøset (2006) suggests a further, pragmatist, stance towards generalisation, going beyond the natural/social science dichotomy, and uses a case study of the Israeli/Palestinian conflict as an example. This is, though, what might be called a critical case: understanding the Israeli/Palestinian conflict is of widespread interest – it directly affects millions of people, and many more indirectly – particularly if it helps to lead to some sort of solution to it (this case study is discussed in more detail in Chapter 7).
Ruzzene (2012) offers a further response to the dilemma, arguing that ‘the emphasis should be placed on the comparability of the study rather than on the typicality of the case’ (p. 99). This again suggests a kind of multiple case study approach, even if the case studies are carried out by different researchers (and may not have been carried out yet). In other words, one might study a school class or a small business, producing findings which others who were interested in schools or businesses could explore to assess their relevance.
What all of these authors and examples have in common is the realisation that there is no easy answer to the issue of generalisation. Yet it has to be faced, and addressed, every time a case study is carried out. Are you undertaking a case study to compare it to other, similar or related, case studies, whether carried out by you or others? Are you undertaking a typical, exemplary or indicative case study, the findings from which should be more broadly applicable? Or are you undertaking a case study which is of interest for its very particularity or extreme nature?
It is, of course, possible – and probably quite common, particularly among novice researchers – that the researcher does not (yet) know the answers to these questions. But the questions still need to be recognised and addressed as well as they can be (the practical issues involved are discussed further in the section on Sampling and Selection Issues in Chapter 8).
Validity and Reliability
The concepts of validity – is the way in which you are collecting your data appropriate for answering the questions you wish to answer? – and reliability – would another researcher collecting the same data in the same way produce much the same results? – are clearly related to that of generalisability. Each addresses aspects of how other researchers, viewing your research results, would judge their quality and usefulness.
Kazdin (1981), working in the context of clinical psychology, notes that ‘The case study has been discounted as a potential source of scientifically validated inferences, because threats to internal validity cannot be ruled out in the manner achieved in experimentation’ (p. 183). However, he then identifies a set of procedures which can, at least partly, overcome these threats:
Specific procedures that can be controlled by the clinical investigator can influence the strength of the case demonstration. First, the investigator can collect objective data in place of anecdotal report information. Clear measures are needed to attest to the fact that change has actually occurred. Second, client performance can be assessed on several occasions, perhaps before, during, and after treatment. The continuous assessment helps rule out important rival hypotheses related to testing, which a simple pre- and posttreatment assessment strategy does not accomplish. Third, the clinical investigator can accumulate cases that are treated and assessed in a similar fashion. Large groups are not necessarily needed but only the systematic accumulation of a number of clients. As the number and heterogeneity of clients increase and receive treatment at different points in time, history and maturation become less plausible as alternative rival hypotheses. (p. 190)
That Kazdin is working within a scientific framework is clear from his use of words like ‘objective’ and ‘fact’, and from his reliance on careful measurement (he is also clearly discussing quantitative case studies). His two other suggested strategies are similar to those advocated to enhance generalisation, and would also be helpful for qualitative and social researchers: the assessment of the case over time (the use of time series research designs in combination with case studies is discussed further in Chapter 6), and the accumulation of multiple case studies.
Riege (2003) considers which validity and reliability tests can most appropriately be used at each stage of case study research. He argues that:
The validity and reliability of case study research is a key issue… A high degree of validity and reliability provides not only confidence in the data collected but, most significantly, trust in the successful application and use of the results… The four design tests of construct validity, internal validity, external validity and reliability are commonly applied to the theoretical paradigm of positivism. Similarly, however, they can be used for the realism paradigm, which includes case study research… In addition to using the four ‘traditional’ design tests, the application of four ‘corresponding’ design tests is recommended to enhance validity and reliability, that is credibility, trustworthiness (transferability), confirmability and dependability. (p. 84)
Riege here brings in the notion of paradigms, which can be expressed more simply as our ways of thinking about the world, contrasting the positivist paradigm (the foundation of conventional science, which argues that there is a real world which we can measure and understand) with what he calls realism (which others would call post-positivist, the belief that, while there is a real world out there, and we may try to comprehend it, we accept that