The understanding that it was possible to perform linguistic analysis and that it might be useful in forensic contexts eventually found its way into a number of fictional texts. In the short story “A Scandal in Bohemia” (Doyle, 1891), the famous detective Sherlock Holmes performs a form of authorship profiling on an anonymous note addressed to him. Holmes states that the author of the note must be German on the basis of grammatical constructions in the note and, in particular, the placement of part of the verb in the sentence: “This account of you we have from all quarters received” (Doyle, 1891, p. 63).
Holmes performs a basic contrastive analysis between various languages, concluding that “a Frenchman or Russian could not have written that. It is the German who is so uncourteous to his verbs” (Doyle, 1891, p. 63; see also Boucher & Perkins, 2020). The significance here is that, by Arthur Conan Doyle’s time, there was already a general understanding that it was possible to identify an anonymous author’s native language from the way they used a second language.
In a much later suspense-crime novel, The Devil’s Teardrop (1999), Jeffery Deaver demonstrates the importance of understanding cross-linguistic influences on written language. His fictional forensic linguist and document examiner, Parker Kincaid, observes that the linguistic features in a threat letter did not match the patterns of any of the languages a genuine non-native English speaker might be expected to produce. This led Kincaid to conclude that the author was only pretending to be foreign.
These imaginary scenes from crime fiction are precursors to many elements of present-day authorship profiling: identifying sociolinguistic and dialectal features that allow analysts to make inferences about the linguistic persona behind a text. We see real-life applications of authorship profiling approaches within this book in Chapter 3 on the Cyprus faked confession, in Chapter 4 on the Spanish fraudster Rodrigo Noguera Iglesias, in Chapter 5 on the darkweb pedophile, and in Chapter 9 on understanding ISIS communications.
Much has changed since those early attempts to explain how language worked or did not work in certain contexts. In the last 40 years, forensic linguistics has coalesced into an applied field of its own, taking knowledge from different areas of linguistics and applying it to forensic contexts such as language-based issues in the legal system.
Given that linguistic analysis encompasses every aspect of language, casework is very varied. Forensic linguistics extends beyond linguistics alone into multidisciplinary approaches, whether through teamwork or through individual practitioners with experience in other areas. Indeed, linguistic analysis has been applied to research and forensic contexts in areas such as psychology, as can be seen in Chapter 10 on the language of suicide.
Casework is also rarely as straightforward, and conclusions never as certain, as portrayed on television and in film; they are certainly never expressed in percentages. This exaggeration of what forensic linguistic analysis can achieve creates a CSI effect that real-life analysts cannot live up to. Chapters 4 and 8 demonstrate how client instructions and linguistic questions evolve as cases progress, especially when forensic linguists become involved at the outset or in the early stages of an investigation. Features in the data may suggest to the analyst that the linguistic question needs to be reformulated; for example, what the client believes is an authorship question may instead be a question of context falsification.
DOING FORENSIC LINGUISTICS
What constitutes being a “forensic linguist”? It is not a formally recognized profession, nor is it statutorily regulated (Clarke & Kredens, 2018). One does not have to be a forensic linguist to “do” forensic linguistic analysis. Being a forensic linguist may, perhaps, be best described as an identity, one taken on by professional practitioners when undertaking forensic casework involving linguistic analysis. That identity, however, depends on practitioners grounding their analysis of language in linguistic theory. An in-depth understanding of how language works and familiarity with the analytical tools available in the forensic linguistic toolkit are essential to competently undertaking analysis. Without grounding language analysis in linguistic theory, and without an appreciation of why linguistic patterns occur and of the external influences that give rise to variation in those patterns, conclusions drawn from the identification of linguistic patterns are meaningless (Nini, 2018).
There is a disconnect between the work of forensic linguistic researchers and that of practitioners: research often assumes idealized case situations, whereas casework practitioners have to work with nonidealized data. Furthermore, research tends to focus tightly on a narrow aspect of forensic linguistic analysis, whereas casework can be messy in both approach and data.
In real-life cases, data can be very varied and often very limited. Donlan and Nini (Chapter 3) worked with an 85-word statement; Picornell (Chapter 8) worked with emails 57 to 350 words long, totaling some 1,650 words for each author. At the other extreme, Queralt’s data (Chapter 4) covered more than 10 years and comprised over 300,000 words of emails and chat logs. Comparison data can also be limited, so much so that practitioners end up comparing material from different genres, for example, letters with diary entries or emails. Grant and Grieve’s comparison data (Chapter 2) of 28,000 words in the same genre for their known author is very rare indeed.
GOOD PRACTICE VERSUS BEST PRACTICE
This book provides examples of forensic linguistic casework employing current good practice. It must be stressed that no claim is being made that the processes and methodologies applied constitute best practice: each case is unique, and the field continues to evolve as research develops and technology advances. The evolution of language, as well as its complexity and the incredible diversity of contexts in which it is used, also precludes a one-size-fits-all approach. The variety of work undertaken is very broad, both in practice and in potential; the variation in what is encountered in forensic linguistic casework, and in what could be encountered, makes it impossible to specify best practice across cases. The best methodological approach for a specific case depends on the type of data available.
We can, however, talk about best practice at a higher level, where what matters is that the analyst’s approach is measured, scientifically rigorous, and validated. At the casework consultation level, this requires that analysts be aware of the limits of their analysis (Clarke & Kredens, 2018), stay within the bounds of their own expertise, recognize the dangers of confirmation bias (as stressed in Chapters 2 and 9), and ground their conclusions in linguistic explanations. It also involves managing client expectations about what linguistic analysis can realistically achieve and providing them with the reasoning behind any conclusions or opinions. This is important in helping investigators, law enforcement, and lawyers to assess the strengths and weaknesses of forensic linguistic analysis as an evidential resource. It becomes even more important when forensic linguists act as expert witnesses. Tensions exist between lawyers, who aim to win their case, and analysts, who should act as objective and independent experts. In this scenario, the forensic linguist’s overriding duty is to the trier of fact, the court, assisting it in reaching informed decisions, irrespective of who instructs and pays them.
For