The enormous impact of the h‐index on scientometric analysis (the study of measuring and analyzing science) is illustrated by Prathap (2010), who argues that the history of bibliometrics can be divided into a pre‐Hirsch and a post‐Hirsch period. Between 2005 and 2010, approximately 200 papers were published on the subject (Norris and Oppenheim 2010). Since then, applications of the h‐index have extended well beyond the bibliometric field, ranging from assessing the relative impact of various human diseases and pathogens (McIntyre et al. 2011) to evaluating top content creators on YouTube (Hovden 2013). There is even a website where you can obtain a regression‐based prediction of your own h‐index between 1 and 10 years into the future (see Acuna et al. 2012).7 Such predictive models, however, have been the subject of criticism (Penner et al. 2013).8
9.2 The h‐Index
The h‐index (a.k.a. the Hirsch index or the Hirsch number) was originally proposed by Hirsch as a tool for determining the relative academic productivity of theoretical physicists. Since its inception, the index has attracted the attention of the scientific community as a means of assessing a researcher's scientific performance on the basis of bibliometric data. Before the h‐index came into widespread use, individual scientific performance was assessed using unidimensional metrics, such as the number of articles published, the number of citations received by those articles, or the average number of citations per article. The h‐index has a bidimensional nature, simultaneously taking into account both the quality and quantity of scientific output, because it is based on an aggregate set of the researcher's most cited papers together with the citations those publications have received. The h‐index can also be applied to quantify the productivity and impact of a group of researchers belonging to a department, university, or country.
Among the advantages of the h‐index are its simplicity and ease of calculation. It aims to reflect high‐quality work, as it combines citation impact (citations received by the papers) with publication activity (number of papers published). The h‐index is not influenced by a single successful paper that has received many citations, nor is it sensitive to less frequently cited publications. Furthermore, increasing the number of publications alone will not necessarily raise the h‐index. In the definition of Hirsch (2005), “A scientist has index h if h of his N papers have at least h citations each, and the other (N − h) papers have at most h citations each.”
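Hirsch's definition translates directly into a short computation: sort the citation counts in decreasing order and find the last rank h at which the h‐th paper still has at least h citations. A minimal Python sketch (illustrative only; the function name and sample citation counts are our own):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four of them have
# at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4

# Adding a rarely cited paper leaves the index unchanged,
# illustrating the insensitivity to weakly cited work noted above.
print(h_index([10, 8, 5, 4, 3, 1]))  # -> 4
```

Note that the loop can stop at the first failing rank because, with the counts sorted in decreasing order, the condition can never hold again once it fails.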
9.3 Criticisms of the h‐Index
Despite the popularity of the h‐index, it has drawn some criticism, and an enormous number of modifications and extensions of the h‐index have since appeared (Costas and Franssen 2018; Meho 2007; Schreiber 2007; Vinkler 2007; Adler et al. 2008; Schreiber et al. 2012; Waltman and van Eck 2012). The h‐index is not as objective as the research community would like it to be. By definition, it is biased in favor of mature researchers over younger ones. A mature researcher with moderate research impact is expected to have a higher h‐index than a young researcher at the beginning of his or her career, even if the latter eventually develops into a researcher with greater impact.
There are a number of other situations mentioned in the literature where the h‐index may provide misleading information about a researcher's impact and productivity. For example, the lack of sensitivity of the h‐index to the excess citations of the h‐core papers (the set of papers whose citations contribute toward h‐index) is a frequently noted disadvantage (Egghe 2006a,b,c; Kosmulski 2007). The h‐index does not take into account important factors that differentiate the ways research activity develops and is transferred, such as the distinction between research fields and specialties (van Leeuwen 2008). For example, an h‐index of 20 for an applied physicist would be a fair score, whereas the same figure would be wishful thinking for a theoretical mathematician.
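The insensitivity to excess citations within the h‐core can be made concrete with a small numerical comparison (a hypothetical example, not from the original text): two researchers with very different citation totals receive the same h‐index.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    # Counting works because, once cites < rank, it stays false.
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

researcher_a = [10, 10, 10]    # 30 citations in total
researcher_b = [100, 50, 10]   # 160 citations in total

# Both h-cores contain three papers, so both researchers score h = 3,
# even though researcher B's papers are cited far more heavily.
print(h_index(researcher_a), h_index(researcher_b))  # -> 3 3
```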
Abramo et al. (2013) reveal yet another example of the problematic use of the h‐index: measuring the research performance of institutions. The most profound argument against using the h‐index for ranking larger bodies (such as institutions, departments, etc.) is the influence of faculty size on the computed value. Because organizations are composed of greatly varying numbers of faculty and research staff, the h‐index value is significantly affected by group size. Thus, various modifications and extensions of the h‐index began appearing in the literature almost immediately after its introduction.
9.4 Modifications and Extensions of the h‐Index
In an effort to address these shortcomings, several modifications and extensions of the h‐index have been proposed. At least 37 h‐index variants (a.k.a. h‐type indicators or h‐index related indices) are found in the literature (see Panaretos and Malesios 2009; Bornmann et al. 2011; Schreiber et al. 2012; Zhang 2013). Of course, various authorities favor certain indicators over others. Each of these indicators is intended to address one or more of the limitations of the original h‐index. For example, some h‐index variants are no longer robust to the number of excess citations of the highly cited articles in the h‐core (Jin 2006), meaning that excess citations can actually have an effect on the ranking. Other h‐related indices weight a paper's contribution toward the index by its number of authors (Schreiber 2008a). In a multilevel meta‐analysis, Bornmann et al. (2011) noted that “some h‐index variants have a relatively low correlation with the h‐index” and “can make a non‐redundant contribution to the h‐index.” Chief among these variants are the modified impact index (Sypsa and Hatzakis 2009) and the m‐index (Bornmann et al. 2008).9
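One concrete illustration of a variant that does respond to excess citations is Jin's A‐index, which averages the citations of the h‐core papers. The sketch below is our own reading of that idea and reuses the hypothetical citation lists from earlier; it is not code from the original text.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def a_index(citations):
    """Average number of citations over the h-core papers (Jin 2006)."""
    ranked = sorted(citations, reverse=True)
    h = h_index(citations)
    return sum(ranked[:h]) / h if h else 0.0

# Same h-index (3) but different A-index values: excess citations
# inside the h-core now influence the ranking.
print(a_index([10, 10, 10]))   # -> 10.0
print(a_index([100, 50, 10]))  # roughly 53.3
```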
Table 9.1 describes some of the popular h‐type indicators. Among the vast literature on h‐index variants, we single out the g‐index, the A‐index, the R‐index, the hw‐index, and the hm‐index. Alonso et al. (2010) proposed the use of the geometric mean of the h‐ and g‐indices, which they called the hg‐index, as a remedy to the high sensitivity of the g‐index to single highly cited papers. Although the h‐index variants fix some of the problems of the h‐index (e.g. its robustness to the number of citations of the h‐core's highly cited articles), there are situations where the features of the new index itself constitute a drawback. For example, some h‐index variants are influenced by the presence of one highly cited paper (Alonso et al. 2010; Costas and Bordons 2008). The g‐index in particular is limited by its extreme sensitivity to highly cited papers in a scientist's portfolio (Costas and Bordons 2008).
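Under the standard definition of Egghe's g‐index (the highest rank g such that the g most cited papers together have at least g² citations), the hg‐index of Alonso et al. (2010) is simply the geometric mean of h and g. A minimal sketch, assuming citation counts are given as a plain list; the function names are our own:

```python
import math

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most cited papers have >= g**2 citations in total."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def hg_index(citations):
    """Geometric mean of h and g (Alonso et al. 2010)."""
    return math.sqrt(h_index(citations) * g_index(citations))

cites = [10, 5, 3, 1]
print(h_index(cites), g_index(cites))  # -> 3 4
print(round(hg_index(cites), 2))       # -> 3.46
```

The example hints at why the hg‐index tempers the g‐index's sensitivity: a single heavily cited paper can inflate g substantially, but h changes slowly, so their geometric mean moves less than g alone.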
Table 9.1 List of h‐type indices and their descriptions.
Indicators | Definition/significance | References
w‐index | The highest number w |