Where do we currently stand in terms of single metrics? Is their standard application effective for evaluation purposes? The widespread use of bibliometric indicators is currently illustrated by the incorporation of the h‐index (and, in some instances, related indices) into bibliometric databases such as the WoS and Scopus (van Eck and Waltman 2008). In addition, the use of citation search engines by university administrators to determine the h‐indices of their faculty is increasingly common. For example, the h‐index is frequently used in departmental reports or in decisions on advancement to tenure (Dodson 2009), signifying that this indicator, at least, is here to stay. However, caution should be exercised, mainly because of discrepancies between the various search engines.
Unfortunately, even faculty members can fall into this trap. In Greece, where the research performance of faculty members was recently evaluated using their h‐indices, the discrepancies between h‐indices retrieved from different citation search engines were huge, raising serious doubts about the validity and usefulness of the evaluation method (Hellenic Quality Assurance and Accreditation Agency 2012).
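To make the discrepancy issue concrete, the short sketch below computes the h‐index directly from a list of per‐paper citation counts. The two citation lists are purely hypothetical and merely stand in for the same publication record as indexed by two databases with unequal coverage.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each (Hirsch 2005)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # this paper still supports a larger h
        else:
            break      # remaining papers are cited even less
    return h

# Hypothetical citation counts for the *same* ten papers, as they might be
# reported by two databases with different coverage (illustrative numbers only).
counts_database_a = [48, 30, 22, 15, 11, 9, 7, 5, 3, 1]
counts_database_b = [55, 34, 25, 18, 14, 12, 10, 9, 9, 6]

print(h_index(counts_database_a))  # 7
print(h_index(counts_database_b))  # 9
```

Even modest differences in the indexed citation counts shift the h‐index by two points in this toy example, which is exactly the kind of search‐engine discrepancy the Greek evaluation exposed.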
Of course, it is convenient to measure research activity and impact in terms of a single metric, and perhaps this is why the h‐index has achieved almost universal acceptance within the scientific community and among academic administrators. Most of us want to see where we stand in relation to the rest of the scientific community in our research field; however, it is widely acknowledged that this is an oversimplification. The advent of the h‐index met with a largely positive, but also partly negative, reception. Among the positive outcomes were the revival of research assessment practices and the development of valid tools to support them. On the negative side, the h‐index has strongly influenced promotions and grants, which has led many researchers to become preoccupied with the h‐index beyond its true value. This explains why extensive efforts are underway to come up with a better index.
If we require indices or metrics to reliably assess research performance, we should look at more than one indicator. Efforts toward defining additional measures should continue, and these measures should provide a more general picture of a researcher's activity by capturing different aspects of academic contribution. For example, a measure is needed that reflects and recognizes the overall lifetime achievements of a researcher and favors those with a broad citation output over others who have published only one or two papers with a large number of citations. A new index favoring researchers who consistently publish interesting papers would also be beneficial, in comparison with scientists who have published only a few highly cited articles; in other words, emphasis on consistency over a single burst of highly cited work is preferable (see the sketch below). Another useful development would be a measure that can be used for interdisciplinary comparisons (Malesios and Psarakis 2014). To date, there is no universal quantitative measure for evaluating a researcher's academic standing and impact. The initial proposal to use the h‐index for this purpose, although adopted with great enthusiasm, has proven in practice to suffer from several shortcomings.
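As a rough illustration of why looking at more than one indicator is informative, the sketch below contrasts the h‐index with Egghe's g‐index (the largest g such that the g most cited papers together have at least g² citations) on two hypothetical citation profiles: a consistent record and one dominated by two very highly cited papers. The numbers are invented purely for illustration.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(counts, 1) if c >= rank), default=0)

def g_index(citations):
    """Largest g such that the g most cited papers together have >= g**2 citations."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        total += cites
        if total >= rank ** 2:
            g = rank
    return g

# Hypothetical citation profiles (illustrative numbers only).
consistent = [15, 14, 13, 12, 11, 10, 10, 9, 9, 8]   # steady, broad output
burst      = [250, 180, 4, 3, 2, 1, 1, 0, 0, 0]      # two blockbuster papers

print(h_index(consistent), g_index(consistent))  # 9 10
print(h_index(burst), g_index(burst))            # 3 10
```

The consistent profile scores well on both indices, whereas the burst profile collapses under the h‐index but not under the g‐index, which rewards a handful of very highly cited papers; examining the two indicators together therefore says more about a record than either one alone.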
References
1 Abramo, G., D'Angelo, C.A., and Viel, F. (2013). The suitability of h and g indexes for measuring the research performance of institutions. Scientometrics 97 (3): 555–570.
2 Acuna, D.E., Allesina, S., and Kording, K.P. (2012). Future impact: predicting scientific success. Nature 489: 201–202.
3 Adler, R., Ewing, J., and Taylor, P. (2008). Citation statistics. Joint IMU/ICIAM/IMS Committee on Quantitative Assessment of Research. [WWW] Available at: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf [Accessed 12/14/2017]
4 Alonso, S., Cabrerizo, F.J., Herrera‐Viedma, E. et al. (2010). hg‐index: a new index to characterize the scientific output of researchers based on the h‐ and g‐indices. Scientometrics 82: 391–400.
5 Anderson, T.R., Hankin, R.K.S., and Killworth, P.D. (2008). Beyond the Durfee square: enhancing the h‐index to score total publication output. Scientometrics 76: 577–588.
6 Bollen, J., Van de Sompel, H., Hagberg, A. et al. (2009). A principal component analysis of 39 scientific impact measures. PLoS One 4 (6): e6022.
7 Bornmann, L., Mutz, R., and Daniel, H.‐D. (2008). Are there better indices for evaluation purposes than the h‐index? A comparison of nine different variants of the h‐index using data from biomedicine. Journal of the American Society for Information Science and Technology 59 (5): 830–837.
8 Bornmann, L., Mutz, R., Hug, S.E. et al. (2011). A multilevel meta‐analysis of studies reporting correlations between the h‐index and 37 different h‐index variants. Journal of Informetrics 5 (3): 346–359.
9 Costas, R. and Bordons, M. (2008). Is g‐index better than h‐index? An exploratory study at the individual level. Scientometrics 77 (2): 267–288.
10 Costas, R. and Franssen, T. (2018). Reflections around ‘the cautionary use’ of the h‐index: response to Teixeira da Silva and Dobránszki. Scientometrics 115 (2): 1125–1130.
11 Department for Business, Innovation and Skills (BIS) (2014). Triennial review of the research councils (BIS/14/746). [WWW] UK Government. Available at: https://www.gov.uk/government/publications/triennial-review-of-the-research-councils [Accessed 4/19/2017]
12 Dodson, M.V. (2009). Citation analysis: maintenance of the h‐index and use of e‐index. Biochemical and Biophysical Research Communications 387 (4): 625–626.
13 Egghe, L. (2006a). How to improve the h‐index. The Scientist 20 (3): 14.
14 Egghe, L. (2006b). An improvement of the h‐index: the g‐index. ISSI Newsletter 2 (1): 8–9.
15 Egghe, L. (2006c). Theory and practice of the g‐index. Scientometrics 69 (1): 131–152.
16 Egghe, L. (2007). From h to g: the evolution of citation indices. Research Trends 1 (1). Available at: https://www.researchtrends.com/issue1-september-2007/from-h-to-g/
17 Egghe, L. and Rousseau, R. (2008). An h‐index weighted by citation impact. Information Processing & Management 44: 770–780.
18 Falagas, M.E., Pitsouni, E.I., Malietzis, G.A. et al. (2007). Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. The FASEB Journal 22 (2): 338–342.
19 Franceschet, M. (2010). A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics 83 (1): 243–258.
20 Hellenic Quality Assurance and Accreditation Agency (HQAA) (2012). Guidelines for the members of the external evaluation committee. [WWW] Available at: http://www.hqaa.gr/data1/Guidelines%20for%20External%20Evaluation.pdf
21 Hirsch, J.E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America 102 (46): 16569–16572.