As several authors have argued, the use of indices to assess only one component of a researcher's work, such as citations, is unfair (Kelly and Jennions 2006; Adler et al. 2008; Sanderson 2008). Measuring a researcher's scientific performance from bibliometric data alone is restrictive by default, and measuring citation performance with only a single one of the metrics described previously is more restrictive still. The idea that one or two indicators may be adequate for assessing research performance has been increasingly criticized. The h‐index and the h‐type indicators have strong intrinsic problems and limitations that make them unsuitable as “unique” indicators for this purpose (see Costas and Bordons 2008). Other problems, such as age bias, dependence on the research field (van Leeuwen 2008), and the influence of self‐citations on these indices, make their use questionable in principle (Schreiber 2008b). The h‐index has also been criticized on theoretical grounds (Waltman and van Eck 2012).
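
      For readers who want to check the mechanics behind this debate, here is a minimal Python sketch (not part of the original text; the function name and the sample citation counts are illustrative) of Hirsch's definition: the h‐index is the largest h such that the researcher has h papers with at least h citations each.

          def h_index(citations):
              # Sort per-paper citation counts, most-cited first (Hirsch 2005).
              ranked = sorted(citations, reverse=True)
              h = 0
              for rank, cites in enumerate(ranked, start=1):
                  if cites >= rank:
                      h = rank      # the paper at this rank still has >= rank citations
                  else:
                      break         # counts are sorted, so no later rank can qualify
              return h

          # Illustrative profile of ten papers:
          print(h_index([10, 9, 8, 8, 7, 7, 6, 5, 4, 3]))  # -> 6

      Note what the calculation ignores: who wrote the citing papers (self‐citation), how old the papers are, and what citation rates are normal in the field, which is exactly where the criticisms above take hold.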

      Unfortunately, even faculty members can fall into this trap. In Greece, where the research performance of faculty members was recently evaluated using their h‐indices, the discrepancies between h‐indices retrieved from different citation search engines were huge, raising serious doubts about the validity and usefulness of the evaluation method (Hellenic Quality Assurance and Accreditation Agency 2012).

      Of course, it is convenient to measure research activity and impact with a single metric, and perhaps this is why the h‐index has achieved almost universal acceptance within the scientific community and among academic administrators. Most of us want to see where we stand in relation to the rest of the scientific community in our research field; however, it is widely acknowledged that this is an oversimplification. The advent of the h‐index met with a primarily positive, but also partly negative, reception. Among the positive outcomes were the revival of research assessment practices and the development of valid tools to support them. On the negative side, the h‐index strongly influenced promotions and grants, leading many researchers to become preoccupied with it beyond its true value. This explains why extensive efforts are underway to come up with a better index.

      If we require indices or metrics to assess research performance reliably, we should look at more than one indicator. Efforts to define additional measures should continue, and these measures should give a more general picture of a researcher's activity by examining different aspects of academic contribution. For example, a measure is needed that reflects and rewards the overall lifetime achievement of a researcher and favors a broad citation output over one or two papers with a very large number of citations. A new index favoring researchers who consistently publish interesting papers would also be beneficial, compared with scientists who have published only a few highly cited articles; in other words, emphasis on consistency over a single burst of citations is preferable. Another useful development would be a measure that supports interdisciplinary comparisons (Malesios and Psarakis 2014). To date, there is no universal quantitative measure for evaluating a researcher's academic standing and impact. The initial proposal to use the h‐index for this purpose, although adopted with great enthusiasm, has proven in practice to suffer from several shortcomings.
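
      As a concrete illustration of why a single number can mislead, the following Python sketch (again illustrative only; the two citation profiles are invented) computes Egghe's g‐index, the largest g such that the g most‐cited papers together hold at least g² citations, alongside the h‐index, for two hypothetical researchers. The two indices rank them in opposite order, which is precisely the argument for consulting more than one indicator.

          def h_index(citations):
              # Largest h such that h papers have at least h citations each.
              ranked = sorted(citations, reverse=True)
              return max((rank for rank, cites in enumerate(ranked, 1) if cites >= rank),
                         default=0)

          def g_index(citations):
              # Largest g such that the top g papers together have >= g*g citations
              # (Egghe 2006b, 2006c); here capped at the number of published papers.
              ranked = sorted(citations, reverse=True)
              total, g = 0, 0
              for rank, cites in enumerate(ranked, start=1):
                  total += cites        # cumulative citations of the top `rank` papers
                  if total >= rank * rank:
                      g = rank
              return g

          # Invented profiles: A publishes steadily; B has two blockbuster papers.
          profile_a = [10, 9, 8, 8, 7, 7, 6, 5, 4, 3]
          profile_b = [100, 90, 2, 1, 1, 1, 1, 0, 0, 0]
          print(h_index(profile_a), g_index(profile_a))  # -> 6 7
          print(h_index(profile_b), g_index(profile_b))  # -> 2 10

      Under this simplified, capped definition, the g‐index rewards B's concentrated impact while the h‐index rewards A's consistency; neither alone settles which researcher performs better.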

      1 Abramo, G., D'Angelo, C.A., and Viel, F. (2013). The suitability of h and g indexes for measuring the research performance of institutions. Scientometrics 97 (3): 555–570.

      2 Acuna, D.E., Allesina, S., and Kording, K.P. (2012). Future impact: predicting scientific success. Nature 489: 201–202.

      3 Adler, R., Ewing, J., and Taylor, P. (2008). Citation statistics. Joint IMU/ICIAM/IMS Committee on Quantitative Assessment of Research. [WWW] Available at: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf [Accessed 12/14/2017]

      4 Alonso, S., Cabrerizo, F.J., Herrera‐Viedma, E. et al. (2010). hg‐index: a new index to characterize the scientific output of researchers based on the h‐ and g‐indices. Scientometrics 82: 391–400.

      5 Anderson, T.R., Hankin, R.K.S., and Killworth, P.D. (2008). Beyond the Durfee square: enhancing the h‐index to score total publication output. Scientometrics 76: 577–588.

      6 Bollen, J., Van de Sompel, H., Hagberg, A. et al. (2009). A principal component analysis of 39 scientific impact measures. PLoS One 4 (6): e6022.

      7 Bornmann, L., Mutz, R., and Daniel, H.‐D. (2008). Are there better indices for evaluation purposes than the h‐index? A comparison of nine different variants of the h‐index using data from biomedicine. Journal of the American Society for Information Science and Technology 59 (5): 830–837.

      8 Bornmann, L., Mutz, R., Hug, S.E. et al. (2011). A multilevel meta‐analysis of studies reporting correlations between the h‐index and 37 different h‐index variants. Journal of Informetrics 5 (3): 346–359.

      9 Costas, R. and Bordons, M. (2008). Is g‐index better than h‐index? An exploratory study at the individual level. Scientometrics 77 (2): 267–288.

      10 Costas, R. and Franssen, T. (2018). Reflections around ‘the cautionary use’ of the h‐index: response to Teixeira da Silva and Dobránszki. Scientometrics 115 (2): 1125–1130.

      11 Department for Business, Innovation and Skills (BIS) (2014). Triennial review of the research councils (BIS/14/746). [WWW] UK Government. Available at: https://www.gov.uk/government/publications/triennial-review-of-the-research-councils [Accessed 4/19/2017]

      12 Dodson, M.V. (2009). Citation analysis: maintenance of the h‐index and use of e‐index. Biochemical and Biophysical Research Communications 387 (4): 625–626.

      13 Egghe, L. (2006a). How to improve the h‐index. The Scientist 20 (3): 14.

      14 Egghe, L. (2006b). An improvement of the h‐index: the g‐index. ISSI Newsletter 2 (1): 8–9.

      15 Egghe, L. (2006c). Theory and practice of the g‐index. Scientometrics 69 (1): 131–152.

      16 Egghe, L. (2007). From h to g: the evolution of citation indices. Research Trends 1 (1). Available at: https://www.researchtrends.com/issue1-september-2007/from-h-to-g/

      17 Egghe, L. and Rousseau, R. (2008). An h‐index weighted by citation impact. Information Processing & Management 44: 770–780.

      18 Falagas, M.E., Pitsouni, E.I., Malietzis, G.A. et al. (2007). Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. The FASEB Journal 22 (2): 338–342.

      19 Franceschet, M. (2010). A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics 83 (1): 243–258.

      20 Hellenic Quality Assurance and Accreditation Agency (HQAA) (2012). Guidelines for the members of the external evaluation committee. Available at: http://www.hqaa.gr/data1/Guidelines%20for%20External%20Evaluation.pdf

      21 Hirsch, J.E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America 102 (46): 16569–16572.