
Hcéres: On the proper usage of research evaluation criteria

The San Francisco Declaration on Research Assessment (DORA), published in 2012, and the Leiden Manifesto of 2015 both set out to improve research evaluation practices, in particular by drawing attention to the misuse of certain bibliometric indicators. In support of the recommendations made in these declarations, Hcéres has decided to issue a statement explaining how their principles are put into action in its work to evaluate research institutions.

At a time when the scientific community has made clear its determination to adopt practices allowing more qualitative evaluation of research, Hcéres has reiterated its support for the principles enshrined in DORA and the Leiden Manifesto. It implements these principles in its evaluations of research institutions, based on:

  • Peer review in accordance with international standards, recognising the importance of transparency, collegiality and equality of treatment;
  • Evaluation which includes a right to reply, allowing those being evaluated to respond to the findings concerning them;
  • Multi-criteria evaluation which avoids relying on a handful of indicators in favour of a more comprehensive understanding of the results obtained;
  • Evaluation with a clear qualitative dimension, grounded in self-evaluation;
  • An evaluation process which reconciles the comparability of research activities and results with a clear awareness of their disciplinary specificities, using the Guides to research output and activities;
  • A dynamic evaluation process which adapts to the ecosystem in question and includes regular revision of the reference systems used.

DORA and the Leiden Manifesto draw particular attention to two indicators which have been the subject of serious criticism from the scientometric community: the journal impact factor (DORA) and the h-index (Leiden Manifesto).
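For context, both indicators have deliberately simple definitions, which is part of what makes them so easy to over-apply. The sketch below (Python, with invented citation data) implements the standard textbook definitions; it is illustrative only and does not reflect any tool used by Hcéres.

```python
# Illustrative definitions of the two criticised indicators.
# All figures below are invented for demonstration.

def h_index(citations: list[int]) -> int:
    """Largest h such that h publications each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def journal_impact_factor(citations: int, citable_items: int) -> float:
    """Citations received in a given year to items published in the two
    preceding years, divided by the number of citable items from those years."""
    return citations / citable_items

# Two very different publication profiles can share the same h-index,
# which is one reason purely indicator-based judgements can mislead.
print(h_index([100, 90, 5, 4, 1]))  # -> 4
print(h_index([4, 4, 4, 4, 1]))     # -> 4
print(journal_impact_factor(citations=300, citable_items=120))  # -> 2.5
```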

Hcéres therefore issues two recommendations to the experts it appoints, concerning the use of bibliometric indicators in the evaluation of research institutions and their scientific output:

  • to give priority to the significance and reach of the results achieved, without relying exclusively on bibliometric indicators, which should be treated as working tools to be used in conjunction with qualitative evaluation;
  • if publication impact indicators are used, to apply them with a clear understanding of their inherent limitations.

Furthermore, the Hcéres Science and Technology Observatory publishes bibliometric indicator reports, which are used in the evaluation of territorial coordination bodies and research bodies. These reports are shared with the subjects of the evaluations and include a methodological appendix. Indicators are calculated on the basis of published articles and then normalised to account for disciplinary specificities, as illustrated in the sketch below. Some indicators are also used to produce summary reports on research evaluations and discipline-by-discipline national summaries.
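To make the idea of disciplinary normalisation concrete, here is a minimal sketch of one common approach, in which each article's citation count is divided by the mean citation count of its field. The function name and data are hypothetical; the Observatory's actual methodology is described in the methodological appendices mentioned above.

```python
# Hypothetical sketch of field normalisation; names and data are invented.
from collections import defaultdict

def field_normalised_scores(papers: list[tuple[str, int]]) -> list[tuple[str, float]]:
    """For each (field, citations) pair, divide the citation count by the
    mean citation count of its field, so that disciplines with different
    citation cultures can be compared on a common scale."""
    totals: dict[str, int] = defaultdict(int)
    counts: dict[str, int] = defaultdict(int)
    for field, cites in papers:
        totals[field] += cites
        counts[field] += 1
    means = {f: totals[f] / counts[f] for f in totals}
    return [(field, cites / means[field]) for field, cites in papers]

# A maths article with 10 citations scores above its field average (~1.43),
# while a biomedical article with 30 citations scores below its own (0.5).
sample = [("mathematics", 10), ("mathematics", 4),
          ("biomedicine", 30), ("biomedicine", 90)]
print(field_normalised_scores(sample))
```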

Further reading...

On the Hcéres website

Compiled by Hcéres in collaboration with the relevant academic communities, the Guides to research output and activities define the scope of the factors to be evaluated and, where relevant, establish a hierarchy between them and the quality indices used to evaluate them.
