Science has always been based on sharing: sharing methodologies, sharing results, sharing data, sharing theories and, to some extent, sharing projects. The speed at which knowledge progresses has always been correlated with mutualisation. International cooperation between researchers and teams has always been an efficient accelerator of knowledge in all scholarly fields.
Researchers may be tempted to avoid sharing for many reasons, mostly due to the competitive pressure put on them by assessment procedures. The main consequences of this are summarised in the famous slogan “publish or perish” and are often considered unavoidable by those who still believe that science should only be driven by competition. This leads to frequent collateral damage in the form of over-publication, fragmentation and even, occasionally, fraud.
The digital era opened a wide range of opportunities for research methods and, obviously, for the dissemination of research results, making knowledge more accessible. However, digitalisation reinforced the myth of objectivity in numbers, reducing research quality to a few quantitative indicators, which naturally results in false assumptions. Surveys suggest that many juries and commissions still evaluate researchers on the basis of the sum of the ‘journal impact factors’ attributed to each of their published articles. This simplistic and misleading approach must be reversed to ensure that assessment systems reflect the qualities that Open Science requires from modern researchers.
Evaluation must remain an independent and unconstrained mechanism, but it has to be rigorous and constantly aim to achieve a clear objective: advancing scholarly research. At the same time, each individual’s merit and his or her role in collective activity always deserve recognition.
If openness is to become the rule, incentives must be implemented to reward all players in accordance with their contribution. Multi-criteria evaluation must thus prevail, with each criterion carrying a different weight depending on the research field and the nature of the assessment (individual, team or project). In all cases, “proxy” assessment tools like the journal impact factor should now be banned as a direct measure of research quality. Commitment to the San Francisco Declaration on Research Assessment and to the Leiden Manifesto must be encouraged.
The European University Association (EUA) has long been at the forefront of the transition to Open Science in Europe. Since 2014, membership consultations have gathered information about European universities’ Open Access experiences, providing the basis for EUA actions and strengthening the voice of universities in European policymaking. The EUA Expert Group on Science 2.0/Open Science has been guiding these actions since it was established in 2016.
Longitudinal analysis of EUA membership consultations shows limited progress on Open Access to research publications and data, while persistent challenges like research assessment remain unresolved. Indeed, current research assessment practices do not incentivise or reward researchers for making research outcomes openly available.
The Expert Group and EUA Secretariat developed the EUA Roadmap on Research Assessment in the Transition to Open Science and launched an Expert Subgroup on Research Assessment in 2018. Going forward, EUA’s priorities in this field will be to gather and share information via membership consultations, to initiate dialogue between key actors by organising events and to formulate good practice and policy recommendations.
Professor Bernard Rentier
Chair of the EUA Expert Subgroup on Research Assessment, Vice-President of the Belgian Federal Council for Science Policy (FRWB – CFPS) and former Rector of the University of Liège, Belgium
Professor Martine Rahier
Vice-President of EUA and former Rector of the University of Neuchâtel, Switzerland
© European University Association