Pitfalls and opportunities in open science policy evaluation

Akhil Bansal, Enlli Lewis

Open access (OA) is a central pillar of open science. By making digital information freely accessible to everyone, OA aims to maximize the benefits of scientific advancement in an equitable way. However, open science goes beyond OA. Evaluating the success of open science policies and their contribution to reducing health and access inequalities is critical, but it requires meaningful evaluation frameworks. In this blog post, we discuss the strengths and limitations of commonly used evaluation metrics for OA publishing policies in the broader context of open science, and consider their health equity implications for the global research community.

Scientific research can have profound and far-reaching benefits for the economy, the environment, public policy, health and quality of life.

Open science aims to maximize the benefits of scientific research by promoting science that is more accessible, inclusive and transparent.1 In essence, its goal is to ensure everyone has a basic right to share in scientific advancement and its benefits (Universal Declaration of Human Rights [Article 27.1]).2 Within this context, open access (OA) publishing policies aim to make research more widely and freely available to all.

A range of metrics exists for evaluating the success of OA publishing policies, but they tend to evaluate success within a specific domain or context. Critical appraisal of current evaluation frameworks is required to avoid misinterpreting the data and overstating OA policy successes. Meaningful open science metrics should help to identify opportunities to close (not perpetuate) global health and access inequalities.

Current metrics and their limitations

Unidimensional metrics of OA success

The United Nations Educational, Scientific and Cultural Organization defines OA as “free access to information and unrestricted use of electronic resources for everyone. Any kind of digital content can be OA, from texts and data to software, audio, video, and multi-media”.3 OA can apply to non-scholarly content (such as music, films and novels), but in the context of scientific research, the majority of OA discussions focus on the removal of paywall barriers for peer-reviewed journal publications. It is relevant to note that the voices of important groups of knowledge consumers and implementers (e.g. clinicians, policymakers) are under-represented in these discussions.

OA policy success (from the point of view of a company, university or country) is often evaluated as the number (or proportion) of articles published OA by that entity. However, these metrics do not account for how research is discussed and amplified after publication.
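To make this kind of count-based metric concrete, the sketch below computes the absolute OA count and the OA proportion for an entity's publications. It is a minimal illustration only: the record structure and `is_oa` flag are invented for this example, and a real analysis would draw records from a bibliographic source rather than a hand-written list.

```python
# Minimal sketch: counting OA output for an entity from hypothetical
# publication records. The fields below are illustrative only; a real
# analysis would pull records from a bibliographic database.
from dataclasses import dataclass

@dataclass
class Publication:
    title: str
    year: int
    is_oa: bool  # hypothetical flag: True if freely readable

def oa_summary(pubs: list[Publication]) -> dict:
    """Return the absolute OA count and the OA proportion."""
    total = len(pubs)
    oa_count = sum(p.is_oa for p in pubs)
    return {
        "total": total,
        "oa_count": oa_count,
        "oa_proportion": oa_count / total if total else 0.0,
    }

records = [
    Publication("Trial A", 2022, True),
    Publication("Cohort B", 2022, False),
    Publication("Review C", 2023, True),
]
print(oa_summary(records))  # {'total': 3, 'oa_count': 2, 'oa_proportion': 0.666...}
```

As the blog post notes, a summary like this says nothing about what happens to the research after publication, which is where the multidimensional frameworks below come in.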

Multidimensional frameworks for evaluating publication impact

Alternative metrics, such as Altmetric and PlumX, are more nuanced evaluation frameworks that assess the impact of publications using more multidimensional data than traditional article-level impact metrics. Such metrics combine citation data with measures of usage and research engagement from online channels. Their broader evaluation frameworks offer insights into the extent to which OA publishing contributes to the open science goal of making scientific discourse more equal and democratic as well as open and transparent.

Examples of the data components included in these evaluation frameworks are: social media posts (and reposts) about the research, press coverage, and saves to online reference manager tools.
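As an illustrative sketch of how such signals might be combined into a single composite attention score, consider the example below. The weights are invented purely for illustration; providers such as Altmetric and PlumX use their own, more elaborate and partly proprietary, weighting schemes.

```python
# Illustrative sketch of a composite "attention" score built from the
# kinds of signals listed above. The weights are invented for this
# example and do not reflect any real provider's scheme.
ILLUSTRATIVE_WEIGHTS = {
    "social_posts": 1.0,    # posts and reposts mentioning the article
    "press_items": 8.0,     # news and press coverage
    "reference_saves": 0.5, # saves to online reference managers
}

def attention_score(signals: dict[str, int]) -> float:
    """Weighted sum of engagement signals for one article."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(k, 0.0) * v for k, v in signals.items())

article = {"social_posts": 40, "press_items": 2, "reference_saves": 120}
print(attention_score(article))  # 40*1.0 + 2*8.0 + 120*0.5 = 116.0
```

Even in this toy form, the design question is visible: the choice of weights encodes a judgment about which channels matter, which is itself a source of the biases discussed below.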

Open science evaluation beyond publications

While these measures enrich the evaluation of research impact and can be used to evaluate the relative impact of OA and non-OA published research, they have limitations when used to measure the impact of open science policies as a whole (Table 1). A more inclusive approach to relevant knowledge producers and media channels is needed.

Table 1. Potential limitations of open science policy evaluation metrics.

Oversight of non-digital modes of communication: Scientific information is communicated through a number of channels and media, not only digital ones. Examples include bilateral and personal discussions, journal clubs and conferences. Usage of these channels varies across economic regions, with different patterns in LMICs and higher-income countries.

Omission of some knowledge producers: Researchers who publish their work in peer-reviewed journals represent only a portion of all knowledge producers. Knowledge is also produced by a variety of other stakeholders, including non-academic clinicians, essential and frontline workers, and volunteers.

Assumed minimum level of digital access: OA impact metrics focus on research outputs available online, which requires individuals involved in the dissemination process to have a minimum level of resources and digital access (e.g. reliable electricity, easy internet access, technical support).4 One-third of the global population is not yet online, and people in LMICs are far less active online than those in higher-income countries.5

Short-term overestimation of research impact: Online impact may not translate into ‘real-world’ impact. Evaluation frameworks that focus on short-term online attention may overestimate the impact of some research while overlooking research with longer-term societal impact, potentially skewing the reward and motivation systems for researchers.

Lack of differentiation between positive and negative attention: Digital attention is not synonymous with positive attention and does not necessarily reflect research quality. Online ridicule or the perpetuation of misinformation can produce high alternative metrics scores.
LMIC, lower- and middle-income country

Recommendations for future open science policy development and evaluation

The creation of truly inclusive open science policies and evaluation frameworks requires the involvement of a wide range of stakeholders, with diverse representation across countries and income groups.

Meaningful evaluation of the success of future open science policies must then focus on metrics that actually matter. Examples of targeted, open science-specific evaluation metrics that should feed into future open science policy assessments are summarized in Table 2.

Table 2. Considerations for future open science policy evaluation metrics.

Proportion of research output: Evaluation frameworks that consider only the total number of OA publications produced by a particular group or country are less insightful than those that report proportional data. A meaningful framework would seek to understand whether an increase in the absolute number of OA publications (e.g. by authors from LMICs) tracks overall trends in publishing activity by that group (a minimal sketch of this calculation follows the table). Examples of this approach exist for specific research topics, including disparities in the contribution of LMICs to palliative care research,6 global oncology authorship and readership patterns,7 and health policy and systems research.8

Topic relevance: Data-mining topic trends in OA research could be used to help understand whether current OA research represents all key global issues (in line with the United Nations Sustainable Development Goals),9 and to identify and interrogate the key drivers of OA research output.

Researcher representativeness: Improved access to scientific research through open science activities should improve career prospects, nurture new research collaborations, increase the exposure of ideas and provide new funding opportunities. Metrics designed to assess the impact of open science policies on researchers’ careers should take this into consideration. Potential metrics that could be investigated include: experience diversity among conference attendees; experience diversity among journal peer reviewers; and the results of researcher surveys conducted in LMICs.

Reader representativeness: OA article metadata could be analysed to identify and categorize those who are accessing knowledge, to assess whether the readership is representative of all those who should have access (e.g. patients, clinicians, policymakers).

Non-academic research impact: To minimize the limitations of publication metrics, research impact could be assessed through impact case studies. These could report on the nature, scale and beneficiaries of research impact, requiring a more interdisciplinary and longitudinal approach to impact assessment that combines qualitative and quantitative methods with bibliometric analysis. The REF impact report, which evaluates the impact of research in UK higher education institutions, could be used as a template for such an approach.10
LMIC, lower- and middle-income country; REF, Research Excellence Framework
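As flagged in the first row of Table 2, a proportional view can tell a different story from absolute counts. The sketch below illustrates the idea with yearly figures that are entirely invented for this example.

```python
# Minimal sketch of the "proportion of research output" idea from
# Table 2: absolute OA counts can rise simply because total output
# rises, so a proportional view is more informative. All numbers
# below are invented for illustration.
yearly_output = {
    # year: (total publications by the group, OA publications by the group)
    2019: (200, 40),
    2020: (260, 60),
    2021: (340, 85),
}

for year, (total, oa) in sorted(yearly_output.items()):
    share = oa / total
    print(f"{year}: {oa}/{total} OA ({share:.0%})")
# Absolute OA counts grow every year, but only the OA share shows
# whether access is improving relative to overall publishing activity.
```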

In conclusion, the uncritical use of OA publishing metrics as a proxy for open science success can mask, and potentially perpetuate, health equity disparities. Creating robust, targeted and inclusive open science metrics is no easy task, but it is critical for the meaningful assessment of future open science policies and their impact.

References

  1. United Nations Educational, Scientific and Cultural Organization. UNESCO Recommendation on open science. Available from: https://www.unesco.org/en/open-science/about?hub=686 (Accessed 6 June 2023).
  2. United Nations. Universal Declaration of Human Rights. Available from: https://www.un.org/en/about-us/universal-declaration-of-human-rights (Accessed 6 June 2023).
  3. UNESCO. Open access publications: what is open access? Available from: https://en.unesco.org/open-access/what-open-access (Accessed 6 June 2023).
  4. Bezuidenhout L et al. Hidden concerns of sharing research data by low/middle-income country scientists. Glob Bioeth 2018;29:39–54. doi: 10.1080/11287462.2018.1441780
  5. International Telecommunication Union. Global connectivity report 2022: achieving universal and meaningful connectivity in the Decade of Action. Available from: https://www.itu.int/itu-d/reports/statistics/global-connectivity-report-2022/ (Accessed 6 June 2023).
  6. Pastrana T et al. Disparities in the contribution of low- and middle-income countries to palliative care research. J Pain Symptom Manage 2010;39:54–68. doi: 10.1016/j.jpainsymman.2009.05.023
  7. Bourlon MT et al. Global oncology authorship and readership patterns. JCO Glob Oncol 2022;8:e2100299. doi: 10.1200/GO.21.00299
  8. English KM et al. Increasing health policy and systems research capacity in low- and middle-income countries: results from a bibliometric analysis. Health Res Policy Syst 2017;15:64. doi: 10.1186/s12961-017-0229-1
  9. World Economic Forum. Why the world needs to embrace open science. 21 October 2021. Available from: https://www.weforum.org/agenda/2021/10/why-open-science-is-the-cornerstone-of-sustainable-development/ (Accessed 6 June 2023).
  10. King’s College London and Digital Science. The nature, scale and beneficiaries of research impact: an initial analysis of Research Excellence Framework (REF) 2014 impact case studies. Available from: https://www.kcl.ac.uk/policy-institute/assets/ref-impact.pdf (Accessed 6 June 2023).

Akhil Bansal is a physician at St Vincent’s Health Network in Sydney, and a self-employed researcher and writer.

Enlli Lewis is a Regulatory Policy and Research Coordinator at 1Day Sooner.