This week, we signpost an investigation revealing integrity issues in AI-assisted peer review, and we explore the global support behind eLife’s reviewed preprint model. We share key insights from the Open Science Conference 2025 on how AI-driven tools can enhance transparency in open science, and we showcase how a Gates Foundation grant is advancing diamond open access in the USA. We also read about concerns and recommendations regarding AI use in research assessment. Finally, we celebrate progress as the STM community comes together to champion research integrity at STM 2025.
To read:
AI use and integrity challenges in peer review via Nature | 5-minute read
The International Conference on Learning Representations 2026 is now addressing the widespread use of artificial intelligence (AI)-generated peer reviews after one researcher offered a reward for analysing submissions and reviews for AI-generated text. Pangram Labs found that of 75,800 reviews for about 19,500 submissions, around 21% were entirely written by AI and more than half received substantial AI assistance. Organizers will use automated systems to detect AI misuse and will penalize authors and reviewers for undisclosed AI involvement. The article also highlights the broader peer review workload crisis within machine learning, which places significant strain on reviewers.
Global support grows for eLife’s reviewed preprint model via The Publication Plan | 4-minute read
After eLife introduced a ‘reviewed preprint’ model, which publishes all peer-reviewed manuscripts with reviewer comments and editorial assessments, the journal underwent indexing changes and will no longer receive an impact factor. Despite this, over 100 academic institutions and research funders worldwide – including Caltech, King’s College London, Aarhus University, the Gates Foundation and the Chinese Academy of Sciences – have committed to maintaining recognition of research published in eLife for recruitment, promotion and funding decisions. Damian Pattinson (Executive Director at eLife) emphasizes that eLife’s model is “one that prioritises scientific quality, transparency, and integrity over outdated prestige metrics”.
AI-powered solutions drive open science forward via STM Publishing News | 4-minute read
OpenAIRE showcased several AI-driven tools to improve transparency and reproducibility in research at the Open Science Conference 2025 in Hamburg. One example was the European Open Science Resources Registry, which uses natural language processing to organize and evaluate open science policies and best practices. Another was the European Open Science Cloud Open Science Observatory, described as a “smartwatch” for open science, providing real-time monitoring and insights into research activities across Europe. These tools were demonstrated through live sessions and posters, collectively illustrating AI’s widespread role in supporting evidence-based policy, improving data discovery and building open, connected research infrastructures.
Gates Foundation grant fuels growth of diamond open access in the USA via STM Publishing News | 4-minute read
Diamond open access is gaining momentum in the USA, thanks to a $206,886 grant from the Gates Foundation. The funding supports the first national initiative to identify and document free, community-governed, non-commercial, peer-reviewed journals. Led by Lyrasis, the Big Ten Academic Alliance Center for Library Programs and the California Digital Library, the project aims to provide an overview of the current landscape, assess infrastructure needs and offer policies to make scholar-owned publishing sustainable. Advocates believe this initiative could redefine research sharing by breaking down commercial barriers, promoting diversity and driving a significant shift in the future of academic publishing.
Concerns grow over AI use in research assessment via Research Information | 4-minute read
There is growing scepticism among academics and professionals about the widespread use of AI in scientific research assessment, according to the Research Excellence Framework–AI report produced by the University of Bristol with funding from Research England. Concerns centre on the adoption of AI tools without clear governance or national oversight, with critics warning that this could undermine confidence in the evaluation process. Read the full report for a set of actionable recommendations for universities and governing agencies.
To engage with:
Celebrating progress: STM 2025 unites for research integrity via STM | 3-minute read
STM Innovation & Integrity Days 2025 will take place on 9–10 December in London, starting with the Innovator Fair and expert workshops on 9 December, followed by Research Integrity Day on 10 December. The programme will feature lightning talks, startup exhibits and Vesalius Innovation Awards by Karger, as well as sessions on paper mills, image fraud, detection tools and the STM Integrity Hub’s 2026 strategy. All sessions require paid registration, and no audio or video recordings will be allowed.
Enjoy our content? Read last week’s digest and check out our latest quarterly update!
Don’t forget to follow us on Bluesky and LinkedIn for regular updates!