Featuring the newest adherents to Plan S, how to make sure your data are re-usable, and whether open access policies actually work.
Wellcome and Gates join Plan S via Nature
Since the launch of Plan S in September, the question of how much impact it will have on research beyond the EU has been a hot topic of conversation. The plan has caused controversy: although supporters welcome its uncompromising commitment to fully open access research papers, its critics argue that it threatens society journals and impinges on academic freedom. Now, two of the world’s largest biomedical research funders have also signed up to the plan. Both the Bill & Melinda Gates Foundation and the Wellcome Trust have been trailblazers for open access and already have far-reaching open access policies. By joining Plan S, however, they aim to reduce the number of different policies their researchers must abide by and lend credence to this ambitious project of the European Commission. The decision bodes well for the supporters of Plan S, who are hoping for broader, international adoption of the plan’s ten open access principles.
Are your spreadsheets FAIR? via F1000
It’s all very well sharing data sources, but unless other researchers can use them easily, there isn’t much point sharing them in the first place. The FAIR principles of data sharing, which recommend that all shared data be Findable, Accessible, Interoperable and Re-usable, were designed to help researchers make their shared datasets as useful as possible. This checklist produced by F1000 gives simple dos and don’ts on how best to manage your spreadsheets so that other researchers with access can make optimal use of them. The document also recommends creating a ‘data dictionary’ to accompany your data, making it easier for other researchers, and for authors returning to old work, to get to grips with the contents of the sheet quickly.
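As a rough illustration of what such a data dictionary might look like, the sketch below writes one out as a plain CSV that can sit alongside a spreadsheet. The field names and example variables here are hypothetical, chosen for illustration only; the F1000 checklist defines its own recommendations for what a data dictionary should contain.

```python
import csv
import io

# A hypothetical data dictionary: one row per column of the accompanying
# spreadsheet, describing what each column means and what values it may hold.
# The fields (variable, description, type, units, allowed_values) are
# illustrative assumptions, not taken from the F1000 document.
data_dictionary = [
    {"variable": "participant_id", "description": "Anonymised participant identifier",
     "type": "string", "units": "", "allowed_values": "P0001-P9999"},
    {"variable": "weight", "description": "Body weight at enrolment",
     "type": "float", "units": "kg", "allowed_values": "30-250"},
    {"variable": "smoker", "description": "Self-reported smoking status",
     "type": "string", "units": "", "allowed_values": "yes; no; unknown"},
]

FIELDS = ["variable", "description", "type", "units", "allowed_values"]

def write_data_dictionary(rows, stream):
    """Write the dictionary as plain CSV so it stays readable without special software."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

buffer = io.StringIO()
write_data_dictionary(data_dictionary, buffer)
print(buffer.getvalue())
```

Keeping the dictionary as plain text (rather than embedding notes in the spreadsheet itself) is one simple way to keep the data machine-readable while still documenting it for humans.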
Do open access policies actually work?
The announcement of the Plan S launch and other open access policies has certainly made a splash in the open access world, but do these policies actually change researcher behaviour? The answer, perhaps unsurprisingly, is ‘yes, sometimes’. The first broad analysis of open access publishing rates in relation to publication policies found that only around two-thirds of research funded under an open access policy was actually published open access. There was wide variation between funders, however, with roughly 90% compliance for the National Institutes of Health (NIH) and the Wellcome Trust, but less than 25% for other funders such as the Social Sciences and Humanities Research Council of Canada. Several reasons were put forward as to why these differences might exist. One is that publishers in the social sciences and humanities are generally less well equipped to provide open access than those in the natural sciences. Another notable feature that links the NIH and the Wellcome Trust is the structure of their funding: researchers do not receive full funding until the publication process is complete and the paper is available via open access. The article concludes that these incentives, along with strong educational policies on open access and accessible open access infrastructures, are what underpin a successful policy.
Peer review – the best of a bad lot in research evaluation methods via the New York Times
We are all familiar with the shortcomings of peer review: it can be biased, it can take a long time, it is carried out by reviewers who are overstretched and often untrained, and it doesn’t always catch bad research. Peer review has been in the news a lot recently after several high-profile retractions, but this has been a problem for a long time: Andrew Wakefield’s notorious paper falsely claiming a link between the measles, mumps and rubella vaccine and autism is now 20 years old. But what can we do about this? Academic experts in the research field are surely the most qualified to write reviews, so is it the system that is broken? This article explores some of the ways to guard against the inherent biases and shortcomings of peer review as a system, and also looks at what journal editors could do to maintain quality in the papers they publish.