Weekly digest: what’s happening in open science?

Steph Macdonald

Featuring a summary of the findings from the COMPare study, the success of transparent peer review pilot schemes and alternative methods for measuring academic career progression

COMPare(ing) responses to misreported clinical trials via BioMed Central

When reporting clinical trials, it is important that any shortcomings, for example in the study design or methodology, are disclosed in the papers detailing the trial results. Clinical trial registries, such as ClinicalTrials.gov, exist to ensure correct outcome reporting. These registries, which require all trial outcomes to be pre-specified before the trial begins, have gained support from organizations including the World Health Organization and the International Committee of Medical Journal Editors. The importance of pre-specifying clinical trial outcomes is also highlighted in the International Conference on Harmonisation's Good Clinical Practice guideline and the Consolidated Standards of Reporting Trials, which are endorsed by the majority of academic journals. Despite these recommendations, clinical trial misreporting remains a significant issue in many leading medical journals. COMPare set out to test whether five high-tier journals, namely The New England Journal of Medicine, JAMA, The BMJ, Annals of Internal Medicine and The Lancet, would publish correction letters on misreported trials, and how the trial authors would respond.

Of the 58 correction letters submitted by COMPare, only 23 were published (two in the Annals of Internal Medicine, two in The BMJ and 16 in The Lancet), 20 of which received a response from the research team involved in the trial. The responses included many "inaccurate and problematic statements or misunderstandings", with some even dismissing or denying the allegations of misreporting. Whether these inaccuracies in clinical trial reporting stem from genuine misunderstanding, perhaps reflecting a lack of training, is impossible to say. Only eight of the research teams acknowledged a discrepancy between the reported results and the pre-specified trial outcomes, and only one misreported clinical trial was updated with a correction. The study provides further evidence that clinical trial researchers and journal editors are falling short of clinical reporting guidelines.

Transparent peer review pilots flying high via The Publication Plan

Last year, an open letter describing the benefits of transparent peer review in ensuring unbiased and constructive feedback was published. Since then, the concept of open peer review has gained traction, and both Elsevier and Wiley have launched transparent peer review pilot schemes in some of their journals. The pilot schemes offer an opt-in transparent peer review service, with the option for reviewers to remain anonymous. A recent report in Nature Communications compared the behaviour of reviewers at five Elsevier journals using a transparent review system with that of reviewers at five journals using a more traditional model of peer review. Although willingness to review manuscripts and turnaround times were unaffected by transparent peer review, most reviewers opted to remain anonymous. Encouraging results have also been seen in Wiley's pilot scheme, in which 83% of authors opted for transparent peer review. Both reports demonstrate the success of a transparent peer review process, provided that an anonymous option for reviewers is available.

Has traditional publishing lost its impact (factor)? via The Chronicle of Higher Education

As open access awareness increases, more scholars are turning away from traditional subscription journals towards less well-known open access journals. Despite this trend, some early career researchers worry about the impact that publishing in such journals might have on their career paths. With tenure decisions centred heavily on journal impact factor, many scholars are calling for alternative methods of ranking researchers. One solution would be to use altmetrics, which track, for example, the number of times a research article is shared on social media. Another option, already being trialled in the humanities and social sciences, is the Humane Metrics in Humanities and Social Sciences initiative, which includes 'openness' as a metric for measuring academic progression. Whichever route is chosen for measuring research impact, more effort is needed from institutions and individual departments to ensure researchers feel free to publish open access without worrying about jeopardizing their careers.