It is unsurprising that 69% of the 3,144 global researchers surveyed by Economist Impact (2022), as part of the Confidence in research: researchers in the spotlight project, stated that the Covid-19 pandemic increased the importance of separating high-quality research from misinformation. The rapid spread of misinformation (Nsoesie et al., 2020), combined with an increased demand for research evidence, meant that just under a quarter (23%) of researchers surveyed by Economist Impact (2022) believed that challenging misinformation online is now one of their primary roles.
The ‘Confidence in research’ project, a global collaboration between Elsevier, Economist Impact and leading science organisations, including Sense about Science, sought to explore researchers’ attitudes towards how the Covid-19 pandemic affected the practice of undertaking and communicating research (Economist Impact, 2022). Among its key recommendations were the need for researchers to collaborate with social media companies to tackle misinformation, and to build public trust in and understanding of research through promotional and educational campaigns. Furthermore, it identified a need to prepare researchers for public-facing roles (e.g., through mentorship and training), and to tackle inequality in research. However, without negating the importance of such endeavours, equal attention should be given to the process of research generation and the mechanisms of research dissemination.
As in the case of the well-known unsubstantiated therapeutic effects of hydroxychloroquine on coronavirus, the spread of misinformation sometimes arises from non-robust and methodologically flawed research entering wider circulation. This raises the question: how do erroneous claims and methodological faux pas penetrate the great firewall of peer review? Peer review is the process of subjecting scientific manuscripts, including their methods, results, and claims, to expert scrutiny prior to publication, and it should afford us some confidence that the claims made are valid, credible, trustworthy, and based on sound methodology. A quick scan of Retraction Watch, which tracks peer-reviewed articles that have been retracted, indicates that the process is not working as it should. However, the solution here is not abandonment or replacement, but the strengthening and nurturing of the system. Certainly, Economist Impact (2022) found that 74% of researchers believe that peer review gives them the confidence to cite another piece of research. Foremost, researchers, particularly early career researchers (ECRs), need to be equipped with the skills to carry out peer review effectively, and we need to incentivise these individuals to engage with the peer review process. Given that peer review is an entirely voluntary endeavour, and largely without incentive (barring a few notable exceptions, e.g., Web of Science’s Peer Review Recognition Service), why should we expect researchers to commit their limited time to the slow process of closely scrutinising scientific manuscripts? Universities, publishers, and funders need to recognise and reward peer review, perhaps by compensating reviewers for their time, or by recognising peer review as a significant contribution to the research endeavour when assessing for permanency or tenure.
Then there is the emerging challenge of information making its way into the public domain before being subjected to peer review. Here, of course, we are referring to the use of ‘preprints’ (earlier versions of manuscripts that are publicly accessible but not yet peer reviewed or published in a journal). Certainly, there are tangible benefits to using preprints, including the feedback they solicit prior to publication and, more notably, the expediting of the availability of information. Undoubtedly, the pandemic presented a distinct need to get information into the hands of other researchers and policy-makers quickly; this view was shared by approximately half of the researchers (52%) surveyed by Economist Impact (2022). However, it is necessary to balance these benefits against the distinct challenges posed by preprints in terms of ethics and integrity. While preprints inevitably speed up the process of research dissemination, they may inadvertently slow the scientific process down if the claims entering scientific and public discourse are erroneous and methodologically flawed. One of the most significant problems is that the public, and in some cases the media, may not be appropriately equipped to recognise the difference between a preprint and a peer-reviewed article. We should certainly be open to novel modes of dissemination, including the use of preprints, but we should proceed with caution.
We can take it back a step further still. Researchers need to have the confidence to design and carry out robust, trustworthy and replicable research, which, according to Economist Impact’s (2022) findings, is not universally the case. Certainly, the prevalence of questionable research practices across disciplinary fields that undermine the veracity of scientific claims is alarming (John et al., 2012; Xie, 2021; Gopalakrishna, 2022). Researchers should have the skillsets, knowledge, and confidence to avoid practices that bring about erroneous claims. Given that the researchers surveyed by Economist Impact (2022) acknowledged that the pandemic has made them more likely to adopt better research practices (e.g., 48% stating that they are more likely to communicate uncertainties and caveats in their research), it would be wasteful not to capitalise on this trend and upskill them. Training in research design, robust research methodology, statistical analysis, and reproducibility, as well as effective mentorship (again, especially of ECRs), will feasibly go a considerable way towards addressing this concern.
That being said, it is crucial that we acknowledge that the system is stacked against us. Hypercompetitive research environments that value frequent publication in high-impact, peer-reviewed journals, coupled with a bias towards statistically significant findings and novel research questions, certainly do not incentivise good scientific practice. On this basis, the scientific ecosystem needs a shake-up. At the very least, we need to stop incentivising poor scientific research (e.g., by ending the reliance on citation metrics when assessing for tenure, permanency, and promotion). At most, we need to incentivise good research practice (e.g., by rewarding open science, reproducible research, and peer review). Prescriptive policies have a role to play here too: making manuscript publication contingent on the availability of all data and code used is essential for the assessment of research claims and for facilitating replication attempts. While researchers are not without agency and should take responsibility for their own research conduct, the combination of systemic pressures, saturated workloads, and emerging responsibilities (e.g., combating false information online) does not make it easy for researchers to carry out their work with integrity.
These issues are not new; the Covid-19 pandemic merely exacerbated existing trends, giving them renewed significance. However, researchers’ desire to do things better presents a new opportunity. We can use the momentum brought about by the pandemic, and this desire to improve, to upskill researchers and provide them with the means of designing and carrying out robust research. This alone is not enough; we also need buy-in from universities, funders and publishers. A significant overhaul of existing infrastructure and key incentive structures is needed. Stakeholders throughout the system, from individual researchers to publishers, need to take responsibility and enact change. In doing so, without a doubt, we can bring about a fairer, more functional, veracious and efficient research enterprise.