Addressing bias against non-native English-speaking researchers in scientific publishing

Abhishek Goel
Feb 17, 2021

“Language needs work. Consider getting your paper edited by a native English speaker.”

“The author needs a native English-speaking co-author to thoroughly revise the grammar of this manuscript.”

“This paper is not written in sound English and cannot be accepted in its current form.”

It’s not uncommon for authors submitting their research papers to journals to receive such comments at the peer review stage.

And it’s completely understandable that authors feel confused, even angry, when they receive these comments despite having had their manuscripts edited by a professional language editing company.

We founded Cactus Communications on the premise that having language experts polish manuscripts written by non-native English-speaking researchers would improve their chances of acceptance at English-language journals. When we received emails from authors asking us to explain the negative journal comments, we’d have seasoned reviewers assess the language of these manuscripts.

We’d find a few issues of the kind typically addressed by a journal’s copyediting team, but rarely anything that would warrant returning a manuscript. So what prompted such comments from the journals?

Over time, we started seeing a trend: the authors of these papers usually had distinctly Asian names. We largely catered to Asian researchers from Japan, South Korea, and China, and we’d often see their manuscripts returned with the suggestion to have them edited for language. Oddly enough, the comments rarely touched upon the science of the study.

When we launched the company to help researchers overcome the language barrier, we had a fair idea of the obstacles researchers faced: limited funding, limited proficiency in English (the lingua franca of science), and lack of time, among others. But bias against non-native English-speaking researchers was new territory for us.

It’s important to acknowledge that cognitive bias is a natural human tendency. Our brains are hardwired to take mental shortcuts that can lead to faulty decisions. And as with anything driven by the human subconscious, bias is difficult to regulate or eliminate without the aid of process-related checks and balances.

Organizations committed to creating an inclusive environment conduct training programs to build awareness of biases and help employees avoid falling into the bias trap. But how do you do this in the peer review process?

Peer review has traditionally been a voluntary process. Peer reviewers are invited to evaluate manuscripts ahead of publication and are not bound by the guidelines of any organization.

Protocols to eliminate bias in peer review are typically one-sided. Most journals adopt the single-blind model of peer review, wherein the peer reviewers’ names are not shared with the authors (to prevent authors from influencing reviewers for a favorable verdict). The authors’ names, however, are made available to the reviewers.

The double-blind model of peer review avoids this problem: neither party is privy to the other’s name. But even this does not eliminate the possibility of the paper being desk-rejected by the journal editor owing to bias.

To understand the problem in depth, we started participating in international conferences like the Council of Science Editors (CSE) Annual Meetings that discussed topics like bias in publishing. We started engaging with publishers, journal editors, and authors (both native and non-native English speakers) to understand all perspectives.

Once we had learnt enough about the problem, we started asking our authors to include in their cover letters to the journal a note stating that their paper had been edited for language by a professional editing company.

In 2013, we launched Editage Insights, which, along with serving as a repository of resources for authors and publishers, was our largest initiative to help address this bias. We consciously represented the voice of Asian researchers and non-native English speakers.

I can’t say we’ve solved the problem. Far from it, in fact. But we have been instrumental in educating the publishing community, and we will continue working toward eliminating the problem. Still, this is only one approach, and only the first step.

As long as humans are involved in deciding which papers get published, unconscious bias will continue to exist. Smart technology will play a key role in addressing this problem. Processes and platforms powered by artificial intelligence (AI) and machine learning can reduce reliance on human decision making, and on the biases that come with it, by assessing papers purely on submission readiness rather than on irrelevant factors like the authors’ institution, location, or ethnicity.
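To make that idea concrete, here is a minimal sketch, in Python, of what “blinded” automated assessment could look like: identifying metadata is stripped before the manuscript reaches the scoring step. This is purely illustrative and assumes a hypothetical score_readiness function standing in for a real AI assessment service; it is not any vendor’s actual pipeline.

```python
# A minimal, hypothetical sketch of "blinded" automated assessment:
# identity fields exist in the submission record but never reach the
# scoring step, so they cannot influence the outcome.

from dataclasses import dataclass


@dataclass
class Submission:
    title: str
    abstract: str
    body: str
    author_name: str   # identity fields: stored in the record,
    affiliation: str   # but never passed to the scorer below
    country: str


def score_readiness(title: str, abstract: str, body: str) -> float:
    """Hypothetical placeholder for a real AI-based manuscript
    assessment service; here it just returns a dummy score based
    on manuscript length."""
    return min(1.0, len(body) / 20_000)


def blinded_assessment(sub: Submission) -> float:
    # Only content fields are forwarded; identity fields are dropped.
    return score_readiness(sub.title, sub.abstract, sub.body)


if __name__ == "__main__":
    submission = Submission(
        title="An example study",
        abstract="A short abstract.",
        body="manuscript text " * 500,
        author_name="(withheld)",
        affiliation="(withheld)",
        country="(withheld)",
    )
    print(f"Readiness score: {blinded_assessment(submission):.2f}")
```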

Luckily, players in the research and publishing landscape are becoming aware of the need to supplement human expertise with technology. Tools like R Pubsure, UNSILO Evaluate, scite, AIRA, and Ripeta leverage AI to aid objective assessments of research manuscripts.

The challenge AI now faces is cleaning training datasets so that past human bias is not learned by machine models, tainting future output. Discourse around this concern is already building, with some calling for measures to be taken sooner rather than later. What remains to be seen is whether Big Tech will acknowledge the problem and take steps (soon!) toward addressing it.


Abhishek Goel

Co-founder and CEO of Cactus Communications (cactusglobal.com). Scicomm, customer delight, happy workplace, mindfulness, coaching