tl;dr-ELT

I was rereading some recent work on linguistic bias in academia & found myself thinking about how often “clarity” is treated as a neutral, objective standard. Yet the more you look at how clarity is judged, the more it resembles a social category rather than a linguistic one. A study by Haley Lepp & Daniel Scott Smith (2025) makes this point with unusual precision.

Their paper, “You Cannot Sound Like GPT”: Signs of language discrimination & resistance in computer science publishing, examines how multilingual scholars are evaluated in one of the world’s most influential AI conferences.

Although the context is computer science, the findings resonate deeply with ELT, academic literacy, & the politics of English as a global gatekeeper.

The study

The researchers analysed 76,453 peer reviews from the International Conference on Learning Representations (ICLR) & conducted 14 interviews with multilingual scholars. They looked specifically at how reviewers praised or criticised “clarity”, treating these comments not as objective assessments but as expressions of language ideology.

One line from the reviews captures the tone:

“The authors are either novices, not native English speakers, or the Principal Investigator didn’t bother to help writing it.”

The team also examined how things changed after the release of ChatGPT in late 2022, given the widespread assumption that LLMs would “fix” linguistic inequality.

The findings

1. Linguistic bias is measurable & persistent

Papers with more authors from countries where English is not institutionalised received:

  • more negative comments on clarity
  • fewer positive comments
  • lower overall ratings

This held even when controlling for scientific quality indicators such as novelty, correctness & replicability.

This echoes long‑standing work in raciolinguistics (e.g. Flores & Rosa, 2015) showing that linguistic judgements often track social assumptions rather than linguistic facts.

2. ChatGPT hasn’t removed bias; it’s shifted it

Despite the rapid rise in LLM‑assisted writing, the study found only a muted change in reviewer behaviour. The only group showing a statistically significant improvement post‑ChatGPT was authors affiliated with Chinese institutions.

Interviewees explained why: reviewers now treat certain “ChatGPT‑like” features as new signals of linguistic inauthenticity. As one participant put it:

“You cannot sound like GPT.”

So the semiotics of suspicion simply moved from grammar to style.

3. Multilingual scholars still feel pressure to erase linguistic identity

Interviewees described:

  • avoiding certain structures that might “give away” their L1
  • using ChatGPT strategically but cautiously
  • worrying that both errors & overly polished prose could trigger negative assumptions

This aligns with decades of research on linguistic insecurity & the hidden labour multilingual writers perform to meet shifting norms (e.g. Canagarajah, 2002; Lillis & Curry, 2010).

4. “Good English” continues to be conflated with “good science”

The authors argue that clarity critiques often function as proxies for assumptions about who belongs in science. This is consistent with broader critiques of English‑only publishing ecosystems, where access to English correlates strongly with class, geography & institutional privilege.

Why this matters for ELT

Although the study focuses on computer science, its implications for English language teaching are profound. It reminds us that:

  • linguistic standards are socially constructed
  • multilingual writers navigate shifting expectations
  • tools like ChatGPT don’t dismantle linguistic hierarchies; they often reconfigure them

Teacher takeaways

  • Encourage learners to see academic English as a genre with conventions, not a measure of intelligence or scientific worth.
  • Discuss with advanced learners how AI tools can support writing while still requiring critical awareness of style, tone & audience expectations.
  • Highlight examples of multilingual scholars who publish successfully, emphasising that linguistic diversity is a resource rather than a deficit.

Final thought

This study is a powerful reminder that linguistic fairness doesn’t emerge automatically from new technology. It requires conscious attention to the ideologies that shape how English is evaluated: in journals, in global academic networks & in classrooms.

How do you help learners recognise the power dynamics behind what counts as “good English”?
