tl;dr-ELT

too long; didn’t read – ELT

Why do people get taken in by misinformation? We usually point to ideology, fear or poor media literacy. But what if part of the answer is much simpler — & much closer to home for language teachers? What if it has something to do with how misinformation is written?

A recent study in Frontiers in Artificial Intelligence digs into how language itself can distinguish misinformation from factual communication, with some uncomfortable implications for how we think about clarity, credibility & persuasion.

The study

The researchers analysed over 24,000 social media posts across three corpora: verified COVID-19 misinformation, general COVID-19 discourse & Monkeypox-related posts. Using computational linguistic methods, they compared texts on three main dimensions: readability, rhetorical markers (such as exclamation & question marks) & persuasive or fear-related vocabulary.

Readability was measured using standard indices like Flesch Reading Ease & Flesch–Kincaid Grade Level. Emotional persuasion was tracked via a conservative lexicon of urgency- & fear-related terms. Punctuation was used as a proxy for rhetorical style. The goal was not just to spot misinformation, but to understand how it sounds.
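The Flesch indices the study relied on are simple formulas over words-per-sentence & syllables-per-word. As a minimal sketch (using a rough vowel-group syllable heuristic, not the study's actual pipeline), they can be computed like this:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, with a silent-e adjustment."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1                     # drop a trailing silent e
    return max(n, 1)

def readability(text: str) -> dict:
    """Flesch Reading Ease & Flesch-Kincaid Grade Level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # avg words per sentence
    spw = syllables / len(words)        # avg syllables per word
    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "fk_grade": 0.39 * wps + 11.8 * spw - 15.59,
    }
```

Run this on the two example sentences later in this post & the dense, Latinate one scores far lower on Reading Ease (i.e. it is much harder to read), which is exactly the band the misinformation corpus landed in.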

The findings

COVID-19 misinformation was dramatically harder to read than both general COVID content & Monkeypox posts. Its average Flesch Reading Ease score was around 11, placing it firmly in the “very difficult” range, with a reading level equivalent to post-graduate academic prose. In contrast, Monkeypox posts sat closer to mainstream news readability.

Misinformation also used more than twice as many fear- or urgency-related words as factual content, despite avoiding obvious emotional punctuation like exclamation marks. This creates what the authors describe as “covert emotionality”: emotionally loaded language wrapped in a calm, serious, authoritative tone.
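That "covert emotionality" profile — loaded vocabulary without loaded punctuation — can be approximated by contrasting two rates. The lexicon below is purely illustrative; the study used its own conservative word list:

```python
import re

# Illustrative fear/urgency lexicon -- NOT the study's actual word list.
FEAR_URGENCY = {"urgent", "deadly", "crisis", "danger", "fatal",
                "emergency", "threat", "warning", "risk", "panic"}

def emotionality_profile(text: str) -> dict:
    """Covert signal (fear-word rate) vs overt signal (exclamation rate)."""
    words = re.findall(r"[a-z']+", text.lower())
    fear_hits = sum(w in FEAR_URGENCY for w in words)
    return {
        "fear_rate": fear_hits / len(words),           # covert emotionality
        "exclaim_rate": text.count("!") / len(words),  # overt emotionality
    }
```

A post with a high fear_rate but a near-zero exclaim_rate matches the calm-but-loaded profile the study found in misinformation; the reverse pattern matches the overtly urgent Monkeypox posts.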

Interestingly, Monkeypox communication relied far more on overt urgency, with higher exclamation use, while general COVID discourse showed more question marks, reflecting uncertainty & information-seeking in earlier pandemic stages.

In short: misinformation often sounds complex, restrained & technically credible, while still quietly pushing emotional buttons.

Why this matters

These findings resonate strongly with work in psychology & persuasion, particularly Petty & Cacioppo’s Elaboration Likelihood Model. Under cognitive load or stress, readers rely more on surface cues like perceived expertise than on careful evaluation. Dense, technical language can therefore increase credibility, even when the content is false.

This also echoes research on processing fluency, where difficulty is sometimes misread as depth, & on conspiracy discourse, which frequently borrows the register of science to claim authority. See Pennycook & Rand on analytical thinking & misinformation, or O’Connor & Weatherall on the social spread of false beliefs.

I feel this challenges a typical assumption: that clarity always signals quality, & complexity signals rigour.

Imagine two texts:

  • “Scientists warn of an urgent risk to public health”
  • “Emerging epidemiological evidence suggests multifactorial systemic vulnerabilities”

The second may feel more credible, even if it says less.

Teacher Takeaways?

  • Teach credibility as a linguistic effect, not just a factual one. Learners often equate complex syntax, abstract nouns & Latinate vocabulary with expertise. Consider activities where students compare texts that differ in readability but not in informational value, then discuss why one feels more trustworthy. This helps decouple “sounds academic” from “is reliable”.
  • Make register manipulation visible. Draw attention to features highlighted by the study: heavy nominalisation, long noun phrases, passive constructions & low interpersonal stance. Ask students to rewrite a dense, authoritative-sounding paragraph into clearer language without changing the meaning, then reflect on how the tone & perceived credibility shift.
  • Integrate critical reading into skills work, not as an add-on. When doing reading or listening tasks, include questions like: Who is positioned as the expert here? What linguistic choices create that impression? What is being obscured? This aligns well with EAP, exam prep & higher-level B2/C1 work, where learners increasingly encounter dense, pseudo-academic discourse online.

How often do we train learners to question how something is written, not just what it says?


