Why would linguists be interested in tracking someone’s eyes as they read? Because eye movements give us a moment‑by‑moment record of how readers process text — what they notice, what they skip, where they hesitate & how they build meaning. It’s one of the few tools that lets us observe reading as it happens, rather than relying on test scores or self‑reports after the fact.
This connects to something I explored recently in a post: reading isn’t a single skill but a bundle of micro‑processes that unfold in real time. Eye‑tracking lets us see those processes directly.
Large‑scale eye‑tracking studies are still surprisingly rare in SLA, yet they’re essential for understanding how people actually read in real time. Small lab studies can tell us a lot about specific processes, but only huge, multilingual datasets can reveal the deeper patterns — the ones shaped by writing systems, reading habits & cross‑linguistic transfer.
Until recently, we simply didn’t have the scale to answer those bigger questions. What happens when readers from different writing systems tackle the same English text? How much of their L1 reading behaviour carries over? The MECO project is one of the first to give us the data we need. In their new paper, Kuperman, Schroeder, Acartürk & colleagues (2025) expand the English‑as‑L2 dataset in Studies in Second Language Acquisition, offering an unusually broad, real‑world view of how multilingual readers navigate English.
The study
The team extended the MECO L2 corpus by collecting eye‑tracking data from 660 L2 English readers across 13 first‑language backgrounds, ranging from alphabetic systems (e.g. Danish, Basque) to logographic (Chinese)[1] & abugida (Hindi)[2] scripts.
Participants completed:
- a multiline text‑reading task while their eye movements were recorded
- English proficiency measures (vocabulary, TOWRE subtests, CFT)
- background questionnaires on language use & demographics
The aim was not hypothesis‑testing but validating & characterising this expanded dataset: its reliability, descriptive patterns, correlations & cross‑linguistic contrasts.
The findings
Several patterns echo earlier studies, but the expanded sample sharpens the picture:
- Reading comprehension & eye‑movement fluency correlate only weakly — fast readers aren’t necessarily better comprehenders.
- L1–L2 contrasts are larger for fluency than comprehension — readers’ eye‑movement patterns differ more than their understanding.
- Writing system matters: readers from logographic or abugida backgrounds show distinct fixation & skipping profiles compared with alphabetic‑L1 readers.
- Dataset reliability is high, with stable patterns across labs, languages & measures.
- Combined with the researchers’ earlier Wave 1 study, MECO now includes over 1,200 L2 English readers from 19 L1 backgrounds, making it one of the richest open datasets in SLA.
What might this all mean in the classroom?
Imagine two learners reading the same English sentence. A Basque L1 reader might skip more words & make fewer regressions, while a Chinese L1 reader might fixate more often on morphologically dense words — not because one is stronger or weaker, but because their L1 has trained different visual processing routines. They may reach similar levels of comprehension, but they take different routes to get there.
Teacher takeaways?
- Reading speed ≠ reading skill: eye‑movement fluency doesn’t reliably predict comprehension, so avoid equating “fast” with “good”.
- L1 shapes L2 reading more than we think: learners bring deeply ingrained visual habits from their writing system.
- Assessment needs nuance: mixed‑L1 classes may show very different reading profiles even at similar proficiency levels.
Have you noticed learners with different L1s reading differently in an L2?

[1] Logographic systems, where each character represents a morpheme or word. These systems rely heavily on visual pattern recognition & holistic processing.
[2] Abugida systems (e.g. Hindi), where each symbol represents a consonant with an inherent vowel, modified by diacritics. These systems require readers to process complex visual–phonological mappings.