Lexical processing in sign language: A visual mismatch negativity study

Neuropsychologia. 2020 Nov;148:107629. doi: 10.1016/j.neuropsychologia.2020.107629. Epub 2020 Oct 1.

Abstract

Event-related potential (ERP) studies of spoken and written language show that auditory and visual words are accessed automatically, as indexed by the mismatch negativity (MMN) and the visual MMN (vMMN), respectively. The present study examined whether the same automatic lexical processing occurs in a visual-gestural language, namely Hong Kong Sign Language (HKSL). Using a classic visual oddball paradigm, deaf signers and hearing non-signers were presented with a sequence of static images depicting HKSL lexical signs and non-signs. Compared with hearing non-signers, deaf signers exhibited an enhanced vMMN elicited by lexical signs at around 230 ms, as well as a larger P1-N170 complex evoked by both lexical-sign and non-sign standards over the parieto-occipital area in the early time window between 65 ms and 170 ms. These findings indicate that deaf signers implicitly process lexical signs and that neural response differences between deaf signers and hearing non-signers emerge at an early stage of sign processing.
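For readers unfamiliar with the measure, a vMMN is conventionally quantified as a difference wave: the ERP averaged over rare (deviant) trials minus the ERP averaged over frequent (standard) trials, with amplitude assessed in an a priori time window. The Python sketch below illustrates this computation on simulated data; the sampling rate, trial counts, noise level, and 200-260 ms analysis window are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 500  # Hz; assumed sampling rate, not from the paper
times = np.arange(-0.1, 0.5, 1 / sfreq)  # epoch from -100 ms to 500 ms

# Simulated single-trial waveforms (trials x time) at one channel,
# in arbitrary units; trial counts are illustrative only
standard = rng.normal(0.0, 1.0, (400, times.size))
deviant = rng.normal(0.0, 1.0, (80, times.size))

# Inject a negativity peaking near 230 ms into the deviant trials
# to mimic a vMMN-like effect
deviant += -2.0 * np.exp(-((times - 0.23) ** 2) / (2 * 0.02 ** 2))

# Difference wave: deviant ERP minus standard ERP
erp_standard = standard.mean(axis=0)
erp_deviant = deviant.mean(axis=0)
vmmn = erp_deviant - erp_standard

# Mean amplitude in an a priori window around the expected peak
win = (times >= 0.20) & (times <= 0.26)
print(f"vMMN mean amplitude, 200-260 ms: {vmmn[win].mean():.2f} a.u.")
```

In a real analysis the same subtraction would be applied to preprocessed, epoched EEG at the electrodes of interest, and the window-mean amplitudes would then be compared across groups (here, deaf signers versus hearing non-signers).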

Keywords: Deaf signers; Hong Kong Sign Language (HKSL); Lexical processing; Visual mismatch negativity (vMMN).

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Deafness*
  • Evoked Potentials
  • Hearing
  • Humans
  • Language
  • Sign Language*