It’s widely known that in the majority of people the left hemisphere is dominant for language. But how early does this lateralisation of function emerge? An obvious way to find out is to put babies in a brain scanner and see if their brains show the same left-sided preference for language, compared with other auditory stimuli, as is observed in adults. Of course, from a practical perspective, that’s easier said than done.
Ghislaine Dehaene-Lambertz and her colleagues scanned the brains of 24 infants, aged approximately two and a half months, using fMRI. The researchers didn’t cheat – no sedatives were used – although an experimenter did show the babies toys, visible via a mirror, to help keep them calm. Data from just seven of the babies were usable. As Dehaene-Lambertz and her colleagues explained: ‘This high attrition rate underscores the fact that fMRI remains a challenge at this age.’
The basic paradigm involved playing the babies sentences spoken by their mother and by a stranger, and comparing the activity this triggered with that elicited by music composed by Mozart.
Speech, but not music, triggered more activity in the left versus the right hemisphere of the babies’ brains. Obviously babies of this age can’t yet understand speech. A possibility is that the left hemisphere starts out with a bias for rapidly changing stimuli – ‘a bias’, the researchers explained, ‘that would be rapidly extended through learning to other properties of the speech signal…’.
Another finding was that a mother’s voice triggered significantly greater activity in language regions than did a stranger’s voice. Dehaene-Lambertz and her co-workers said this shows the mother’s voice ‘plays a special role in the early shaping of posterior language areas.’ A further differential effect of the mother’s voice was that it led to reduced activity in emotion-related regions. Perhaps, the researchers surmised, this was the neural basis of a ‘soothing effect’.
Also notable was that, as in adults, the ventral (lower) portion of the left temporal lobe, but not the dorsal (upper) portion, showed what’s known as a ‘repetition effect’ when the same four-second snippets of speech were replayed several times in succession. The ‘repetition effect’ is a reduction in activity with repetition, betraying a kind of memory for the repeated stimulus. The fact that one region of the temporal lobe showed this effect and another region didn’t suggests that by two months of age the left temporal lobe is already made up of different functional sub-regions.
‘A small but growing infant neuroimaging literature points to the existence, in the first few months of life, of a well-structured cortical organisation,’ the researchers concluded. However, they also cautioned that ‘acknowledging the existence of strong genetic constraints’ on the early organisation of language-related brain regions ‘does not preclude environmental influences’. Indeed, they added that: ‘The present results show clearly that learning also plays a major role in structuring the infant’s brain networks, inasmuch as the mother’s voice has a strong impact on several brain regions involved in emotion and communication …’.
Dehaene-Lambertz, G., Montavont, A., Jobert, A., Allirol, L., Dubois, J., Hertz-Pannier, L., & Dehaene, S. (2010). Language or music, mother or Mozart? Structural and environmental influences on infants’ language networks. Brain and Language, 114(2), 53-65. DOI: 10.1016/j.bandl.2009.09.003