Brain, language acquisition and bilingualism Part one

Text by Fernanda Pérez-Gay Juárez

Photos by Priscila Vanneuville



Brain mechanisms of language acquisition

“… The mysterious fraternity which creates

the fact of calling the same things with the

same names since we are children.”

-Pedro Salinas, in Defensa del lenguaje (Defense of Language)

One of the most striking examples of brain plasticity—the ability of our nervous system to generate new connections and take on different functions through learning—is certainly the possibility of using more than one linguistic code to communicate with the world. Bilinguals and multilinguals exemplify the enormous potential of our brain to adapt to, adopt and manipulate new stimuli. And not only that: when we look closely at the processes of language acquisition, including in those who learn just one language, we see reflected the immense flexibility that allows the brain to use sounds to communicate internal states, thoughts, emotions and intentions.


Our brain has a number of areas that act in a coordinated manner to understand and produce language. This capability is one of the main features that differentiate us from other species, including the primates closest to us from a genetic point of view. Despite being able to produce a series of complex vocalizations to communicate some basic drives, no other primate can articulate speech through combinations of phonetic signs. Curiously, scientific evidence has shown not only that our brain is equipped from birth to learn and produce language, but also that it is prepared to do so with any language to which we are exposed—even with more than one at a time.

Before trying to explain the brain's ability to learn multiple languages, we had better take a couple of steps back and think about how we acquire our first language from birth. How does a newborn learn to understand and combine a series of sounds to generate meaningful speech and communicate with his or her peers? Although the question is not at all easy to answer, several studies in linguistics and neuroscience have contributed some pieces with which to start putting together the puzzle of the flowering of language in children.



Infants as “linguistic universalists”

Children’s brains have a unique ability to decipher the codes of language, one the adult brain lacks. To understand why children have such a clear language-learning advantage, it is essential to examine how the infant brain becomes “committed” to the patterns of intonation and word formation to which we are exposed in the early years of life.

Just as we were taught in elementary school, phonemes are combinations of sounds that form the “bricks” of language—for example, PE, EF, TE, ES. Phonetic units are the individual sounds that make up the phonemes: “P”, “E”, “F”, “T”. Each language has approximately forty phonemes, which combine to form words.


In the seventies, a study by the psychologist Peter Eimas showed for the first time that children under one year of age have a special talent for detecting the acoustic changes that differentiate the phonetic units of languages all around the world. Regardless of the culture in which they grow up, all infants are highly able to distinguish the subtle changes that mark the boundary between two different phonemes (e.g., between GE and TE, or between EL and EM), even for phonemes that do not exist in the language to which they are exposed. This ability has been called “linguistic universality,” and it is what allows children to learn any language to which they are exposed.

Compared with children, adults perform very poorly on tasks of discerning phonemes. In fact, adults only distinguish between the phonemes of the languages they speak fluently. The question, then, is: what happens after one year of age, when we become incapable of distinguishing the boundary between phonemes that do not belong to our language? Wouldn’t it be wonderful if we could keep this ability to discriminate and recognize the sounds of any language? Why, then, does human nature involve the loss of this capacity for linguistic universality?



Motor patterns of language

Coincidences do not exist in genetics or biology. We shouldn’t be surprised that this loss of phonetic recognition happens just as the infant starts to babble his or her first words. This loss, which may seem tragic, is not a failure of nature. To learn a specific language—our mother tongue—it is not enough to merely recognize the differences between its phonemes. We need more precision: we must learn each of the phonetic variations of the language and estimate how likely this or that speech sound is to occur. This focusing of the brain on one language is called neural commitment to the native language.

Our nervous system’s circuits specialize during early childhood to detect—and eventually emulate—the phonetic components (sounds) and prosody (tone) of our mother tongue more effectively. This change in the brain circuits of language involves not only storing the learned phonemes in auditory (sensory) circuits, but also generating motor circuits—those of language production—that correspond to the sounds the child hears.


Language production necessarily requires imitation. To develop speech, the child must imitate the pace, the tone and the sound structure of his or her native language. In the brain, this causes changes in the motor areas—both Broca’s area and the adjacent region dedicated to the movement of the vocal apparatus—encoding in their neurons the so-called generation patterns of language, or motor patterns of language. These circuits encode a sort of instruction set for producing and combining the sounds that correspond to the first language learned.

These changes in our brain, generated during the first year after birth, will persist for the rest of our lives and will also affect us when we try to learn languages later on. To use the needed phonemes correctly, our brain “commits” to them through these motor circuits. This commitment involves decreased attention to the phonemes that do not distinguish words in the child’s language, which will prevent him or her from learning patterns that do not match those of the native language. This explains, for example, why Japanese speakers lose the ability to distinguish between /r/ and /l/: in Japanese there is no difference between those two sounds that is relevant to forming words.



The loss of “linguistic universality” is the price to pay in order to articulate properly the sounds we need to communicate with our immediate environment. This sacrifice is not absurd: articulation is no easy task; it requires much training, and children do not master it completely until about eight years of age.

Stages of language acquisition

Let’s now go through the stages by which an infant develops this ability, unique to the human species. Learning a language implies acquiring knowledge of its specific properties: the phonetic repertoire (roughly forty phonemes), its words or lexicon (combinations of those phonemes, each with an associated meaning), and the complex grammatical information required to structure sentences correctly.



As already described, during the first few months of life babies react to different languages equally, regardless of exposure. Over time, however, a child between four and five months of age begins to orient toward the familiar language faster than toward any unknown one. After six months, one of the first stages of language acquisition begins: the establishment of the “phonetic repertoire.” At this time, infants perfect their recognition of a set of sounds as their own language; this learning involves losing sensitivity to the sounds that do not belong to it. In other words, while children begin to show less sensitivity to speech sounds that are not present in their immediate environment, their ability to perceive those of the language to which they are exposed increases exponentially.


During this stage, the most frequent phonemes of the mother tongue become established fastest. Low-frequency phonemes take longer to be recognized and therefore to be produced (e.g., phonemes with X in Spanish—such as XA, XE or XO—which appear much less often in speech than phonemes such as PA, GA or TA). This recognition does not happen through hearing alone: differences in gesticulation are one of the most important supports for phonetic discrimination, reinforcing the idea that socialization is crucial for acquiring language. Studies have shown that children between six and seven months of age can also discriminate phonemes of the language in which they are immersed by reading the lips of the speaker.

The second phenomenon, word learning, means recognizing complex structures formed by chains of phonemes and associating them with a concept—usually, the first words name objects or people close to the child in question. This happens shortly before the first year, as the child begins to articulate sounds in imitation of adults. To facilitate the process, the child recognizes that certain combinations of sounds occur together more frequently, and identifies sounds that usually appear at the beginning or end of a word. This helps the brain to segment sound sequences and distinguish one word from another.



To associate these combinations of phonemes with concepts, when children start talking they mentally identify each word with a single object. If we put a child in front of a picture of a snake and teach her to associate the word with the picture, the child will eventually assume that “snake” is the only way to describe it. If we then put her in a room with the picture of the snake and a picture of something unknown to her (a car, a pencil or a key) and say a word she has never heard, the little girl will immediately turn to the unknown object. Although we can later assign more than one label to each object in the world, during the learning process our brain can only match a word to a single object. This is called the “principle of mutual exclusivity,” which, as we shall see later, is not present in bilinguals, whose brains can associate two labels—one in each tongue—with each object they need to name.

Socialization is critical at this stage of learning. Those close to the child point to the surrounding objects as they say the words that name them, so that the child begins to associate certain combinations of sounds with what he or she finds in the world. Some studies have also shown that infants follow the eyes of their interlocutors when they speak, identifying what they are referring to, as if the gaze “pointed at” one object or another.



Finally, once the child starts to articulate and tries to put words together in sentences, he or she reaches the stage of learning the precise grammatical rules of the language. During this process, through corrections from parents and the people around, the child learns the correct sequences for stringing phrases together in the language in question. The linguist Noam Chomsky argues that there must be a genetic basis encoding a “universal grammar,” since the examples of language the child hears every day may not be enough to learn all the rules and grammatical subtleties of the language. Chomsky called this the “poverty of the stimulus,” referring to how limited the language we hear every day is compared to the grammatical complexity of a language.

Therefore, in his view, there must be a genetic component that grants us the ability to use grammar correctly, regardless of the language in question. However, it is not easy to find a biological grounding for such an assumption. To begin to explore whether there are genes that convey the “universal grammar,” we would first have to understand how it is encoded in the neurons of Broca’s area, and then ask what genes might influence the development of these “circuits” or neural patterns. Needless to say, at the level of biological science we are still far from such discoveries.



Language: innate or learned?

One of the most important debates in linguistics today revolves around the question: are we born with the ability to produce language, or do we learn to speak through parental conditioning, like other behaviors? Skinner, the greatest representative of behaviorism, argued that we learn to speak thanks to the instruction of our parents, based on trial and error: positive reinforcement for correct use of language and negative reinforcement when we make mistakes. Chomsky, however, was the first to contradict this behaviorist hypothesis, arguing that we have an innate predisposition for language. He suggested that our brains must contain a language acquisition device that allows every one-year-old to produce vocalizations with some meaning.

As the preceding paragraphs show, both were wrong. It is true that the ease with which we form sensory and motor language circuits reveals an innate ability to learn and produce language—probably encoded in the genes—which is absent in other species capable of other types of learning. However, considering that imitation is crucial to the brain’s malleability that allows us to start talking, and that phoneme discrimination draws on lip reading—an activity that necessarily implies the presence of an interlocutor—it is clear that genes by themselves are not enough to generate language.



Recently, special emphasis has been placed on the importance of socialization in developing language skills. Children who learn a language via software, audio or video show lower performance than those who are motivated by people close to them through games and activities that involve them. The so-called “social brain”—the brain structures involved in our exchanges with other human beings—is deeply involved in language learning.

There is still much to investigate, but when we ask how we came to communicate through the complex code we call language, one thing is clear: this capability is further evidence that Aristotle was right when he said that “man is a social animal.”

To be continued…
