Is ChatGPT Mocking My Baby in the Language-Learning Ring?

  • The feats of generative AI suggest that humans will soon be replaced.
  • Yet babies’ learning continues to serve as inspiration for training machines. While a 14-month-old baby cannot write an essay the way ChatGPT can, he excels in other areas.
  • In this first episode of our series “Man vs. Machine”, we study language learning.

“Mom”, “dad”, “baby”… At just 14 months, my baby has learned to babble about ten words and to tirelessly repeat “no”. You might expect us to shed a tear at each new addition to his vocabulary (his first “poop” was applauded, we must admit). In reality, we barely notice the newcomers, drowned as they are in diction exercises worthy of Armande Altaï on Star Academy (2001, we admit it). Amid the soup of “dibiditapatoutabouba”, “bird” sounds like a lucky coincidence.

Watching my baby struggle to say “ta” for “cat” during an all-too-early morning wake-up, I remembered a 2018 meeting with Yann Le Cun, director of AI research at Meta (FAIR), in which the deep learning pioneer explained that he was collaborating with the neurolinguist Emmanuel Dupoux, a specialist in infant learning, to try to unravel the mysteries of babies’ learning power. The idea is to draw inspiration from babies to build more effective AIs. A child takes about three years to produce complex language. Where would a machine be after 14 months of training? In the arena of speech learning, would my baby be floored by today’s algorithms or, on the contrary, knock out language models like ChatGPT, developed by OpenAI?

“Dog” or “whimper”?

“Human language is of unparalleled complexity, and the only agent that learns language effectively is the baby,” explains Marvin Lavechin, an AI specialist and expert in models of language acquisition who worked in Emmanuel Dupoux’s team. Before expressing themselves in a complex form of language, babies go through universal stages. “The child first produces vowels, then syllables, which are harder to pronounce in terms of oral motor skills,” says Séverine Alonso-Bekier, a psychomotor therapist. “After that, he combines syllables with one another and begins to form words” to build a sentence. At three years old, the child masters complex language and structured sentences, with notions of space and time. He no longer merely names an object. “He will say ‘my toy in the room’. He masters a certain number of parameters,” she adds.

One might think that three years to learn a language is a long time. In reality, imagine being immersed in Japan for three years without a dictionary or a translator: you would not come out bilingual. You would barely be able to distinguish certain sounds and understand a few others. The child’s cognitive performance is impressive. Except that ChatGPT can compose a philosophy essay at a master’s level (bac + 5), a legal plea, a political speech… It even passes the entrance exams of prestigious schools. A three-year-old child can do none of that, still less my 14-month-old baby, who calls his dad “mom” half the time. Is the match decided in advance? Not so fast.

Knowing a word means many things: associating a sound with an object, knowing that the word “dog” stands for the animal; or being able to recognize that the word “dog” belongs to the language, without necessarily understanding it. “We ran tests. We give the algorithm the word ‘dog’ and a word that sounds like a real word but isn’t, ‘whimper’, for example. Does the algorithm identify the word ‘dog’ as belonging to the language? We find that they learn exponentially more slowly than children,” Marvin Lavechin explains. They need infinitely more data to achieve the same result. “There is an ocean between the learning speed of babies and that of AI,” the researcher observes. And the data babies learn from is far more complex.
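For readers curious what such a test can look like in practice, here is a minimal, text-level sketch of the spot-the-word idea: score a real word against a made-up one and check which string the model finds more probable. It is only an illustration of the principle; the actual experiments probe speech models trained from raw audio, and the GPT-2 checkpoint, carrier phrase and pseudoword below are stand-ins, not the team’s materials.

    # A minimal, text-level sketch of the spot-the-word idea, not the team's
    # actual speech-based protocol. GPT-2 and the example phrases below are
    # stand-ins chosen for illustration only.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def avg_log_likelihood(text: str) -> float:
        """Average log-probability the model assigns to the string."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels equal to the inputs, the returned loss is the mean
            # negative log-likelihood per token.
            loss = model(ids, labels=ids).loss
        return -loss.item()

    real = "the dog barked"     # carrier phrase containing a real word
    pseudo = "the dorg barked"  # same phrase with a made-up, word-like form

    winner = "real" if avg_log_likelihood(real) > avg_log_likelihood(pseudo) else "made-up"
    print(f"The model prefers the {winner} word.")

A speech-based version of the test works the same way at a high level, except that the scores come from a model of raw audio rather than of text.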

A slow and illogical machine

Today, algorithms are trained on audiobooks: very articulate speech, with no background noise or variation in sound. A child, by contrast, learns in a noisy environment where two conversations may be going on at once and where outside noise drowns out voices. I sometimes talk to my baby from the kitchen while a New Order vinyl crackles next to him. In the presence of another adult, I don’t always take the time to articulate as I would alone with him. Would a machine cope with this type of data?

Researchers have in fact tried to put AI in the same learning situation as a baby. They fitted very young children, from birth to two or three years old, with microphones and collected all the speech the infants heard over the course of the day. “We collected this data and trained our learning models on these recordings,” says Marvin Lavechin. “The algorithm faces all sorts of situations: a mother telling a story to her child, with speech that is close and well articulated; a mother talking while the television plays in the background. On this data, the algorithms fail miserably,” smiles the AI specialist.
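To get a feel for the mismatch between clean training audio and a child’s messy acoustic world, here is a minimal sketch that feeds both kinds of recording to a speech model pretrained on read speech. The wav2vec 2.0 checkpoint and the file names are assumptions made for illustration; the article does not specify which models or recordings the team actually used.

    # A sketch only: probe a speech model pretrained on clean read speech with
    # a noisy, daylong-style home recording. The checkpoint and file paths are
    # hypothetical stand-ins.
    import torch
    import torchaudio
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    CLEAN_CLIP = "audiobook_excerpt.wav"                 # clean, articulate speech
    NOISY_CLIP = "kitchen_with_vinyl_in_background.wav"  # overlapping talk, music, clatter

    extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
    model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

    def embed(path: str) -> torch.Tensor:
        """Load a clip, downmix to mono at 16 kHz, and return frame-level features."""
        waveform, sample_rate = torchaudio.load(path)
        mono = torchaudio.functional.resample(waveform.mean(dim=0), sample_rate, 16_000)
        inputs = extractor(mono.numpy(), sampling_rate=16_000, return_tensors="pt")
        with torch.no_grad():
            return model(**inputs).last_hidden_state  # shape: (1, frames, hidden_dim)

    # Downstream probes (such as the spot-the-word test above) can then measure
    # how much the noisy, child-centered audio degrades what the model extracts.
    print(embed(CLEAN_CLIP).shape, embed(NOISY_CLIP).shape)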

Within a few years, children intuitively know how to conjugate verbs and pick up the basics of physics by observing the world. “After two months, babies understand the notion of object permanence: when an object is hidden, it has not disappeared. In the meantime, they have had to understand that the world is three-dimensional and that objects can sit in front of other objects. Around 8 months, they understand that an unsupported object will fall. Gravity, the effect of inertia…,” Yann Le Cun pointed out during that discussion. Today’s machines can produce language and text, after training on gigantic amounts of data, but they have no basic logic. Do we really need to say who wins the fight?



