The human brain, always stronger than AI – Libération

Cloning, genome sequencing, personalized medicine, data… Technologies are disrupting our lives and societies. The fourteenth edition of the European Forum on Bioethics, of which Libération is a partner, will focus on the theme “Artificial Intelligence and Us.” Leading up to the event, held from February 7th to 10th in Strasbourg, Libération will publish (or republish) in this dossier a series of articles on the topics covered.

Despite the rapid progress of artificial intelligence algorithms, they are still far from equaling the human brain, let alone surpassing it. Researchers at the University of Oxford (UK) have compared the learning mechanisms of machines and the human brain and have highlighted, in a new study, a process used exclusively by biological brains that allows them to be more flexible and efficient than software. This is flattering for our little heads, and may one day improve the reliability of artificial neural networks.

“For both humans and machines, the essence of learning is to identify which components are responsible for errors in the information processing pipeline,” explains the study published in early January in Nature Neuroscience. When we understand where we went wrong, we know what needs to change to do better next time. The reasoning of an artificial intelligence algorithm is a black box: it is impossible to know precisely what it does between the input instruction and the output response. If we have taught a neural network to recognize cats and dogs by providing it with hundreds of photographs, for example, and it has labeled several cat photos as “dog,” how can we correct it when we have no idea of its internal logic? “It has long been assumed that the best way to do this was the backpropagation method,” continue the neuroscientists and computer scientists who authored the study. In short, the neural network analyzes the difference between the expected result and the result it provided, calculates a sort of error rate, and then “traces back” through its reasoning to adjust its parameters until the error rate decreases.
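The idea can be sketched in a few lines of code. This is a minimal illustration of the backpropagation principle (our own toy example, not the study's code): a one-weight model compares its answer to the expected one, derives an error signal, and traces back to adjust the weight.

```python
def train_step(w, x, target, lr=0.1):
    """One gradient-descent step for a one-weight model y = w * x."""
    y = w * x            # forward pass: the network produces an answer
    error = y - target   # how far off was it?
    grad = error * x     # trace back: derivative of the squared error w.r.t. w
    return w - lr * grad # adjust the parameter against the error

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=2.0)
# after repeated corrections, w sits close to 2.0 and the error has shrunk
```

Real networks repeat this same step across millions of weights, which is exactly why their internal logic becomes a black box.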

A “catastrophic” handling of new information

“But learning in the brain is superior to backpropagation in many respects,” the researchers point out. The machine needs a lot of stimuli (lots of images of dogs and cats) to define its learning rules, whereas humans can learn from a single example. And the machine shows a “catastrophic” handling of new information (such as the addition of rabbit photos) after assimilating old data (dogs and cats). Machine learning shows its limitations.

The Oxford researchers attempt a metaphor to explain the problem: “Imagine a bear that sees a river. In the bear’s mind, this sight already predicts the sound of the flowing water and the smell of the salmon.” The three always go together. But one day, “the bear smells the salmon but doesn’t hear the sound of the water, perhaps because of an ear injury.” If its brain operated solely with a backpropagation mechanism, like AI, it would notice that there is an error: the sight of the river did not generate the sound and smell as expected. It would then reconfigure all the parameters of sight, sound, and smell, and “it would reduce its expectation of the smell the next time it sees a river.” This is a known defect of AI, the phenomenon of “catastrophic interference,” where a new association destroys other aspects of previously learned memories that should have remained intact.
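Catastrophic interference is easy to reproduce in miniature. In this toy demonstration (our own, not from the study), a two-weight model first learns an old association, then trains only on a new one; because the new task reuses a shared weight, the old memory is damaged.

```python
def sgd(w, x, target, lr=0.2):
    """One gradient step for a two-weight linear model y = w[0]*x[0] + w[1]*x[1]."""
    y = w[0] * x[0] + w[1] * x[1]
    err = y - target
    return [w[0] - lr * err * x[0], w[1] - lr * err * x[1]]

w = [0.0, 0.0]
for _ in range(100):                  # learn old association A: (1, 0) -> 1
    w = sgd(w, (1, 0), 1.0)
error_a_before = abs(w[0] * 1 + w[1] * 0 - 1.0)   # near zero: A is learned

for _ in range(100):                  # now train only on B: (1, 1) -> 0
    w = sgd(w, (1, 1), 0.0)
error_a_after = abs(w[0] * 1 + w[1] * 0 - 1.0)    # much larger: A was overwritten
```

Like the bear revising its expectation of the salmon's smell, the network reconfigured a weight that the old memory still depended on.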

“Prospective Configuration”

The biological brain, on the other hand, does a much better job of continuing to imagine the smell of salmon when hearing is impaired. It adapts better to changes in parameters because, according to the Oxford neuroscientists studying the equations that model the connections formed between our neurons, it sets in motion another type of mechanism they call “prospective configuration,” from which AI should draw inspiration. In short: instead of adjusting the parameters of their reasoning and then checking whether the result is better, neural networks should first settle on the expected outcome (smelling the salmon), and only then adjust the parameters to achieve it. Then, all that’s left is to teach computers to fish.
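The principle can be loosely sketched in code. This is our own illustrative toy, not the paper's algorithm: for a two-layer chain (input → hidden → output), we first decide what the hidden neuron's activity *should* be to produce the desired outcome, then nudge each weight locally toward its own target, rather than propagating the output error back through everything.

```python
def prospective_step(w1, w2, x, y_target, lr=0.5):
    """Settle on target activities first, then adjust each weight locally."""
    # step 1: infer the hidden activity that would produce the desired output
    h_target = y_target / w2 if w2 != 0 else y_target
    # step 2: each weight moves toward producing its layer's target activity
    h = w1 * x
    w1 += lr * (h_target - h) * x
    w2 += lr * (y_target - w2 * h_target) * h_target
    return w1, w2

w1, w2 = 0.5, 1.0
for _ in range(40):
    w1, w2 = prospective_step(w1, w2, x=1.0, y_target=2.0)
# the chain w1 * x * w2 now outputs roughly the expected result
```

The key difference from the backpropagation sketch above is the order of operations: the desired activity is fixed before any weight changes, so weights unrelated to the error are left largely undisturbed.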
