New research has shown that the brain’s strategy for storing memories is more efficient than that of artificial intelligence (AI). The new study, conducted by SISSA scientists in collaboration with the Kavli Institute for Systems Neuroscience & Center for Neural Computation in Trondheim, Norway, was published in Physical Review Letters.
Over the past few decades, artificial intelligence has achieved outstanding results in several fields. Chess is one of them: in 1996 the Deep Blue computer won a game against the reigning world champion, Garry Kasparov, and in 1997 it defeated him in a full match. Neural networks, biological or artificial, learn by changing the connections between their neurons. As connections strengthen or weaken, some neurons become more active and others less, until a stable pattern of activity emerges: this pattern is what we call “a memory”. The AI strategy is to tune and optimize the connections iteratively, using long and complex algorithms.
The brain does this in a much simpler way: each connection between two neurons changes based only on how active those two neurons are at the same time. Compared with AI algorithms, this local rule had long been thought to store fewer memories. That received wisdom, however, rests on analyses of networks built on a fundamental simplification: that neurons can be treated as binary units. The new research shows that the apparent disadvantage of the brain’s strategy hinges on exactly this unrealistic assumption. When the brain’s simple rule for modifying connections is combined with biologically plausible models of how individual neurons respond, it performs as well as, or better than, AI algorithms. How can this be?
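To make the contrast concrete, the local learning rule described above can be illustrated in the classic binary-unit setting that the text says earlier analyses relied on (the simplification the new study relaxes). The sketch below is a standard Hopfield-style network, not the paper’s model; the network size, number of patterns, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100   # neurons (illustrative size)
P = 5     # stored patterns

# Random binary patterns (+1/-1), one per row.
patterns = rng.choice([-1, 1], size=(P, N))

# One-shot Hebbian learning: each weight depends only on the joint
# activity of the two neurons it connects, summed over the patterns.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=20):
    """Iterate the network dynamics until activity settles."""
    s = state.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties toward +1
    return s

# Corrupt a stored pattern by flipping 10% of the neurons, then recall it.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1
recovered = recall(probe)
overlap = recovered @ patterns[0] / N  # 1.0 would be a perfect match
```

With the memory load well below this network’s capacity, the corrupted input is pulled back to (or very near) the stored pattern; the point of the new study is that the same local rule, paired with more realistic neuron models, does better than this binary caricature suggests.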
Paradoxically, the answer lies in introducing errors. A recalled memory need not be identical to the originally stored input; it only needs to be related to it. The brain’s strategy retrieves memories that differ from the original input by silencing the neurons that are only barely active in any pattern. Those silenced neurons play no crucial role in distinguishing between the different memories stored in the same network, so ignoring them frees neural resources for the neurons that matter, enabling a higher storage capacity.
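The silencing idea can be sketched with a simple threshold rule: units whose activity falls below a cutoff are zeroed out, so only the strongly active neurons carry the recalled pattern. The threshold value and the toy activity vector below are illustrative assumptions, not the paper’s actual model:

```python
import numpy as np

def silence_weak_units(activity, theta):
    # Zero out units whose activity falls below the threshold theta;
    # only strongly active neurons contribute to the recalled memory.
    out = activity.copy()
    out[out < theta] = 0.0
    return out

# Toy activity pattern: a mix of strongly and barely active neurons.
pattern = np.array([0.05, 0.9, 0.02, 0.7, 0.1])
recalled = silence_weak_units(pattern, theta=0.2)
```

The recalled pattern is no longer identical to the input, but the units that were zeroed carried little information for telling memories apart, which is exactly the trade-off the article describes.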
Overall, this research highlights how biologically plausible self-organized learning procedures can be just as efficient as slow and neurally implausible training algorithms. (ANI)
(This story has not been edited by Devdiscourse staff and is automatically generated from a syndicated feed.)