
An international group of scientists alternated the training of a spiking neural network with periods of “sleep”, allowing it to perform two different operations without overwriting the connections learned during the first task. Motherboard reports.
“Now there is a huge trend to use the ideas of neuroscience and biology to improve the performance of machine learning algorithms. Sleep is one of them,” said Maxim Bazhenov, co-author of the study and a researcher at UC San Diego.
Artificial neural networks often reach superhuman performance. However, when it comes to sequential learning, that is, solving one problem after another, they cannot acquire new knowledge without losing the old.
“After good AI training, it is very difficult to teach an entirely new operation. And if it does succeed, the old memory is damaged,” said Pavel Sanda, co-author of the study and a researcher at the Czech Academy of Sciences.
He explained that in neuroscience this phenomenon is called “catastrophic forgetting”, and it can only be overcome through “memory consolidation”. This is the process that transforms recent short-term memories into long-term ones, and it often occurs during REM sleep.
According to the scientist, this reorganization of memory is a large part of why people need sleep at all. If the process malfunctions or is interrupted, a person can suffer serious mental impairment.
“This phenomenon can be observed in older people who recount childhood events in detail but struggle to recall what they ate for lunch yesterday,” Sanda said.
The researchers built on previous work in memory plasticity and sleep modeling, using a neural network to simulate sensory processing and reinforcement learning in an animal’s brain.
The scientists gave the model two separate tasks, in each of which it was trained to distinguish reward from punishment.
They then tested whether the AI would exhibit “catastrophic forgetting”. It did: each training session on the second task destroyed the knowledge gained on the first.
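The failure mode the researchers tested for is easy to reproduce in miniature. The sketch below is purely illustrative and is not the authors’ model: a single linear unit is trained on a hypothetical “task A”, then trained only on “task B”, after which its weights have drifted to the second target and its error on the first task climbs back up.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
u = rng.normal(size=d)  # "task A": predict x . u
v = rng.normal(size=d)  # "task B": predict x . v
w = np.zeros(d)

def sgd(w, target, steps=3000, lr=0.02):
    """Plain stochastic gradient descent on squared error."""
    for _ in range(steps):
        x = rng.normal(size=d)
        w = w + lr * (x @ target - x @ w) * x
    return w

def mse(w, target, n=1000):
    X = rng.normal(size=(n, d))
    return float(np.mean((X @ w - X @ target) ** 2))

w = sgd(w, u)
err_A_after_A = mse(w, u)   # task A learned: error near zero
w = sgd(w, v)               # now train only on task B...
err_A_after_B = mse(w, u)   # ...and task A has been overwritten
print(err_A_after_A, err_A_after_B)
```

Because nothing in the second training phase protects the first task’s solution, the weights simply converge to task B, and task A’s error returns to roughly its untrained level.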
But when the scientists made the algorithm mimic biological sleep, activating its artificial neurons with noise, they saw progress. Rapidly alternating these rest-like phases with training on the second task allowed the model to “remember” how to perform the first.
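The shape of that schedule can be caricatured as follows. This is a hypothetical sketch, not the authors’ code: their actual model uses spiking neurons with local plasticity, whereas here a supervised step on “task 2” simply alternates with a noise-driven “sleep” step that applies an unsupervised, local Hebbian update.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))

def train_step(W, x, y, lr=0.05):
    """Supervised delta-rule step on one (input, target) pair."""
    out = np.tanh(W @ x)
    err = y - out
    return W + lr * np.outer(err * (1 - out ** 2), x)

def sleep_step(W, lr=0.01):
    """'Sleep' phase: neurons driven by noise, weights nudged by a
    local Hebbian rule (no labels, no task data involved)."""
    x = rng.normal(size=n_in)
    out = np.tanh(W @ x)
    return W + lr * np.outer(out, x)

# Interleave: each supervised step on the second task is followed by a
# sleep phase, mirroring the rapid alternation described in the article.
for _ in range(50):
    x = rng.normal(size=n_in)
    y = rng.choice([-1.0, 1.0], size=n_out)
    W = train_step(W, x, y)
    W = sleep_step(W)
```

The point of the schedule is that the label-free noise phases give consolidation-like updates a chance to stabilize earlier connections between bouts of new training, rather than letting the second task’s gradients run unopposed.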
“This is another good demonstration that simple principles can lead to not-so-simple effects,” Sanda said.
Recall that in September, a British startup created a machine learning model for driving different types of cars.
In August 2021, DeepMind developed a generic Perceiver IO architecture to handle all types of inputs and outputs.
In February, Sonantic created an AI algorithm that mimics flirtatious speech patterns with new “non-verbal sounds” including sighs, pauses and light laughter.