For healthy hearing, timing matters

When sound waves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that allows us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or a crying child.

Neurons send signals by emitting spikes, brief changes in voltage that propagate along nerve fibers, also known as action potentials. Remarkably, auditory neurons can fire hundreds of spikes per second, timing them with exquisite precision to match the oscillations of incoming sound waves.

Using powerful new models of human hearing, scientists at MIT's McGovern Institute for Brain Research have determined that this precise timing is essential for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.

The open-access findings, reported December 4 in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. MIT professor and McGovern investigator Josh McDermott, who led the research, says his team's models also equip researchers to study the consequences of different types of hearing impairment and to devise more effective interventions.

Learning sound

The nervous system's auditory signals are timed so precisely that scientists have long suspected timing is important to our perception of sound. Sound waves oscillate at rates that determine their pitch: low-pitched sounds travel in slow waves, while high-pitched sound waves oscillate more frequently. The auditory nerve, which relays information from sound-detecting hair cells in the ear to the brain, generates electrical spikes that correspond to the frequency of these oscillations. "Action potentials in the auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform," explains McDermott, who also heads the MIT Department of Brain and Cognitive Sciences.
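
To make phase locking concrete, here is a minimal sketch of an idealized auditory nerve fiber whose firing probability simply follows the rectified stimulus; all parameter values and names are illustrative, not taken from the study:

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the study)
fs = 20_000          # sampling rate, Hz
freq = 250           # tone frequency, Hz
duration = 0.1       # stimulus duration, seconds
peak_rate = 300.0    # peak firing rate, spikes/s

t = np.arange(0, duration, 1 / fs)
tone = np.sin(2 * np.pi * freq * t)

# Half-wave rectify: the fiber fires mostly on one phase of the wave,
# so the instantaneous firing rate tracks the positive part of the stimulus.
rate = peak_rate * np.maximum(tone, 0.0)

# Draw spikes as an inhomogeneous Poisson process (Bernoulli per sample).
rng = np.random.default_rng(0)
spikes = rng.random(t.size) < rate / fs
spike_times = t[spikes]

# Spike times cluster near the stimulus peaks: the phase locking described above.
phases = (spike_times * freq) % 1.0
print(f"{spike_times.size} spikes; mean phase {phases.mean():.2f} (peaks fall at 0.25)")
```

Running the sketch shows the spikes bunching around a fixed phase of each cycle, which is the sense in which timing in the auditory nerve carries information about the waveform itself.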

This relationship, known as phase locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven't really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: "If you want to design a prosthesis that provides electrical signals to the brain to recreate the function of the ear, it's arguably pretty important to know what kinds of information in the normal ear actually matter," he says.

This has been difficult to study experimentally: animal models can't offer much insight into how the human brain extracts structure from language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler PhD '24 turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information might be decoded, but until recent advances in computing power and machine learning methods, those models were limited to simulating simple tasks. "One of the problems with these prior models is that they're often way too good," says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people asked to do the same thing. "This is not the kind of task that we do every day in hearing," Saddler points out. "The brain is not optimized to solve this very artificial task." This mismatch limited the insights that could be drawn from that earlier generation of models.

To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do the things people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from roughly 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.
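
The article doesn't describe the network's architecture, but as a rough sketch of the setup it outlines, simulated auditory-nerve activity feeding a network trained on a real-world task, something like the following captures the shape of the problem. Apart from the 32,000 input neurons mentioned above, every layer, shape, and name here is an assumption:

```python
import torch
import torch.nn as nn

N_FIBERS = 32_000    # simulated sound-detecting neurons (from the article)
N_TIMEBINS = 100     # time bins of nerve activity per sound clip (assumed)
N_WORDS = 800        # size of a word-recognition vocabulary (assumed)

class AuditoryModel(nn.Module):
    """Toy stand-in for a task-optimized model of the auditory system."""
    def __init__(self):
        super().__init__()
        # Compress the fiber dimension, then process the time course.
        self.frontend = nn.Linear(N_FIBERS, 256)
        self.temporal = nn.Conv1d(256, 256, kernel_size=5, padding=2)
        self.readout = nn.Linear(256, N_WORDS)

    def forward(self, nerve_activity):                    # (batch, time, fibers)
        x = torch.relu(self.frontend(nerve_activity))     # (batch, time, 256)
        x = torch.relu(self.temporal(x.transpose(1, 2)))  # (batch, 256, time)
        return self.readout(x.mean(dim=2))                # (batch, words)

model = AuditoryModel()
fake_input = torch.rand(2, N_TIMEBINS, N_FIBERS)  # stand-in nerve responses
logits = model(fake_input)
print(logits.shape)  # torch.Size([2, 800])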

McDermott says the scientists showed that their model replicated human hearing better than any previous model of auditory behavior. In one test, the artificial neural network was asked to recognize words and voices in dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. In every condition, the model performed much like human listeners.

When the team degraded the timing of spikes in the simulated ear, however, their model could no longer match humans' ability to recognize voices or identify the locations of sounds. For example, while McDermott's team had previously shown that people use pitch to help them identify people's voices, the model revealed that this ability is lost without precisely timed signals. "You need quite precise spike timing in order to both account for human behavior and perform well on the task," Saddler says. That suggests the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.
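
The article doesn't specify how the timing was degraded; one standard way to remove timing information from a spike train is to jitter each spike time, as in this illustrative sketch (not necessarily the manipulation used in the study):

```python
import numpy as np

def jitter_spikes(spike_times, jitter_ms, rng):
    """Degrade timing information by adding Gaussian jitter to each spike time."""
    noise = rng.normal(0.0, jitter_ms / 1000.0, size=spike_times.shape)
    return np.clip(spike_times + noise, 0.0, None)  # keep times non-negative

rng = np.random.default_rng(1)
spikes = np.sort(rng.uniform(0, 0.1, size=50))  # stand-in spike train, seconds
for jitter_ms in (0.1, 1.0, 5.0):               # mild to severe degradation
    degraded = jitter_spikes(spikes, jitter_ms, rng)
    shift_ms = np.abs(degraded - spikes).mean() * 1000
    print(f"jitter {jitter_ms} ms -> mean displacement {shift_ms:.2f} ms")
```

Feeding progressively jittered input to a trained model, and checking where task performance falls below human levels, is the kind of comparison the result described above relies on.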

The team's discoveries demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear shapes our perception of the world, both when hearing is intact and when it is impaired. "The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors," McDermott says.

"Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, 'If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?'" McDermott says. "That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants." For example, he says, "a cochlear implant is limited in various ways; it can do some things and not others. What's the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that."
