Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.

    Models of hearing

    Deep neural networks are computational models that consist of many layers of information-processing units and can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.

    When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
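A common way to quantify this comparison (sketched here with simulated data; the study's exact pipeline, stimuli, and model details are not reproduced) is to fit a regularized linear mapping from a model stage's activations to measured voxel responses, then score how well that mapping predicts responses to held-out sounds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: activations for 100 sound clips at one model stage
# (20 units) and simulated fMRI responses to the same clips (30 voxels).
n_clips, n_units, n_voxels = 100, 20, 30
model_acts = rng.standard_normal((n_clips, n_units))
true_map = rng.standard_normal((n_units, n_voxels))
brain_resp = model_acts @ true_map + 0.3 * rng.standard_normal((n_clips, n_voxels))

# Fit a ridge regression on half the clips, mapping model activations
# to voxel responses; evaluate on the held-out half.
train, test = slice(0, 50), slice(50, 100)
lam = 1.0  # ridge penalty (illustrative value)
X, Y = model_acts[train], brain_resp[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ Y)

# Score the stage: mean correlation between predicted and measured
# held-out responses across voxels. A higher score means this stage's
# representation better matches that brain region.
pred = model_acts[test] @ W
score = np.mean([np.corrcoef(pred[:, v], brain_resp[test, v])[0, 1]
                 for v in range(n_voxels)])
print(round(float(score), 2))
```

Repeating this scoring for each model stage and each brain region is what allows early and late stages to be compared against primary and non-primary auditory cortex, as described below.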

    Hierarchical processing

    The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.

    Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.

    “The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain.”

    Brendon Peterson

    Conclusion

    The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.

    “A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors.”
