Neural modelling of the encoding of fast frequency modulation

Citation: Tabas A, von Kriegstein K (2021) Neural modelling of the encoding of fast frequency modulation. Published: March 3, 2021. Copyright: © 2021 Tabas, von Kriegstein. University of California at Berkeley, UNITED STATES.

Abstract

Frequency modulation (FM) is a basic constituent of vocalisation in many animals as well as in humans. In human speech, short rising and falling FM-sweeps of around 50 ms duration, called formant transitions, characterise individual speech sounds. There are two representations of FM in the ascending auditory pathway: a spectral representation, holding the instantaneous frequency of the stimuli, and a sweep representation, consisting of neurons that respond selectively to FM direction. To date, computational models have used feedforward mechanisms to explain FM encoding. However, from neuroanatomy we know that there are massive feedback projections in the auditory pathway. Here, we found that a classical FM-sweep perceptual effect, the sweep pitch shift, cannot be explained by standard feedforward processing models. We hypothesised that the sweep pitch shift is caused by a predictive feedback mechanism. To test this hypothesis, we developed a novel model of FM encoding incorporating a predictive interaction between the sweep and the spectral representations. The model was designed to encode sweeps of the duration, modulation rate, and modulation shape of formant transitions. It fully accounted for experimental data that we acquired in a perceptual experiment with human participants, as well as for previously published experimental results. We also designed a new class of stimuli for a second perceptual experiment to further validate the model. Combined, our results indicate that predictive interaction between the frequency-encoding and direction-encoding neural representations plays an important role in the neural processing of FM. In the brain, this mechanism is likely to occur at early stages of the processing hierarchy.

Author summary

Humans' ability to understand and produce speech is one of the most fascinating developments of evolution, and it is critical for smooth daily routines at the individual and societal levels. The computational mechanisms that the human brain uses to excel at speech recognition are far from understood. One of the fundamental building blocks of speech is the so-called formant transitions, which characterise different speech sounds. To date, formant transitions have been assumed to be processed according to a representational framework. In this view, the brain processes auditory signals in a hierarchical, constructive way, where the higher levels of the hierarchy, which represent the formant transition directions, are informed by the neural representations of individual frequencies at the lower levels, but not vice versa. Here, we show that the representational framework does not fully explain human behaviour. Instead, we develop a novel computational model in which the neural representations of formant transitions influence the lower-level representations. This mechanism effectively increases the speed and efficiency of the recognition of formant transitions. The model explained previously unaccounted-for phenomena in human perceptual behaviour. These neural principles can be extended to other auditory processing networks and sensory modalities, and can be incorporated into neurobiologically inspired automatic speech recognition algorithms.
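To make the feedback idea concrete, here is a minimal toy sketch, not the authors' published model. It assumes, purely for illustration, Gaussian-tuned spectral channels, a sweep unit that accumulates the sign of frequency change, and a hypothetical `feedback_gain` parameter through which the sweep unit pre-activates channels ahead of the current frequency. The decoded "pitch" (the centroid of accumulated spectral activity) then ends up biased in the sweep direction, qualitatively mimicking a sweep pitch shift:

```python
import numpy as np

# Toy predictive-feedback sketch (illustrative only, not the published model):
# spectral units are Gaussian-tuned frequency channels; a sweep unit
# accumulates the direction of frequency change and feeds back a bias
# that pre-activates channels ahead of the current instantaneous frequency.

def encode_sweep(f_start_hz, f_end_hz, dur_s=0.05, dt=0.001,
                 channels=np.linspace(500.0, 3000.0, 251), sigma=100.0,
                 feedback_gain=0.5):
    times = np.arange(0.0, dur_s, dt)
    freqs = np.linspace(f_start_hz, f_end_hz, len(times))  # linear FM sweep
    activity = np.zeros_like(channels)
    direction = 0.0  # sweep-unit state: positive for rising, negative for falling
    for i, f in enumerate(freqs):
        if i > 0:
            direction += np.sign(freqs[i] - freqs[i - 1])
        # feedback shifts the effective spectral drive toward predicted frequencies
        predicted = f + feedback_gain * sigma * np.tanh(direction / len(times))
        activity += np.exp(-0.5 * ((channels - predicted) / sigma) ** 2)
    # decoded "pitch": centre of mass of the accumulated spectral activity
    return np.sum(channels * activity) / np.sum(activity)

pitch_up = encode_sweep(1000.0, 1500.0)    # rising 50 ms sweep
pitch_down = encode_sweep(1500.0, 1000.0)  # falling 50 ms sweep
print(pitch_up, pitch_down)  # centroids biased above/below the 1250 Hz midpoint
```

Without the feedback term (`feedback_gain=0`), both sweeps decode to the same centroid, which is the feedforward behaviour the abstract argues cannot account for the perceptual effect.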