Computer model mimics human audiovisual perception

A neural computation originally found in insects has now been shown to shed light on how humans merge sight and sound—even when illusions make us “hear” things we don’t actually see. Dr. Cesare Parise from the University of Liverpool has developed a biologically inspired model rooted in this principle.

Unlike older approaches that relied on abstract parameters, this new framework processes genuine audiovisual data from real-world inputs. The research, now published in eLife as the final Version of Record after a prior Reviewed Preprint, was described by editors as a significant and strongly evidenced contribution.
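The "neural computation originally found in insects" is not spelled out above; in the author's prior work it corresponds to a Hassenstein-Reichardt-style correlation detector, first described for insect motion vision. As an illustration only, the sketch below shows how such a correlator could score the temporal correspondence between an audio amplitude envelope and a visual signal such as lip movement. The function names, filter time constants, and test signals here are hypothetical stand-ins, not taken from the published model.

```python
import numpy as np

def lowpass(x, tau, dt=0.001):
    """First-order low-pass filter (exponential smoothing) with time constant tau, in seconds."""
    alpha = dt / (tau + dt)
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def correlation_detector(audio_env, visual_signal, tau_fast=0.04, tau_slow=0.10, dt=0.001):
    """
    Toy correlation-detector unit applied to audiovisual signals (illustrative only):
    each modality is low-pass filtered with a different time constant, the two
    branches are multiplied, and two mirror-symmetric sub-units are combined.
    The summed output is large when the signals co-vary in time; the difference
    carries signed evidence about which stream leads.
    """
    sub1 = lowpass(audio_env, tau_slow, dt) * lowpass(visual_signal, tau_fast, dt)
    sub2 = lowpass(audio_env, tau_fast, dt) * lowpass(visual_signal, tau_slow, dt)
    correlation = (sub1 + sub2).mean()  # evidence that the two streams belong together
    lag = (sub1 - sub2).mean()          # which modality appears to lead
    return correlation, lag

# Example: a shared 2 Hz on/off rhythm drives both modalities; delaying the
# audio by a quarter second (anti-phase) weakens the measured correspondence.
t = np.arange(0.0, 4.0, 0.001)
visual = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)   # e.g. lip aperture over time
audio_synced = visual.copy()                              # e.g. speech amplitude envelope
audio_shifted = np.roll(visual, 250)                      # 250 ms temporal offset

print(correlation_detector(audio_synced, visual))   # higher correlation, near-zero lag
print(correlation_detector(audio_shifted, visual))  # lower correlation
```

In this toy version, synchronized streams yield a high summed output and a near-zero lag term, while temporally offset streams yield a weaker output, mirroring the idea that the brain judges whether sights and sounds belong together by how well they co-vary over time.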

Understanding audiovisual illusions

When watching a person speak, the human brain effortlessly fuses visual and auditory cues. This integration is so automatic that mismatched cues can trick it: in the well-known McGurk effect, for example, seeing lip movements for one syllable while hearing another changes the sound listeners report perceiving.

The challenge of multisensory modeling

One of neuroscience’s longstanding difficulties has been to explain how the brain synchronizes signals from sight and hearing. Although existing models are mathematically advanced, they remain far removed from biology and depend mainly on experimenter-defined parameters rather than genuine sensory input.

“How does the brain decide when a voice matches moving lips, or whether footsteps align with the sounds they make?”
Author’s summary

This study presents a biologically grounded model that captures how the brain integrates real-world sights and sounds, bridging the gap between mathematical theory and perceptual reality.

EurekAlert! — 2025-11-05