A new computer model developed at the University of Liverpool integrates sight and sound in a way that closely mimics human perception. Inspired by biological processes, this approach has potential applications in artificial intelligence and machine perception.
The model is based on a neural mechanism originally discovered in insects, where it enables the detection of visual motion. Cesare Parise, a senior lecturer in psychology at the University of Liverpool, adapted this mechanism into a system that processes real-life audiovisual signals, such as videos and their soundtracks, rather than the abstract parameters used by earlier models.
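As a loose illustration of the general idea, the Python sketch below pairs a visual trace (for example, frame-by-frame lip opening or brightness) with an audio amplitude envelope in a Reichardt-style correlation detector: each subunit low-pass filters one stream quickly and the other slowly, then multiplies them. The function names, time constants, and input features are illustrative assumptions, not the published model's actual implementation.

```python
import numpy as np

def lowpass(x, tau, fs):
    """First-order exponential low-pass filter with time constant tau (seconds),
    applied to a signal x sampled at fs Hz."""
    alpha = 1.0 / (1.0 + tau * fs)
    y = np.empty(len(x), dtype=float)
    acc = 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

def correlation_detector(visual, audio, fs, tau_fast=0.05, tau_slow=0.2):
    """Illustrative Reichardt-style audiovisual correlation detector.

    visual, audio : 1-D arrays sampled at fs Hz (e.g., a lip-opening or
                    brightness trace and the audio amplitude envelope).
    Returns per-sample 'correlation' and 'lag' signals.
    """
    # Each subunit filters one modality fast and the other slowly before
    # multiplying, mirroring the two arms of an insect motion detector.
    u1 = lowpass(visual, tau_fast, fs) * lowpass(audio, tau_slow, fs)
    u2 = lowpass(visual, tau_slow, fs) * lowpass(audio, tau_fast, fs)
    correlation = u1 * u2   # large when the two streams co-modulate
    lag = u1 - u2           # sign indicates which stream leads in time
    return correlation, lag
```

In this toy setup, two signals that rise and fall together produce a strong correlation output, while shifting one of them in time changes the sign of the lag output, a rough analogue of judging whether a sound and a sight belong together.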
The research is published in the journal eLife.
When observing someone speak, the brain automatically matches visual and auditory information. This matching can produce perceptual illusions: in the ventriloquist effect, for instance, a voice seems to come from a puppet's moving mouth rather than from the performer.
"This latest work asks how does the brain know when sound and vision match?"
Earlier models attempted to explain audiovisual matching but were limited because they did not work directly with real audiovisual signals.
By processing sound and vision the way the human brain appears to, the model bridges biology and technology and points toward machine perception grounded in how people integrate sensory input.