Expectation key to seeing faces in ‘noisy images’
Whether or not you see a face hidden in the visual equivalent of white noise is highly influenced by whether you are expecting to see one in the first place, research suggests.
A new study into face pareidolia – a phenomenon in which we might see the face of Jesus in a slice of toast, an alien on Mars, or animals in clouds – examines the complex patterns of brainwaves in the ‘prior anticipatory period’, before an image appears, to predict whether people will see a face.
Researchers from Goldsmiths, University of London, the University of Winchester and the Indian Institute of Technology Kharagpur believe that machine learning techniques could be used to predict if someone’s brain makes them more likely to see faces in a noisy image.
In an EEG laboratory at Goldsmiths, participants were shown ‘visual white noise’. None of the images actually contained a face, but participants – who were having their brainwaves recorded – were told that faces might be hidden in some of them. Each participant viewed 402 noise images and, after each one, was asked whether they had seen a face.
Using EEG to capture and analyse what was happening in the brain before each image appeared, Kasturi Barik, Professor Goutam Saha, and colleagues at the Indian Institute of Technology Kharagpur developed mathematical algorithms to identify complex patterns in the brainwaves. These patterns could reliably predict whether or not participants reported seeing a face in an image. The prediction accuracy was around 75%, well above the 50% chance level (i.e. tossing a coin to predict the outcome).
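The article does not describe the study’s actual features or algorithm, so the sketch below is only a rough illustration of the general approach: a classifier is trained on hypothetical pre-stimulus EEG feature vectors (one per trial) to predict ‘face’ / ‘no face’ reports, and its cross-validated accuracy is compared with the 50% chance level. The feature set, classifier choice and cross-validation scheme here are assumptions, not taken from the paper.

```python
# Illustrative sketch only: the published study's features and algorithm are
# not given in this article. Hypothetical pre-stimulus EEG features are used
# to predict whether a participant reported seeing a face, and cross-validated
# accuracy is compared against the 50% chance level.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_trials, n_features = 402, 32                 # 402 noise images per participant
X = rng.normal(size=(n_trials, n_features))    # placeholder pre-stimulus EEG features
y = rng.integers(0, 2, size=n_trials)          # placeholder "face" (1) / "no face" (0) reports

# A simple linear classifier; the actual study may well have used a different model.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance level = 0.50)")
```

With real anticipatory-period EEG features in place of the random placeholders, an accuracy reliably above 0.50 would indicate that the pre-stimulus brain state carries information about the upcoming perceptual report, as the study describes.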
Their study found that the exact nature of the brainwave patterns might differ across individuals, but for all participants, alpha brainwaves (around 10 cycles per second) and differences in how the two hemispheres of the brain are activated during the anticipatory period were crucially involved in the prediction.
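As a hedged illustration of the kind of quantities involved (not the paper’s actual feature set), the snippet below computes alpha-band (8–12 Hz) power for a left- and a right-hemisphere channel in a pre-stimulus EEG segment, plus a simple hemispheric asymmetry index. The channel choice, sampling rate and asymmetry formula are all assumptions made for the example.

```python
# Illustrative only: one simple way to quantify alpha power and hemispheric
# asymmetry in a pre-stimulus EEG segment. Sampling rate, channels and the
# asymmetry formula are assumptions, not taken from the published study.
import numpy as np
from scipy.signal import welch

fs = 256                        # assumed sampling rate (Hz)
n_samples = fs                  # a 1-second anticipatory window
rng = np.random.default_rng(1)

# Placeholder signals for one left-hemisphere and one right-hemisphere electrode.
left = rng.normal(size=n_samples)
right = rng.normal(size=n_samples)

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Mean power spectral density in the alpha band (around 10 Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

alpha_left = alpha_power(left, fs)
alpha_right = alpha_power(right, fs)

# A common asymmetry index: normalised difference between the hemispheres.
asymmetry = (alpha_right - alpha_left) / (alpha_right + alpha_left)
print(f"Alpha power L={alpha_left:.3f}, R={alpha_right:.3f}, asymmetry={asymmetry:.3f}")
```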
Goldsmiths’ Professor Joydeep Bhattacharya, study co-author, said: “Human brains are pattern recognition machines, sometimes finding patterns where there are none. Brains are also predictive, not reactive – we anticipate a stimulus before it is received by our senses. This study shows that it might be possible to detect the expectation hidden in the brain’s responses.
“We can’t yet say that a computer can decode our intentions from our brain activity. This is only a small-scale study and, furthermore, machine learning techniques are data-hungry: they require lots of data to train an algorithm, so mind-reading in a strict sense is still an elusive concept. Nevertheless, it is encouraging to see some promising evidence of predicting future decisions from prior brain responses.”
Being able to predict a perceptual decision from brain activity in anticipation of an event could offer potential advantages, for example in preparing us to avoid dangerous situations. While face pareidolia is often a harmless phenomenon, it is also found in infants and rhesus monkeys, suggesting that it may have developed as a result of natural selection.
A machine learning approach to predict perceptual decisions: An insight into face pareidolia by Kasturi Barik, Syed Naser Daimi, Rhiannon Jones, Joydeep Bhattacharya and Goutam Saha is published in Brain Informatics, vol. 6(2): 1–16.