New research shows that subtle changes to digital images, designed to confuse computer vision systems, can also influence human perception
Computers and humans see the world in different ways. Our biological systems and the artificial ones in machines may not always attend to the same visual signals. Neural networks trained to classify images can be completely misled by subtle perturbations to an image that a human would not even notice.
That AI systems can be tricked by such adversarial images may point to a fundamental difference between human and machine perception, but it prompted us to explore whether humans might also show sensitivity to the same perturbations under controlled testing conditions. In a series of experiments published in Nature Communications, we found evidence that human judgments are indeed systematically influenced by adversarial perturbations.
Our findings highlight a similarity between human and machine vision, but also demonstrate the need for further research to understand the influence adversarial images have on people, as well as on AI systems.
What is an adversarial image?
An adversarial image is one that has been subtly altered by a procedure that causes an AI model to confidently misclassify the image's contents. This intentional deception is known as an adversarial attack. Attacks can be targeted to cause the AI model to classify, say, a vase as a cat, or they may be designed to make the model see anything except a vase.
Left: an artificial neural network (ANN) correctly classifies the image as a vase. When the image is perturbed by a seemingly random pattern across the entire image (middle), with its intensity magnified here for illustrative purposes, the resulting image (right) is incorrectly, and confidently, classified as a cat.
And such attacks can be subtle. In a digital image, each pixel in an RGB image takes a value on a 0-255 scale representing the intensity of that pixel in each channel. An adversarial attack can be effective even if no pixel is shifted by more than 2 levels on that scale.
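To make that budget concrete, the sketch below shows one common way such an attack can be carried out: a single-step, gradient-based (FGSM-style) targeted attack, with the perturbation capped so that no pixel moves by more than 2 intensity levels on the 0-255 scale. The pretrained ResNet-50, the omission of input normalization, and the single-step procedure are simplifying assumptions for illustration only, not the specific model or attack method used in our study.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative only: a pretrained classifier stands in for whatever model is attacked.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def targeted_attack(image, target_class, epsilon_levels=2):
    """Single-step targeted attack (FGSM-style).

    image: float tensor in [0, 1], shape (1, 3, H, W)
    epsilon_levels: maximum change per pixel, in 0-255 intensity levels
    """
    eps = epsilon_levels / 255.0
    x = image.clone().requires_grad_(True)
    # Loss is low when the model assigns high probability to the target class.
    loss = F.cross_entropy(model(x), torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient to push the prediction toward the target,
    # never moving any pixel by more than eps.
    x_adv = (x - eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Stronger attacks typically iterate this step many times, projecting back into the allowed range after each update, but even this one-step version illustrates how small the per-pixel budget can be.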
Adversarial attacks on physical objects in the real world can also succeed, for example causing a stop sign to be misidentified as a speed limit sign. Indeed, security concerns have led researchers to investigate ways to resist adversarial attacks and mitigate their risks.
How do adversarial examples affect human perception?
Previous research has shown that people can be sensitive to large-magnitude image perturbations that provide clear shape cues. However, less is understood about the effect of more nuanced adversarial attacks. Do people dismiss the perturbations as innocuous, random image noise, or can they influence human perception?
To find out, we conducted a series of controlled behavioral experiments. We began by taking a set of original images and carried out two adversarial attacks on each to produce many pairs of perturbed images. In the animated example below, the original image is classified as a “vase” by the model. The two images perturbed through adversarial attacks on the original are then misclassified by the model, with high confidence, as the adversarial targets “cat” and “truck” respectively.
Next, we showed human participants the pair of images and asked a targeted question: “Which image is more cat-like?” While neither image looks anything like a cat, participants were obliged to make a choice and typically reported feeling that they were choosing arbitrarily. If brain activations are insensitive to subtle adversarial attacks, we would expect people to choose each image 50% of the time on average. However, we found that the choice rate, which we refer to as the perceptual bias, was reliably above chance for a wide variety of perturbed image pairs, even when no pixel was adjusted by more than 2 levels on that 0-255 scale.
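As a simple sketch of how such a bias might be quantified, the snippet below tests whether the proportion of trials on which participants picked the image matching the targeted question exceeds the 50% expected by chance, using a one-sided binomial test. The trial counts are made-up numbers purely for illustration, not data from our experiments, and the analyses reported in the paper are more involved.

```python
from scipy.stats import binomtest

# Made-up illustration numbers; not data from the study.
n_trials = 200            # total forced-choice trials for one condition
n_target_choices = 116    # trials where the image matching the question was chosen

# One-sided test: is the choice rate reliably above the 50% chance level?
result = binomtest(n_target_choices, n_trials, p=0.5, alternative="greater")
print(f"perceptual bias = {n_target_choices / n_trials:.2f}, p = {result.pvalue:.4f}")
```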
From the participant's perspective, it feels like they are being asked to distinguish between two virtually identical images. Yet the scientific literature is replete with evidence that people harness weak perceptual signals when making choices, signals that are too weak for them to express confidence or awareness of. In our example, we may see a vase of flowers, but some activity in the brain informs us there is a hint of cat about it.
Left: examples of pairs of adversarial images. The top pair of images is subtly perturbed, by a maximum of 2 pixel levels, causing the neural network to misclassify them as a “truck” and a “cat” respectively. A human volunteer is asked: “Which is more cat-like?” The lower pair of images is more obviously manipulated, by a maximum of 16 pixel levels, to be misclassified as a “chair” and a “sheep”. This time the question is: “Which is more sheep-like?”
We carried out a series of experiments that ruled out potential artifactual explanations of the phenomenon for our Nature Communications paper. In each experiment, participants reliably selected the adversarial image corresponding to the targeted question more than half the time. While human vision is not as susceptible to adversarial perturbations as machine vision is (machines no longer identify the original image class, whereas people still see it clearly), our work shows that these perturbations can nevertheless bias humans toward the decisions made by machines.
The importance of AI safety and security research
Our primary finding, that human perception can be affected, if only subtly, by adversarial images, raises critical questions for AI safety and security research. But by using formal experiments to explore the similarities and differences between the behavior of AI visual systems and human perception, we can leverage these insights to build safer AI systems.
For example, our findings could inform future research seeking to improve the robustness of computer vision models by better aligning them with human visual representations. Measuring human susceptibility to adversarial perturbations could help assess that alignment across a variety of computer vision architectures.
Our work also demonstrates the need for further research into understanding the wider effects of these technologies, not only on machines but also on people. This, in turn, highlights the continuing importance of cognitive science and neuroscience for better understanding AI systems and their potential impacts as we focus on building safer, more secure systems.