A study by American researchers has shown that modern artificial intelligence systems that analyze images are vulnerable to manipulation via the alpha channel in image files – the data that controls an image's transparency.
Researchers at the University of Texas at San Antonio found that the alpha channel, which controls image transparency, is often ignored by developers of AI image-analysis tools, and that it could become a vehicle for cyberattacks on medical and autonomous driving systems. To prove their thesis, a team led by Associate Professor Guinevere Chen developed an attack method called AlphaDog that exploits this oversight: because of the alpha channel, humans and AI models can perceive the same image very differently.
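The article does not include the researchers' code, but the underlying mismatch is easy to illustrate. The sketch below is a minimal, hypothetical example, not the AlphaDog implementation: it assumes an AI pipeline that loads an RGBA image and simply discards the alpha channel before passing pixels to a model, while a human viewer sees the image composited over a white background. All values and the file name are invented for illustration.

```python
# Minimal sketch of the alpha-channel mismatch (illustrative, not the researchers' code).
# Assumption: the AI pipeline drops the alpha channel, as many naive image loaders do,
# while a person views the image alpha-composited over a white background (e.g. in a browser).

import numpy as np
from PIL import Image

def human_view(rgba: np.ndarray, background: int = 255) -> np.ndarray:
    """What a person sees: the image alpha-composited over a solid background."""
    rgb = rgba[..., :3].astype(float)
    alpha = rgba[..., 3:4].astype(float) / 255.0
    return (alpha * rgb + (1.0 - alpha) * background).astype(np.uint8)

def naive_model_view(rgba: np.ndarray) -> np.ndarray:
    """What a careless pipeline feeds the model: raw RGB with alpha ignored."""
    return rgba[..., :3]

# Toy RGBA image: the stored RGB says "dark", but low opacity makes it look
# nearly white once composited over a white page.
rgba = np.zeros((64, 64, 4), dtype=np.uint8)
rgba[..., :3] = 20   # raw pixel value the model would read
rgba[..., 3] = 25    # ~10% opacity, so a human sees almost white

print(human_view(rgba)[0, 0])        # ~[232 232 232]  -> looks almost white
print(naive_model_view(rgba)[0, 0])  # [20 20 20]      -> looks almost black

# The PNG file itself carries both "views" at once.
Image.fromarray(rgba, mode="RGBA").save("alpha_demo.png")
```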
The scientists created 6,500 images and tested them on 100 AI models, including 80 open-source systems and 20 cloud platforms such as ChatGPT. The tests showed that the AlphaDog scheme is particularly effective when attacking grayscale regions of an image. AI models underlying autonomous driving systems are vulnerable: the alpha channel allows manipulation of traffic-sign recognition, since road signs contain grayscale elements. As a result, signs may be recognized incorrectly, with a risk of serious consequences.
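To see how a grayscale region could be manipulated, consider the compositing arithmetic under the same assumptions as above (white viewing background, alpha ignored by the model). This is a hypothetical worked example, not the procedure described in the study.

```python
# Hypothetical grayscale manipulation (illustrative only, not the paper's algorithm).
# Assumption: the viewer composites over a white background (255), while the model
# reads the stored gray value g and ignores alpha entirely.

def stored_gray_for_target(target: float, alpha: float, background: float = 255.0) -> float:
    """Solve alpha*g + (1 - alpha)*background = target for the stored value g."""
    return (target - (1.0 - alpha) * background) / alpha

# The attacker wants the pixel to *look* like a light-gray sign element (target = 200)
# while the model reads something much darker.
target, alpha = 200.0, 0.25
g = stored_gray_for_target(target, alpha)   # -> 35.0
print(f"human sees ~{target:.0f}, model reads ~{g:.0f}")
```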
Another potential target is medical imaging systems in which AI assists with diagnosis. X-ray, MRI and CT results are often rendered in shades of gray and can likewise be manipulated with AlphaDog, threatening deliberately induced misdiagnoses. Finally, the attack scheme is also effective against facial recognition systems, which raises risks for personal privacy and the security of restricted facilities. The study's authors shared their findings with major technology companies, including Google, Amazon and Microsoft, to help address the identified vulnerabilities in AI systems.