LEDs, cap and glasses: how to cheat face recognition systems

September 14, 2020

Modern neural network technologies for face recognition keep improving, but ways of misleading those networks are developing at the same time. As EADaily has already reported, one method of bypassing photo biometric systems with special makeup was devised by a Yandex employee. Sergey Afanasiev, Executive Director and Head of the Statistical Analysis Department at Renaissance Credit Bank, discusses the history and prospects of methods for deceiving face recognition systems.

Ways to deceive facial recognition systems were invented even before Yandex employee Grigory Bakunov proposed his method in 2017. In 2014, for example, a baseball cap fitted with LEDs went on sale; the LEDs flooded the wearer's face with light for photo and video cameras. It proved a very simple and effective way to hide from video surveillance, and the cap cost only $15.
In 2015, the American mathematician Ian Goodfellow developed an algorithm for attacking neural networks that recognize objects in photos. He added specially generated noise to a photo of a panda and obtained an image that the neural network classified as a gibbon, although a human still saw a panda in the processed picture.
This example became a textbook case in scientific research and exposed the "dark side" of generative adversarial networks, which, incidentally, Goodfellow also invented.
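The core of Goodfellow's attack, the fast gradient sign method (FGSM), nudges each input value slightly in the direction that increases the classifier's loss. The toy sketch below applies it to a made-up linear classifier rather than a real image network; the weights, input, and step size are all invented for illustration.

```python
import numpy as np

# Toy "classifier": one linear layer with invented weights, standing in
# for the deep network attacked in the panda/gibbon example.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
x = rng.normal(size=8)        # the "panda" input
true_label = int(np.argmax(W @ x))

def xent(W, x, label):
    """Cross-entropy loss of the linear classifier on input x."""
    logits = W @ x
    m = logits.max()
    return m + np.log(np.exp(logits - m).sum()) - logits[label]

def loss_grad(W, x, label):
    """Gradient of the cross-entropy loss with respect to the input x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[label] -= 1.0            # d(loss)/d(logits)
    return W.T @ p             # chain rule back to the input

# FGSM: step every input value by eps in the sign of the gradient.
eps = 0.5
x_adv = x + eps * np.sign(loss_grad(W, x, true_label))

print(xent(W, x, true_label), xent(W, x_adv, true_label))
```

Because each value moves by at most `eps`, the perturbed input stays close to the original (the "noise" a human barely notices) while the loss on the true label grows, pushing the classifier toward a wrong answer.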

A year later, researchers at Carnegie Mellon University (USA) built on Goodfellow's ideas and proposed using neural networks to generate a special print for spectacle frames. The goal was the same: to bypass facial recognition systems. Their neural network selected prints that "turn" the wearer into any chosen celebrity or other person. For example, one of the researchers, wearing such glasses, became Milla Jovovich to the photo biometric system.
A year after that, at an exhibition, designer Jing-cai Liu proposed a wearable head-mounted projector that casts another person's face onto the wearer's. Although it was an art project with no practical implementation, the idea was later picked up by Chinese researchers.
In 2018, engineers from Fudan University and Alibaba proposed using infrared LEDs to break into facial recognition systems. They attached the LEDs to the underside of the visor of an ordinary cap. The working principle is simple: the infrared rays falling on the face form spots that are invisible to the human eye but clearly picked up by the camera. As a result, the facial recognition system receives a photo with spots that the neural network cannot match against the reference photo.

Beyond simply disrupting recognition, the engineers developed a more sophisticated LED mode. They trained a model that fools FaceNet, Google's open-source face recognition model. Their model optimizes several parameters: beam brightness, beam angle, and spot diameter. As a result, one can upload a victim's photo and the algorithm outputs optimal LED calibration settings so that FaceNet recognizes the attacker as the victim. For people with a similar type of appearance (race, gender, etc.), the attacker passed as the victim about 70% of the time.
Such a cap can be assembled by hand from components bought in an ordinary electronics store. Its one drawback is that the infrared light can burn human skin, so the researchers advised against wearing the cap with the LEDs switched on for more than a minute.
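The parameter search described above can be pictured as an optimization loop: try LED settings, see which embedding the camera would produce, and keep the setting whose embedding lands closest to the victim's. The sketch below is purely illustrative; `embed_with_leds`, the parameter ranges, and the "victim" embedding are all invented stand-ins, and random search replaces whatever optimizer the researchers actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a face-embedding model such as FaceNet:
# maps LED settings (brightness, beam angle, spot diameter) to the
# embedding the camera would produce. Entirely synthetic.
def embed_with_leds(params):
    brightness, angle, diameter = params
    base = np.array([0.2, -0.4, 0.1, 0.7])  # attacker's "face" embedding
    shift = np.array([brightness, np.sin(angle), diameter,
                      brightness * diameter])
    return base + 0.5 * shift

victim_embedding = np.array([0.6, -0.1, 0.4, 1.0])  # made-up target

# Random search over plausible parameter ranges, keeping the setting
# whose embedding is closest to the victim's.
best_params, best_dist = None, np.inf
for _ in range(2000):
    params = rng.uniform([0.0, 0.0, 0.0], [1.0, np.pi / 2, 1.0])
    d = np.linalg.norm(embed_with_leds(params) - victim_embedding)
    if d < best_dist:
        best_params, best_dist = params, d

print(best_params, best_dist)
```

A smaller final distance means the biometric system is more likely to match the lit-up face to the victim rather than to the attacker.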

Although neural network attacks can be used by criminals, scientists are trying to find a good mission for these developments, for example, using them to fight "Big Brother". Recently, engineers from the University of Chicago challenged technology companies that harvest users' photos from the Internet in order to resell the data to police agencies and private companies. The scientists developed a tool called Fawkes, which masks photos to protect them from photo biometric systems. With it, the researchers managed to deceive face recognition systems from Amazon, Microsoft, and the Chinese technology company Megvii. In just one month, Fawkes was downloaded more than 50,000 times from the developer website. The engineers are now working on a free version for the mass user.
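Cloaking tools of this kind work in the opposite direction from the attacks above: they make tiny, bounded changes to your own photo so that its embedding drifts toward a different person, spoiling the reference databases scrapers build. The sketch below is a minimal illustration under invented assumptions: a linear "embedding" matrix stands in for the real recognition model, and plain projected gradient descent stands in for Fawkes's actual optimization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear "embedding" standing in for a real face-recognition model;
# the matrix, images, and step sizes are all invented for this sketch.
E = rng.normal(size=(4, 16))

photo = rng.uniform(size=16)   # the user's photo (flattened pixels)
decoy = rng.uniform(size=16)   # a different person's photo
target = E @ decoy             # embedding to drift toward

eps, step = 0.05, 0.005        # max per-pixel change, gradient step size
cloaked = photo.copy()
for _ in range(200):
    # gradient of ||E x - target||^2 with respect to the pixels x
    grad = 2 * E.T @ (E @ cloaked - target)
    cloaked -= step * grad
    # project back into the small perturbation budget around the original
    cloaked = np.clip(cloaked, photo - eps, photo + eps)
    cloaked = np.clip(cloaked, 0.0, 1.0)

print(np.linalg.norm(E @ photo - target),
      np.linalg.norm(E @ cloaked - target))
```

The clip step keeps every pixel within `eps` of the original, so the cloaked photo still looks like the user to a human while its embedding has moved toward the decoy.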
Over the years, many ways to break into photo biometric systems have been invented: special makeup, scarves and capes printed with other people's faces, refractive glass masks, overlay masks of someone else's face, glasses with LEDs, and even reflective clothing, for example, a suit woven with reflective threads so that only the suit itself is visible in a photo.
