I will never get tired of saying this: Strong passwords are still generally the best way to protect your data. Biometric authentication is getting better, but just as fingerprint readers often fall short, so too does facial-recognition software.
While it's now extremely difficult to fool commercially available facial-recognition software with still photos or videos, a team of researchers discovered a new method that worked almost as well as real faces: virtual-reality reconstructions made from publicly available Facebook photos and displayed on a smartphone.
This information comes from a research paper entitled "Virtual U: Defeating Face Liveness Detection by Building Virtual Models from Your Public Photos" by Yi Xu, True Price, Jan-Michael Frahm and Fabian Monrose at the University of North Carolina at Chapel Hill, presented earlier this month at the Usenix security symposium in Austin, Texas.
"With sufficient effort, a VR system can display an environment that is essentially indistinguishable from real-world input," the paper states. "Unless they incorporate other sources of verifiable data, systems relying on color image data and camera motion are prone to attacks via virtual realism."
The paper goes into a great deal of technical detail, but the general thrust, viewable on the research team's presentation slides, is simple enough: Facial-recognition software is supposed to keep your computer secure, but researchers can bypass it with three-dimensional virtual models built from public photos found on social media networks, then displayed on a smartphone.
In other words, you could pick out a specific person as a target, trawl Facebook for several images of that person (high-resolution professional photos, such as those taken by wedding photographers, work best), feed those images into 3D-modeling software to create a VR face of the target, then play the VR face on a smartphone to log in as that person.
Facial-recognition technology, the researchers admitted, has evolved to a sophisticated degree. Still photos won't work anymore, because cameras now demand motion. Two-dimensional prerecorded videos won't work either, because cameras demand specific actions and motion consistency, such as a blink of an eye or a turn of the head.
These are all things that a 3D virtual model displayed on a smartphone screen can provide, however. (The smartphone screen may be two-dimensional, but so is the image a video camera captures, so the appearance of being 3D matters more than the actuality.) The smartphone's motion sensors can adjust the VR face so that it turns as a real person's head would.
Making a 3D model is not that difficult if you can capture real faces for the project, and this is something most people are happy to provide, free of charge, without being aware of it. Social media outlets such as Facebook and Instagram are plastered with medium- and high-resolution photos of individuals from a variety of angles and with a wide range of facial expressions. This abundance of visual data, the researchers argue, could be ambrosia to malefactors who want to bypass visual security measures.
The researchers created two sets of VR models of 20 volunteers. The first set was made using high-resolution full-face 2D photos in a bright indoor setting. VR models made from the images fooled five commercially available facial-recognition technologies — BioID, KeyLemon, Mobius, 1U and True Key — every single time.
The second set involved social-media photos, especially from Facebook. After collecting photos of specific individuals from social media, meticulously editing them and building them into VR simulations, the researchers found that they could fool even sophisticated cameras and programs much of the time.
Not all programs were equally easy to fool, though. KeyLemon, Mobius and True Key gave up their secrets easily, with spoofing success rates of 85 percent, 80 percent and 70 percent, respectively. BioID fell prey to the scam only 55 percent of the time, and 1U could not be spoofed at all. When the VR spoof did work, however, it took fewer than two tries, on average, to get a result.
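That "fewer than two tries" figure is consistent with the success rates above. As a back-of-the-envelope check (the modeling assumption here is ours, not the researchers'): if each login attempt succeeds independently with probability p, the number of tries until the first success follows a geometric distribution with mean 1/p:

```python
# Back-of-envelope check. Assumption (ours): attempts are independent,
# so tries-until-first-success is geometric with mean 1/p.
# The per-system success rates are the figures reported in the paper.
success_rates = {"KeyLemon": 0.85, "Mobius": 0.80, "True Key": 0.70, "BioID": 0.55}

for system, p in success_rates.items():
    expected_tries = 1 / p  # mean of a geometric distribution
    print(f"{system}: ~{expected_tries:.2f} expected tries")
```

For every system the spoof worked on at all, 1/p stays below two, matching the article's "fewer than two tries, on average" claim; even the hardest target spoofed, BioID, works out to roughly 1.8 expected attempts.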
BioID and 1U were harder to crack not because they were smarter, but because they had a hard time recognizing even real faces, especially in outdoor settings.
"Our failure to spoof the 1U App, as well as our lower performance on BioID, using social media photos was directly related to the poor usability of those systems," the paper said.
Overall, however, when weighed for the accuracy of real facial recognition, the VR faces were about 97.5 percent as "acceptable" as the real faces.
The researchers argued that future facial-recognition systems should incorporate new proofs of life, such as infrared detection (already used by Windows Hello) or a bright light flashed at the subject to see how it illuminates the face.
Given the meticulous work involved in making a VR simulation, VR spoofing of facial-recognition software probably won't be a big threat for the next few years. The research is forward-looking, however, and imparts yet another important lesson about the amount of sensitive data we share online, without even realizing its sensitivity.