Many camera phones now use light-field/AI tricks: they take multiple shots in rapid succession and exploit the fact that a hand-held camera is very rarely stationary between frames. The perspective shifts slightly from shot to shot, and with a bit of nifty graphics co-processing one can derive 2.5D or depth information from that parallax.
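For a rough sense of where the depth comes from, here is a minimal sketch. It assumes the per-pixel shift (disparity) between two of the burst frames has already been estimated elsewhere (e.g. by optical flow, not shown) and that the small camera translation between them is known, say from the IMU; it then just applies the standard stereo relation depth = f·B/d. All names and numbers are illustrative, not a real pipeline.

```python
import numpy as np

def depth_from_parallax(disparity_px: np.ndarray,
                        baseline_mm: float,
                        focal_px: float) -> np.ndarray:
    """Standard stereo relation: depth = focal_length * baseline / disparity.

    Hand shake between two burst frames acts as a tiny stereo baseline.
    disparity_px -- per-pixel shift (in pixels) between the two frames
    baseline_mm  -- how far the camera moved between the frames (e.g. from the IMU)
    focal_px     -- camera focal length expressed in pixels
    """
    eps = 1e-6  # avoid division by zero where nothing appears to move
    return focal_px * baseline_mm / (np.abs(disparity_px) + eps)

# Illustrative call: a 2 mm hand-shake baseline and a 1500 px focal length.
# depth_mm = depth_from_parallax(disparity, baseline_mm=2.0, focal_px=1500.0)
```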
Given that people use face selfies as a biometric, this suggests two improvements to how that works:
1. Use the depth information as part of the biometric. This prevents still-image replay attacks, since a print or a screen held up to the camera won't have any depth in it.
2. Use the actual camera shake as proof of liveness, and go further: use the specifics of how the camera moves as a "signature", which might prove to be relatively distinct for a given user. That would also help against adversaries "updating" someone's enrolled photo, e.g. borrowing their phone and trying to replace their face ID so that later attacks would work; a unique hand movement might be enough to make this hard to do :-) (a rough sketch of both checks follows below).
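Neither check needs anything exotic. Below is a minimal, hypothetical sketch of both ideas: the depth check rejects flat replays by requiring some relief inside the face region, and the shake check compares a fresh accelerometer trace against an enrolled template using a crude normalised correlation (a real system would use something stronger, e.g. DTW or a learned embedding). All function names, thresholds, and input formats here are assumptions for illustration, not how any particular phone does it.

```python
import numpy as np

def depth_is_live(depth_map: np.ndarray, face_mask: np.ndarray,
                  min_relief_mm: float = 15.0) -> bool:
    """Reject flat replays: a real face has depth relief, a print or screen does not.

    depth_map     -- per-pixel depth in millimetres (from the multi-frame pipeline)
    face_mask     -- boolean mask of face pixels (from any face detector)
    min_relief_mm -- illustrative threshold, not a tuned value
    """
    face_depths = depth_map[face_mask]
    relief = np.percentile(face_depths, 95) - np.percentile(face_depths, 5)
    return bool(relief >= min_relief_mm)

def shake_matches(enrolled: np.ndarray, fresh: np.ndarray,
                  min_corr: float = 0.6) -> bool:
    """Crude similarity between two (N, 3) accelerometer traces via normalised correlation."""
    n = min(len(enrolled), len(fresh))
    a = enrolled[:n].astype(float).ravel()
    b = fresh[:n].astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return bool(np.mean(a * b) >= min_corr)

def selfie_is_live(depth_map, face_mask, enrolled_shake, fresh_shake) -> bool:
    # Both checks must pass: the depth says "not a flat replay",
    # the shake says "moved the way the enrolled user usually moves".
    return (depth_is_live(depth_map, face_mask)
            and shake_matches(enrolled_shake, fresh_shake))
```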