Facebook Develops DeepFace, A Face Recognition Technology That Closely Replicates ‘Human-Level Performance’
Facebook (NASDAQ:FB) is trying to close the gap between humans and computers in facial recognition. The company says it has developed a technology that can determine whether two different images show the same face, an ability that comes very close to matching human performance on the task.
The new technology, called DeepFace, is reported to be 97.25 percent accurate, reducing the error of the current state-of-the-art systems by more than 25 percent. According to Facebook, DeepFace closely approaches human-level performance: humans score about 97.5 percent on the same standardized benchmark, the Labeled Faces in the Wild (LFW) dataset.
“In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify,” Facebook said in a research paper, released last week. “We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network.”
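To make that four-stage pipeline concrete, here is a minimal Python sketch of how the stages fit together. The function names, the placeholder bodies and the cosine-similarity threshold are illustrative assumptions, not code from Facebook's DeepFace system.

```python
# Illustrative sketch of the conventional face recognition pipeline
# described in the paper: detect => align => represent => classify.
# All bodies and the threshold below are placeholder assumptions.

import numpy as np


def detect(image: np.ndarray) -> np.ndarray:
    """Locate a face in the image and return the cropped face region."""
    raise NotImplementedError("Plug in any face detector here.")


def align(face: np.ndarray) -> np.ndarray:
    """Warp the face crop to a canonical frontal pose (DeepFace does this
    with an explicit 3D model and a piecewise affine transformation)."""
    raise NotImplementedError


def represent(aligned_face: np.ndarray) -> np.ndarray:
    """Map the aligned face to a fixed-length descriptor vector
    (DeepFace derives this from a nine-layer deep neural network)."""
    raise NotImplementedError


def classify(desc_a: np.ndarray, desc_b: np.ndarray,
             threshold: float = 0.8) -> bool:
    """Decide whether two descriptors belong to the same person,
    here via a simple cosine-similarity threshold (an assumption)."""
    cos = float(desc_a @ desc_b /
                (np.linalg.norm(desc_a) * np.linalg.norm(desc_b)))
    return cos > threshold


def same_person(image_a: np.ndarray, image_b: np.ndarray) -> bool:
    """Run both images through the full pipeline and compare the results."""
    return classify(represent(align(detect(image_a))),
                    represent(align(detect(image_b))))
```

The point of the sketch is the separation of concerns: detection and alignment normalize the input, representation compresses it to a descriptor, and classification only ever compares descriptors.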
According to the paper, DeepFace aligns each photo to a 3D model of an “average” forward-looking face, flattens it into a frontal view, and then passes it through layers of filters that characterize specific facial elements. To train the system, Facebook said it used 4.4 million labeled images of faces from 4,030 different people on its network.
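The alignment step can be pictured as a piecewise affine warp: triangles defined by facial landmarks in the photo are individually mapped onto the matching triangles of a frontal template. The Python sketch below, using OpenCV and SciPy, shows that idea in its simplest 2D form; the landmark arrays it expects are assumptions for illustration, and DeepFace's actual alignment additionally relies on an explicit 3D face model.

```python
# A minimal sketch of piecewise affine alignment ("frontalization"):
# each triangle of detected facial landmarks is warped onto the matching
# triangle of a canonical, forward-looking template. The landmark and
# template point sets are placeholders, not DeepFace's actual 3D model.

import cv2
import numpy as np
from scipy.spatial import Delaunay


def piecewise_affine_warp(image, src_points, dst_points, out_size):
    """Warp a color `image` so that `src_points` land on `dst_points`,
    one affine transform per Delaunay triangle.

    `src_points` and `dst_points` are (N, 2) NumPy arrays of landmark
    coordinates; `out_size` is (width, height) of the output canvas.
    """
    output = np.zeros((out_size[1], out_size[0], 3), dtype=image.dtype)
    triangles = Delaunay(dst_points).simplices  # triangulate the template

    for tri in triangles:
        src_tri = np.float32(src_points[tri])
        dst_tri = np.float32(dst_points[tri])

        # Affine transform mapping this source triangle onto the template triangle.
        warp_mat = cv2.getAffineTransform(src_tri, dst_tri)
        warped = cv2.warpAffine(image, warp_mat, out_size)

        # Keep only the pixels that fall inside the destination triangle.
        mask = np.zeros((out_size[1], out_size[0]), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri), 1)
        output[mask == 1] = warped[mask == 1]

    return output
```

Because each triangle gets its own transform, the warp can straighten out a tilted or rotated face locally, which a single global affine transform cannot do.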
Although DeepFace remains in the research phase, Facebook released the paper to get feedback from the research community ahead of presenting it at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in June, MIT Technology Review reported Monday.
Here is an excerpt from the research paper released by Facebook:
An ideal face classifier would recognize faces in accuracy that is only matched by humans. The underlying face descriptor would need to be invariant to pose, illumination, expression, and image quality. It should also be general, in the sense that it could be applied to various populations with little modifications, if any at all. In addition, short descriptors are preferable, and if possible, sparse features. Certainly, rapid computation time is also a concern. We believe that this work, which departs from the recent trend of using more features and employing a more powerful metric learning technique, has addressed this challenge, closing the vast majority of this performance gap.
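The excerpt's preference for short, sparse descriptors matters mainly at comparison time, when verification reduces to measuring how far apart two descriptor vectors are. The Python snippet below sketches one such measure, a weighted chi-squared similarity over non-negative descriptors; the weights, descriptor length, and decision threshold are illustrative assumptions, not values taken from the paper.

```python
# Illustrative weighted chi-squared similarity between two non-negative,
# fixed-length face descriptors. The weights and decision threshold are
# made-up values for the example, not parameters from the DeepFace paper.

import numpy as np


def chi_squared_similarity(f1: np.ndarray, f2: np.ndarray,
                           weights: np.ndarray) -> float:
    """Sum of w_i * (f1_i - f2_i)^2 / (f1_i + f2_i), skipping empty bins."""
    denom = f1 + f2
    mask = denom > 0                  # avoid division by zero on sparse entries
    diff = (f1 - f2)[mask]
    return float(np.sum(weights[mask] * diff ** 2 / denom[mask]))


# Toy usage with two random sparse descriptors of length 4096.
rng = np.random.default_rng(0)
f1 = np.maximum(rng.normal(size=4096), 0)     # ReLU-style sparsity
f2 = np.maximum(rng.normal(size=4096), 0)
weights = np.ones(4096)

score = chi_squared_similarity(f1, f2, weights)
same_face = score < 500.0                     # hypothetical threshold
print(f"chi-squared score: {score:.1f}, same face: {same_face}")
```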