Verifying an image through facial recognition requires a 1-to-1 correspondence between the image being checked and an image in the database, using a variety of match criteria such as the distance between the eyes, the width of the nose, the shape of the cheekbones, and the depth of features on the image plane. Variations in illumination, including the angle and color of light and the depth of shadows created by lighting, can reduce the effectiveness of face recognition software by preventing a close match with a database photograph taken under different lighting conditions.
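The 1-to-1 check described above can be sketched as a distance comparison between feature vectors. This is a minimal illustration, not any particular product's algorithm: the feature names, values, and threshold are all hypothetical.

```python
import numpy as np

# Hypothetical feature vector: [eye distance, nose width,
# cheekbone width, feature depth], in normalized units.
def verify(probe, enrolled, threshold=0.1):
    """1-to-1 verification: accept only if the Euclidean distance
    between the probe and the enrolled template is small."""
    probe = np.asarray(probe, dtype=float)
    enrolled = np.asarray(enrolled, dtype=float)
    return float(np.linalg.norm(probe - enrolled)) <= threshold

enrolled = [0.42, 0.25, 0.61, 0.18]   # template from the database
same     = [0.43, 0.24, 0.60, 0.19]   # same face, similar lighting
shadowed = [0.43, 0.24, 0.60, 0.35]   # depth estimate skewed by shadows

print(verify(same, enrolled))      # close match -> True
print(verify(shadowed, enrolled))  # lighting distorts depth -> False
```

Note how a lighting-induced error in a single measurement (the depth term) is enough to push the distance past the threshold and break the match.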
Facial recognition software depends on matching measurements of depth and distance, as well as light and dark areas on the facial plane. Database photographs used for matching generally have flat lighting with limited shadows and a full front view of the face. A photograph taken from a different angle, such as a profile or three-quarters view of the face, even under similar lighting conditions, reveals new patterns of light and dark areas that the program does not recognize. Efforts to fine-tune facial recognition software to accommodate lighting problems created by changes in perspective focus on creating photomontages of a face with variations in head position and angle.
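The photomontage approach amounts to storing one template per head angle and keeping the closest match. The sketch below assumes this multi-view gallery structure; the view names and descriptor values are invented for illustration.

```python
import numpy as np

# Hypothetical gallery: one descriptor per stored head angle,
# produced by the photomontage step described above.
gallery = {
    "frontal":        np.array([0.42, 0.25, 0.61, 0.18]),
    "three_quarters": np.array([0.38, 0.22, 0.55, 0.20]),
    "profile":        np.array([0.30, 0.18, 0.47, 0.24]),
}

def best_view_distance(probe, gallery):
    """Compare the probe against every stored view and keep the
    closest one, so a non-frontal photo can still match."""
    probe = np.asarray(probe, dtype=float)
    return min((float(np.linalg.norm(probe - v)), name)
               for name, v in gallery.items())

probe = [0.37, 0.23, 0.56, 0.19]  # roughly a three-quarters shot
dist, view = best_view_distance(probe, gallery)
print(view)  # "three_quarters"
```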
Changing the source and position of illumination can cause facial recognition software to fail. For example, illumination from below the face creates not only areas of harsh light near the chin and cheeks, but also deep shadows, especially around the eyes. Since facial recognition programs match faces by analyzing similarities in the depth of shadows and indentations as well as distances between features, the distortions caused by radical differences in lighting can make it impossible to create a match. Research into ways to eliminate problems caused by variations in illumination suggests that using data on light reflected in the eyes, rather than on ambient light, can increase the odds of an accurate match.
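One common, generic way to soften such lighting differences before matching is histogram equalization, which spreads a dim or harshly lit image's gray levels over the full range. This is a standard image-processing technique offered as an illustration, not necessarily what any given recognition package uses; the sample image is synthetic.

```python
import numpy as np

def equalize(img):
    """Histogram-equalize an 8-bit grayscale image so that harsh
    highlights and deep shadows are spread over the full 0..255
    range, reducing the lighting gap between probe and database photos."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[img]

# A dim, low-contrast "face" patch: values crowded into 40..80.
rng = np.random.default_rng(0)
dim = rng.integers(40, 81, size=(8, 8)).astype(np.uint8)
bright = equalize(dim)
print(dim.min(), dim.max())        # narrow range
print(bright.min(), bright.max())  # stretched to 0 and 255
```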
One solution to many of the problems arising from variations in illumination, such as differences in light and dark areas and changes in the depth and dimension of features, is three-dimensional mapping. Switching from two- to three-dimensional imaging allows recognition software to store information on variables in the depth, apparent size, and dimensions of facial features under different lighting conditions, and to create models of faces under different types of lighting, increasing the chances of a match.
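A stored 3-D model makes relighting possible: the matcher can re-render the face under several light directions and compare each rendering to the probe photo. The toy sketch below assumes simple Lambertian shading (intensity = max(0, n · l)) over a tiny patch of hypothetical surface normals; real systems use far denser scans.

```python
import numpy as np

# Toy 3-D "face map": unit surface normals for a 2x2 patch
# (hypothetical values; a real system derives these from a 3-D scan).
normals = np.array([
    [[0.0, 0.0, 1.0], [0.3, 0.0, 0.954]],
    [[0.0, 0.3, 0.954], [-0.3, 0.0, 0.954]],
])

def render(normals, light):
    """Lambertian shading: intensity = max(0, n . l).
    Re-rendering the stored 3-D model under several light
    directions simulates the lighting of the probe photo."""
    light = np.asarray(light, dtype=float)
    light = light / np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, None)

lights = {
    "frontal": [0.0, 0.0, 1.0],
    "above":   [0.0, 1.0, 1.0],
    "below":   [0.0, -1.0, 1.0],
}

probe = render(normals, lights["below"])  # photo lit from below
best = min(lights,
           key=lambda k: np.linalg.norm(render(normals, lights[k]) - probe))
print(best)  # "below"
```

Picking the rendering closest to the probe identifies the probe's lighting, so the comparison is effectively made under matched illumination rather than across a lighting mismatch.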