Biometrics measures physical traits
By Ben Dawson, Contributing Editor
Fingerprints and faces can be visually inspected for personal identification.
Biometrics are measurements of biological quantities or patterns; the term also refers to measurements of an individual's features, such as fingerprints, that can identify or authenticate a person. Used this way, a biometric is a password that cannot be forgotten, lost, or stolen.
Furrows, crypts, and other structures of the iris are presumed unique and do not change significantly throughout life. The use of these patterns for identification was proposed in the 1960s but effective methods for using the patterns were not developed until the 1990s. (Photo courtesy of Iridian Technologies)
There are many possible biometrics, including DNA, odor, gait, height, handwriting, and speech, but vision-based biometrics use image sensors and algorithms derived from machine vision. Applications for biometrics include controlling access to a building (physical access), authenticating a user to allow access to some resource (for example, accessing a secured Web site), and identifying a person from among others (for example, looking for terrorists at airports).
Biometrics systems

To use this type of system, you first record a biometric as a reference in a process called enrolling, enrollment, or registration (see Fig. 1). Sensor data are processed to find and extract biometric data. These data are put into a data structure called a template and are stored in a database keyed to the user's name or some other identifier. Biometrics systems that use face images can also store these images for error review and nonrepudiation. When biometrics systems are used for authentication, the presented data are converted to a template, and this template is matched with the templates generated by enrollment. This method is similar to template matching in machine vision.
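A minimal sketch of this enrollment flow in Python (illustrative only; the article gives no code, and extract_template below is a crude stand-in for a vendor's proprietary template generator) stores templates in a dictionary keyed by the user's name, as described above.

import numpy as np

def extract_template(sensor_image):
    # Stand-in for a real feature extractor: down-sample the sensor image
    # and normalize it into a fixed-length vector (the "template").
    small = np.asarray(sensor_image, dtype=float)[::8, ::8]
    vec = small.ravel()
    return (vec - vec.mean()) / (vec.std() + 1e-9)

# The database keys templates to the user's name (or another identifier).
database = {}

def enroll(name, sensor_image):
    database[name] = extract_template(sensor_image)

# Example: enroll a user from a simulated 128 x 128 sensor image.
enroll("alice", np.random.rand(128, 128))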
Matching generates a score representing the quality of match between the presented biometrics and the enrolled biometrics. A common convention is to scale these scores from 0 to 1, with higher values representing a better match. The score might be based on the distance between measurement vectors, a normalized correlation coefficient, or a more complex algorithm.
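The article leaves the scoring algorithm open; as one illustration (not necessarily what any vendor does), the score could be a normalized correlation between the presented and enrolled template vectors, which lies between -1 and 1, is larger for better matches, and can be rescaled to a 0-to-1 range if desired.

import numpy as np

def match_score(presented, enrolled):
    # Normalized correlation coefficient between two template vectors.
    p = (presented - presented.mean()) / (presented.std() + 1e-9)
    e = (enrolled - enrolled.mean()) / (enrolled.std() + 1e-9)
    return float(np.dot(p, e) / len(p))

# A distance-based score could instead use, for example, -np.linalg.norm(p - e),
# again with higher values meaning a better match.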
If biometrics systems are used to verify identities, an identifier, such as a name, is presented to extract enrolled biometrics from the database. The presented biometrics can then be compared one-to-one with the enrolled biometrics. This kind of matching is called verification. If no identity claim is made, then the system must match the presented biometrics with a small (one-to-few) or large (one-to-many) number of enrolled biometrics. This kind of matching is called identification. As the number of people that must be matched increases, the matching time and the possibility of errors also increase.
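The difference between verification (one-to-one) and identification (one-to-few or one-to-many) can be sketched as follows, assuming the database and match_score() from the sketches above and a hypothetical threshold of 0.8.

def verify(name, presented_template, threshold=0.8):
    # One-to-one: compare only against the template enrolled under this name.
    enrolled = database.get(name)
    if enrolled is None:
        return False
    return match_score(presented_template, enrolled) >= threshold

def identify(presented_template, threshold=0.8):
    # One-to-many: compare against every enrolled template and return the
    # best-scoring identity, provided its score clears the threshold.
    best_name, best_score = None, -1.0
    for name, enrolled in database.items():
        score = match_score(presented_template, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None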
False acceptance rates

To make the binary decision to accept or reject the match, a threshold is used on the score. As in machine vision, thresholds are free variables that are set from prior knowledge. From a large population of users' measures, the probability of correctly or incorrectly accepting or rejecting a match at a specified threshold can be computed.

FIGURE 1. In biometric systems, a record is captured as a reference. These data are stored as a template in a database keyed to the user's name. When biometric systems are used for authentication, presented data are converted to a template, and this template is matched with the templates generated by registration.
False acceptance rate (FAR) is the probability that the wrong person is accepted (fraud) using a particular threshold. False rejection rate (FRR) is the probability that the right person is rejected using a particular threshold (see Fig. 2).
The error curves in Fig. 2 show that changing the threshold to decrease one type of error rate increases the other type of error. Equal error rate (EER) is the error rate (probability) where these two error curves cross. EER is often used as a figure of merit for a biometrics system. Sometimes it is better to set the threshold higher or lower than the EER. For example, for verifying an automatic-teller-machine transaction, it may be better to favor accepting the wrong person over upsetting a customer by rejecting them, so we reduce the threshold for a lower FRR and have a higher FAR.
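Given genuine (same-person) and impostor (different-person) scores collected from a test population, FAR, FRR, and the equal error rate can be estimated numerically. A sketch, using hypothetical score distributions rather than real data:

import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    # FAR: fraction of impostor attempts accepted (score >= threshold).
    # FRR: fraction of genuine attempts rejected (score < threshold).
    far = np.mean(np.asarray(impostor_scores) >= threshold)
    frr = np.mean(np.asarray(genuine_scores) < threshold)
    return far, frr

def equal_error_rate(genuine_scores, impostor_scores):
    # Sweep thresholds and return the point where FAR and FRR are closest.
    best_t, best_gap, eer = 0.0, np.inf, 1.0
    for t in np.linspace(0.0, 1.0, 1001):
        far, frr = far_frr(genuine_scores, impostor_scores, t)
        if abs(far - frr) < best_gap:
            best_t, best_gap, eer = t, abs(far - frr), (far + frr) / 2
    return best_t, eer

# Hypothetical data: genuine scores cluster high, impostor scores cluster low.
genuine = np.clip(np.random.normal(0.8, 0.1, 5000), 0, 1)
impostor = np.clip(np.random.normal(0.4, 0.1, 5000), 0, 1)
print(equal_error_rate(genuine, impostor))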
The process of matching and decision making is known as authentication. Once this occurs, another process, authorization, allows certain resources to be accessed. For example, certain people may be given authorized access to certain parts of a building based on their level of security clearance.
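As a simple illustration (not from the article), authorization after authentication could be no more than a table mapping clearance levels to the parts of a building each level may enter.

# Hypothetical clearance table: which building areas each level may enter.
AUTHORIZED_AREAS = {
    "visitor":  {"lobby"},
    "employee": {"lobby", "offices"},
    "security": {"lobby", "offices", "server room"},
}

def authorize(clearance_level, area):
    # Authorization: consulted only after authentication has succeeded.
    return area in AUTHORIZED_AREAS.get(clearance_level, set())

print(authorize("employee", "server room"))   # False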
Fingerprint analysis

The fine ridges of skin on your hands, fingers, soles, and toes form unique patterns that can be analyzed for identification. Fingerprints are most commonly used, although a print of a baby's sole is sometimes taken to give a larger area for analysis. These ridge-and-furrow patterns evolved to provide additional friction for gripping, so fingerprint analysis is sometimes known as "friction ridge analysis."

FIGURE 2. False acceptance rate (FAR) is the probability that the wrong person is accepted using a particular threshold. False rejection rate (FRR) is the probability that the right person is rejected using a particular threshold. Hypothetical distributions of FAR and FRR probabilities are shown as a function of threshold, t.
Fingerprints form during fetal development and are essentially unchanged throughout life. Their general pattern has a genetic basis (genotype), but the ridge details are unique to each individual (phenotype). Because of this combination, even identical twins with identical DNA will have slightly different fingerprints. Fingerprints became the standard for forensic identification in the early 1900s.
In the past, fingerprints were registered (enrolled) by inking the fingers and rolling them on a fingerprint card. The fingerprint card was then manually compared to the ink print of a suspect or to a 'latent' print taken from evidence. A latent print often has to be chemically treated to develop the print for comparison. Electronic and optical imaging methods have mostly replaced the ink-and-roll method of getting fingerprints and comparing them with latent prints.
There are about 50 vendors of fingerprint readers for personal identification. A common type of reader has a red LED light source that totally reflects off a glass window or prism and through optics into a CCD or CMOS image sensor. When you place your finger on the window, the ridges touch the glass while the furrows between ridges do not. Where ridges touch the glass, the index of refraction outside the glass changes and frustrates the total internal reflection at that point—the light is absorbed by the ridges. The sensor therefore 'sees' ridges as dark lines, while furrows stay bright.
Another type of fingerprint reader uses a large (perhaps 1.5 x 1.5 cm) integrated-circuit area sensor. The ridges and furrows of a finger placed on this sensor are sensed by capacitance or by radio-frequency coupling to the sensor array. Some readers have a single line of sensors, such as a linescan camera, over which the user draws a finger to form the image. A hybrid sensor passes an ac current through a polymer sheet to cause it to fluoresce, much like a night-light. A fingerprint image is formed from the ridges shorting out the fluorescence. The polymer sheet can be put directly on top of an area image sensor or can use optics similar to LED-based fingerprint readers.
Major fingerprint features, such as whorls, arches, and loops, can classify fingerprints into groups. For detailed classification, the positions and relationships of points where ridges (or furrows) end or branch are measured. These points, called minutiae, are difficult to extract and measure, and they pose challenging imaging problems (see Fig. 3).
FIGURE 3. Major fingerprint types include whorl (left), arch (middle), and loop (right). For detailed classification, positions and relationships of where ridges (or furrows) end or branch are measured.
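One common way to find minutiae (not necessarily the method any commercial system uses) is the crossing-number test on a thinned, one-pixel-wide ridge image: counting transitions around each ridge pixel distinguishes ridge endings from bifurcations. A sketch, assuming skeleton is a binary numpy array with the ridges already thinned:

import numpy as np

def find_minutiae(skeleton):
    # Crossing-number minutiae detection on a thinned binary ridge image
    # (1 = ridge pixel, 0 = background). Returns lists of (row, col) points.
    endings, bifurcations = [], []
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skeleton[r, c]:
                continue
            # Eight neighbors in circular order around (r, c).
            nb = [skeleton[r-1, c-1], skeleton[r-1, c], skeleton[r-1, c+1],
                  skeleton[r,   c+1], skeleton[r+1, c+1], skeleton[r+1, c],
                  skeleton[r+1, c-1], skeleton[r,   c-1]]
            # Crossing number: half the sum of absolute differences around the ring.
            cn = sum(abs(int(nb[i]) - int(nb[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))       # ridge ends here
            elif cn == 3:
                bifurcations.append((r, c))  # ridge splits into two
    return endings, bifurcations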
However, people with small, simple, or missing fingerprints are difficult or impossible to enroll or match. These failure-to-enroll errors occur in about 2% to 4% of the population—more often in the very young and old, manual laborers, and petite women.
In your face

The ability to recognize and identify faces is vital to social interaction, and the human brain is equipped to recognize and learn faces. Even a day-old infant can learn to identify a particular face and within a month knows his/her mother's face.

The difficulty of giving a machine-vision system even some of the brain's ability to recognize faces has not deterred a few companies and many graduate students from trying. Among the problems in automatic face recognition are the variable appearance of the face under different lighting and poses, the difficulty of making accurate measures because the face is flexible, and the apparent similarity between faces. The fact that we recognize faces so easily makes everyone a natural critic of any computer-based face-recognition system.
A typical face biometrics system captures images with a visible-light camera and processes these images using a PC. The camera must have enough resolution and image quality to capture the required details of the face. Inexpensive, consumer-grade cameras can be used in office environments, but specialized cameras are needed in more demanding environments such as airports or outdoor areas.
Before the face can be identified, it must first be found in the image. This is a difficult problem. There can be multiple faces, the face might be partially obscured or blurred from movement, or there can be face-like objects that fool a face-finding algorithm. Skin color, approximate shape, stereo (depth), texture, and/or motion can differentiate possible faces from nonfaces (background). Each of the possible face locations is examined using correlation, neural networks, or some other pattern-matching algorithm. As in machine vision, searching for faces is usually done on coarse-to-fine scales (small to large image size), so approximate face locations can quickly be found in a small image and then refined to get precise face locations.
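A coarse-to-fine search like the one described can be sketched with an image pyramid and normalized correlation; this example uses OpenCV's template matching purely for illustration and assumes a reference face patch is available.

import cv2

def coarse_to_fine_face_search(image, face_patch, levels=3):
    # Search for the best match to face_patch, starting at a coarse scale
    # and refining the location at full resolution. Returns (x, y, score).
    scale = 1.0 / (2 ** (levels - 1))
    small_img = cv2.resize(image, None, fx=scale, fy=scale)
    small_face = cv2.resize(face_patch, None, fx=scale, fy=scale)

    # Coarse pass: find an approximate location quickly in the small image.
    result = cv2.matchTemplate(small_img, small_face, cv2.TM_CCOEFF_NORMED)
    _, _, _, (cx, cy) = cv2.minMaxLoc(result)
    cx, cy = int(cx / scale), int(cy / scale)      # map back to full resolution

    # Fine pass: search only a window around the coarse location.
    h, w = face_patch.shape[:2]
    x0, y0 = max(cx - w, 0), max(cy - h, 0)
    window = image[y0:y0 + 3 * h, x0:x0 + 3 * w]
    result = cv2.matchTemplate(window, face_patch, cv2.TM_CCOEFF_NORMED)
    _, score, _, (bx, by) = cv2.minMaxLoc(result)
    return x0 + bx, y0 + by, score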
Automated face recognition is the most natural and easy-to-use biometric. However, the problems of lighting and pose variation, the plastic nature of the face, and the similarity of faces make this biometric less reliable than fingerprints. Unlike fingerprints, face biometrics rarely fail to enroll.
Class groupings

The algorithms for generating face templates and matching them to the enrolled templates are generally proprietary but can be grouped into three classes: image-based, feature-based, and model-based. Image-based methods use the intensity of the pixels or derived measures as the biometrics. For example, eigenface methods generate a space of face images, with dimensions that account for face variability. A face is characterized by a vector in this space.

Feature-based methods use features derived from the intensity image—such as intensity edges and 'blobs'—for matching or try to identify and match individual features such as the corners of the mouth. To overcome position and size variations, features can be analyzed on a local basis, or neural networks are used to provide tolerant measurements.
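The eigenface idea can be sketched with a principal-component analysis of a set of training face images; the image size and number of components here are arbitrary examples, not values from the article.

import numpy as np

def build_eigenface_space(training_faces, n_components=20):
    # training_faces: array of shape (n_faces, height * width), one
    # flattened face image per row. Returns (mean_face, eigenfaces).
    X = np.asarray(training_faces, dtype=float)
    mean_face = X.mean(axis=0)
    # Principal components of the centered data via SVD; each row of Vt
    # is an "eigenface" (a direction of face-to-face variability).
    _, _, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
    return mean_face, Vt[:n_components]

def project(face, mean_face, eigenfaces):
    # A face is characterized by its coordinate vector in eigenface space.
    return eigenfaces @ (np.asarray(face, dtype=float) - mean_face)

# Matching can then compare projection vectors, for example by Euclidean distance.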
Model-based methods use the known structure of the face to constrain the matching of local features. Constraints can include how much energy it takes to stretch a model feature to a presented feature.
These are not sharp classes; for example, image data become features, and features can be used in model-based methods. As with machine vision, face biometrics can be improved by getting better data, using better algorithms, or adding more constraints.
Iris scans

The iris is the colored part of the eye surrounding the pupil. The furrows, crypts, and other structures of the iris are presumed unique and do not change significantly throughout life (see photo). The use of these patterns for identification was proposed in the 1960s, but effective methods for using the patterns were not developed until the 1990s.

An image of the iris can be unrolled to produce a linear structure that is similar to a barcode. The 1s and 0s of this barcode are defined by the structure of the iris and are the biometric data. As with fingerprints, the code contains a large number of independent bits, so a match can be made with high certainty even when many individual bits are in error.
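A sketch of the unrolling step and a bitwise comparison (a simplified illustration of the published iris-code approach, not Iridian's actual algorithm): the iris annulus is resampled into polar coordinates, thresholded into bits, and two codes are compared by their Hamming distance. The center and radii here are assumed to come from an earlier eye-finding step.

import numpy as np

def unroll_iris(image, center, r_inner, r_outer, n_angles=256, n_radii=16):
    # Resample the iris annulus into a rectangular (radius x angle) strip.
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_inner, r_outer, n_radii)
    strip = np.empty((n_radii, n_angles))
    for i, r in enumerate(radii):
        for j, t in enumerate(thetas):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            strip[i, j] = image[y, x]
    return strip

def iris_code(strip):
    # Crude "barcode": 1 where a pixel is brighter than its row median.
    return (strip > np.median(strip, axis=1, keepdims=True)).astype(np.uint8)

def hamming_distance(code_a, code_b):
    # Fraction of bits that disagree; small values indicate the same iris,
    # so many individual bit errors can be tolerated.
    return float(np.mean(code_a != code_b))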
A major problem in using the iris for identification is getting a good image of the iris, especially for registration. The iris is small, is hard to find in an image, and requires precise focus. The iris can be obscured by drooping eyelids and by pathologies (for example, Bell's Palsy). Once found, enough details of the iris have to be resolved to provide the bits that make up the biometric data.
Iridian Technologies (formerly IriScan; Marlton, NJ) uses a hand-held or adjustable mirror in which the user must align his/her eye and through which an image of the iris is taken. This forces the eye to be in the camera's field of view and at a typical distance from the camera. Iridian also has developed a sophisticated electro-optical and mechanical system for finding the eye and zooming in on the iris. This allows simpler hands-free use of the biometrics system, but at increased cost. There are reports of failure-to-enroll rates as high as 5% of the population using iris biometrics. Iris scanning has the lowest FAR of any biometric, but failures to enroll and difficulty in use have limited its commercial acceptance.
Company Information

Iridian Technologies
Marlton, NJ 08053-3159
Web: www.iriscan.com

National Institute of Standards and Technology (NIST)
Gaithersburg, MD 20899-3460
Web: www.nist.gov