Lately, India and several other countries have been the targets of terrorist attacks – suicide conspiracies that have destroyed innocent lives and public property. People and governments all over the world are gearing up to fight this menace and protect the innocent, and many believe the answer lies in technology – biometrics in particular, widely seen as one of the surest ways to protect and secure vulnerable populations and, someday, perhaps help eliminate terrorism altogether.

From fingerprints to faceprints
Biometrics, the measurement and analysis of biological data, is the arm of technology that could ensure a safer world, if scientists are to be believed. It refers to the technologies of identifying, recording and correlating characteristics of the human body such as fingerprints, retinas and irises, voice patterns, palm geometry and DNA. Fingerprints, one of the oldest forms of biometric analysis, were used in 14th-century China and later by 20th-century police, and remain an unavoidable ingredient of pulp-detective fiction and TV shows.
For more than a decade now, technologists, researchers and startups all over the world have been convinced that the unique features of the face, translated into mathematical descriptions and analysed by computers, could underpin more foolproof security systems and help security agencies face up to the post-9/11 threat of suspected criminals infiltrating public spaces. Face detection and face-tagging are already familiar to us as the wow-features of our digital cameras, of photo-sharing services such as iPhoto, Flickr and Picasa, and of the not-so-foolproof VeriFace facial recognition login on Lenovo's notebooks.
The first well-known, large-scale application of facial recognition software was at Super Bowl XXXV in Tampa, Florida, in January 2001, where video security cameras recorded the facial features of thousands of fans entering the stadium and compared them with mug shots in the Tampa Police database. The cameras ran a face-recognition application called FaceIt, created by Visionics Corporation of New Jersey. Famously, the system produced several 'false positives' (matches with people who were not in fact the same) and led to the arrest of not a single wanted criminal. Civil liberty activists raised the alarm about Big Brother tactics and infringements of privacy. Since then, there have been several advances in facial recognition technologies, and in October 2008 Interpol proposed an automated face-recognition system for international borders. Again, voices were raised against the proposal: it could infringe privacy, invite abuse by officials and, like most facial recognition systems before it, fall prey to inaccuracies. Yet these systems remain highly attractive to the powers-that-be, since they can be used to prevent voter fraud, thwart the misuse of ATMs and, basically, provide an easy means to control large groups of people – the masses. But this is only possible if the error margins decrease. Perhaps the reason for the high error frequency lies in how the systems work.
How facial recognition systems work
Some of us never forget a face. Humans have an intrinsic ability to remember hundreds of faces and, more often than not, connect each one of these faces with a name. The challenge for a face-recognition system is to mimic this ability with at least an equivalent measure of accuracy, if not better, and with minimal human intervention.
Facial recognition technologies are usually used for verification (confirming whether a person is who they claim to be) and identification (matching unknown faces taken from surveillance footage with images in a database, e.g. criminal records). The techniques used for facial recognition can be geometrical (feature-based) or photometric (template-based). Traditionally, there have been four basic methods employed by facial recognition systems:
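In software terms, the two tasks can share one matcher and differ only in the size of the comparison set. Here is a minimal sketch, assuming faces have already been reduced to numeric feature vectors; the vectors, names and threshold below are invented purely for illustration:

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two face-feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live, enrolled, threshold=0.9):
    """Verification: a one-to-one check against a single enrolled template."""
    return similarity(live, enrolled) >= threshold

def identify(live, database, threshold=0.9):
    """Identification: a one-to-many search; returns the best-matching
    enrolled name above the threshold, or None if nothing qualifies."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = similarity(live, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The same `similarity` function drives both operations; only the number of templates consulted changes, which is why identification errors multiply as databases grow.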
Eigenfaces
The famous mathematician David Hilbert was among the first to use the term eigen (meaning 'own' or 'peculiar to') for a non-zero vector which, when a particular linear transformation is applied to it, may change in length but not in direction. The Eigenfaces technology, patented at MIT, makes use of 2D greyscale images that represent the distinguishing features of a facial image. Values are assigned to these features and an average set is prepared; using statistical calculations, a covariance matrix is computed and its eigenvectors extracted, each eigenvector representing an 'eigenface'. Each new face the system encounters is evaluated on the basis of how it differs from the 'mean face'. When a face is 'enrolled', the subject's eigenface representation is mapped to a series of numbers – understandably, it is all numbers in the end, not really visuals – and these are compared to a 'template' in the database. For verification, a subject's live template is compared against their enrolled template; for identification, the comparison set grows to the whole database, but the process stays the same. The most significant drawbacks of eigenfaces lie in the preconditions: for optimal results the images must be frontal and full-face, and the surroundings well lit.
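The eigenface pipeline can be sketched in a few lines of linear algebra. This is a toy reconstruction of the idea, not MIT's patented implementation: the 'images' are four-pixel vectors and the names are invented, but the steps – subtract the mean face, extract principal components, enroll each face as a small vector of projection weights, match by distance in that space – are the ones described above.

```python
import numpy as np

def train_eigenfaces(images, k=2):
    """images: an (n_samples, n_pixels) matrix, one flattened greyscale
    face per row. Returns the mean face and the top-k eigenfaces."""
    X = np.asarray(images, float)
    mean_face = X.mean(axis=0)
    centred = X - mean_face
    # SVD of the centred data gives the principal components ("eigenfaces")
    # without forming the covariance matrix explicitly.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean_face, vt[:k]

def enroll(face, mean_face, eigenfaces):
    """Project a face onto the eigenfaces; the weights are its template."""
    return eigenfaces @ (np.asarray(face, float) - mean_face)

def match(live, templates, mean_face, eigenfaces):
    """Return the enrolled identity whose template is nearest in eigenspace."""
    w = enroll(live, mean_face, eigenfaces)
    return min(templates, key=lambda name: np.linalg.norm(templates[name] - w))
```

Note that the enrolled template really is just 'a series of numbers' – here, k projection weights per face – which is what makes storage and comparison cheap.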
Feature analysis
Local feature analysis, used by Visionics in Tampa, is based on dividing the features of the faces into building blocks while simultaneously incorporating the relative position of each feature.
The interesting aspect of this software is that it even expects small movements of a feature and the resultant, simultaneous shifting of adjacent features that inevitably occurs. Unlike Eigenfaces, it can tolerate head rotation of up to about 25 degrees horizontally and 15 degrees vertically. Each human face has about 80 'nodal points' – distinguishing peaks and valleys – and the software measures aspects such as the distance between the eyes, the width of the nose, the shape of the cheekbones, the depth of the eye sockets and the length of the jaw line to create a 'faceprint'. However, poor lighting and the angle at which the face is tilted towards the camera can still adversely affect the results.
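A minimal sketch of turning nodal-point measurements into a faceprint might look as follows. The four landmarks are a hypothetical stand-in for the ~80 nodal points a real system uses, and normalising every distance by the inter-eye span is one simple way to make the print independent of image scale:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmark coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def faceprint(landmarks):
    """Build a faceprint from inter-feature distances, each normalised by
    the inter-eye distance so the print survives changes in image size."""
    eye_span = dist(landmarks["left_eye"], landmarks["right_eye"])
    pairs = [("left_eye", "nose_tip"), ("right_eye", "nose_tip"),
             ("nose_tip", "mouth"), ("left_eye", "mouth")]
    return [dist(landmarks[a], landmarks[b]) / eye_span for a, b in pairs]
```

Scale invariance is exactly why ratios of distances, rather than raw pixel distances, are stored: the same face photographed closer to the camera should yield the same print.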
Neural network mapping
Artificial neural networks are programmes that imitate the interconnected, unified responsive behavior of biological neurons, and they are often combined with Eigenface systems to produce better facial recognition results. An algorithm determines the similarity of a live person's face to an enrolled or reference face, and the system automatically re-adjusts the weights it assigns to individual features in the event of a false match.
Automatic face processing
A simplistic but sometimes quicker technology, automatic face processing (AFP) uses the distances between prominent facial features – the eyes, the end of the nose, the corners of the mouth – to create its template. It is, however, not as robust as the systems above.
3D Facial Recognition
In the quest for greater accuracy, the trend of the last decade has been towards facial recognition software that uses a 3D model. The image of a person's face is captured in 3D, allowing the system to note the curves of the eye sockets, for example, or the contours of the chin or forehead. Even a face in profile can suffice, because the system uses depth along an extra axis of measurement, which gives it enough information to construct a full face. A 3D system usually proceeds thus:
Detection
Capturing a face either by scanning a photograph or photographing a person’s face in real time.
Position
Determining the location, size and angle of the head.
Measurement
Assigning measurements to each curve of the face to make a template with specific focus on the outside of the eye, the inside of the eye and the tip of the nose.
Representation
Converting the template into a code – a numerical representation of the face.
Matching
Comparing the received data with faces in the existing database. If the 3D image is to be compared with another 3D image, it needs no alteration. Typically, however, stored photos are in 2D, in which case the 3D image must first be converted – a tricky step, and one of the biggest challenges in the field today.
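The five stages above can be strung together in a sketch. Everything here is invented for illustration – the dictionary 'frames', the landmark names and the matching tolerance all stand in for real 3D capture and geometry – but the flow from detection through matching is the one just described:

```python
import math

def detect(frame):
    """Detection: a stand-in that extracts the face data from a frame."""
    return frame["face"]

def position(face):
    """Position: read off the head angle (stored directly in this toy)."""
    return face.get("angle", 0.0)

def measure(face):
    """Measurement: depth values at the rigid landmarks named above."""
    return [face["eye_outer"], face["eye_inner"], face["nose_tip"]]

def represent(measurements):
    """Representation: the template's numeric code (here, a raw vector)."""
    return [float(v) for v in measurements]

def match(code, database, tolerance=1.0):
    """Matching: nearest enrolled 3D code within the tolerance, else None."""
    best, best_d = None, tolerance
    for name, enrolled in database.items():
        d = math.dist(code, enrolled)
        if d < best_d:
            best, best_d = name, d
    return best

def recognise(frame, database):
    face = detect(frame)
    position(face)  # in a real system, pose would steer the measurement step
    return match(represent(measure(face)), database)
```

Because both sides of `match` are 3D codes, no conversion is needed; the hard 3D-to-2D case mentioned above would require an extra projection step before comparison.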
In the Face Recognition Grand Challenge held in 2006 at Arlington, Virginia, the new algorithms proved to be ten times more precise than those of 2002. The advances even permitted the system to accurately identify and differentiate between identical twins.
Skin texture
Skin biometrics is another supporting technology, developed by companies such as Identix – which merged with identity-solutions provider Viisage in 2006 – and it uses the uniqueness of skin texture for more precise results. The software takes a picture of a patch of skin, a 'skinprint', which is broken into smaller portions and converted by an algorithm into a mathematical space capturing the lines, pores and patterns that constitute the skin. Systems such as FaceIt have been developed to combine eigenvectors, local feature analysis and surface texture analysis to optimize results. However, long hair, dim lighting or sunglasses can still hinder the system's performance.
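A crude sketch of the break-into-portions step – not the actual surface texture analysis algorithm, whose details are proprietary – might split a greyscale skin patch into small blocks and summarise each block numerically:

```python
import numpy as np

def skinprint(patch, block=2):
    """Toy skinprint: split a greyscale patch into block x block portions
    and summarise each by its mean and variance, a crude stand-in for the
    line-and-pore analysis a real system performs."""
    patch = np.asarray(patch, float)
    h, w = patch.shape
    codes = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            cell = patch[i:i + block, j:j + block]
            codes.extend([cell.mean(), cell.var()])
    return np.array(codes)

def skin_distance(p, q):
    """Compare two patches by the distance between their skinprints."""
    return float(np.linalg.norm(skinprint(p) - skinprint(q)))
```

Working block by block is what lets the real technology tolerate partial occlusion: a few bad blocks degrade the distance gracefully rather than invalidating the whole print.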
The commitment of resources by governments and venture capitalists, and the labor of scientists and research students, indicate that face recognition is developing in unanticipated directions – and face biometrics has already become an integral part of our daily lives.