The face reconstruction part was done as an afterthought. It tries to reconstruct the faces using a selected number of eigenfaces. The reconstructed images are created in a directory named reconfaces. The images named reconphiX.png etc. are adjusted by adding the average image data to create the final images, named reconX.png etc. It can be seen that the more eigenfaces are used in the reconstruction, the closer the final images are to the originals.

For an initial set of 16 images in the original face image collection, when 6 eigenfaces were used for the reconstruction, one of the reconstructed images is given below

Then 15 eigenfaces were used for the reconstruction, which produced the following image

This reconstruction is very close to the original face image.
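The reconstruction described above amounts to a weighted sum of eigenfaces plus the average face. A minimal sketch with numpy, assuming `facespace` holds the eigenfaces as rows and `weights` holds one image's projection coefficients (the names are illustrative, not the actual PyFaces API):

```python
import numpy as np

def reconstruct(weights, facespace, avg, k, shape):
    """Rebuild a face image from its first k eigenface weights.

    weights   : (M,) projection coefficients of one image
    facespace : (M, N*N) eigenfaces as rows
    avg       : (N*N,) average face vector
    """
    # Weighted sum of the first k eigenfaces gives the zero-mean
    # reconstruction (the "reconphi" image) ...
    phi = weights[:k] @ facespace[:k]
    # ... and adding back the average face yields the final image.
    return (phi + avg).reshape(shape)
```

With k equal to the full number of eigenfaces of an orthonormal facespace, the reconstruction recovers the training image exactly; with fewer, it degrades gracefully, which is what the reconX.png series shows.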

## Monday, June 30, 2008

## Sunday, March 23, 2008

### PyFaces

PyFaces is a face recognition system implemented in Python, using PIL for image processing and numpy for mathematical operations. The GUI version uses some Tix widgets and Tkinter code. A command-line version is also provided.

Face recognition is a pattern recognition task in which relevant features of the face are used to identify a face. These features may be related to our notion of facial parts like the eyes, nose, and ears. PyFaces uses the eigenface approach, which belongs to the template matching family of face recognition techniques. It is described by Matthew A. Turk and Alex P. Pentland in their paper titled "Face Recognition Using Eigenfaces." Eigenface recognition derives its name from the German prefix "eigen", meaning "own" or "individual". The relevant features are called principal components and can be extracted by PCA methods. The algorithm captures the variation in a collection of face images, independent of any judgment of features. In mathematical terms, the algorithm finds the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of a set of face images, treating each image as a point (or vector) in a very high-dimensional space. These eigenvectors can be thought of as a set of features that together characterize the variation between face images.

### Theory

If we consider a face image as a two-dimensional N×N array of 8-bit intensity values, we can represent the image as a vector of dimension N^2. A 256×256 image can thus be thought of as a vector of dimension 65,536, or as a point in 65,536-dimensional space. A collection of images then maps to a collection of points in this huge space. Images of faces, being similar in overall configuration, will not be randomly distributed in this huge image space and can therefore be described by a relatively low-dimensional subspace. This subspace is termed the facespace and is characterised by the vectors that best account for the distribution of face images within the entire image space. Each such vector is of length N^2 and describes an N×N image. These vectors capture the most significant variations between faces and are also called principal components. They are called eigenfaces because of their ghostly, face-like appearance.
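The image-as-vector view is easy to demonstrate with numpy (the 8-bit grayscale array here is synthetic, standing in for a real face image):

```python
import numpy as np

# A synthetic 256x256 8-bit grayscale "image"
img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

# Flatten it into a single point in 65,536-dimensional space
vec = img.astype(np.float64).ravel()
print(vec.shape)  # (65536,)
```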

Each eigenface represents only some features of the face, which may or may not be present in the original image. If a feature is present in the original image, the contribution of that eigenface to the sum of eigenfaces will be greater, so each eigenface has an associated weight. Any face in the image set can be described as a linear combination of the best eigenfaces (more precisely, a weighted sum of eigenfaces). If we select the best M eigenfaces, they span an M-dimensional facespace. When M << N^2, dimensionality is reduced from N^2 to M. For a set of 16 images, each of size 256×256, the covariance matrix to be handled shrinks from 65,536×65,536 to 16×16. Since we need only take the most significant eigenfaces (say 6), the calculations are reduced further. This dimensionality reduction makes the eigenface technique an ideal choice for face recognition.

The various steps in face identification using eigenfaces are:

1. Initialization with training images

- Construct a faces matrix from the training images. Each image is transformed into a vector of size width*height and packed as a row of the matrix.
- Calculate the average face.
- Normalize the training images by finding zero-mean vectors (the average face is subtracted from each row of the faces matrix).
- The covariance matrix is found from A*A_transpose, where A is the normalized faces matrix.
- Compute eigenvectors and eigenvalues, and sort the eigenvectors to get the most significant ones (those with the largest associated eigenvalues).
- Project the eigenvectors onto the faces matrix, creating the facespace. Each row of the facespace corresponds to an eigenface.
- Calculate the set of weights associated with each training image. Once the initialization is done, the facespace and weights can be cached to avoid recomputation.
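The initialization steps above can be sketched as follows. This is a minimal version, not the actual PyFaces code: it assumes the images are already loaded as equal-sized grayscale arrays, and it eigendecomposes the small M×M matrix A·Aᵀ rather than the full N²×N² covariance, which is the standard trick behind the 16×16 reduction mentioned earlier.

```python
import numpy as np

def build_facespace(images, num_eigenfaces):
    """images: list of M equal-sized 2-D grayscale arrays."""
    # 1. Pack each image as a row of the faces matrix.
    faces = np.array([im.ravel() for im in images], dtype=np.float64)
    # 2. Average face.
    avg = faces.mean(axis=0)
    # 3. Zero-mean (normalized) faces matrix A.
    A = faces - avg
    # 4. Small M x M surrogate for the covariance matrix.
    cov = A @ A.T
    # 5. Eigendecomposition, sorted by descending eigenvalue.
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:num_eigenfaces]
    evecs = evecs[:, order]
    # 6. Project back onto A: each row of facespace is an eigenface.
    facespace = evecs.T @ A
    facespace /= np.linalg.norm(facespace, axis=1, keepdims=True)
    # 7. Weights of every training image in the facespace.
    weights = A @ facespace.T
    return avg, facespace, weights
```

Because the eigenvectors of the symmetric M×M matrix are orthogonal, the resulting eigenface rows come out orthonormal after the normalization step, which is what makes the later projection a simple matrix product.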

2. Recognising an image

- Normalise the input image vector by subtracting the average face image.
- Transform the input image into its eigenface components by projecting it onto the facespace. This creates the weight vector of the input image, [w1, w2, ..., wM], which describes the contribution of each eigenvector in representing the input face image. A weight vector can be thought of as a point in space, so the distances between the input weight vector and the weight vectors of the training images can be calculated.
- Find the minimum distance between the input weights and the training weights; this indicates a matching face if the distance is within a threshold (to be determined empirically).
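The recognition steps above reduce to a projection plus a nearest-neighbour search. Again a sketch rather than the actual PyFaces code: `avg`, `facespace`, and `weights` are assumed to come from the initialization step, and the threshold is the empirically chosen value mentioned above.

```python
import numpy as np

def match(input_img, avg, facespace, weights, threshold):
    """Return index of the best-matching training image, or None."""
    # Normalise the probe by subtracting the average face,
    # then project it onto the facespace.
    phi = input_img.ravel().astype(np.float64) - avg
    w = facespace @ phi          # weight vector [w1, w2, ..., wM]
    # Euclidean distances to every training image's weight vector.
    dists = np.linalg.norm(weights - w, axis=1)
    best = int(np.argmin(dists))
    # Accept only if the minimum distance is within the threshold.
    return best if dists[best] <= threshold else None
```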

The primary advantage of the eigenface method is the system's speed and efficiency. The eigenface approach reduces the amount of data needed to identify an individual to 1/1000th of a full-sized image (Lau Technologies, 1999). The presence of features like beards and glasses does not decrease performance.

Disadvantages

The eigenface method has problems identifying faces under different lighting levels and in different poses. Better results are obtained when the input faces are preprocessed to remove background details and all faces are equally oriented geometrically. The face must be presented to the system as a frontal view for the system to work.

PyFaces screenshot:

The threshold value for the distance and the number of significant eigenvectors to be used in attempting to recognise a face are selected in the text boxes 'ThresholdVal' and 'Eigenvectors'.

The face image to be matched against a collection is selected using the FileSelectBox at the top left of the GUI, and the folder containing the collection is selected using the DirSelectBox at the bottom left. On clicking the Match button, the selected image is displayed on the first canvas, and the matching image, if any, is displayed on the second canvas.

The 'probes' directory contains images to be checked, and 'gallery' contains the collection of face images.

The average face image will be produced in the current directory. The eigenfaces that correspond to the rows of the facespace will be created in the 'eigenface' directory. These are created just to illustrate the idea of eigenfaces as containing the significant features of the input images.
