Graduation Year

2009

Document Type

Dissertation

Degree

Ph.D.

Degree Granting Department

Computer Science and Engineering

Major Professor

Dmitry B. Goldgof, Ph.D.

Co-Major Professor

Sudeep Sarkar, Ph.D.

Keywords

Face, Deformable modeling, Strain pattern, Finite element method, Person identification

Abstract

Deformable modeling of facial soft tissues has found use in application domains such as human-machine interaction for facial expression recognition. More recently, such modeling techniques have been applied to tasks like age estimation and person identification. This dissertation focuses on the development of novel image analysis algorithms to follow facial strain patterns observed through video recordings of faces undergoing expressions. Specifically, we use the strain pattern extracted from non-rigid facial motion as a simplified yet adequate way to characterize the underlying material properties of facial soft tissues. This approach has several unique features. The strain pattern, rather than image intensity, is used as the classification feature, and strain is related to the biomechanical properties of facial tissues, which are distinct for each individual.
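To illustrate the core idea, the following sketch computes a scalar strain map from a dense 2-D displacement field using infinitesimal strain theory (strain as the symmetric gradient of displacement). This is a minimal illustration of the general technique, not the dissertation's exact finite-element pipeline; the function name and the Frobenius-norm magnitude are assumptions made for this example.

```python
import numpy as np

def strain_map(u, v):
    """Per-pixel strain magnitude from a dense 2-D displacement field.

    u, v -- arrays of horizontal and vertical displacement at each pixel
    (e.g. obtained from optical flow between two frames of an expression).
    """
    # Spatial derivatives of each displacement component;
    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)

    # Infinitesimal strain tensor: epsilon = 0.5 * (grad u + grad u^T)
    exx = du_dx
    eyy = dv_dy
    exy = 0.5 * (du_dy + dv_dx)

    # Collapse the tensor to a scalar map via its Frobenius norm
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)
```

A rigid translation (constant u, v) yields zero strain everywhere, while a uniform horizontal stretch u = 0.1·x yields a constant strain magnitude of 0.1, matching the intuition that strain captures deformation rather than motion.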

The strain pattern is less sensitive to illumination differences (between enrolled and query sequences) and to face camouflage, because the strain pattern of a face remains stable as long as reliable facial deformations are captured. A finite element modeling based method enforces regularization, which mitigates issues related to automatic motion estimation, such as temporal matching and noise sensitivity; the computational strategy is therefore accurate and robust. Images or videos of facial deformations are acquired with a standard video camera, without special imaging equipment. Experiments using range images on a dataset of 50 subjects provide the necessary proof of concept that strain maps indeed have discriminative value.

On a video dataset containing 60 subjects undergoing a particular facial expression, experimental results using the computational strategy presented in this work demonstrate the discriminatory power and stability of strain maps across adverse data conditions (shadow lighting and face camouflage). These properties make strain maps a promising feature for image analysis tasks that can benefit from such auxiliary information about the human face. Strain maps add a new dimension to our ability to characterize a human face, and they foster new ways to capture facial dynamics from video which, if exploited efficiently, can improve performance in tasks involving the human face. In a subsequent effort, we model the material constants (Young's modulus) of the skin in sub-regions of the face from the motion observed in multiple facial expressions.

On a public database of 40 subjects undergoing a set of facial motions, we present an expression-invariant strategy for matching faces using the Young's modulus of the skin. Such an efficient way of describing underlying material properties from the displacements observed in video has an important application in the deformable modeling of physical objects, where models are usually gauged by their simplicity and adequacy. The contributions of this work will have an impact on the broader vision community because of its novel approaches to the long-standing problem of motion analysis of elastic objects. Additional value lies in its cross-disciplinary nature and its focus on applying image analysis algorithms to the difficult and important problem of characterizing the material properties of facial soft tissues and their applications.

We believe this research provides a special opportunity for the utilization of video processing to enhance our abilities to make unique discoveries through the facial dynamics inherent in video.
