Proc. SMC'97, October 1997
Abstract: Earlier work suggests that mixture-distance can improve the performance of feature-based face recognition systems in which only a single training example is available for each individual. In this work we investigate the non-feature-based Eigenfaces technique of Turk and Pentland, replacing Euclidean distance with mixture-distance. In mixture-distance, a novel distance function is constructed from local second-order statistics, estimated by modeling the training data with a mixture of normal densities. The approach is described and experimental results on a database of 600 people are presented, showing that mixture-distance can reduce the error rate by up to 73.9%. In the experimental setting considered, the results indicate that the simplest form of mixture-distance already yields considerable improvement, with additional but less dramatic gains from more complex forms. The results show that even in the absence of multiple training examples per class, it is sometimes possible to infer an improved distance function from a statistical model of the training data. Researchers using Eigenfaces or similar pattern recognition techniques may therefore benefit from considering alternative distance functions such as mixture-distance.
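To make the idea concrete, the sketch below illustrates one simple form of the approach under stated assumptions: images are projected onto an eigenface (PCA) subspace, a mixture of Gaussians is fit to the projected training coefficients, and a probe is compared to each gallery point with a Mahalanobis-style distance using the covariance of the mixture component most responsible for the probe. This is only an illustrative reading of "local second-order statistics", not the paper's exact formulation; the function names (`fit_models`, `mixture_distance`, `recognize`) and parameter values are hypothetical.

```python
# Illustrative sketch only: eigenface projection plus a simple
# "mixture-distance" built from a Gaussian mixture over the projected
# training data. Not the authors' exact formulation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture


def fit_models(train_images, n_components=20, n_mixtures=4):
    """Fit the eigenface projection (PCA) and a Gaussian mixture
    over the projected training coefficients."""
    pca = PCA(n_components=n_components).fit(train_images)
    coeffs = pca.transform(train_images)
    gmm = GaussianMixture(n_components=n_mixtures,
                          covariance_type='full').fit(coeffs)
    return pca, gmm


def mixture_distance(x, y, gmm):
    """Distance between coefficient vectors x and y, measured with the
    inverse covariance of the component that takes the highest posterior
    responsibility for x -- one simple way to use local second-order
    statistics (assumed variant, not necessarily the paper's)."""
    k = int(np.argmax(gmm.predict_proba(x.reshape(1, -1))))
    prec = gmm.precisions_[k]            # inverse covariance of component k
    d = x - y
    return float(np.sqrt(d @ prec @ d))  # Mahalanobis form


def recognize(probe_image, gallery_coeffs, gallery_ids, pca, gmm):
    """Nearest-neighbour identification of a probe image under
    mixture-distance, with a single gallery example per person."""
    x = pca.transform(probe_image.reshape(1, -1))[0]
    dists = [mixture_distance(x, g, gmm) for g in gallery_coeffs]
    return gallery_ids[int(np.argmin(dists))]
```

In this sketch, replacing `mixture_distance` with plain Euclidean distance in `recognize` recovers the standard Eigenfaces matcher, which is the baseline the abstract compares against.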