Summary

Automatically classifying and localizing actions in video sequences is useful for a variety of tasks, such as video surveillance, object-level video summarization, video indexing, and digital library organization. However, robust action recognition remains a challenging task for computers due to cluttered backgrounds, camera motion, occlusion, and geometric and photometric variations of objects.

We present a novel unsupervised learning method for human action categories. A video sequence is represented as a collection of spatial-temporal words by extracting space-time interest points. The algorithm automatically learns the probability distributions of the spatial-temporal words and of the intermediate topics corresponding to human action categories, using a probabilistic Latent Semantic Analysis (pLSA) model. The learned model is then used to categorize and localize human actions in a novel video by maximizing the posterior of the action category (topic) distributions. The contributions of this work are as follows:

  • Unsupervised learning of actions using a 'video words' representation: we deploy a pLSA model with a 'bag of video words' representation for video analysis (a sketch of this representation follows the list);
  • Multiple action localization and categorization: our approach can not only classify different actions, but also localize multiple actions simultaneously in a novel, complex video sequence.
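Both contributions rest on the 'bag of video words' front end: detect space-time interest points, describe the cuboid around each one, and quantize the descriptors into a codebook. The snippet below is a minimal sketch of that pipeline, assuming a separable linear filter detector in the spirit of Dollár et al.; the function names, parameter values (sigma, tau, codebook size), and the use of raw cuboid descriptors are illustrative assumptions, not the authors' exact settings.

```python
# Hedged sketch of a 'bag of video words' front end (assumed details).
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, convolve1d
from scipy.cluster.vq import kmeans2, vq

def interest_points(video, sigma=2.0, tau=1.5, n_points=200):
    """video: (T, H, W) grayscale array. Returns (t, y, x) of top responses."""
    # Spatial Gaussian smoothing, then a temporal quadrature pair of
    # 1-D Gabor filters; the response is the sum of squared outputs.
    smoothed = gaussian_filter(video.astype(float), sigma=(0, sigma, sigma))
    t = np.arange(-int(3 * tau), int(3 * tau) + 1)
    omega = 4.0 / tau
    h_ev = -np.cos(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    h_od = -np.sin(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    r = (convolve1d(smoothed, h_ev, axis=0) ** 2 +
         convolve1d(smoothed, h_od, axis=0) ** 2)
    # Keep local maxima of the response, strongest first.
    peaks = (r == maximum_filter(r, size=5))
    coords = np.argwhere(peaks)
    order = np.argsort(r[peaks])[::-1][:n_points]
    return coords[order]

def bag_of_words(descriptors_per_video, codebook_size=500):
    """Quantize cuboid descriptors (one array per video) into a codebook,
    then histogram each video's descriptors over the codebook entries."""
    all_desc = np.vstack(descriptors_per_video)
    codebook, _ = kmeans2(all_desc, codebook_size, minit='++')
    hists = [np.bincount(vq(d, codebook)[0], minlength=codebook_size)
             for d in descriptors_per_video]
    return np.array(hists), codebook
```

Each video then becomes a single count vector over the codebook, which is exactly the word-document matrix that pLSA consumes.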

Our Algorithm
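The learning step is standard pLSA fitted with EM: each video (document) is modeled as a mixture of latent topics, P(w|d) = Σ_z P(z|d) P(w|z), with topics intended to correspond to action categories. The NumPy sketch below follows Hofmann's EM updates; variable names, iteration counts, and the flat topic initialization are our own illustrative choices, not taken from the paper.

```python
# Minimal pLSA via EM (Hofmann-style updates); a sketch, not the
# authors' implementation.
import numpy as np

def plsa(counts, n_topics, n_iters=100, rng=None):
    """Fit pLSA to a document-word count matrix.

    counts: (n_docs, n_words) array of video-word counts n(d, w).
    Returns P(z|d) of shape (n_docs, n_topics) and
            P(w|z) of shape (n_topics, n_words).
    """
    rng = np.random.default_rng(rng)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # E-step: P(z|d,w) ∝ P(z|d) P(w|z); shape (n_docs, n_words, n_topics)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12
        # Expected counts n(d,w) P(z|d,w)
        weighted = counts[:, :, None] * joint
        # M-step: renormalize the expected counts.
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

def classify(new_counts, p_w_z, n_iters=50):
    """Fold in a novel video: hold P(w|z) fixed, estimate its P(z|d) by EM,
    and label it with the maximum-posterior topic."""
    n_topics = p_w_z.shape[0]
    p_z = np.full(n_topics, 1.0 / n_topics)
    for _ in range(n_iters):
        joint = p_z[None, :] * p_w_z.T               # (n_words, n_topics)
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        p_z = (new_counts[:, None] * joint).sum(axis=0)
        p_z /= p_z.sum() + 1e-12
    return int(np.argmax(p_z)), p_z
```

At test time the same fold-in posterior P(z|d) can be evaluated over the words inside a local region rather than the whole video, which is what allows multiple actions to be localized in a single sequence.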

Resources

  • Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, accepted for oral presentation at the British Machine Vision Conference (BMVC), Edinburgh, 2006.
    Full Text: PDF
  • Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories, in Video Proceedings, International Conference on Computer Vision and Pattern Recognition (CVPR), New York, 2006.
    Full Text: PDF (One Page)
    Video Demo: AVI
  • There is also a poster about this work, presented at the IMA Workshop on Visual Learning and Recognition, Minneapolis, 2006.
