
Enhanced skeleton visualization for view invariant human action recognition

Liu, Mengyuan ; Liu, Hong ; Chen, Chen

Pattern Recognition, 2017-08, Vol.68, p.346-362 [Peer-reviewed journal]

Elsevier Ltd

Full text available

  • Title:
    Enhanced skeleton visualization for view invariant human action recognition
  • Author: Liu, Mengyuan ; Liu, Hong ; Chen, Chen
  • Subjects: Human action recognition ; Skeleton sequence ; View invariant
  • Is part of: Pattern Recognition, 2017-08, Vol.68, p.346-362
  • Description:
    Highlights:
    • A sequence-based view-invariant transform can effectively cope with view variations.
    • The enhanced skeleton visualization method encodes spatio-temporal skeletons as visual- and motion-enhanced color images in a compact yet distinctive manner.
    • A multi-stream convolutional neural network fusion model explores complementary properties among the different types of enhanced color images.
    • The method consistently achieves the highest accuracies on four datasets, including NTU RGB+D, the largest and most challenging dataset for skeleton-based action recognition.
    Abstract: Human action recognition based on skeletons has wide applications in human–computer interaction and intelligent surveillance. However, view variations and noisy data make this task challenging, and effectively representing spatio-temporal skeleton sequences remains an open problem. To address these problems jointly, this work presents an enhanced skeleton visualization method for view-invariant human action recognition. The method consists of three stages. First, a sequence-based view-invariant transform is developed to eliminate the effect of view variations on the spatio-temporal locations of skeleton joints. Second, the transformed skeletons are visualized as a series of color images that implicitly encode the spatio-temporal information of the skeleton joints; visual and motion enhancement methods are then applied to these color images to strengthen their local patterns. Third, a convolutional neural network-based model extracts robust and discriminative features from the color images, and the final action class scores are generated by decision-level fusion of the deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of the method.
  • Publisher: Elsevier Ltd
  • Language: English
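The core visualization idea described in the abstract — mapping a skeleton sequence's joint coordinates onto the pixels of a color image — can be illustrated with a minimal sketch. This is not the authors' exact pipeline (their paper adds the view-invariant transform and visual/motion enhancement); it only shows the basic encoding assumption: each (x, y, z) joint coordinate, normalized per axis over the sequence, becomes an (R, G, B) pixel indexed by joint and frame.

```python
import numpy as np

def skeleton_to_color_image(joints):
    """Encode a skeleton sequence as a color image (simplified sketch).

    joints: array of shape (T, J, 3) -- T frames, J joints, (x, y, z)
    coordinates. Each coordinate axis is normalized to [0, 255] over the
    whole sequence, so the pixel at (row=j, col=t) stores joint j at
    frame t as an RGB triple.
    """
    joints = np.asarray(joints, dtype=np.float64)
    lo = joints.min(axis=(0, 1), keepdims=True)        # per-axis minimum, shape (1, 1, 3)
    hi = joints.max(axis=(0, 1), keepdims=True)        # per-axis maximum, shape (1, 1, 3)
    norm = (joints - lo) / np.maximum(hi - lo, 1e-12)  # scale each axis to [0, 1]
    img = np.floor(255 * norm).astype(np.uint8)        # quantize to 8-bit color, (T, J, 3)
    return img.transpose(1, 0, 2)                      # (J, T, 3): rows=joints, cols=frames

# toy example: 4 frames, 2 joints
seq = np.random.rand(4, 2, 3)
img = skeleton_to_color_image(seq)
print(img.shape)  # (2, 4, 3)
```

Because rows and columns index joints and frames, spatial structure (which joint) and temporal structure (which frame) are both preserved in the image layout, which is what lets a 2D CNN pick up spatio-temporal patterns.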

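The abstract's final stage, decision-level fusion of the multi-stream CNN outputs, amounts to combining per-stream class scores and taking the argmax. The sketch below is a hedged illustration under a simple assumption (weighted averaging of softmax-style scores); the paper's fusion scheme may differ in detail.

```python
import numpy as np

def fuse_scores(stream_scores, weights=None):
    """Decision-level fusion sketch: combine per-stream class scores
    (e.g., softmax outputs of several CNNs, one per enhanced image type)
    by a weighted average; the argmax of the fused vector is the label."""
    scores = np.asarray(stream_scores, dtype=np.float64)  # shape (S, C): S streams, C classes
    if weights is None:
        weights = np.ones(len(scores)) / len(scores)      # equal weights by default
    fused = np.average(scores, axis=0, weights=weights)   # per-class fused score
    return fused, int(np.argmax(fused))

# three hypothetical streams, three action classes
streams = [[0.7, 0.2, 0.1],
           [0.3, 0.5, 0.2],
           [0.6, 0.3, 0.1]]
fused, label = fuse_scores(streams)
print(label)  # -> 0 (class 0 has the highest average score)
```

Averaging complementary streams tends to suppress errors that only one stream makes, which is the rationale the highlights give for the multi-stream fusion model.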