
Spatio-temporal categorization for first-person-view videos using a convolutional variational autoencoder and Gaussian processes

Nagano, Masatoshi ; Nakamura, Tomoaki ; Nagai, Takayuki ; Mochihashi, Daichi ; Kobayashi, Ichiro

Frontiers in robotics and AI, 2022-09, Vol.9, p.903450 [Peer-reviewed journal]

Frontiers Media S.A

Full text available

  • Title:
    Spatio-temporal categorization for first-person-view videos using a convolutional variational autoencoder and Gaussian processes
  • Author: Nagano, Masatoshi ; Nakamura, Tomoaki ; Nagai, Takayuki ; Mochihashi, Daichi ; Kobayashi, Ichiro
  • Subjects: convolutional variational autoencoder ; Gaussian process ; hidden semi-Markov model ; Robotics and AI ; segmentation ; spatio-temporal categorization ; unsupervised learning
  • Is part of: Frontiers in robotics and AI, 2022-09, Vol.9, p.903450
  • Notes: Edited by: Michael Spranger, Sony Computer Science Laboratories, Japan
    Reviewed by: Wataru Noguchi, Hokkaido University, Japan; Sina Ardabili, University of Mohaghegh Ardabili, Iran
    This article was submitted to Computational Intelligence in Robotics, a section of the journal Frontiers in Robotics and AI
  • Description: In this study, HcVGH, a method that learns spatio-temporal categories by segmenting first-person-view (FPV) videos captured by mobile robots, is proposed. Humans perceive continuous high-dimensional information by dividing and categorizing it into meaningful segments, and this unsupervised segmentation capability is considered important for mobile robots to learn spatial knowledge. The proposed HcVGH combines a convolutional variational autoencoder (cVAE) with HVGH, an earlier method; the resulting model follows a hierarchical Dirichlet process-variational autoencoder-Gaussian process-hidden semi-Markov model architecture that integrates deep generative and statistical models. In the experiments, FPV videos captured by an agent in a simulated maze environment were used. FPV videos contain spatial information, and spatial knowledge can be learned by segmenting them. On the FPV-video dataset, the segmentation performance of the proposed model was compared with that of previous models: HVGH and the hierarchical recurrent state space model. HcVGH achieved an average segmentation F-measure of 0.77, outperforming the baseline methods. Furthermore, the experimental results showed that parameters representing the movability of the maze environment can be learned.
  • Publisher: Frontiers Media S.A
  • Language: English
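The record does not include the paper's implementation. Purely as an illustration of the pipeline the description outlines (encode FPV frames into a latent sequence, then segment that sequence into categories), here is a toy NumPy sketch. It is not the paper's cVAE or GP-HSMM: the latent sequence is synthetic, and the segmenter is a simple sliding-window mean-shift heuristic standing in for the statistical model; all names and thresholds are invented for this example.

```python
import numpy as np

# Toy stand-in for the HcVGH idea: a 1-D "latent" sequence (in the paper,
# produced by a cVAE from FPV frames) with piecewise-constant behavior,
# segmented by a naive mean-shift heuristic (in the paper, a GP-HSMM).

rng = np.random.default_rng(0)

# Synthetic latent sequence: three segments with different means.
true_bounds = [0, 40, 90, 150]
means = [0.0, 3.0, -2.0]
z = np.concatenate([
    m + 0.3 * rng.standard_normal(b2 - b1)
    for (b1, b2), m in zip(zip(true_bounds, true_bounds[1:]), means)
])

def segment(seq, win=10, thresh=1.0):
    """Return boundary indices where the windowed mean shifts by > thresh."""
    bounds = [0]
    for t in range(win, len(seq) - win):
        left = seq[t - win:t].mean()
        right = seq[t:t + win].mean()
        # Suppress re-triggering immediately after a detected boundary.
        if abs(right - left) > thresh and t - bounds[-1] >= 2 * win:
            bounds.append(t)
    bounds.append(len(seq))
    return bounds

print(segment(z))  # boundaries near the true changepoints at 40 and 90
```

The windowed-mean heuristic only detects level shifts; the GP-HSMM used in the paper additionally models segment durations and nonlinear within-segment dynamics, which is what makes unsupervised spatio-temporal categorization (not just changepoint detection) possible.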
