A Review on Deep Learning Techniques for Video Prediction

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/121956
Item information
Title: A Review on Deep Learning Techniques for Video Prediction
Author(s): Oprea, Sergiu | Martínez González, Pablo | Garcia-Garcia, Alberto | Castro-Vargas, John Alejandro | Orts-Escolano, Sergio | Garcia-Rodriguez, Jose | Argyros, Antonis
Research group(s) or GITE: Robótica y Visión Tridimensional (RoViT) | Arquitecturas Inteligentes Aplicadas (AIA)
Center, Department or Service: Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial | Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Video prediction | Future frame prediction | Deep learning | Representation learning | Self-supervised learning
Knowledge area(s): Computer Science and Artificial Intelligence | Computer Architecture and Technology
Publication date: 15-Dec-2020
Publisher: IEEE
Bibliographic citation: IEEE Transactions on Pattern Analysis and Machine Intelligence. 2022, 44(6): 2806-2826. https://doi.org/10.1109/TPAMI.2020.3045007
Abstract: The ability to predict, anticipate and reason about future outcomes is a key component of intelligent decision-making systems. In light of the success of deep learning in computer vision, deep-learning-based video prediction has emerged as a promising research direction. Defined as a self-supervised learning task, video prediction is a suitable framework for representation learning, as it has demonstrated potential for extracting meaningful representations of the underlying patterns in natural videos. Motivated by the increasing interest in this task, we provide a review of deep learning methods for prediction in video sequences. We first define the video prediction fundamentals, the necessary background concepts, and the most commonly used datasets. Next, we carefully analyze existing video prediction models organized according to a proposed taxonomy, highlighting their contributions and their significance in the field. The summary of datasets and methods is accompanied by experimental results that facilitate the assessment of the state of the art on a quantitative basis. The paper concludes by drawing general conclusions, identifying open research challenges, and pointing out future research directions.
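As the abstract notes, video prediction is framed as a self-supervised task: the supervision signal is simply a later frame of the same video, so no manual labels are needed. The sketch below is not taken from the paper; it is a minimal illustration of that formulation in PyTorch, where the NextFramePredictor model, its layer sizes, and the random tensors standing in for real clips are assumptions chosen for brevity rather than any method reviewed in the article.

```python
# Minimal, illustrative sketch of video prediction as self-supervised learning:
# given k context frames, predict the next frame and train with a pixel-wise loss.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Toy convolutional encoder-decoder mapping k stacked context frames to one frame."""
    def __init__(self, context_frames: int = 4, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(context_frames * channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, k, channels, H, W) -> stack frames along the channel axis
        b, k, c, h, w = context.shape
        return self.net(context.reshape(b, k * c, h, w))

if __name__ == "__main__":
    model = NextFramePredictor(context_frames=4, channels=1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Self-supervision: the target is the next frame of the same clip.
    # Random tensors stand in for a real video dataset here.
    video = torch.rand(8, 5, 1, 64, 64)          # (batch, time, channels, H, W)
    context, target = video[:, :4], video[:, 4]  # first 4 frames -> 5th frame

    prediction = model(context)
    loss = nn.functional.mse_loss(prediction, target)
    loss.backward()
    optimizer.step()
```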
Sponsor(s): This work has been funded by the Spanish Government grant PID2019-104818RB-I00 for the MoDeaAS project, supported with FEDER funds. This work has also been supported by two Spanish national grants for PhD studies, FPU17/00166 and ACIF/2018/197.
URI: http://hdl.handle.net/10045/121956
ISSN: 0162-8828 (Print) | 1939-3539 (Online)
DOI: 10.1109/TPAMI.2020.3045007
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
Peer reviewed: yes
Publisher version: https://doi.org/10.1109/TPAMI.2020.3045007
Appears in collections: INV - RoViT - Artículos de Revistas | INV - AIA - Artículos de Revistas

Files in this item:
File | Description | Size | Format
Oprea_etal_2020_IEEE-TPAMI_accepted.pdf | Accepted Manuscript (open access) | 2.28 MB | Adobe PDF
Oprea_etal_2020_IEEE-TPAMI_final.pdf | Final version (restricted access) | 2.21 MB | Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.