Shot Boundary Detection with 3D Depthwise Convolutions and Visual Attention

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/136718
Item information
Title: Shot Boundary Detection with 3D Depthwise Convolutions and Visual Attention
Author(s): Esteve Brotons, Miguel José | Lucendo, Francisco Javier | Rodríguez Juan, Javier | Garcia-Rodriguez, Jose
Research group(s): Arquitecturas Inteligentes Aplicadas (AIA)
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Shot boundary detection | 3D convolution | Depthwise convolution | Visual attention
Publication date: 8-Aug-2023
Publisher: MDPI
Bibliographic citation: Esteve Brotons MJ, Lucendo FJ, Rodriguez-Juan J, Garcia-Rodriguez J. Shot Boundary Detection with 3D Depthwise Convolutions and Visual Attention. Sensors. 2023; 23(16):7022. https://doi.org/10.3390/s23167022
Abstract: Shot boundary detection is the process of identifying and locating the boundaries between individual shots in a video sequence. A shot is a continuous sequence of frames captured by a single camera, without any cuts or edits. Recent investigations have shown the effectiveness of 3D convolutional networks for this task due to their high capacity to extract spatiotemporal features from the video and determine in which frame a transition or shot change occurs. When this task is used as part of a scene segmentation use case, with the aim of improving the experience of viewing content on streaming platforms, the speed of segmentation is very important for live and near-live use cases such as start-over. The problem with models based on 3D convolutions is the large number of parameters they entail. Standard 3D convolutions impose much higher CPU and memory requirements than the equivalent 2D operations. In this paper, we address this problem with depthwise separable convolutions, using a scheme that significantly reduces the number of parameters. To compensate for the slight loss of performance, we analyze and propose the use of visual self-attention as a mechanism of improvement.
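The parameter savings the abstract refers to can be illustrated with a quick back-of-the-envelope count. A standard 3D convolution mixes all input channels in every output filter, whereas a depthwise separable 3D convolution splits this into a per-channel spatiotemporal filter followed by a 1×1×1 pointwise mix. The layer sizes below (64→128 channels, 3×3×3 kernel) are illustrative assumptions, not values taken from the paper:

```python
def conv3d_params(c_in, c_out, k):
    # Standard 3D convolution: each of the c_out filters spans all
    # c_in input channels over a k x k x k spatiotemporal window.
    return c_in * c_out * k ** 3

def depthwise_separable3d_params(c_in, c_out, k):
    # Depthwise stage: one k x k x k filter per input channel,
    # followed by a 1x1x1 pointwise convolution to mix channels.
    return c_in * k ** 3 + c_in * c_out

std = conv3d_params(64, 128, 3)                  # 64 * 128 * 27 = 221184
sep = depthwise_separable3d_params(64, 128, 3)   # 64 * 27 + 64 * 128 = 9920
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

For this hypothetical layer the separable form uses roughly 22× fewer parameters, which is the kind of reduction that motivates the paper's design (the exact architecture and savings are described in the full text).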
Funding: We would like to thank the "A way of making Europe" European Regional Development Fund (ERDF) and MCIN/AEI/10.13039/501100011033 for supporting this work under the TED2021-130890B (CHAN-TWIN) research project, funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR. Additionally, the HORIZON-MSCA-2021-SE-0 action number 101086387, REMARKABLE, Rural Environmental Monitoring via ultra wide-ARea networKs And distriButed federated Learning.
URI: http://hdl.handle.net/10045/136718
ISSN: 1424-8220
DOI: 10.3390/s23167022
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Peer reviewed: yes
Publisher version: https://doi.org/10.3390/s23167022
Appears in collections: INV - AIA - Journal Articles
Research funded by the EU

Files in this item:
Esteve-Brotons_etal_2023_Sensors.pdf (2 MB, Adobe PDF)

