A distributed bug analyzer based on user-interaction features for mobile apps

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/102440
Item information
Title: A distributed bug analyzer based on user-interaction features for mobile apps
Author(s): Méndez-Porras, Abel | Méndez-Marín, Giovanni | Tablada-Rojas, Alberto | Nieto-Hidalgo, Mario | García-Chamizo, Juan Manuel | Jenkins, Marcelo | Martínez, Alexandra
Research group(s): Informática Industrial y Redes de Computadores
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Distributed bug analyzer | User-interaction features | Digital image processing | Interest points | Automated testing
Knowledge area(s): Computer Architecture and Technology
Publication date: 2 Feb 2017
Publisher: Springer Berlin Heidelberg
Citation: Journal of Ambient Intelligence and Humanized Computing. 2017, 8: 579-591. doi:10.1007/s12652-016-0435-7
Abstract: Developers must devote greater effort and attention to the software development process to deliver quality applications to users. Software testing and automation play a strategic role in ensuring the quality of mobile applications. This paper proposes and evaluates a Distributed Bug Analyzer based on user-interaction features that uses digital image processing to find bugs. Our Distributed Bug Analyzer detects bugs by comparing the similarity between images taken before and after a user-interaction feature occurs. An interest point detector and descriptor is used for image comparison. To evaluate the Distributed Bug Analyzer, we conducted a case study with 38 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed to identify the presence of bugs. We used a Master Computer, a Storage Test Database, and four Slave Computers to evaluate the Distributed Bug Analyzer. We performed 360 tests of user-interaction features in total. We found 79 bugs when manually testing user-interaction features, and 69 bugs when using digital image processing to detect bugs with the similarity threshold fixed at 92.5%. The Distributed Bug Analyzer evenly distributed the pending tests in the Storage Test Database among the Slave Computers. Slave Computers 1, 2, 3, and 4 processed 21, 20, 23, and 36% of the image pairs, respectively.
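As an illustration of the bug-detection step summarized in the abstract, the following Python sketch compares a before/after screenshot pair using an interest point detector and flags a potential bug when the similarity falls below the 92.5% threshold. This is a minimal sketch, not the paper's implementation: it assumes OpenCV's contrib build for SURF (cv2.xfeatures2d), the similarity measure (fraction of interest points with a good match) and the helper names similarity and detect_bug are illustrative assumptions, and flagging low similarity as a bug is one plausible reading of the abstract's threshold.

# Illustrative, assumption-based sketch of the image-comparison step.
# Requires opencv-contrib-python built with the non-free SURF module (cv2.xfeatures2d).
import cv2

SIMILARITY_THRESHOLD = 0.925  # 92.5% similarity threshold reported in the abstract

def similarity(before_path, after_path):
    """Return a similarity score in [0, 1] for a before/after screenshot pair."""
    before = cv2.imread(before_path, cv2.IMREAD_GRAYSCALE)
    after = cv2.imread(after_path, cv2.IMREAD_GRAYSCALE)

    # Detect SURF interest points and descriptors in both screenshots.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_before, des_before = surf.detectAndCompute(before, None)
    kp_after, des_after = surf.detectAndCompute(after, None)
    if not kp_before or not kp_after:
        return 0.0

    # Match descriptors and keep matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_before, des_after, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

    # Assumed metric: fraction of interest points with a good match.
    return min(1.0, len(good) / min(len(kp_before), len(kp_after)))

def detect_bug(before_path, after_path):
    """Flag a potential user-interaction bug when similarity drops below the threshold."""
    return similarity(before_path, after_path) < SIMILARITY_THRESHOLD

In the paper's architecture, a step of this kind would run on each Slave Computer for the image pairs it pulls from the Storage Test Database, with the Master Computer coordinating the distribution of pending tests.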
Sponsor(s): This research was supported by the Costa Rican Ministry of Science, Technology and Telecommunications (MICITT).
URI: http://hdl.handle.net/10045/102440
ISSN: 1868-5137 (Print) | 1868-5145 (Online)
DOI: 10.1007/s12652-016-0435-7
Language: eng
Type: info:eu-repo/semantics/article
Rights: © Springer-Verlag Berlin Heidelberg 2017
Peer reviewed: yes
Publisher version: https://doi.org/10.1007/s12652-016-0435-7
Appears in collections: INV - I2RC - Artículos de Revistas

Files in this item:
File: 2017_Mendez-Porras_etal_JAmbientIntellHumanComput_final.pdf
Description: Final version (restricted access)
Size: 2.94 MB
Format: Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.