Application of the Variable Precision Rough Sets Model to Estimate the Outlier Probability of Each Element

Title: Application of the Variable Precision Rough Sets Model to Estimate the Outlier Probability of Each Element
Author(s): Maciá Pérez, Francisco | Berna-Martinez, Jose Vicente | Fernández Oliva, Alberto | Abreu Ortega, Miguel
Research group(s) or GITE: GrupoM. Redes y Middleware
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Rough sets theory | Variable precision rough sets | Outlier probability
Knowledge area(s): Computer Architecture and Technology
Publication date: 8-Oct-2018
Publisher: Hindawi Publishing Corporation
Bibliographic citation: Complexity, vol. 2018, Article ID 4867607, 14 pages, 2018. doi:10.1155/2018/4867607
Abstract: In a data mining process, outlier detection aims to identify highly marginal elements by measuring their degree of deviation from representative patterns, thereby yielding relevant knowledge. Rough sets (RS) theory has been applied to the field of knowledge discovery in databases (KDD) since its formulation in the 1980s; in recent years, outlier detection has increasingly been regarded as a KDD process in its own right. The application of RS theory as a basis for characterising and detecting outliers is a novel approach with great theoretical relevance and practical applicability. However, algorithms whose spatial and temporal complexity allows their application to realistic scenarios, involving vast amounts of data and requiring very fast responses, are difficult to develop. This study presents a theoretical framework based on a generalisation of RS theory, the variable precision rough sets (VPRS) model, which supports a stochastic approach to assessing whether a given element is an outlier within a specific universe of data. A quasi-linear algorithm is derived from this theoretical framework, enabling its application to large volumes of data. The experiments conducted demonstrate the feasibility of the proposed algorithm, whose usefulness is contextualised by comparison with different algorithms analysed in the literature.
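The rough-sets ideas the abstract relies on can be illustrated with a toy sketch. This is not the authors' algorithm: the functions, the per-element score (rarity of an element's indiscernibility block), and the threshold `beta` are all illustrative assumptions; only the underlying notions (indiscernibility partition, a precision parameter in the spirit of VPRS, and quasi-linear grouping via hashing) come from the abstract.

```python
# Illustrative sketch only: hypothetical names and scoring rule,
# not the paper's actual VPRS-based algorithm.
from collections import defaultdict

def equivalence_classes(universe, attrs):
    """Partition the universe by indiscernibility on `attrs`:
    elements with identical attribute values share one block.
    Hash-based grouping keeps this step quasi-linear in |U|."""
    blocks = defaultdict(list)
    for i, x in enumerate(universe):
        blocks[tuple(x[a] for a in attrs)].append(i)
    return blocks

def outlier_scores(universe, attrs):
    """Assign each element a simple score: the rarer its
    indiscernibility block, the higher the score (1 - |block|/|U|)."""
    blocks = equivalence_classes(universe, attrs)
    n = len(universe)
    scores = [0.0] * n
    for block in blocks.values():
        s = 1.0 - len(block) / n
        for i in block:
            scores[i] = s
    return scores

def is_outlier(score, beta=0.75):
    """VPRS-style decision: flag the element when its score exceeds
    the precision threshold `beta` (an assumed majority parameter)."""
    return score > beta

# Hypothetical toy data: four similar records and one deviant one.
records = [{"shape": "round", "color": "red"}] * 4 + [
    {"shape": "square", "color": "blue"}]
scores = outlier_scores(records, ["shape", "color"])
print(scores)  # the lone square/blue record gets the highest score
```

The point of the sketch is the complexity claim: because the partition is built with a single hash-grouping pass, scoring every element costs roughly O(|U|), which is the kind of quasi-linear behaviour the abstract says makes the approach viable for large data volumes.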
Sponsor(s): This work has been supported by University of Alicante projects GRE14-02 and Smart University.
ISSN: 1076-2787 (Print) | 1099-0526 (Online)
DOI: 10.1155/2018/4867607
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2018 Francisco Maciá Pérez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Peer reviewed: yes
Publisher's version:
Appears in collections: INV - GrupoM - Artículos de Revistas

Files in this item:
File: 2018_Macia_etal_Complexity.pdf (1.93 MB, Adobe PDF)

This item is licensed under a Creative Commons License.