Application of the Variable Precision Rough Sets Model to Estimate the Outlier Probability of Each Element

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/81829
Item information
Title: Application of the Variable Precision Rough Sets Model to Estimate the Outlier Probability of Each Element
Authors: Maciá Pérez, Francisco | Berna-Martinez, Jose Vicente | Fernández Oliva, Alberto | Abreu Ortega, Miguel
Research Group/s: GrupoM. Redes y Middleware
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Rough sets theory | Variable precision rough sets | Outlier probability
Knowledge Area: Computer Architecture and Technology
Issue Date: 8-Oct-2018
Publisher: Hindawi Publishing Corporation
Citation: Complexity. 2018. Volume 2018, Article ID 4867607, 14 pages. doi:10.1155/2018/4867607
Abstract: In a data mining process, outlier detection aims to exploit the high marginality of outliers to identify them by measuring their degree of deviation from representative patterns, thereby yielding relevant knowledge. Although rough sets (RS) theory has been applied to knowledge discovery in databases (KDD) since its formulation in the 1980s, only in recent years has outlier detection been increasingly regarded as a KDD process in its own right. The application of RS theory as a basis for characterising and detecting outliers is a novel approach with great theoretical relevance and practical applicability. However, it is difficult to develop algorithms whose spatial and temporal complexity allows their application to realistic scenarios involving vast amounts of data and requiring very fast responses. This study presents a theoretical framework based on a generalisation of RS theory, the variable precision rough sets (VPRS) model, which enables a stochastic approach to the problem of assessing whether a given element is an outlier within a specific universe of data. Based on this framework, an algorithm with quasi-linear complexity is developed, making it applicable to large volumes of data. The experiments conducted demonstrate the feasibility of the proposed algorithm, whose usefulness is contextualised by comparison with other algorithms analysed in the literature.
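
To make the VPRS machinery behind this approach concrete, the following is a minimal Python sketch of Ziarko's variable precision rough sets model: it partitions a toy decision table into equivalence classes via the indiscernibility relation, computes the rough membership function, and forms beta-lower and beta-upper approximations. The decision table, attribute names, and the final outlier score (one minus rough membership) are illustrative assumptions for exposition only, not the paper's estimator or data.

from collections import defaultdict

# Illustrative decision table: each object has an id, two condition
# attributes (a, b), and a decision label (d). All values are made up.
objects = [
    {"id": 1, "a": 0, "b": 1, "d": "normal"},
    {"id": 2, "a": 0, "b": 1, "d": "normal"},
    {"id": 3, "a": 0, "b": 1, "d": "outlier"},
    {"id": 4, "a": 1, "b": 0, "d": "normal"},
    {"id": 5, "a": 1, "b": 0, "d": "normal"},
]

def equivalence_classes(objs, attrs):
    """Partition objects by their values on the condition attributes
    (the indiscernibility relation of rough sets theory)."""
    groups = defaultdict(list)
    for o in objs:
        groups[tuple(o[a] for a in attrs)].append(o["id"])
    return list(groups.values())

def rough_membership(cls, target):
    """mu([x], X) = |[x] intersect X| / |[x]|: the fraction of an
    equivalence class that lies inside the target set X."""
    return sum(1 for i in cls if i in target) / len(cls)

def vprs_approximations(classes, target, beta=0.8):
    """VPRS with precision beta in (0.5, 1]: a class joins the
    beta-lower approximation when mu >= beta, and the beta-upper
    approximation when mu > 1 - beta."""
    lower, upper = set(), set()
    for cls in classes:
        mu = rough_membership(cls, target)
        if mu >= beta:
            lower.update(cls)
        if mu > 1 - beta:
            upper.update(cls)
    return lower, upper

classes = equivalence_classes(objects, attrs=("a", "b"))
normal = {o["id"] for o in objects if o["d"] == "normal"}
lower, upper = vprs_approximations(classes, normal, beta=0.8)

# A crude, illustrative outlier score: one minus the rough membership
# of each object's class in the "normal" set (not the paper's measure).
for o in objects:
    cls = next(c for c in classes if o["id"] in c)
    print(o["id"], round(1 - rough_membership(cls, normal), 3))

Under this sketch, objects whose equivalence class falls outside the beta-lower approximation of the "normal" decision class receive a nonzero score and become outlier candidates; the paper's contribution is a principled, quasi-linear-complexity estimate of an outlier probability rather than this simple heuristic.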
Sponsor: This work has been supported by University of Alicante projects GRE14-02 and Smart University.
URI: http://hdl.handle.net/10045/81829
ISSN: 1076-2787 (Print) | 1099-0526 (Online)
DOI: 10.1155/2018/4867607
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2018 Francisco Maciá Pérez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Peer Review: yes
Publisher version: https://doi.org/10.1155/2018/4867607
Appears in Collections: INV - GrupoM - Artículos de Revistas

Files in This Item:
File: 2018_Macia_etal_Complexity.pdf | Size: 1,93 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License.