
Quantifying the Vulnerability of Attributes for Effective Privacy Preservation Using Machine Learning

Majeed, Abdul; Hwang, Seong Oun

IEEE Access, 2023, Vol. 11, pp. 4400-4411

IEEE

  • Subjects: anonymization; Codes; Data models; data owners; Data privacy; Data science; imbalanced data; Information filtering; Information integrity; Machine learning; Personal data; privacy; privacy models; privacy-utility trade-off; responsible data science; utility; vulnerability
  • Description: Personal data have been increasingly used in data-driven applications to improve quality of life. However, preserving the privacy of personal data while sharing it with analysts/researchers has become an essential requirement for data owners (hospitals, banks, insurance companies, etc.). The existing literature on privacy preservation does not precisely quantify the vulnerability of each item among user attributes, leading to explicit privacy disclosures and poor data utility during published data analytics. In this work, we propose and implement an automated way of quantifying the vulnerability of each item among the attributes by using a machine learning (ML) technique, significantly preserving user privacy without degrading data utility. Our work addresses four technical problems in the privacy preservation field: optimization of the privacy-utility trade-off, privacy guarantees (i.e., safeguards against identity and sensitive-information disclosures) in imbalanced data (or clusters), over-anonymization issues, and enabling the applicability of prior privacy models when data have skewed distributions. Experiments were performed on two real-world benchmark datasets to demonstrate the feasibility of the concept in practical scenarios. Compared with state-of-the-art (SOTA) methods, the proposed method effectively preserves the equilibrium between utility and privacy in the anonymized data. Furthermore, our method can significantly contribute towards responsible data science (extracting enclosed knowledge from data without violating subjects' privacy) by limiting how much the data change during anonymization.
  • Language: English
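The description centers on scoring how "vulnerable" each attribute is, i.e., how easily its values can be inferred and exploited for re-identification. As a rough, self-contained illustration (not the paper's actual ML technique), one proxy for a column's vulnerability is how well the remaining columns predict it; the sketch below uses a simple conditional-mode predictor on synthetic data, and all column names and distributions are hypothetical:

```python
# Hypothetical sketch: attributes whose values are highly predictable
# from the rest of the record are easier re-identification targets.
from collections import Counter, defaultdict
import random

random.seed(0)
n = 300
rows = []
for _ in range(n):
    age_band = random.randrange(4)                # quasi-identifier
    zip3 = (age_band + random.randrange(2)) % 4   # correlated with age_band
    diagnosis = random.randrange(3)               # roughly independent
    rows.append((age_band, zip3, diagnosis))

names = ["age_band", "zip3", "diagnosis"]

def vulnerability(rows, j):
    """Accuracy of predicting column j by the majority value among
    records that share all the other columns (conditional mode)."""
    groups = defaultdict(Counter)
    for r in rows:
        key = tuple(v for i, v in enumerate(r) if i != j)
        groups[key][r[j]] += 1
    correct = sum(c.most_common(1)[0][1] for c in groups.values())
    return correct / len(rows)

scores = {name: vulnerability(rows, j) for j, name in enumerate(names)}
```

Under this proxy, the correlated `zip3` column scores higher than the independent `diagnosis` column, so an anonymizer could generalize or suppress `zip3` more aggressively while leaving `diagnosis` mostly intact, which is the kind of per-attribute, utility-aware treatment the abstract describes.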
