Quantifying the Vulnerability of Attributes for Effective Privacy Preservation Using Machine Learning
Majeed, Abdul ; Hwang, Seong Oun
Access, IEEE, 2023, Vol.11, p.4400-4411
IEEE
No full text available
Title:
Quantifying the Vulnerability of Attributes for Effective Privacy Preservation Using Machine Learning
Author:
Majeed, Abdul; Hwang, Seong Oun
Subjects:
anonymization; Codes; Data models; data owners; Data privacy; Data science; imbalanced data; Information filtering; Information integrity; Machine learning; Personal data; privacy; privacy models; privacy-utility trade-off; responsible data science; utility; vulnerability
Is part of:
Access, IEEE, 2023, Vol.11, p.4400-4411
Description:
Personal data have been increasingly used in data-driven applications to improve quality of life. However, privacy preservation of personal data while sharing it with analysts/researchers has become an essential requirement to be met by data owners (hospitals, banks, insurance companies, etc.). The existing literature on privacy preservation does not precisely quantify the vulnerability of each item among user attributes, thereby leading to explicit privacy disclosures and poor data utility during published data analytics. In this work, we propose and implement an automated way of quantifying the vulnerability of each item among the attributes by using a machine learning (ML) technique to significantly preserve the privacy of users without degrading data utility. Our work can solve four technical problems in the privacy preservation field: optimization of the privacy-utility trade-off, privacy guarantees (i.e., safeguards against identity and sensitive information disclosures) in imbalanced data (or clusters), over-anonymization issues, and rectifying or enabling the applicability of prior privacy models when data have skewed distributions. The experiments were performed on two real-world benchmark datasets to prove the feasibility of the concept in practical scenarios. Compared with state-of-the-art (SOTA) methods, the proposed method effectively preserves the equilibrium between utility and privacy in the anonymized data. Furthermore, our method can significantly contribute towards responsible data science (extracting enclosed knowledge from data without violating subjects' privacy) by controlling higher changes in data during its anonymization.
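The abstract describes scoring the vulnerability of each attribute value so that anonymization can be applied where re-identification risk is highest. The record does not include the paper's actual ML model, so the sketch below illustrates only the general idea with a simple non-learned proxy: an attribute value shared by few records is easier to use for re-identification, so rarer values score higher. All function and field names here are hypothetical illustrations, not the authors' method.

```python
from collections import Counter

def attribute_vulnerability(records, attributes):
    """Score each attribute by how strongly its values isolate individuals.

    Proxy (an assumption, not the paper's ML technique): the average
    rarity of each record's value for that attribute. A value held by
    only one record contributes 1.0 (maximally identifying); a value
    shared by k records contributes 1/k.
    """
    n = len(records)
    scores = {}
    for attr in attributes:
        counts = Counter(r[attr] for r in records)
        scores[attr] = sum(1.0 / counts[r[attr]] for r in records) / n
    return scores

# Hypothetical toy dataset of quasi-identifiers and a sensitive attribute.
records = [
    {"age": 34, "zip": "13560", "disease": "flu"},
    {"age": 34, "zip": "13560", "disease": "flu"},
    {"age": 51, "zip": "13565", "disease": "diabetes"},
    {"age": 29, "zip": "13560", "disease": "flu"},
]
scores = attribute_vulnerability(records, ["age", "zip", "disease"])
```

In this toy example `age` scores highest (two of its values are unique), suggesting it should be generalized or suppressed first. The paper replaces such a hand-crafted proxy with a learned model, which is what lets it adapt to imbalanced and skewed data.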
Publisher:
IEEE
Language:
English