P-DIFF: Learning Classifier with Noisy Labels based on Probability Difference Distributions
Hu, Wei ; Zhao, QiHao ; Huang, Yangyu ; Zhang, Fan
2020 25th International Conference on Pattern Recognition (ICPR), 2021, p.1882-1889
IEEE
No full text available
Title:
P-DIFF: Learning Classifier with Noisy Labels based on Probability Difference Distributions
Author:
Hu, Wei ; Zhao, QiHao ; Huang, Yangyu ; Zhang, Fan
Subjects:
Benchmark testing ; Computational efficiency ; Neural networks ; Noise measurement ; Pattern recognition ; Task analysis ; Training
Is part of:
2020 25th International Conference on Pattern Recognition (ICPR), 2021, p.1882-1889
Description:
Learning a deep neural network (DNN) classifier with noisy labels is challenging because a high-capacity DNN can easily overfit the noisy labels. In this paper, we present a very simple but effective training paradigm called P-DIFF, which trains DNN classifiers while noticeably alleviating the adverse impact of noisy labels. Our proposed probability difference distribution implicitly reflects the probability that a training sample is clean, and this probability is then used to re-weight the corresponding sample during training. P-DIFF achieves good performance even without prior knowledge of the noise rate of the training samples. Experiments on benchmark datasets also demonstrate that P-DIFF is superior to state-of-the-art sample selection methods.
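The core quantity in the abstract — the probability difference — can be sketched as the gap between the softmax probability assigned to a sample's observed label and the highest probability among the other classes: a small or negative gap suggests the label is noisy. The snippet below is a minimal NumPy illustration of this general re-weighting idea, not the authors' exact P-DIFF algorithm; the hard percentile cutoff and the `noise_rate` parameter are simplifying assumptions (the paper uses a distribution over the differences and soft re-weighting, and can operate without a known noise rate).

```python
import numpy as np

def pdiff_weights(probs, labels, noise_rate=0.25):
    """Simplified sketch of probability-difference re-weighting.

    probs:  (N, C) softmax outputs of the classifier.
    labels: (N,) observed (possibly noisy) integer labels.
    delta = p[observed label] - max over the other classes, in [-1, 1];
    a low delta suggests the observed label is noisy. Samples whose delta
    falls in the lowest `noise_rate` fraction get weight 0, the rest
    weight 1 -- a hard-threshold stand-in for the paper's soft weights.
    """
    n = len(labels)
    p_label = probs[np.arange(n), labels]
    masked = probs.copy()
    masked[np.arange(n), labels] = -np.inf          # exclude the labeled class
    delta = p_label - masked.max(axis=1)            # probability difference
    threshold = np.quantile(delta, noise_rate)      # cut the lowest fraction
    weights = (delta > threshold).astype(float)
    return weights, delta

# Toy batch: the last sample's label disagrees with the network's confident
# prediction, so it receives weight 0 and would not contribute to the loss.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.20, 0.10, 0.70],
                  [0.05, 0.90, 0.05]])
labels = np.array([0, 1, 2, 0])                     # last label looks noisy
weights, delta = pdiff_weights(probs, labels)
```

In a training loop these weights would multiply the per-sample cross-entropy terms, so suspected-noisy samples are suppressed while clean-looking samples train normally.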
Publisher:
IEEE
Language:
English