Title:
Efficient Online Subclass Knowledge Distillation for Image Classification
Author:
Tzelepi, Maria ; Passalis, Nikolaos ; Tefas, Anastasios
Subjects:
Computational modeling ; Deep learning ; Embedded systems ; Image classification ; Pattern recognition ; Rendering (computer graphics) ; Training
Is part of:
2020 25th International Conference on Pattern Recognition (ICPR), 2021, p.1007-1014
Description:
Deploying state-of-the-art deep learning models on embedded systems imposes strict storage and computation limitations. In recent years, Knowledge Distillation (KD) has been recognized as a prominent approach to this issue: KD trains fast and compact deep learning models by transferring knowledge from more complex and powerful models. However, knowledge distillation in its conventional form involves multiple stages of training, making it a computationally and memory demanding procedure. In this paper, a novel single-stage self-knowledge distillation method, Online Subclass Knowledge Distillation (OSKD), is proposed that aims to reveal the similarities within classes so as to improve the performance of any deep neural model in an online manner. Thus, as opposed to existing online distillation methods, further knowledge is acquired from the model itself, without building multiple identical models or using multiple models to teach each other, which makes the proposed OSKD approach more efficient. The experimental evaluation on two datasets validates that the proposed method improves classification performance.
Publisher:
IEEE
Language:
English
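
The Description above outlines a single-stage, online self-distillation scheme: the model's own representations supply additional intra-class (subclass-level) training signal, with no separate teacher network and no duplicate peer models. The sketch below is a minimal illustration of that general idea only, not the authors' OSKD formulation; the compact network, the batch-wise intra-class similarity term, and the loss weight alpha are illustrative assumptions.

```python
# Minimal sketch of single-stage online self-distillation with an
# intra-class (subclass-like) similarity term, inspired by the idea the
# abstract describes. NOT the authors' exact OSKD method: the model,
# the similarity loss, and the weight alpha are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Compact CNN whose penultimate features are reused as distillation signal."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.backbone(x)          # penultimate representation
        return self.head(feats), feats

def intra_class_similarity_loss(feats, labels):
    """Pull each sample's features toward same-class samples in the batch.

    Acts as a self-distillation signal: the model's own representations
    define soft, class-internal targets, with no extra teacher network.
    """
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t()                                 # cosine similarities
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    same.fill_diagonal_(0)                                  # ignore self-pairs
    denom = same.sum().clamp(min=1.0)                       # avoid division by zero
    # Encourage high mean similarity over same-class pairs.
    return 1.0 - (sim * same).sum() / denom

model = SmallNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(images, labels, alpha: float = 0.1):
    """One online training step: supervised loss + self-distillation term."""
    logits, feats = model(images)
    loss = F.cross_entropy(logits, labels) + alpha * intra_class_similarity_loss(feats, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example: one step on random data shaped like 32x32 RGB images.
loss = train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```

Because the extra term reuses the penultimate features already computed in the forward pass, training stays single-stage and adds only a per-batch similarity computation, which mirrors the efficiency argument made in the description.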