Gradient methods for minimizing composite functions
Nesterov, Yu
Mathematical programming, 2013-08, Vol.140 (1), p.125-161
[Peer-reviewed journal]
Berlin/Heidelberg: Springer-Verlag
Full text available
Title:
Gradient methods for minimizing composite functions
Author:
Nesterov, Yu
Subjects:
Calculus of Variations and Optimal Control; Optimization; Combinatorics; Complexity theory; Composite functions; Computation; Convergence; Convex analysis; Descent; Full Length Paper; Iterative methods; Mathematical analysis; Mathematical and Computational Physics; Mathematical Methods in Physics; Mathematical models; Mathematical programming; Mathematics; Mathematics and Statistics; Mathematics of Computing; Methods; Numerical Analysis; Studies; Theoretical
Is part of:
Mathematical programming, 2013-08, Vol.140 (1), p.125-161
Description:
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate O(1/k)), and an accelerated multistep version with convergence rate O(1/k^2), where k is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient "line search" procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
Publisher:
Berlin/Heidelberg: Springer-Verlag
Language:
English
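
The abstract describes composite minimization, min over x of f(x) + Psi(x), where f is smooth and accessed only through a black-box oracle and Psi is a simple convex function with known structure. As an illustration only (not the paper's exact algorithm), below is a minimal proximal-gradient sketch in Python, assuming the common special case Psi(x) = lam * ||x||_1, whose proximal operator is componentwise soft-thresholding, with a basic backtracking search for the unknown Lipschitz constant L, in the spirit of the "line search" procedures the abstract mentions. All function names and the LASSO-style test data are hypothetical.

import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(f, grad_f, x0, lam, L0=1.0, max_iter=500, tol=1e-8):
    # Minimize f(x) + lam * ||x||_1, with f and grad_f as black-box oracles
    # for the smooth term. The Lipschitz estimate L is doubled whenever the
    # quadratic upper model of f is violated (illustrative backtracking,
    # not the paper's exact scheme).
    x, L = x0.astype(float), L0
    for _ in range(max_iter):
        fx, g = f(x), grad_f(x)
        while True:
            x_new = soft_threshold(x - g / L, lam / L)  # composite gradient step
            d = x_new - x
            if f(x_new) <= fx + g @ d + 0.5 * L * (d @ d) + 1e-12:
                break        # quadratic upper model holds: accept the step
            L *= 2.0         # otherwise increase the Lipschitz estimate and retry
        if np.linalg.norm(d) <= tol:
            return x_new
        x, L = x_new, max(L / 2.0, 1e-3)  # let L shrink again between iterations
    return x

# Hypothetical usage: a LASSO-type problem with f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_star = proximal_gradient(f, grad_f, np.zeros(100), lam=0.5)
print("nonzero coefficients:", np.count_nonzero(np.abs(x_star) > 1e-6))

For convex problems, a step of this form attains the O(1/k) rate the abstract cites for the plain gradient variant; the accelerated multistep version with the O(1/k^2) rate would additionally use an extrapolation (momentum) point between iterations.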