Colloquia & Seminars

Convergence Rates for some Fundamental Algorithms for Continuous Optimization
Date/Time: 17 Jan 2019, 15:00
Venue: S17 #04-05 SR2
Speaker: Lee Ching-pei, University of Wisconsin-Madison

Abstract: Minimization of functions that are a sum of a smooth part and a convex but possibly nonsmooth part that promotes regularization is prevalent in many application areas, and proximal-type methods are the most effective algorithms for such regularized optimization problems. In this talk, I will present our recent advances in the convergence analysis of first-order and second-order proximal-type methods for regularized optimization in [1, 2]. I will first show that the long-known O(1/k) convergence rate of gradient descent and coordinate descent in the function value on convex problems is not tight and can be improved to o(1/k), and extend this result to proximal gradient and proximal coordinate descent for regularized optimization. The second part of this talk will focus on a general framework of inexact successive quadratic approximation that has second-order proximal methods, including proximal Newton and proximal quasi-Newton, as special cases. Global convergence rates for strongly convex, convex, and nonconvex problems will be discussed.

References:

[1] Ching-pei Lee and Stephen J. Wright. First-order algorithms converge faster than O(1/k) on convex problems. Technical report, December 2018. URL: http://www.optimization-online.org/DB_HTML/2018/12/6993.html

[2] Ching-pei Lee and Stephen J. Wright. Inexact successive quadratic approximation for regularized optimization. Computational Optimization and Applications, 2019+, accepted. URL: http://www.optimization-online.org/DB_HTML/2018/03/6499.html
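
The announcement itself contains no code; purely as an illustration of the proximal gradient iteration whose O(1/k) rate is sharpened to o(1/k) in [1], the following is a minimal textbook sketch in Python for an l1-regularized least-squares (lasso) problem, where the proximal operator of the l1 norm is soft-thresholding. The problem instance, function names, and parameter choices are illustrative assumptions, not material from the talk.

import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent.
    # step should be at most 1/L, where L = ||A||_2^2 is the Lipschitz
    # constant of the gradient of the smooth part.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # prox step on the nonsmooth part
    return x

# Usage on a small random lasso instance (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2      # spectral norm squared = Lipschitz constant
x = proximal_gradient(A, b, lam=0.1, step=1.0 / L)

The same loop structure specializes to plain gradient descent when the regularizer is absent (the prox becomes the identity) and, with coordinate-wise updates, to proximal coordinate descent, the other first-order methods covered by the abstract.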