OPT_1: Gradient descent on a convex function
02/07/2021 03:50 · math, optimization
- In the gradient descent method, we move the parameters along the descent direction, i.e. against the gradient. In this post, we denote the descent direction as:

$$d_k = -\nabla f(x_k)$$

where $x_k$ is the current point, and the parameter update formula is:

$$x_{k+1} = x_k + \alpha d_k$$

where $\alpha$ is called the learning rate.
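- To make the update rule concrete, here is a minimal Python sketch of the loop; the objective, learning rate, and step count below are my own choices for illustration:

```python
import numpy as np

def gradient_descent(grad_f, x0, alpha=0.1, n_steps=100):
    """Run gradient descent with a fixed learning rate alpha.

    grad_f : callable returning the gradient of f at a point
    x0     : initial point (NumPy array)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        d = -grad_f(x)      # descent direction d_k = -grad f(x_k)
        x = x + alpha * d   # update: x_{k+1} = x_k + alpha * d_k
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=np.array([0.0]))
print(x_min)  # converges toward 3.0
```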
- If we assume that our function $f$ is convex, we can find the optimal learning rate to converge faster. For the convenience of finding this optimal value, we normalize the descent direction:

$$d_k = -\frac{\nabla f(x_k)}{\|\nabla f(x_k)\|}$$

This leads to the following one-dimensional problem, known as exact line search:

$$\alpha^* = \arg\min_{\alpha \ge 0} f(x_k + \alpha d_k)$$
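- One practical way to solve this one-dimensional problem numerically is with a scalar minimizer. Below is a sketch using `scipy.optimize.minimize_scalar`; the quadratic test objective and the search interval `(0, 10)` are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_search_step(f, grad_f, x):
    """One steepest-descent step with a numerical exact line search."""
    g = grad_f(x)
    d = -g / np.linalg.norm(g)  # normalized descent direction
    # Minimize the 1-D function g(alpha) = f(x + alpha * d) over alpha > 0
    res = minimize_scalar(lambda a: f(x + a * d),
                          bounds=(0.0, 10.0), method="bounded")
    return x + res.x * d

# Example on the convex quadratic f(x) = x1^2 + 4*x2^2
f = lambda x: x[0]**2 + 4 * x[1]**2
grad_f = lambda x: np.array([2 * x[0], 8 * x[1]])
x = np.array([2.0, 1.0])
for _ in range(20):
    x = line_search_step(f, grad_f, x)
print(x)  # approaches the minimizer [0, 0]
```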
- Because we consider $f$ convex, the one-dimensional function $g(\alpha) = f(x_k + \alpha d_k)$ is also convex in $\alpha$, so we can solve for $\alpha^*$ by setting its derivative to zero:

$$g'(\alpha) = \nabla f(x_k + \alpha d_k)^\top d_k = 0$$
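- For a convex quadratic this condition has a closed-form solution. As a sketch, take $f(x) = \frac{1}{2}x^\top A x - b^\top x$ (my choice for illustration); then $g'(\alpha) = \nabla f(x)^\top d + \alpha\, d^\top A d = 0$ gives $\alpha^* = -\nabla f(x)^\top d \,/\, (d^\top A d)$:

```python
import numpy as np

def exact_step_quadratic(A, b, x, d):
    """Optimal alpha for f(x) = 0.5*x^T A x - b^T x along direction d.

    Setting d/d_alpha f(x + alpha*d) = 0 gives
    alpha* = -(grad f(x)^T d) / (d^T A d).
    """
    g = A @ x - b  # grad f(x)
    return -(g @ d) / (d @ (A @ d))

# Steepest descent with the exact step on a small SPD system
A = np.array([[2.0, 0.0], [0.0, 8.0]])
b = np.zeros(2)
x = np.array([2.0, 1.0])
for _ in range(20):
    g = A @ x - b
    d = -g / np.linalg.norm(g)  # normalized descent direction
    x = x + exact_step_quadratic(A, b, x, d) * d
print(x)  # converges to the minimizer A^{-1} b = [0, 0]
```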
Reference:
- *Algorithms for Optimization*, Mykel J. Kochenderfer and Tim A. Wheeler, MIT Press, 2019.