
OPT_1: Gradient descent on convex function

  • In the gradient descent method, we shift the parameters along the descent direction, i.e., against the gradient. In this post, we denote the descent direction at iteration $k$ as:

$$d^{(k)} = -\nabla f\left(x^{(k)}\right)$$

where $x^{(k)}$ is the current point, and the parameter update formula is:

$$x^{(k+1)} = x^{(k)} + \alpha\, d^{(k)}$$

where $\alpha$ is called the learning rate; a minimal code sketch of this update is given below.
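The following is a minimal sketch of this update rule in Python (the function names, the test quadratic, and the fixed learning rate `alpha=0.1` are illustrative assumptions, not from the post):

```python
import numpy as np

def gradient_descent(grad_f, x0, alpha=0.1, max_iters=100, tol=1e-8):
    """Plain gradient descent: x <- x + alpha * d, with d = -grad f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        d = -grad_f(x)                  # descent direction d = -grad f(x)
        if np.linalg.norm(d) < tol:     # stop once the gradient is ~0
            break
        x = x + alpha * d               # parameter update
    return x

# Hypothetical example: f(x) = (x0 - 1)^2 + 2*(x1 + 3)^2, minimum at (1, -3)
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)])
print(gradient_descent(grad, x0=[0.0, 0.0]))  # converges to ~[1, -3]
```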

  • If we assume that our function $f$ is convex, we can find the optimal learning rate at each step and converge faster. For the convenience of finding this optimal value, we normalize the descent direction:

$$\hat{d} = \frac{d}{\lVert d \rVert}$$

This gives the following one-dimensional line search problem (solved numerically in the sketch below):

$$\alpha^{*} = \arg\min_{\alpha \ge 0} f\left(x + \alpha\, \hat{d}\right)$$
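As an illustration of this line search, the sketch below minimizes $f(x + \alpha\, \hat{d})$ over $\alpha$ with SciPy's bounded scalar minimizer; the quadratic objective and the upper bound `10.0` are arbitrary assumptions for the demo:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical convex quadratic: f(x) = x0^2 + 2*x1^2
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])

x = np.array([2.0, 1.0])          # current point
d = -grad_f(x)                    # descent direction
d_hat = d / np.linalg.norm(d)     # normalized descent direction

# One-dimensional objective phi(alpha) = f(x + alpha * d_hat)
phi = lambda alpha: f(x + alpha * d_hat)

# Minimize over alpha >= 0; the upper bound 10.0 is an arbitrary choice.
res = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded")
print(f"alpha* = {res.x:.4f}, f(x + alpha* d_hat) = {phi(res.x):.4f}")
```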

  • Because we consider $f$ to be convex, $\phi(\alpha) = f\left(x + \alpha\, \hat{d}\right)$ is also convex in $\alpha$, so we can solve for $\alpha^{*}$ by setting its derivative to zero:

$$\frac{\mathrm{d}}{\mathrm{d}\alpha}\, f\left(x + \alpha\, \hat{d}\right) = \nabla f\left(x + \alpha\, \hat{d}\right)^{\top} \hat{d} = 0$$

In other words, at the optimal step size the gradient at the new point is orthogonal to the search direction; a worked example follows below.
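As a concrete worked example (the quadratic below is an assumption, not from the post), take $f(x) = \tfrac{1}{2}\, x^{\top} A x - b^{\top} x$ with $A$ symmetric positive definite, so that $\nabla f(x) = A x - b$. Setting the derivative above to zero yields a closed-form optimal step size:

$$\left(A\left(x + \alpha\, \hat{d}\right) - b\right)^{\top} \hat{d} = 0 \;\Longrightarrow\; \alpha^{*} = \frac{\hat{d}^{\top}\left(b - A x\right)}{\hat{d}^{\top} A \hat{d}}$$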

Fig. 1 (Source: [1])

 

References:
  1. Mykel J. Kochenderfer and Tim A. Wheeler, Algorithms for Optimization, MIT Press, 2019.