The idea is to take repeated steps in the opposite direction of the
gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a
local maximum of that function; the procedure is then known as gradient ascent.
It is particularly useful in machine learning for minimizing the cost or loss function.[1] Gradient descent should not be confused with
local search algorithms, although both are
iterative methods for
optimization.
Gradient descent is generally attributed to
Augustin-Louis Cauchy, who first suggested it in 1847.[2] Jacques Hadamard independently proposed a similar method in 1907.[3][4] Its convergence properties for non-linear optimization problems were first studied by
Haskell Curry in 1944,[5] with the method becoming increasingly well-studied and used in the following decades.[6][7]
Illustration of gradient descent on a series of level sets
Gradient descent is based on the observation that if the multi-variable function $f(\mathbf{x})$ is defined and differentiable in a neighborhood of a point $\mathbf{a}$, then $f(\mathbf{x})$ decreases fastest if one goes from $\mathbf{a}$ in the direction of the negative gradient of $f$ at $\mathbf{a}$, that is, $-\nabla f(\mathbf{a})$. It follows that, if

$\mathbf{a}_{n+1} = \mathbf{a}_n - \eta\,\nabla f(\mathbf{a}_n)$

for a small enough step size or learning rate $\eta \in \mathbb{R}_{+}$, then $f(\mathbf{a}_n) \geq f(\mathbf{a}_{n+1})$. In other words, the term $\eta\,\nabla f(\mathbf{a}_n)$ is subtracted from $\mathbf{a}_n$ because we want to move against the gradient, toward the local minimum. With this observation in mind, one starts with a guess $\mathbf{x}_0$ for a local minimum of $f$ and considers the sequence $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots$ such that

$\mathbf{x}_{n+1} = \mathbf{x}_n - \eta_n\,\nabla f(\mathbf{x}_n),\quad n \geq 0,$

which yields a monotonic sequence $f(\mathbf{x}_0) \geq f(\mathbf{x}_1) \geq f(\mathbf{x}_2) \geq \cdots$, so, hopefully, the sequence $(\mathbf{x}_n)$ converges to the desired local minimum. Note that the value of the step size $\eta$ is allowed to change at every iteration.
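To make the iteration concrete, here is a minimal sketch in Python; the quadratic test function, the fixed step size, and the stopping tolerance are illustrative assumptions for demonstration, not part of the method itself:

```python
import numpy as np

def gradient_descent(grad_f, x0, eta=0.1, n_steps=100, tol=1e-8):
    """Iterate x_{n+1} = x_n - eta * grad_f(x_n), starting from the guess x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:  # gradient (nearly) zero: stationary point
            break
        x = x - eta * g              # step against the gradient
    return x

# Example: minimize f(x, y) = (x - 3)^2 + 2*(y + 1)^2,
# whose gradient is (2(x - 3), 4(y + 1)) and whose minimum is (3, -1).
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])
print(gradient_descent(grad, x0=[0.0, 0.0]))  # approaches [3., -1.]
```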
It is possible to guarantee the convergence to a local minimum under certain assumptions on the function $f$ (for example, $f$ convex and $\nabla f$ Lipschitz) and particular choices of $\eta$. Those include the sequence

$\eta_n = \frac{\left|\left(\mathbf{x}_n - \mathbf{x}_{n-1}\right)^{\top}\left[\nabla f(\mathbf{x}_n) - \nabla f(\mathbf{x}_{n-1})\right]\right|}{\left\|\nabla f(\mathbf{x}_n) - \nabla f(\mathbf{x}_{n-1})\right\|^2}$

as in the Barzilai–Borwein method,[8][9] or a sequence $\eta_n$ satisfying the Wolfe conditions (which can be found by using line search). When the function $f$ is convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution.
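The Barzilai–Borwein step size above can be implemented in a few lines. This is a minimal sketch; the test gradient and the fixed first step `eta0` are assumptions for demonstration:

```python
import numpy as np

def bb_gradient_descent(grad_f, x0, eta0=0.1, n_steps=50):
    """Gradient descent with the Barzilai-Borwein step size.

    The first step uses the fixed size eta0; afterwards eta_n is
    |dx . dg| / ||dg||^2, with dx = x_n - x_{n-1} and dg = grad(x_n) - grad(x_{n-1}).
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad_f(x_prev)
    x = x_prev - eta0 * g_prev           # plain first step
    for _ in range(n_steps - 1):
        g = grad_f(x)
        dx, dg = x - x_prev, g - g_prev
        denom = dg @ dg
        if denom == 0:                   # gradient unchanged: stop
            break
        eta = abs(dx @ dg) / denom       # Barzilai-Borwein step size
        x_prev, g_prev = x, g
        x = x - eta * g
    return x

grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])  # same test gradient
print(bb_gradient_descent(grad, x0=[0.0, 0.0]))              # approaches [3., -1.]
```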
This process is illustrated in the adjacent picture. Here, $f$ is assumed to be defined on the plane, and its graph is assumed to have a bowl shape. The blue curves are the contour lines, that is, the regions on which the value of $f$ is constant. A red arrow originating at a point shows the direction of the negative gradient at that point. Note that the (negative) gradient at a point is orthogonal to the contour line going through that point. We see that gradient descent leads us to the bottom of the bowl, that is, to the point where the value of the function $f$ is minimal.
An analogy for understanding gradient descent
Fog in the mountains
The basic intuition behind gradient descent can be illustrated by a hypothetical scenario. Persons are stuck in the mountains and are trying to get down (i.e., trying to find the global minimum). There is heavy fog such that visibility is extremely low. Therefore, the path down the mountain is not visible, so they must use local information to find the minimum. They can use the method of gradient descent, which involves looking at the steepness of the hill at their current position, then proceeding in the direction with the steepest descent (i.e., downhill). If they were trying to find the top of the mountain (i.e., the maximum), then they would proceed in the direction of steepest ascent (i.e., uphill). Using this method, they would eventually find their way down the mountain or possibly get stuck in some hole (i.e., local minimum or
saddle point), like a mountain lake. However, assume also that the steepness of the hill is not immediately obvious by simple observation; rather, it requires a sophisticated instrument to measure, which the persons happen to have at the moment. It takes quite some time to measure the steepness of the hill with the instrument, so they should minimize their use of the instrument if they want to get down the mountain before sunset. The difficulty then is choosing the frequency at which they should measure the steepness of the hill so as not to go off track.
In this analogy, the persons represent the algorithm, and the path taken down the mountain represents the sequence of parameter settings that the algorithm will explore. The steepness of the hill represents the
slope of the function at that point. The instrument used to measure steepness is
differentiation. The direction they choose to travel in aligns with the
gradient of the function at that point. The amount of time they travel before taking another measurement is the step size.
Choosing the step size and descent direction
Since using a step size $\eta$ that is too small would slow convergence, and one too large would lead to overshoot and divergence, finding a good setting of $\eta$ is an important practical problem.
Philip Wolfe also advocated using "clever choices of the [descent] direction" in practice.[10] Whilst using a direction that deviates from the steepest descent direction may seem counter-intuitive, the idea is that the smaller slope may be compensated for by being sustained over a much longer distance.
To reason about this mathematically, consider a direction $\mathbf{p}_n$ and a step size $\eta_n$, and consider the more general update:

$\mathbf{a}_{n+1} = \mathbf{a}_n - \eta_n\,\mathbf{p}_n.$
Finding good settings of $\mathbf{p}_n$ and $\eta_n$ requires some thought. First of all, we would like the update direction to point downhill. Mathematically, letting $\theta_n$ denote the angle between $-\nabla f(\mathbf{a}_n)$ and $\mathbf{p}_n$, this requires that $\cos\theta_n > 0$. To say more, we need more information about the objective function that we are optimising. Under the fairly weak assumption that $f$ is continuously differentiable, we may prove that:[11]

$f(\mathbf{a}_{n+1}) \leq f(\mathbf{a}_n) - \eta_n \left\|\nabla f(\mathbf{a}_n)\right\|_2 \left\|\mathbf{p}_n\right\|_2 \left[\cos\theta_n - \max_{t\in[0,1]} \frac{\left\|\nabla f(\mathbf{a}_n - t\eta_n\mathbf{p}_n) - \nabla f(\mathbf{a}_n)\right\|_2}{\left\|\nabla f(\mathbf{a}_n)\right\|_2}\right] \qquad (1)
This inequality implies that the amount by which we can be sure the function $f$ is decreased depends on a trade-off between the two terms in square brackets. The first term in square brackets measures the angle between the descent direction and the negative gradient. The second term measures how quickly the gradient changes along the descent direction.
In principle inequality (1) could be optimized over $\mathbf{p}_n$ and $\eta_n$ to choose an optimal step size and direction. The problem is that evaluating the second term in square brackets requires evaluating $\nabla f(\mathbf{a}_n - t\eta_n\mathbf{p}_n)$, and extra gradient evaluations are generally expensive and undesirable. Some ways around this problem are:
Forgo the benefits of a clever descent direction by setting $\mathbf{p}_n = \nabla f(\mathbf{a}_n)$, and use line search to find a suitable step-size $\eta_n$, such as one that satisfies the Wolfe conditions. A more economic way of choosing learning rates is backtracking line search, a method that has both good theoretical guarantees and experimental results (see the sketch after this list). Note that one does not need to choose $\mathbf{p}_n$ to be the gradient; any direction that has positive inner product with the gradient will result in a reduction of the function value (for a sufficiently small value of $\eta_n$).
Assuming that $f$ is twice-differentiable, use its Hessian $\nabla^2 f$ to estimate $\left\|\nabla f(\mathbf{a}_n - t\eta_n\mathbf{p}_n) - \nabla f(\mathbf{a}_n)\right\|_2 \approx \left\|t\eta_n \nabla^2 f(\mathbf{a}_n)\,\mathbf{p}_n\right\|_2$. Then choose $\mathbf{p}_n$ and $\eta_n$ by optimising inequality (1).
Assuming that $\nabla f$ is Lipschitz, use its Lipschitz constant $L$ to bound $\left\|\nabla f(\mathbf{a}_n - t\eta_n\mathbf{p}_n) - \nabla f(\mathbf{a}_n)\right\|_2 \leq L t \eta_n \left\|\mathbf{p}_n\right\|_2$. Then choose $\mathbf{p}_n$ and $\eta_n$ by optimising inequality (1).
Build a custom model of $\max_{t\in[0,1]} \frac{\left\|\nabla f(\mathbf{a}_n - t\eta_n\mathbf{p}_n) - \nabla f(\mathbf{a}_n)\right\|_2}{\left\|\nabla f(\mathbf{a}_n)\right\|_2}$ for $f$. Then choose $\mathbf{p}_n$ and $\eta_n$ by optimising inequality (1).
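Here is a minimal sketch of the backtracking line search mentioned in the first item, using the Armijo sufficient-decrease condition with the steepest descent direction $\mathbf{p}_n = \nabla f(\mathbf{a}_n)$; the shrink factor 0.5 and the constant $c = 10^{-4}$ are conventional but assumed choices:

```python
import numpy as np

def backtracking_step(f, grad_f, x, eta=1.0, shrink=0.5, c=1e-4):
    """Shrink eta until the Armijo sufficient-decrease condition holds:
    f(x - eta*g) <= f(x) - c * eta * ||g||^2, with descent direction p = g."""
    g = grad_f(x)
    fx = f(x)
    while f(x - eta * g) > fx - c * eta * (g @ g):
        eta *= shrink                 # step too ambitious: halve it and retry
    return x - eta * g

f = lambda x: (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])

x = np.array([0.0, 0.0])
for _ in range(30):                   # repeat the backtracked step
    x = backtracking_step(f, grad, x)
print(x)                              # approaches [3., -1.]
```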
Usually by following one of the recipes above,
convergence to a local minimum can be guaranteed. When the function is
convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution.
Solution of a linear system
Gradient descent can be used to solve a system of linear equations, reformulated as a quadratic minimization problem. If the system matrix $A$ is real symmetric and positive-definite, an objective function is defined as the quadratic function

$F(\mathbf{x}) = \mathbf{x}^{\top} A\mathbf{x} - 2\,\mathbf{x}^{\top}\mathbf{b},$

whose minimizer solves $A\mathbf{x} = \mathbf{b}$, since $\nabla F(\mathbf{x}) = 2(A\mathbf{x} - \mathbf{b})$. In traditional linear least squares for real $A$ and $\mathbf{b}$ the Euclidean norm is used, in which case

$F(\mathbf{x}) = \left\|A\mathbf{x} - \mathbf{b}\right\|^2 \quad\text{and}\quad \nabla F(\mathbf{x}) = 2A^{\top}(A\mathbf{x} - \mathbf{b}).$
The line search minimization, finding the locally optimal step size $\eta$ on every iteration, can be performed analytically for quadratic functions, and explicit formulas for the locally optimal $\eta$ are known.[6][13] For a symmetric positive-definite $A$, the residual $\mathbf{r} = \mathbf{b} - A\mathbf{x}$ is the descent direction and the locally optimal step is $\eta = \mathbf{r}^{\top}\mathbf{r} / (\mathbf{r}^{\top} A\mathbf{r})$. To avoid multiplying by $A$ twice per iteration, we note that the update $\mathbf{x} := \mathbf{x} + \eta\,\mathbf{r}$ implies $\mathbf{r} := \mathbf{r} - \eta\,A\mathbf{r}$, which gives the traditional algorithm,[14] sketched below.
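A minimal sketch of that traditional algorithm, assuming a symmetric positive-definite $A$; the small test system is an illustrative assumption:

```python
import numpy as np

def steepest_descent_spd(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by steepest descent.

    Uses the residual r = b - A x as the descent direction, the exact
    line-search step eta = (r.r)/(r.A r), and the update r := r - eta*A r
    so that A is multiplied only once per iteration."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x
    for _ in range(max_iter):
        if r @ r < tol:
            break
        Ar = A @ r                  # the single matrix-vector product per iteration
        eta = (r @ r) / (r @ Ar)    # locally optimal step size for quadratics
        x = x + eta * r
        r = r - eta * Ar            # cheap residual update, no second product
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive-definite test matrix
b = np.array([1.0, 2.0])
print(steepest_descent_spd(A, b), np.linalg.solve(A, b))  # the two should agree
```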
The method is rarely used for solving linear equations, with the conjugate gradient method being one of the most popular alternatives. The number of gradient descent iterations is commonly proportional to the spectral condition number $\kappa(A)$ of the system matrix $A$ (the ratio of the maximum to minimum eigenvalues of $A^{\top}A$), while the convergence of the conjugate gradient method is typically determined by the square root of the condition number, i.e., is much faster. Both methods can benefit from preconditioning, where gradient descent may require fewer assumptions on the preconditioner.[14]
Solution of a non-linear system
Gradient descent can also be used to solve a system of
nonlinear equations. Below is an example that shows how to use gradient descent to solve for three unknown variables, $x_1$, $x_2$, and $x_3$. This example shows one iteration of gradient descent.
Consider the nonlinear system of equations

$\begin{cases} 3x_1 - \cos(x_2 x_3) - \tfrac{3}{2} = 0 \\ 4x_1^2 - 625x_2^2 + 2x_2 - 1 = 0 \\ \exp(-x_1 x_2) + 20x_3 + \tfrac{10\pi - 3}{3} = 0 \end{cases}$

Let us introduce the associated function

$G(\mathbf{x}) = \begin{bmatrix} 3x_1 - \cos(x_2 x_3) - \tfrac{3}{2} \\ 4x_1^2 - 625x_2^2 + 2x_2 - 1 \\ \exp(-x_1 x_2) + 20x_3 + \tfrac{10\pi - 3}{3} \end{bmatrix},$

where

$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$

One might now define the objective function

$F(\mathbf{x}) = \frac{1}{2} G^{\top}(\mathbf{x})\, G(\mathbf{x}),$

which we will attempt to minimize. As an initial guess, let us use

$\mathbf{x}^{(0)} = \mathbf{0} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$
An animation showing the first 83 iterations of gradient descent applied to this example. Surfaces are isosurfaces of $F$ at the current guess $\mathbf{x}^{(n)}$, and arrows show the direction of descent. Due to a small and constant step size, the convergence is slow.
Now, a suitable $\eta_0$ must be found such that

$F\left(\mathbf{x}^{(1)}\right) \leq F\left(\mathbf{x}^{(0)}\right), \quad\text{where}\quad \mathbf{x}^{(1)} = \mathbf{x}^{(0)} - \eta_0\,\nabla F\left(\mathbf{x}^{(0)}\right).$

This can be done with any of a variety of line search algorithms. One might also simply guess $\eta_0 = 0.001$, which gives

$\mathbf{x}^{(1)} = (0.0075,\ 0.002,\ -0.2094).$

Evaluating the objective function at this value yields

$F\left(\mathbf{x}^{(1)}\right) \approx 23.306.$

The decrease from $F\left(\mathbf{x}^{(0)}\right) = 58.456$ to the next step's value of $23.306$ is a sizable decrease in the objective function. Further steps would reduce its value further until an approximate solution to the system was found.
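The worked example above can be reproduced with a short script. The Jacobian below is derived by hand from $G$ (treat it as an illustrative sketch rather than part of the source):

```python
import numpy as np

def G(x):
    """The vector function whose root we seek."""
    x1, x2, x3 = x
    return np.array([
        3 * x1 - np.cos(x2 * x3) - 1.5,
        4 * x1**2 - 625 * x2**2 + 2 * x2 - 1,
        np.exp(-x1 * x2) + 20 * x3 + (10 * np.pi - 3) / 3,
    ])

def jac_G(x):
    """Jacobian matrix of G, computed analytically."""
    x1, x2, x3 = x
    return np.array([
        [3, x3 * np.sin(x2 * x3), x2 * np.sin(x2 * x3)],
        [8 * x1, -1250 * x2 + 2, 0],
        [-x2 * np.exp(-x1 * x2), -x1 * np.exp(-x1 * x2), 20],
    ])

F = lambda x: 0.5 * G(x) @ G(x)        # objective F = (1/2) G^T G
grad_F = lambda x: jac_G(x).T @ G(x)   # gradient of F via the chain rule

x = np.zeros(3)                         # initial guess x^(0) = (0, 0, 0)
print(F(x))                             # 58.456...
x = x - 0.001 * grad_F(x)               # one step with the guessed eta_0 = 0.001
print(x, F(x))                          # x^(1) and the decreased value, about 23.306
```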
Comments
Gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones. In the latter case, the search space is typically a
function space, and one calculates the
Fréchet derivative of the functional to be minimized to determine the descent direction.[7]
That gradient descent works in any number of dimensions (finite number at least) can be seen as a consequence of the
Cauchy-Schwarz inequality, i.e. the magnitude of the inner (dot) product of two vectors of any dimension is maximized when they are
collinear. In the case of gradient descent, that would be when the vector of independent variable adjustments is proportional to the gradient vector of partial derivatives.
Gradient descent can take many iterations to compute a local minimum with a required
accuracy, if the
curvature in different directions is very different for the given function. For such functions,
preconditioning, which changes the geometry of the space to shape the function level sets like
concentric circles, cures the slow convergence. Constructing and applying preconditioning can be computationally expensive, however.
Gradient descent can be combined with a line search, finding the locally optimal step size $\eta$ on every iteration. Performing the line search can be time-consuming. Conversely, using a fixed small $\eta$ can yield poor convergence, and a fixed great $\eta$ can lead to divergence. Nevertheless, one may alternate small and large stepsizes to improve the convergence rate.[15][16]
Methods based on
Newton's method and inversion of the
Hessian using
conjugate gradient techniques can be better alternatives.[17][18] Generally, such methods converge in fewer iterations, but the cost of each iteration is higher. An example is the
BFGS method, which consists of calculating on every step a matrix by which the gradient vector is multiplied to go in a "better" direction, combined with a more sophisticated line search algorithm to find the "best" value of $\eta$. For extremely large problems, where computer-memory issues dominate, a limited-memory method such as L-BFGS should be used instead of BFGS or steepest descent.
Gradient descent can converge to a local minimum and slow down in a neighborhood of a
saddle point. Even for unconstrained quadratic minimization, gradient descent develops a zig-zag pattern of subsequent iterates as iterations progress, resulting in slow convergence. Multiple modifications of gradient descent have been proposed to address these deficiencies.
Fast gradient methods
Yurii Nesterov has proposed[20] a simple modification that enables faster convergence for convex problems and has been since further generalized. For unconstrained smooth problems, the method is called the fast gradient method (FGM) or the accelerated gradient method (AGM). Specifically, if the differentiable function $f$ is convex and $\nabla f$ is Lipschitz, and it is not assumed that $f$ is strongly convex, then the error in the objective value generated at each step $k$ by the gradient descent method will be bounded by $\mathcal{O}(1/k)$. Using the Nesterov acceleration technique, the error decreases at $\mathcal{O}(1/k^2)$.[21][22] It is known that the rate $\mathcal{O}(1/k^2)$ for the decrease of the cost function is optimal for first-order optimization methods. Nevertheless, there is the opportunity to improve the algorithm by reducing the constant factor. The
optimized gradient method (OGM)[23] reduces that constant by a factor of two and is an optimal first-order method for large-scale problems.[24]
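One standard form of Nesterov's accelerated gradient method can be sketched as follows, assuming $f$ is convex with an $L$-Lipschitz gradient; the extrapolation schedule shown is one common variant, not the only one:

```python
import numpy as np

def nesterov(grad_f, x0, L, n_steps=100):
    """Nesterov's accelerated gradient method with step size 1/L,
    for a convex f whose gradient is L-Lipschitz."""
    x_prev = np.asarray(x0, dtype=float)
    y = x_prev.copy()
    t = 1.0
    for _ in range(n_steps):
        x = y - grad_f(y) / L                       # gradient step from the lookahead point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2   # extrapolation weight schedule
        y = x + ((t - 1) / t_next) * (x - x_prev)   # momentum-like extrapolation
        x_prev, t = x, t_next
    return x_prev

# Test on f(x, y) = (x - 3)^2 + 2*(y + 1)^2, whose gradient is 4-Lipschitz.
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])
print(nesterov(grad, x0=[0.0, 0.0], L=4.0))  # approaches [3., -1.]
```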
Trying to break the zig-zag pattern of gradient descent, the momentum or heavy ball method uses a momentum term in analogy to a heavy ball sliding on the surface of values of the function being minimized,[6] or to mass movement in
Newtonian dynamics through a
viscous medium in a
conservative force field.[25] Gradient descent with momentum remembers the solution update at each iteration, and determines the next update as a
linear combination of the gradient and the previous update. For unconstrained quadratic minimization, a theoretical convergence rate bound of the heavy ball method is asymptotically the same as that for the optimal
conjugate gradient method.[6]
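A minimal sketch of the heavy-ball update, where the momentum coefficient `beta = 0.9` and step size `eta = 0.1` are illustrative choices:

```python
import numpy as np

def momentum_descent(grad_f, x0, eta=0.1, beta=0.9, n_steps=200):
    """Heavy-ball gradient descent: the next update is a linear combination
    of the current gradient and the previous update (the momentum term)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                  # accumulated update ("velocity")
    for _ in range(n_steps):
        v = beta * v - eta * grad_f(x)    # mix previous update with new gradient
        x = x + v
    return x

grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])
print(momentum_descent(grad, x0=[0.0, 0.0]))  # approaches [3., -1.]
```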
Gradient descent can be extended to handle
constraints by including a
projection onto the set of constraints. This method is only feasible when the projection is efficiently computable on a computer. Under suitable assumptions, this method converges. This method is a specific case of the
forward-backward algorithm for monotone inclusions (which includes
convex programming and
variational inequalities).[28]
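A minimal sketch of projected gradient descent, using coordinate-wise clipping as the projection onto an assumed box constraint set $[0, 1]^2$:

```python
import numpy as np

def projected_gradient(grad_f, project, x0, eta=0.1, n_steps=100):
    """Projected gradient descent: take a gradient step, then project
    the result back onto the constraint set."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(n_steps):
        x = project(x - eta * grad_f(x))
    return x

# Example: minimize f(x, y) = (x - 3)^2 + 2*(y + 1)^2 over the box [0, 1]^2.
# Projection onto a box is just coordinate-wise clipping.
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])
box = lambda x: np.clip(x, 0.0, 1.0)
print(projected_gradient(grad, box, x0=[0.5, 0.5]))  # approaches [1., 0.]
```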
The properties of gradient descent depend on the properties of the objective function and the variant of gradient descent used (for example, if a
line search step is used). The assumptions made affect the convergence rate, and other properties, that can be proven for gradient descent.[30] For example, if the objective is assumed to be
strongly convex and
Lipschitz smooth, then gradient descent converges linearly with a fixed step size.[1] Looser assumptions lead to either weaker convergence guarantees or require a more sophisticated step size selection.[30]
^ Hadamard, Jacques (1908). "Mémoire sur le problème d'analyse relatif à l'équilibre des plaques élastiques encastrées". Mémoires présentés par divers savants étrangers à l'Académie des Sciences de l'Institut de France. 33.
^ Barzilai, Jonathan; Borwein, Jonathan M. (1988). "Two-Point Step Size Gradient Methods". IMA Journal of Numerical Analysis. 8 (1): 141–148. doi:10.1093/imanum/8.1.141.
^ Fletcher, R. (2005). "On the Barzilai–Borwein Method". In Qi, L.; Teo, K.; Yang, X. (eds.). Optimization and Control with Applications. Applied Optimization. Vol. 96. Boston: Springer. pp. 235–256. ISBN 0-387-24254-6.
^ Wolfe, Philip (April 1969). "Convergence Conditions for Ascent Methods". SIAM Review. 11 (2): 226–235. doi:10.1137/1011036.
^ Bernstein, Jeremy; Vahdat, Arash; Yue, Yisong; Liu, Ming-Yu (2020-06-12). "On the distance between two neural networks and the stability of learning". arXiv:2002.03432 [cs.LG].
^ Haykin, Simon S. (2008). Adaptive Filter Theory. Pearson Education India. pp. 108–142, 217–242.
^ Strutz, T. (2016). Data Fitting and Uncertainty: A Practical Introduction to Weighted Least Squares and Beyond (2nd ed.). Springer Vieweg. ISBN 978-3-658-11455-8.
^ Combettes, P. L.; Pesquet, J.-C. (2011). "Proximal splitting methods in signal processing". In Bauschke, H. H.; Burachik, R. S.; Combettes, P. L.; Elser, V.; Luke, D. R.; Wolkowicz, H. (eds.). Fixed-Point Algorithms for Inverse Problems in Science and Engineering. New York: Springer. pp. 185–212. arXiv:0912.3522. ISBN 978-1-4419-9568-1.
Chong, Edwin K. P.; Żak, Stanislaw H. (2013). "Gradient Methods". An Introduction to Optimization (Fourth ed.). Hoboken: Wiley. pp. 131–160. ISBN 978-1-118-27901-4.
Himmelblau, David M. (1972). "Unconstrained Minimization Procedures Using Derivatives". Applied Nonlinear Programming. New York: McGraw-Hill. pp. 63–132. ISBN 0-07-028921-2.