The basic idea behind stochastic approximation can be traced back to the
Robbins–Monro algorithm of the 1950s. Today, stochastic gradient descent has become an important optimization method in
machine learning.[2]
Stochastic gradient descent addresses the problem of minimizing an objective function that has the form of a sum:
$$Q(w) = \frac{1}{n}\sum_{i=1}^{n} Q_i(w),$$
where the parameter $w$ that minimizes $Q(w)$ is to be estimated. Each summand function $Q_i$ is typically associated with the $i$-th observation in the data set (used for training).
In classical statistics, sum-minimization problems arise in
least squares and in
maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums is known as the class of
M-estimators. However, in statistics, it has long been recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation.[3] Therefore, contemporary statistical theorists often consider
stationary points of the
likelihood function (or zeros of its derivative, the
score function, and other
estimating equations).
When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations:
$$w := w - \eta\,\nabla Q(w) = w - \frac{\eta}{n}\sum_{i=1}^{n} \nabla Q_i(w).$$
The step size is denoted by $\eta$ (sometimes called the learning rate in machine learning) and here "$:=$" denotes the update of a variable in the algorithm.
In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics,
one-parameter exponential families allow economical function-evaluations and gradient-evaluations.
However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sum of gradients becomes very expensive, because it requires evaluating the gradients of all the summand functions. To economize on the computational cost at every iteration, stochastic gradient descent
samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems.[4]
Iterative method
In stochastic (or "on-line") gradient descent, the true gradient of $Q(w)$ is approximated by a gradient at a single sample:
$$w := w - \eta\,\nabla Q_i(w).$$
As the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an
adaptive learning rate so that the algorithm converges.[5]
In pseudocode, stochastic gradient descent can be presented as follows:
Choose an initial vector of parameters $w$ and learning rate $\eta$.
Repeat until an approximate minimum is obtained:
  Randomly shuffle samples in the training set.
  For $i = 1, 2, \ldots, n$, do:
    $w := w - \eta\,\nabla Q_i(w).$
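To make the pseudocode concrete, the following is a minimal NumPy sketch of the loop above; the callback grad_i (returning the gradient of a single summand $Q_i$) and the (x, y) data layout are illustrative assumptions, not part of any standard library.

```python
import numpy as np

def sgd(grad_i, w0, data, eta=0.01, n_epochs=10, seed=0):
    """Minimal stochastic gradient descent sketch.

    grad_i(w, x, y) returns the gradient of the summand Q_i at w
    for a single observation (x, y).
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    n = len(data)
    for _ in range(n_epochs):
        for i in rng.permutation(n):      # shuffle each pass to prevent cycles
            x, y = data[i]
            w -= eta * grad_i(w, x, y)    # w := w - eta * grad Q_i(w)
    return w
```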
A compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a "mini-batch") at each step. This can perform significantly better than the "true" stochastic gradient descent described above, because the code can make use of
vectorization libraries rather than computing each step separately, as was first shown in [6], where it was called "the bunch-mode back-propagation algorithm". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples.
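A mini-batch variant under the same illustrative assumptions might look as follows, with X and y assumed to be NumPy arrays and the batch gradient computed by a user-supplied vectorized function grad_batch:

```python
import numpy as np

def minibatch_sgd(grad_batch, w0, X, y, eta=0.01, batch_size=32,
                  n_epochs=10, seed=0):
    """Mini-batch SGD sketch: average the gradient over a small batch.

    grad_batch(w, Xb, yb) returns the gradient of the average loss
    over the batch (Xb, yb), which vectorized code computes cheaply.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    n = len(X)
    for _ in range(n_epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            w -= eta * grad_batch(w, X[b], y[b])
    return w
```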
The convergence of stochastic gradient descent has been analyzed using the theories of
convex minimization and of
stochastic approximation. Briefly, when the
learning rates decrease with an appropriate rate,
and subject to relatively mild assumptions, stochastic gradient descent converges
almost surely to a global minimum
when the objective function is
convex or
pseudoconvex,
and otherwise converges almost surely to a local minimum.[2][7]
This is in fact a consequence of the
Robbins–Siegmund theorem.[8]
Example
Suppose we want to fit a straight line $\hat{y} = w_1 + w_2 x$ to a training set with observations $(x_1, x_2, \ldots, x_n)$ and corresponding estimated responses $(\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n)$ using least squares. The objective function to be minimized is
$$Q(w) = \sum_{i=1}^{n} Q_i(w) = \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 = \sum_{i=1}^{n} (w_1 + w_2 x_i - y_i)^2.$$
The last line in the above pseudocode for this specific problem will become:
$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} := \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} - \eta \begin{bmatrix} 2(w_1 + w_2 x_i - y_i) \\ 2 x_i (w_1 + w_2 x_i - y_i) \end{bmatrix}.$$
Note that in each iteration or update step, the gradient is only evaluated at a single $x_i$. This is the key difference between stochastic gradient descent and batched gradient descent.
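For this line-fitting example, the update above can be run directly. The sketch below uses synthetic data with true intercept 3 and slope 2; the constants (step size, number of passes) are illustrative choices, not recommendations.

```python
import numpy as np

# Hypothetical data: fit y_hat = w1 + w2*x; true intercept 3, slope 2.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3.0 + 2.0 * x + rng.normal(scale=0.5, size=200)

w1, w2, eta = 0.0, 0.0, 0.001
for _ in range(100):                     # passes over the training set
    for i in rng.permutation(len(x)):    # one sample per update
        r = w1 + w2 * x[i] - y[i]        # residual of sample i
        # gradient components: dQ_i/dw1 = 2r, dQ_i/dw2 = 2*x_i*r
        w1, w2 = w1 - eta * 2 * r, w2 - eta * 2 * r * x[i]

print(w1, w2)  # should be close to the true values (3, 2)
```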
Backpropagation was first described in 1986, with stochastic gradient descent being used to efficiently optimize parameters across neural networks with multiple
hidden layers. Soon after, another improvement was developed: mini-batch gradient descent, where small batches of data are substituted for single samples. In 1997, the practical performance benefits from vectorization achievable with such small batches were first explored,[12] paving the way for efficient optimization in machine learning. As of 2023, this mini-batch approach remains the norm for training neural networks, balancing the benefits of stochastic gradient descent with those of batch gradient descent.[13]
By the 1980s,
momentum had already been introduced, and was added to SGD optimization techniques in 1986.[14] However, these optimization techniques assumed constant
hyperparameters, i.e. a fixed learning rate and momentum parameter. In the 2010s, adaptive approaches to applying SGD with a per-parameter learning rate were introduced with AdaGrad (for "Adaptive Gradient") in 2011[15] and RMSprop (for "Root Mean Square Propagation") in 2012.[16] In 2014, Adam (for "Adaptive Moment Estimation") was published, applying the adaptive approaches of RMSprop to momentum; many improvements and branches of Adam were then developed, such as Adadelta, AdamW, and Adamax.[17][18]
Within machine learning, approaches to optimization in 2023 are dominated by Adam-derived optimizers. TensorFlow and PyTorch, by far the most popular machine learning libraries,[19] as of 2023 largely only include Adam-derived optimizers, as well as predecessors to Adam such as RMSprop and classic SGD. PyTorch also partially supports
Limited-memory BFGS, a line-search method, but only for single-device setups without parameter groups.[18][20]
Stochastic gradient descent competes with the
L-BFGS algorithm,[citation needed] which is also widely used. Stochastic gradient descent has been used since at least 1960 for training
linear regression models, originally under the name
ADALINE.[24]
Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a
learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge.[25] A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function ηt of the iteration number t, giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on
k-means clustering.[26] Practical guidance on choosing the step size in several variants of SGD is given by Spall.[27]
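A decreasing schedule of this kind is straightforward to code; the following sketch shows one illustrative choice of $\eta_t$ (the functional form and constants are assumptions, not a recommendation):

```python
def eta_schedule(t, eta0=0.5, tau=100.0):
    """Illustrative decreasing schedule: eta_t = eta0 / (1 + t/tau).

    Early iterations take large steps; later iterations only fine-tune.
    """
    return eta0 / (1.0 + t / tau)

# Inside an SGD loop one would use, e.g.:
#   w -= eta_schedule(t) * grad_i(w, x[i], y[i])
```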
Implicit updates (ISGD)
As mentioned earlier, classical stochastic gradient descent is generally sensitive to the learning rate $\eta$. Fast convergence requires large learning rates, but this may induce numerical instability. The problem can be largely solved[28] by considering implicit updates whereby the stochastic gradient is evaluated at the next iterate rather than the current one:
$$w^{new} := w^{old} - \eta\,\nabla Q_i(w^{new}).$$
This equation is implicit since $w^{new}$ appears on both sides of the equation. It is a stochastic form of the
proximal gradient method since the update
can also be written as:
$$w^{new} := \arg\min_w \left\{ Q_i(w) + \frac{1}{2\eta} \left\| w - w^{old} \right\|^2 \right\}.$$
As an example, consider least squares with features $x_1, \ldots, x_n \in \mathbb{R}^p$ and observations $y_1, \ldots, y_n \in \mathbb{R}$. We wish to solve:
$$\min_w \sum_{j=1}^{n} (y_j - x_j' w)^2,$$
where $x_j' w = x_{j1} w_1 + \cdots + x_{jp} w_p$ indicates the inner product.
Note that $x$ could have "1" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows:
$$w^{new} = w^{old} + \eta \left( y_i - x_i' w^{old} \right) x_i,$$
where $i$ is uniformly sampled between 1 and $n$. Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, when $\eta$ is misspecified so that $I - \eta x_i x_i'$ has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, implicit stochastic gradient descent (shortened as ISGD) can be solved in closed-form as:
$$w^{new} = w^{old} + \frac{\eta}{1 + \eta \left\| x_i \right\|^2} \left( y_i - x_i' w^{old} \right) x_i.$$
This procedure will remain numerically stable virtually for all $\eta$ as the
learning rate is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between
least mean squares (LMS) and
normalized least mean squares filter (NLMS).
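The closed-form ISGD update for least squares translates directly into code; the sketch below is illustrative, with function and parameter names chosen here rather than taken from any library:

```python
import numpy as np

def isgd_least_squares(X, y, eta=0.5, n_epochs=10, seed=0):
    """Implicit SGD for least squares, using the closed-form update

        w_new = w + eta / (1 + eta * ||x_i||^2) * (y_i - x_i.w) * x_i,

    which stays numerically stable even for large eta because the
    step is effectively normalized by ||x_i||^2.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            xi = X[i]
            scale = eta / (1.0 + eta * (xi @ xi))
            w += scale * (y[i] - xi @ w) * xi
    return w
```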
Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that $Q_i(w)$ depends on $w$ only through a linear combination with features $x_i$, so that we can write $\nabla_w Q_i(w) = -q(x_i' w)\, x_i$, where $q() \in \mathbb{R}$ may depend on $x_i, y_i$ as well, but not on $w$ except through $x_i' w$. Least squares obeys this rule, and so does
logistic regression, and most
generalized linear models. For instance, in least squares, $q(x_i' w) = y_i - x_i' w$, and in logistic regression $q(x_i' w) = y_i - S(x_i' w)$, where $S(u) = e^u / (1 + e^u)$ is the
logistic function. In
Poisson regression, $q(x_i' w) = y_i - e^{x_i' w}$, and so on.
In such settings, ISGD is simply implemented as follows. Let $f(\xi) = \eta\, q(x_i' w^{old} + \xi \|x_i\|^2)$, where $\xi$ is scalar.
Then, ISGD is equivalent to:
$$w^{new} = w^{old} + \xi^{\ast} x_i, \quad \text{where } \xi^{\ast} = f(\xi^{\ast}).$$
The scaling factor $\xi^{\ast} \in \mathbb{R}$ can be found through the
bisection method since,
in most regular models such as the aforementioned generalized linear models, the function $\xi \mapsto f(\xi)$ is decreasing,
and thus the search bounds for $\xi^{\ast}$ are
$[\min(0, f(0)),\, \max(0, f(0))]$.
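For logistic regression, where no closed form exists, one ISGD step can be computed by bisection as described; the following sketch is a minimal illustration (the helper names are assumptions):

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def isgd_logistic_step(w, xi, yi, eta):
    """One implicit-SGD step for logistic regression via bisection.

    Solves s = f(s) with f(s) = eta * q(x_i.w + s * ||x_i||^2),
    where q(u) = y_i - logistic(u); then w_new = w + s* x_i.
    """
    norm2 = xi @ xi
    f = lambda s: eta * (yi - logistic(w @ xi + s * norm2))
    # f is decreasing, so the fixed point lies in [min(0, f(0)), max(0, f(0))]
    lo, hi = min(0.0, f(0.0)), max(0.0, f(0.0))
    for _ in range(50):                   # bisection on g(s) = f(s) - s
        mid = 0.5 * (lo + hi)
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return w + 0.5 * (lo + hi) * xi
```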
Momentum
Further proposals include the momentum method or the heavy ball method, which in the machine learning context appeared in
Rumelhart,
Hinton and
Williams' paper on backpropagation learning[29] and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations.[30] Stochastic gradient descent with momentum remembers the update Δw at each iteration, and determines the next update as a
linear combination of the gradient and the previous update:[31][32]
$$\Delta w := \alpha\, \Delta w - \eta\, \nabla Q_i(w)$$
$$w := w + \Delta w$$
that leads to:
$$w := w - \eta\, \nabla Q_i(w) + \alpha\, \Delta w,$$
where the
parameter $w$ which minimizes $Q(w)$ is to be
estimated, $\eta$ is a step size (sometimes called the learning rate in machine learning) and $\alpha$ is an exponential
decay factor between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change.
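A minimal sketch of the momentum update, under the same illustrative grad_i interface as the earlier sketches:

```python
import numpy as np

def sgd_momentum(grad_i, w0, data, eta=0.01, alpha=0.9, n_epochs=10, seed=0):
    """SGD with (heavy ball) momentum sketch:

        dw := alpha * dw - eta * grad Q_i(w);  w := w + dw
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    dw = np.zeros_like(w)
    for _ in range(n_epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i]
            dw = alpha * dw - eta * grad_i(w, x, y)
            w = w + dw
    return w
```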
The name momentum stems from an analogy to
momentum in physics: the weight vector , thought of as a particle traveling through parameter space,[29] incurs acceleration from the gradient of the loss ("
force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of
artificial neural networks for several decades.[33]
The momentum method is closely related to
underdamped Langevin dynamics, and may be combined with
simulated annealing.[34]
In the mid-1980s the method was modified by
Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called Nesterov Accelerated Gradient was sometimes used in ML in the 2010s.[35]
Averaging
Averaged stochastic gradient descent, invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of[36]
$$\bar{w} = \frac{1}{t} \sum_{i=0}^{t-1} w_i.$$
When optimization is done, this averaged parameter vector takes the place of $w$.
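A sketch of this averaging, maintaining the running mean of the iterates alongside ordinary SGD (interface as in the earlier sketches):

```python
import numpy as np

def averaged_sgd(grad_i, w0, data, eta=0.01, n_epochs=10, seed=0):
    """Polyak-Ruppert averaging sketch: run ordinary SGD, but also
    maintain the running mean of the iterates and return it."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    w_bar = w.copy()
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i]
            w -= eta * grad_i(w, x, y)
            t += 1
            w_bar += (w - w_bar) / t     # incremental running mean
    return w_bar
```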
AdaGrad
AdaGrad (for adaptive
gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter
learning rate, first published in 2011.[37] Informally, this increases the learning rate for sparser parameters[clarification needed] and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition.[37]
It still has a base learning rate $\eta$, but this is multiplied with the elements of a vector $\{G_{j,j}\}$ which is the diagonal of the
outer product matrix
$$G = \sum_{\tau=1}^{t} g_\tau g_\tau^{\mathsf{T}},$$
where $g_\tau = \nabla Q_i(w)$, the gradient, at iteration $\tau$. The diagonal is given by
$$G_{j,j} = \sum_{\tau=1}^{t} g_{\tau,j}^2.$$
This vector essentially stores a historical sum of gradient squares by dimension and is updated after every iteration. The formula for an update is now[a]
$$w := w - \eta\, \operatorname{diag}(G)^{-1/2} \odot g$$
or, written as per-parameter updates,
$$w_j := w_j - \frac{\eta}{\sqrt{G_{j,j}}} g_j.$$
Each $\{G_{(i,i)}\}$ gives rise to a scaling factor for the learning rate that applies to a single parameter $w_i$. Since the denominator in this factor, $\sqrt{G_{i,i}} = \sqrt{\sum_{\tau=1}^t g_{\tau,i}^2}$, is the
ℓ2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates.[33]
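In code, AdaGrad only requires keeping the coordinate-wise sum of squared gradients. The sketch below adds a small constant eps to the denominator, a common implementation convention assumed here rather than part of the formula above:

```python
import numpy as np

def adagrad(grad_i, w0, data, eta=0.1, eps=1e-8, n_epochs=10, seed=0):
    """AdaGrad sketch: per-parameter step sizes eta / sqrt(G_jj)."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    G = np.zeros_like(w)
    for _ in range(n_epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i]
            g = grad_i(w, x, y)
            G += g * g                        # diagonal of sum of outer products
            w -= eta * g / (np.sqrt(G) + eps) # eps avoids division by zero
    return w
```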
While designed for
convex problems, AdaGrad has been successfully applied to non-convex optimization.[38]
RMSProp
RMSProp (for Root Mean Square Propagation) is a method invented in 2012 by James Martens and
Ilya Sutskever, at the time both PhD students in Geoffrey Hinton's group, in which the
learning rate is, like in Adagrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.[39] Unusually, it was not published in an article but merely described in a
Coursera lecture.[citation needed]
So, first the running average is calculated in terms of the mean square,
$$v(w, t) := \gamma\, v(w, t-1) + (1 - \gamma) \left( \nabla Q_i(w) \right)^2,$$
where $\gamma$ is the forgetting factor. The concept of storing the historical gradient as a sum of squares is borrowed from Adagrad, but "forgetting" is introduced to solve Adagrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data.[40]
And the parameters are updated as,
$$w := w - \frac{\eta}{\sqrt{v(w,t)}}\, \nabla Q_i(w).$$
RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of
Rprop and is capable of working with mini-batches as well, as opposed to only full batches.[39]
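A minimal RMSProp sketch under the same illustrative interface; as with AdaGrad above, a small eps in the denominator is an implementation convention assumed here:

```python
import numpy as np

def rmsprop(grad_i, w0, data, eta=0.001, gamma=0.9, eps=1e-8,
            n_epochs=10, seed=0):
    """RMSProp sketch: divide the step by a running RMS of gradients."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)
    for _ in range(n_epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i]
            g = grad_i(w, x, y)
            v = gamma * v + (1.0 - gamma) * g * g  # forgetting factor gamma
            w -= eta * g / (np.sqrt(v) + eps)
    return w
```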
Adam
Adam[41] (short for Adaptive Moment Estimation) is a 2014 update to the RMSProp optimizer combining it with the main feature of the Momentum method.[42] In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters $w^{(t)}$ and a loss function $L^{(t)}$, where $t$ indexes the current training iteration (indexed at 0), Adam's parameter update is given by:
$$m_w^{(t+1)} := \beta_1 m_w^{(t)} + (1 - \beta_1) \nabla_w L^{(t)}$$
$$v_w^{(t+1)} := \beta_2 v_w^{(t)} + (1 - \beta_2) \left( \nabla_w L^{(t)} \right)^2$$
$$\hat{m}_w = \frac{m_w^{(t+1)}}{1 - \beta_1^{t+1}}, \qquad \hat{v}_w = \frac{v_w^{(t+1)}}{1 - \beta_2^{t+1}}$$
$$w^{(t+1)} := w^{(t)} - \eta \frac{\hat{m}_w}{\sqrt{\hat{v}_w} + \epsilon}$$
where $\epsilon$ is a small scalar (e.g. $10^{-8}$) used to prevent division by 0, and $\beta_1$ (e.g. 0.9) and $\beta_2$ (e.g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting is done element-wise.
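The update above maps line-for-line onto code; the following sketch uses the default constants suggested in the text and the same illustrative grad_i interface as the earlier sketches:

```python
import numpy as np

def adam(grad_i, w0, data, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8,
         n_epochs=10, seed=0):
    """Adam sketch: bias-corrected first and second moment estimates."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i]
            g = grad_i(w, x, y)
            t += 1
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            m_hat = m / (1 - beta1 ** t)     # bias correction
            v_hat = v / (1 - beta2 ** t)
            w -= eta * m_hat / (np.sqrt(v_hat) + eps)
    return w
```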
The initial proof establishing the convergence of Adam was incomplete, and subsequent analysis has revealed that Adam does not converge for all convex objectives.[43][44] Despite this, Adam continues to be used due to its strong empirical performance.[45] Moreover, the profound influence of this algorithm inspired multiple newer, less well-known momentum-based optimization schemes using Nesterov-enhanced gradients (e.g. NAdam[46] and FASFA[47]) and varying interpretations of second-order information (e.g. Powerpropagation[48] and AdaSqrt[49]). However, the most commonly used variants are AdaMax,[41] which generalizes Adam using the infinity norm, and AMSGrad,[50] which addresses convergence problems from Adam by using the maximum of past squared gradients instead of the exponential average.[51] AdamW[52] is a later update which mitigates a suboptimal choice of the
weight decay algorithm in Adam.
Sign-based stochastic gradient descent
Even though sign-based optimization goes back to the aforementioned Rprop, in 2018 researchers tried to simplify Adam by discarding the magnitude of the stochastic gradient and considering only its sign.[53][54]
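A sketch of the sign-based update, which replaces the gradient by its element-wise sign (same illustrative interface as above):

```python
import numpy as np

def sign_sgd(grad_i, w0, data, eta=0.01, n_epochs=10, seed=0):
    """Sign-based SGD sketch: step by the sign of each gradient
    coordinate, discarding its magnitude."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i]
            w -= eta * np.sign(grad_i(w, x, y))
    return w
```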
Backtracking line search
Backtracking line search is another variant of gradient descent. It is based on a condition known as the Armijo–Goldstein condition. Both methods allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the "descent property" – which backtracking line search enjoys – which is that $f(x_{n+1}) \le f(x_n)$ for all $n$. If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant $L$, and the learning rate is chosen of the order $1/L$, then the standard version of SGD is a special case of backtracking line search.
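One step of gradient descent with backtracking line search might be sketched as follows; the shrinking factor and Armijo constant are illustrative defaults, not prescribed values:

```python
import numpy as np

def backtracking_step(f, grad_f, w, eta0=1.0, beta=0.5, c=1e-4):
    """Backtracking line search sketch (Armijo-Goldstein condition).

    Shrink the trial step until f decreases by at least c * eta * ||g||^2,
    which guarantees the descent property f(w_new) <= f(w).
    """
    g = grad_f(w)
    eta = eta0
    while f(w - eta * g) > f(w) - c * eta * (g @ g):
        eta *= beta                      # shrink the trial step
    return w - eta * g
```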
Second-order methods
A stochastic analogue of the standard (deterministic)
Newton–Raphson algorithm (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation[citation needed]. A method that uses direct measurements of the
Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer.[55] However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others.[56][57][58] (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.[59]) Another approach to approximating the Hessian matrix is to replace it with the Fisher information matrix, which transforms the usual gradient to the natural gradient.[60] These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function.
Approximations in continuous time
For small learning rate $\eta$, stochastic gradient descent $(w_n)_{n \in \mathbb{N}_0}$ can be viewed as a discretization of the
gradient flow ODE
$$\frac{d}{dt} W_t = -\nabla Q(W_t)$$
subject to additional stochastic noise. This approximation is only valid on a finite time-horizon in the following sense: assume that all the coefficients $Q_i$ are sufficiently smooth. Let $T > 0$ and $g : \mathbb{R}^d \to \mathbb{R}$ be a sufficiently smooth test function. Then, there exists a constant $C > 0$ such that for all $\eta > 0$
$$\max_{k = 0, \ldots, \lfloor T/\eta \rfloor} \left| \mathbb{E}\left[ g(w_k) \right] - g(W_{k\eta}) \right| \le C \eta,$$
where $\mathbb{E}$ denotes taking the expectation with respect to the random choice of indices in the stochastic gradient descent scheme.
Since this approximation does not capture the random fluctuations around the mean behavior of stochastic gradient descent, solutions to
stochastic differential equations (SDEs) have been proposed as limiting objects.[61] More precisely, the solution to the SDE
$$dW_t = -\nabla\!\left( Q(W_t) + \tfrac{1}{4} \eta \left| \nabla Q(W_t) \right|^2 \right) dt + \sqrt{\eta}\, \Sigma(W_t)^{1/2}\, dB_t,$$
for a diffusion coefficient $\Sigma(w)$ given by the covariance of the sampled gradients,
where $dB_t$ denotes the
Ito-integral with respect to a
Brownian motion, is a more precise approximation in the sense that there exists a constant $C > 0$ such that
$$\max_{k = 0, \ldots, \lfloor T/\eta \rfloor} \left| \mathbb{E}\left[ g(w_k) \right] - \mathbb{E}\left[ g(W_{k\eta}) \right] \right| \le C \eta^2.$$
However, this SDE only approximates the one-point motion of stochastic gradient descent. For an approximation of the
stochastic flow one has to consider SDEs with infinite-dimensional noise.[62]
^ Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming, Series A. 90 (1). Berlin, Heidelberg: Springer: 1–25. doi:10.1007/PL00011414. ISSN 0025-5610. MR 1819784. S2CID 10043417.
^ Robbins, Herbert; Siegmund, David O. (1971). "A convergence theorem for non negative almost supermartingales and some applications". In Rustagi, Jagdish S. (ed.). Optimizing Methods in Statistics. Academic Press. ISBN 0-12-604550-X.
^ Rosenblatt, F. (1958). "The perceptron: A probabilistic model for information storage and organization in the brain". Psychological Review. 65 (6): 386–408. doi:10.1037/h0042519. S2CID 12781225.
^ Cited by Darken, Christian; Moody, John (1990). Fast adaptive k-means clustering: some empirical results. Int'l Joint Conf. on Neural Networks (IJCNN). IEEE. doi:10.1109/IJCNN.1990.137720.
^ Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Hoboken, NJ: Wiley. Sections 4.4, 6.6, and 7.5. ISBN 0-471-33052-3.
^ Toulis, Panos; Airoldi, Edoardo (2017). "Asymptotic and finite-sample properties of estimators based on stochastic gradients". Annals of Statistics. 45 (4): 1694–1727. arXiv:1408.2923. doi:10.1214/16-AOS1506. S2CID 10279395.
^ Sutskever, Ilya; Martens, James; Dahl, George; Hinton, Geoffrey E. (June 2013). Sanjoy Dasgupta and David Mcallester (ed.). On the importance of initialization and momentum in deep learning (PDF). Proceedings of the 30th International Conference on Machine Learning (ICML-13). Vol. 28. Atlanta, GA. pp. 1139–1147. Retrieved 14 January 2016.
^ Zhang, Yushun; Chen, Congliang; Shi, Naichen; Sun, Ruoyu; Luo, Zhi-Quan (2022). "Adam Can Converge Without Any Modification On Update Rules". Advances in Neural Information Processing Systems 35 (NeurIPS 2022). arXiv:2208.09632.
^ Dozat, T. (2016). "Incorporating Nesterov Momentum into Adam". S2CID 70293087.
^ Byrd, R. H.; Hansen, S. L.; Nocedal, J.; Singer, Y. (2016). "A Stochastic Quasi-Newton Method for Large-Scale Optimization". SIAM Journal on Optimization. 26 (2): 1008–1031. arXiv:1401.7020. doi:10.1137/140954362. S2CID 12396034.
^ Spall, J. C. (2000). "Adaptive Stochastic Approximation by the Simultaneous Perturbation Method". IEEE Transactions on Automatic Control. 45 (10): 1839–1853. doi:10.1109/TAC.2000.880982.
^ Spall, J. C. (2009). "Feedback and Weighting Mechanisms for Improving Jacobian Estimates in the Adaptive Simultaneous Perturbation Algorithm". IEEE Transactions on Automatic Control. 54 (6): 1216–1229. doi:10.1109/TAC.2009.2019793. S2CID 3564529.
^ Bhatnagar, S.; Prasad, H. L.; Prashanth, L. A. (2013). Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods. London: Springer. ISBN 978-1-4471-4284-3.