Stochastic Gradient Descent


Stochastic gradient descent is an optimization algorithm for finding the minimum or maximum of an objective function. In this Demonstration, stochastic gradient descent is used to learn the parameters (intercept and slope) of a simple regression problem. Selecting "contour plot" shows the likelihood function of the parameters as a contour plot: the blue point marks the true parameters, while the red point traces the iterates produced by the stochastic gradient descent algorithm. Selecting "regression" shows the data points, the true regression line in green, and the algorithm's current estimate of the regression line in red.

Contributed by: Anthony Fox (March 2011)
Open content licensed under CC BY-NC-SA


Details

Consider a simple regression function y = β₀ + β₁x. The parameters of the function are the slope β₁ and the intercept β₀. The errors associated with each data point are assumed to be independent and normally distributed with variance σ². Given a sample data point (xᵢ, yᵢ), the likelihood of the parameters is L(β₀, β₁ | xᵢ, yᵢ) ∝ exp(−(yᵢ − β₀ − β₁xᵢ)² / (2σ²)). Maximum-likelihood estimation uses the joint likelihood of all the data points to learn the parameters of the regression line. Stochastic gradient descent instead updates the estimated parameters one data point at a time, stepping along the negative gradient of that point's squared-error loss (equivalently, up the gradient of its log-likelihood). The size of each step is controlled by the learning parameter α. The algorithm is as follows.
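To make the per-point quantity concrete, here is a minimal sketch in Python of the single-point loss and its gradient. This is not the Demonstration's own code; the function names and the ordering of the parameter vector as (intercept, slope) are choices made for this example.

import numpy as np

def point_loss(theta, x, y):
    # Squared-error loss for a single observation (x, y); up to constants,
    # this is the point's negative log-likelihood under Gaussian noise.
    b0, b1 = theta                  # theta = (intercept, slope)
    r = y - (b0 + b1 * x)           # residual
    return 0.5 * r**2

def point_grad(theta, x, y):
    # Gradient of point_loss with respect to (intercept, slope).
    b0, b1 = theta
    r = y - (b0 + b1 * x)
    return np.array([-r, -r * x])   # d/d(intercept) = -r, d/d(slope) = -r*x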

1. Choose a learning parameter α and an initial estimate θ = (β₀, β₁) of the parameters.

2. Produce a random permutation of the data points.

3. For each point (xᵢ, yᵢ) in the permutation, update the estimate by θ ← θ − α ∇θ ℓᵢ(θ), where ℓᵢ(θ) = ½(yᵢ − β₀ − β₁xᵢ)² is the point's squared-error loss (its negative log-likelihood up to constants).

4. Repeat steps 2 and 3 until some convergence criterion is met; a minimal implementation sketch of the full loop follows this list.
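The following is a minimal, self-contained Python sketch of steps 1-4, not the Demonstration's Wolfram Language source. The synthetic data, the fixed learning parameter α = 0.05, and the fixed number of passes are assumptions standing in for the Demonstration's controls and convergence check.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known line: y = 1.5 + 2.0*x + Gaussian noise.
true_b0, true_b1 = 1.5, 2.0
x = rng.uniform(-1.0, 1.0, size=200)
y = true_b0 + true_b1 * x + rng.normal(scale=0.3, size=x.size)

# Step 1: learning parameter and initial parameter estimate (intercept, slope).
alpha = 0.05
theta = np.zeros(2)

for epoch in range(50):                 # Step 4: repeat for a fixed number of passes
    # Step 2: random permutation of the data points.
    for i in rng.permutation(x.size):
        # Step 3: step against the gradient of this point's squared error.
        b0, b1 = theta
        r = y[i] - (b0 + b1 * x[i])     # residual of the current estimate
        grad = np.array([-r, -r * x[i]])
        theta = theta - alpha * grad

print("estimated intercept, slope:", theta)   # should approach (1.5, 2.0)

With a constant learning parameter the iterates keep fluctuating in a neighborhood of the least-squares solution rather than settling on it exactly, which is the kind of path the red point traces on the likelihood contours.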



