Linear Regression with Gradient Descent

This Demonstration shows how linear regression can determine the best fit to a collection of points by iteratively applying gradient descent. Linear regression works by minimizing the squared-error function $E(m,b) = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - (m x_i + b)\bigr)^2$, where $n$ is the number of points and $m$ and $b$ are the slope and intercept of the fitted line. Because the minimum of this function cannot always be found in closed form, gradient descent is used: starting from an initial guess, each parameter is repeatedly updated by subtracting the gradient of the error at the current parameter values multiplied by a constant called the learning rate. You can vary the number of gradient-descent iterations, the number of points in the dataset, the seed used to randomly generate the points and the learning rate.
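
As a rough illustration of the update rule described above (not the Demonstration's source code), the following Wolfram Language sketch fits a line by gradient descent. The function name gradientDescentFit, the zero initialization of m and b, and the mean-squared-error cost are assumptions made for this example.

(* Minimal sketch: gradient descent on E(m,b) = (1/n) Sum[(y_i - (m x_i + b))^2] *)
gradientDescentFit[data_, learningRate_, iterations_] :=
 Module[{n = Length[data], xs, ys, m = 0., b = 0., gradM, gradB},
  {xs, ys} = Transpose[data];
  Do[
   gradM = (2./n) Total[(m xs + b - ys) xs]; (* partial derivative dE/dm *)
   gradB = (2./n) Total[m xs + b - ys];      (* partial derivative dE/db *)
   m -= learningRate gradM;                  (* step downhill in m *)
   b -= learningRate gradB,                  (* step downhill in b *)
   {iterations}];
  {m, b}]

(* Example: noisy points near y = 2x + 1; returns approximately {2, 1} *)
data = Table[{x, 2 x + 1 + RandomReal[{-0.5, 0.5}]}, {x, 0., 5., 0.25}];
gradientDescentFit[data, 0.01, 2000]

Each iteration moves the slope and intercept a small step against the gradient of the error, so with a suitable learning rate the fit gradually approaches the least-squares line.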

Contributed by: Jonathan Kogan (April 2017)
Open content licensed under CC BY-NC-SA

