Gradient Descent on Linear Regression

Gradient descent is one of the most important optimisation techniques. It is used in a wide variety of machine learning methods because of its flexibility: it can be applied to any differentiable objective function. With each iteration a step is taken in the direction of the negative gradient until the algorithm converges to a local minimum. As it approaches the local minimum the steps become smaller, until either a specified tolerance is met or the maximum number of iterations is reached. To understand how this works, we apply gradient descent to a familiar method: simple linear regression.

The simple linear regression model is of the form:

    \[ \textbf{y} = \textbf{X}\boldsymbol{\beta}+\boldsymbol{\epsilon} \]

where

    \[ \boldsymbol{\beta}^{T} = (\beta_0, \beta_1) \]

The objective is to find the parameters \( \boldsymbol{\beta} \) that minimise the mean squared error.

    \[ MSE(\hat{y}) = \frac{1}{n}\sum_{i=1}^{n} (y_i-\hat{y}_i)^2 \]

This is a good problem to start with since we know the analytical solution is given by

    \[\hat{\boldsymbol{\beta}} = (\textbf{X}^{T}\textbf{X})^{-1}\textbf{X}^{T}\textbf{y}\]

and can check our results.

Example

Set up:
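The original set-up chunk is missing from this version of the post. As a sketch in Python, here is an equivalent simulation; the true parameters (intercept 2, slope 3), the noise level, and the sample size are assumptions, not the original post's values:

```python
import numpy as np

# Simulated data for the example (assumed parameters: intercept 2, slope 3,
# Gaussian noise with standard deviation 2)
rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 2, n)

# Design matrix with a column of ones for the intercept term
X = np.column_stack([np.ones(n), x])
```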


The analytical solution can be computed manually as follows.
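The chunk computing it is missing; a Python sketch on the simulated data above (the data-generating parameters are assumptions) would be:

```python
import numpy as np

# Recreate the simulated data (assumed set-up: intercept 2, slope 3, noise sd 2)
rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 2, n)
X = np.column_stack([np.ones(n), x])

# Normal-equation solution: beta = (X^T X)^{-1} X^T y.
# Solving the linear system is preferred over forming the inverse explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # should be close to the true parameters (2, 3)
```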

And just to convince ourselves this is correct
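The verification chunk is also missing; one way to sketch the check is to compare the manual solution against NumPy's built-in least-squares solver (the simulated data is the assumed set-up from above):

```python
import numpy as np

# Same simulated data as above (assumed set-up)
rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 2, n)
X = np.column_stack([np.ones(n), x])

# Manual normal-equation solution
beta_manual = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_manual, beta_lstsq))
```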


Gradient descent

The objective is to achieve the same result using gradient descent. Gradient descent works by updating the parameters with each iteration in the direction of negative gradient i.e.

    \[ \boldsymbol{\hat{\beta}}_{t+1} = \boldsymbol{\hat{\beta}}_{t} -\gamma \nabla F(\boldsymbol{\hat{\beta}_t}) \]

where \( \gamma \) is the learning rate and

    \[ \begin{aligned} \nabla F(\boldsymbol{\hat{\beta}_t}) &= \biggl( \frac{\partial F}{\partial \beta_0}, \frac{\partial F}{\partial \beta_1} \biggr) \\ &= -\frac{2}{n} \biggl(\sum_{i=1}^{n} \boldsymbol{x}_{i,0}(y_{i}-\boldsymbol{x}_{i}^{T}\boldsymbol{\hat{\beta}}_{t}), \sum_{i=1}^{n} \boldsymbol{x}_{i,1}(y_{i}-\boldsymbol{x}_{i}^{T}\boldsymbol{\hat{\beta}}_{t}) \biggr) \\ &= -\frac{2}{n} \textbf{X}^T (\textbf{y}-\textbf{X}\boldsymbol{\hat{\beta}}_{t}) \end{aligned} \]

The learning rate ensures we don't jump too far with each iteration, instead moving by some proportion of the gradient; otherwise we could overshoot the local minimum, taking much longer to converge or never finding the optimal solution at all. Applying this to the problem above, we'll initialise our values for \( \boldsymbol{\beta} \) to something sensible, e.g. \( \boldsymbol{\beta}^{T} = (1,1) \). I have chosen \( \gamma=0.01 \) with 1000 iterations, which is a reasonable learning rate to start with for this problem. It's worth trying different values of \( \gamma \) to see how they change convergence. The algorithm is set up as follows.
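The original implementation chunk is missing; a minimal Python sketch of the update rule above, run on the assumed simulated data, might look like this:

```python
import numpy as np

# Simulated data (assumed set-up, as above)
rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 2, n)
X = np.column_stack([np.ones(n), x])

def gradient_descent(X, y, beta_init, gamma=0.01, iters=1000):
    """Minimise the MSE by repeatedly stepping against its gradient,
    grad = -2/n * X^T (y - X beta)."""
    beta = np.asarray(beta_init, dtype=float)
    m = len(y)
    for _ in range(iters):
        grad = -2 / m * X.T @ (y - X @ beta)
        beta = beta - gamma * grad
    return beta

beta_gd = gradient_descent(X, y, beta_init=[1.0, 1.0])
beta_exact = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_gd)  # should closely match the analytical solution
```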


As expected we obtain the same result. The lines show the gradient and how the parameters converge to the optimal values.

Let’s try a different set of starting values to see how well it converges.
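The chunk for this run is missing too; as a sketch, rerunning the same routine with a deliberately distant (assumed) starting point:

```python
import numpy as np

# Simulated data (assumed set-up, as above)
rng = np.random.default_rng(42)
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 2, n)
X = np.column_stack([np.ones(n), x])

def gradient_descent(X, y, beta_init, gamma=0.01, iters=1000):
    beta = np.asarray(beta_init, dtype=float)
    m = len(y)
    for _ in range(iters):
        beta = beta + gamma * 2 / m * X.T @ (y - X @ beta)
    return beta

beta_exact = np.linalg.solve(X.T @ X, X.T @ y)

# Start far from the optimum; on a quadratic objective the error still
# contracts at every step, so convergence remains quick.
beta_far = gradient_descent(X, y, beta_init=[-10.0, 10.0])
print(beta_far)  # should land close to beta_exact
```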


In this example we can see how robust this method is on a simple problem like linear regression. Even when the initial values are very far away from the true values it converges very quickly.
