Gradient Descent is one of the basic iterative optimization algorithms used in Machine Learning, and it is deeply rooted in linear algebra and calculus.
It is the first algorithm explained in Andrew Ng's Machine Learning course.
If you, like me, got lost in the math when Andrew explained it, you will find this post useful.


Introduction

I am trying here to explain what Andrew explained, but in a simpler way.
Besides that, we are going to cover Gradient Descent and build the algorithm from scratch in Python.

Prerequisites:

You need the following skills:

  • beginner-level Python, especially NumPy and Pandas.
  • some basic high school math.

Dataset and Code:

The dataset represents house prices in Boston.
You can get a description of the fields from here.
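As a quick start, here is a minimal sketch of loading the data with Pandas; the file name housing.csv and the exact column names are assumptions, so adjust them to match the repository:

import pandas as pd

# load the Boston housing data (file name is an assumption)
data = pd.read_csv('housing.csv')

# the columns we will refer to in the examples below
print(data[['RM', 'CRIM', 'DIS']].head())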

You can find the code and the data in the repository:

The math behind machine learning:

Let’s start from the data we have. From the dataset, the price of a house could be affected by many inputs, e.g. the number of rooms (RM), the crime rate (CRIM), and the distance to employment centers (DIS).

The machine learning model will be a mathematical equation like this:

price = \theta_0 + \theta_1 \cdot RM + \theta_2 \cdot CRIM + \theta_3 \cdot DIS

We can generalize the above statement to build a function that takes the input values and gives the output:

h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n
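To make this concrete, here is a tiny sketch of what such a model computes; the \theta values and the house below are made up purely for illustration:

# hypothetical parameters: theta_0, theta_1 (RM), theta_2 (CRIM), theta_3 (DIS)
theta = [10.0, 5.0, -0.5, -1.0]

# one hypothetical house: 6 rooms, crime rate 0.2, distance 4.0
x = [1.0, 6.0, 0.2, 4.0]  # the leading 1.0 multiplies theta_0

# h(x) = theta_0 * 1 + theta_1 * x1 + theta_2 * x2 + theta_3 * x3
price = sum(t * xi for t, xi in zip(theta, x))
print(price)  # 10 + 30 - 0.1 - 4 = 35.9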

Linear Algebra to help:

The above statement is a linear equation, and to solve it we can use Linear Algebra.

From our dataset we can get many equations, one per training example:

y^{(1)} = \theta_0 + \theta_1 x_1^{(1)} + \dots + \theta_n x_n^{(1)}
y^{(2)} = \theta_0 + \theta_1 x_1^{(2)} + \dots + \theta_n x_n^{(2)}
\vdots
y^{(m)} = \theta_0 + \theta_1 x_1^{(m)} + \dots + \theta_n x_n^{(m)}
There are two ways to solve the above equations and find the values of \theta:

  1. Normal Equations using Matrices.
  2. Gradient Descent algorithm.

We are going to cover both these methods.


1. Using Matrices (Normal Equation):

To solve these equations, we can use matrices in linear algebra.
We call this method the Normal Equation.
We know we can represent the above system as:

X \theta = y

where X is the matrix of input values (with a leading column of ones for \theta_0), \theta is the vector of parameters, and y is the vector of outputs.

From linear algebra, when we have matrices in this form, the solution for theta (\theta) is:

\theta = (X^T X)^{-1} X^T y
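As a sanity check, here is a minimal NumPy sketch of the Normal Equation; the tiny dataset is made up just to show the mechanics:

import numpy as np

# made-up data: 4 houses with one feature (number of rooms)
rooms = np.array([[3.0], [4.0], [5.0], [6.0]])
prices = np.array([[20.0], [25.0], [30.0], [35.0]])

# add a column of ones so theta_0 acts as the intercept
X = np.hstack([np.ones((len(rooms), 1)), rooms])
y = prices

# Normal Equation: theta = (X^T X)^-1 X^T y
theta = np.linalg.inv(X.T @ X) @ X.T @ y
print(theta)  # ~[[5.], [5.]], i.e. price = 5 + 5 * rooms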


2. Gradient Descent:

Gradient Descent is based on these high-level steps:

  1. selecting arbitrary initial values of \theta.
  2. then measuring the mean error between the real values and the values calculated from those initial \theta. To calculate the error we use an equation called the Squared Error, or Mean Squared Error. We could use other error equations, but this is one of the most efficient choices:
    J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})^2
    We call the above the Cost Function.
  3. trying to find the \theta values that minimize that function.

Step 1: The Cost Function

The next step in gradient descent is to improve the model by minimizing the cost function, making J(\theta) as close as possible to zero.
To make the model fit the reality, we will work on minimizing the cost using calculus.
Without going into details, we find the smallest value of the cost function by following its partial derivatives downhill.

By using calculus, we can come to these equations:

\frac{\partial J}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})

\frac{\partial J}{\partial \theta_1} = \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \, x_1^{(i)}

We can generalize them as:

\frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \, x_j^{(i)}
Using matrices:

To calculate the cost function using matrices:

J(\theta) = \frac{1}{2m} (X\theta - y)^T (X\theta - y)
Using Python:

predictions = X @ theta.T               # h(x) for every example
deltas = predictions - y                # errors between predicted and real values
delta_power = np.power(deltas, 2)       # squared errors
J = np.sum(delta_power) / (2 * len(X))  # note the parentheses: divide by 2m, not multiply
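Putting that together, here is a minimal, self-contained sketch of the cost computation; the variable shapes (theta as a 1 x (n+1) row vector, y as a column vector) are assumptions that match the snippets in this post:

import numpy as np

def compute_cost(X, y, theta):
    # mean squared error cost J(theta)
    predictions = X @ theta.T
    deltas = predictions - y
    return np.sum(np.power(deltas, 2)) / (2 * len(X))

# made-up data where the true relation is price = 5 + 5 * rooms
X = np.array([[1.0, 3.0], [1.0, 4.0], [1.0, 5.0], [1.0, 6.0]])
y = np.array([[20.0], [25.0], [30.0], [35.0]])

print(compute_cost(X, y, np.array([[0.0, 0.0]])))  # large cost for a bad theta
print(compute_cost(X, y, np.array([[5.0, 5.0]])))  # 0.0 for the perfect theta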

Step 2: Minimize the theta parameters

As we said, we want to repeatedly apply this update equation (where \alpha is the learning rate, applied to every \theta_j simultaneously):

\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \, x_j^{(i)}

Using matrices:

\theta := \theta - \frac{\alpha}{m} X^T (X\theta - y)

Using Python:

predictions = X @ theta.T                            # current predictions
deltas = predictions - y                             # prediction errors
theta = theta - (alpha / len(X)) * (X.T @ deltas).T  # one update step; .T keeps theta a row vector
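Finally, here is a minimal end-to-end sketch that runs the update in a loop; the data, the learning rate alpha, and the iteration count are made up for illustration:

import numpy as np

# made-up data where the true relation is price = 5 + 5 * rooms
X = np.array([[1.0, 3.0], [1.0, 4.0], [1.0, 5.0], [1.0, 6.0]])
y = np.array([[20.0], [25.0], [30.0], [35.0]])

theta = np.zeros((1, 2))  # arbitrary initial values of theta
alpha = 0.05              # learning rate (an assumption; tune it for real data)

for _ in range(10000):
    predictions = X @ theta.T                            # h(x) for every example
    deltas = predictions - y                             # prediction errors
    theta = theta - (alpha / len(X)) * (X.T @ deltas).T  # gradient descent step

print(theta)  # should approach [[5., 5.]]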