prerequisite
1. derivative (single variable only)

it measures the sensitivity of f(x) with respect to a tiny change of x (the slope of the tangent line) (from wiki)
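As a quick sanity check, that slope can be approximated numerically with a finite difference (the function f(x) = x² and the helper name below are my own, just for illustration):

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2
print(derivative(f, 3.0))  # close to 6, since f'(x) = 2x
```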
2. partial derivative (multiple variables, but only along one axis)
needs at least two variables.
Actually, it's just the derivative along each dimension (the x, y, z, … axes), holding the other variables fixed.
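The "vary one dimension, freeze the rest" idea can be sketched numerically (the function f(x, y) = x² + 3xy and the helper name are assumptions of mine, not from the notes):

```python
def partial(f, point, i, h=1e-6):
    """Approximate the partial derivative of f with respect to
    coordinate i at `point`, varying only that coordinate."""
    p_plus = list(point)
    p_plus[i] += h
    p_minus = list(point)
    p_minus[i] -= h
    return (f(p_plus) - f(p_minus)) / (2 * h)

f = lambda p: p[0] ** 2 + 3 * p[0] * p[1]   # f(x, y) = x^2 + 3xy
print(partial(f, [1.0, 2.0], 0))  # df/dx = 2x + 3y = 8 at (1, 2)
print(partial(f, [1.0, 2.0], 1))  # df/dy = 3x = 3 at (1, 2)
```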

3. directional derivative (multiple variables, any direction)
break the direction vector down into its components along each dimension, so we can solve this with partial derivatives. Actually it's just a vector dot product: [vector of partial derivatives] · [vector in this direction].
It's all about how the vector in this direction weights each partial derivative. We only care about the direction of the vector, so the vector v is taken to be a unit vector.
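The dot-product formula above can be computed directly (the function f(x, y) = x² + y² and the chosen point and direction are my own example):

```python
import math

# gradient of f(x, y) = x^2 + y^2 at (1, 2) is (2x, 2y) = (2, 4)
grad = (2.0, 4.0)

# unit vector at 45 degrees between the two axes
v = (1 / math.sqrt(2), 1 / math.sqrt(2))

# directional derivative = [partial derivatives] . [unit vector]
directional = grad[0] * v[0] + grad[1] * v[1]
print(directional)  # (2 + 4) / sqrt(2), about 4.243
```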

gradient is about finding the direction with the steepest slope. It means we need to find a unit vector v which maximizes [vector of partial derivatives] · v.
For a dot product of two vectors, the maximum is obviously reached when they point the same way (theta = 0), which gives the steepest slope. Meanwhile, we only care about the direction.
so the gradient = [vector of partial derivatives]
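A small numeric check of the claim: scanning unit vectors over all angles, the directional derivative peaks in the gradient direction (the example function f(x, y) = x² + y² and the scan over whole degrees are my own setup):

```python
import math

grad = (2.0, 4.0)  # gradient of f(x, y) = x^2 + y^2 at (1, 2)
norm = math.hypot(grad[0], grad[1])

def slope(theta):
    """Directional derivative along the unit vector at angle theta."""
    v = (math.cos(theta), math.sin(theta))
    return grad[0] * v[0] + grad[1] * v[1]

# scan directions one degree apart and keep the steepest one
angles = [k * math.pi / 180 for k in range(360)]
best = max(angles, key=slope)

# the winning angle matches the gradient's own angle (about 63.4 deg),
# and the steepest slope equals the gradient's length
print(math.degrees(best), math.degrees(math.atan2(grad[1], grad[0])))
```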

In conclusion, if we want to minimize the cost function, we can decrease each parameter by (a small multiple of) its partial derivative. This is gradient descent.
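The per-dimension update described above can be sketched as follows (the cost function J and all names here are my own example, not the course's code):

```python
def grad_descent(grad, theta, alpha=0.1, steps=100):
    """Repeatedly move each parameter against its partial derivative."""
    for _ in range(steps):
        g = grad(theta)
        theta = [t - alpha * gi for t, gi in zip(theta, g)]
    return theta

# minimize J(x, y) = (x - 3)^2 + (y + 1)^2; gradient is (2(x-3), 2(y+1))
grad_J = lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)]
theta_min = grad_descent(grad_J, [0.0, 0.0])
print(theta_min)  # approaches the minimizer [3, -1]
```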

another way to comprehend it: if we want to minimize the cost J, we should find an update Δθ that makes the change ΔJ negative.
suppose we choose Δθ = −α∇J, where alpha is a small, positive parameter.
then ΔJ ≈ ∇J · Δθ = −α‖∇J‖² ≤ 0, so the change in the cost will be negative. This is what we are looking for.
when ∇J = 0, we have reached a minimum of the cost function.
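The key point, that a small step against the gradient actually lowers the cost, can be verified numerically (the cost J(x, y) = x² + y², the starting point, and alpha are my own choices):

```python
def J(p):
    return p[0] ** 2 + p[1] ** 2

def grad_J(p):
    return [2 * p[0], 2 * p[1]]

theta = [1.0, 2.0]
alpha = 0.01

# one step in the negative gradient direction
g = grad_J(theta)
theta_new = [t - alpha * gi for t, gi in zip(theta, g)]

print(J(theta_new) < J(theta))  # the cost went down
```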

Coursera Machine Learning, unit one

definition
a computer program is said to learn from experience E with respect to task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

type
1. supervised learning
1.1 classification (mapping to label, discrete)
1.2 regression (mapping to continuous number)
2. unsupervised learning (clustering data)

supervised learning workflow (from coursera)

how to measure the accuracy of the hypothesis (linear)
#linear regression cost function. Find the most probable theta to minimize the cost function. When the cost function equals 0, all the data points lie exactly on the line.
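A minimal sketch of that cost, assuming the course's squared-error form J(θ) = (1/2m) Σ (h(x) − y)² with hypothesis h(x) = θ₀ + θ₁x (the data and function names below are mine):

```python
def cost(theta0, theta1, xs, ys):
    """Mean squared error cost for the line h(x) = theta0 + theta1 * x."""
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2
               for x, y in zip(xs, ys)) / (2 * m)

xs = [1, 2, 3]
ys = [2, 4, 6]                  # data lying exactly on y = 2x
print(cost(0.0, 2.0, xs, ys))   # 0.0: every point is on the line
print(cost(0.0, 1.0, xs, ys))   # positive: the line misses the data
```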

how to find the most probable theta to minimize the residual