back propagation

what is back propagation?
it’s gradient descent, just computed in a different form.
Metaphor: divide the features into layers, and use different weighted combinations via the weights to mimic polynomial features.
Calculate the gradient of the loss function with respect to each weight by the chain rule: each weight’s gradient is built from its input from the previous layer and the error of the output in the next layer, one layer at a time.
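In symbols (a standard formulation, not from the original notes: δ is each layer’s error, W the weights, a the activations, z the pre-activation inputs, g the activation function), the chain rule gives:

```latex
% error at the output layer:
\delta^{(L)} = a^{(L)} - y
% propagate the error back through layer l:
\delta^{(l)} = \big( (W^{(l)})^{T} \delta^{(l+1)} \big) \odot g'\big(z^{(l)}\big)
% gradient of the cost w.r.t. the weights feeding layer l+1:
\frac{\partial J}{\partial W^{(l)}} = \delta^{(l+1)} \big( a^{(l)} \big)^{T}
```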

why do we need back propagation?
Non-linear classification needs to include lots of features. Back propagation calculates the gradients more quickly because each layer’s error is computed once and reused for the layer before it (dynamic programming).

how to ?
1. randomly initialize the weights
2. implement forward propagation to get hΘ(x) for every training example
3. implement the cost function (K-class classification) with regularization, summing the cost over the K classes for every training example
4. calculate the output layer error, which is a − y
5. based on the output layer error, adjust the weights on every layer
6. go back to 2
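The steps above can be sketched for a tiny network. The 2-2-1 layer sizes, sigmoid activation, squared-error cost, and all numbers below are made-up choices for illustration (no regularization), with a finite-difference check that the chain-rule gradient is right:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x):
    """Forward propagation: hidden activations and the output."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)))
    return hidden, out

def backprop(w1, w2, x, y):
    """One backward pass: gradients of the cost 0.5 * (out - y)**2."""
    hidden, out = forward(w1, w2, x)
    # output layer error, as in step 4: (a - y) times sigmoid'(z)
    delta_out = (out - y) * out * (1.0 - out)
    grad_w2 = [delta_out * h for h in hidden]
    # propagate the error to the hidden layer by the chain rule
    delta_hidden = [delta_out * w2[j] * hidden[j] * (1.0 - hidden[j])
                    for j in range(len(hidden))]
    grad_w1 = [[delta_hidden[j] * xi for xi in x] for j in range(len(w1))]
    return grad_w1, grad_w2

# step 1: randomly initialize the weights
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]
x, y = [0.5, -0.3], 1.0

g1, g2 = backprop(w1, w2, x, y)

# sanity check one weight against a numerical gradient
eps = 1e-5
w2_plus = list(w2); w2_plus[0] += eps
w2_minus = list(w2); w2_minus[0] -= eps
cost = lambda w2v: 0.5 * (forward(w1, w2v, x)[1] - y) ** 2
numeric = (cost(w2_plus) - cost(w2_minus)) / (2 * eps)
print(abs(g2[0] - numeric) < 1e-7)  # analytic and numeric gradients agree
```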

coursera machine learning unit one

definition
A computer program is said to learn from experience E with respect to task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

type
1. supervised learning
1.1 classification (mapping to label, discrete)
1.2 regression (mapping to continuous number)
2. unsupervised learning (cluster data)

supervised learning workflow (from coursera)

how to measure the accuracy of the hypothesis (linear)
For linear regression, the cost function is J(theta0, theta1) = (1 / 2m) * sum over i of ( h(x_i) − y_i )². Find the most probable theta to minimize the cost function; when the cost function equals 0, all the data points lie exactly on the line.
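A runnable sketch of this cost function; the data points and theta values are made-up examples:

```python
# linear regression cost function J(theta0, theta1) for m data points
def hypothesis(theta0, theta1, x):
    return theta0 + theta1 * x

def cost(theta0, theta1, xs, ys):
    m = len(xs)
    return sum((hypothesis(theta0, theta1, x) - y) ** 2
               for x, y in zip(xs, ys)) / (2 * m)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # lies exactly on the line y = 2x

print(cost(0.0, 2.0, xs, ys))       # 0.0 -- every point is on the line
print(cost(0.0, 1.0, xs, ys) > 0)   # a worse theta has positive cost
```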

how to find the probable theta to minimize residual
repeat until convergence (simultaneously update all the theta) {
    theta_i := theta_i − alpha * (∂/∂theta_i) J(theta0, theta1)
}
where i = {0,1} and alpha is the learning rate
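A runnable sketch of this update rule for one-variable linear regression; the data, alpha = 0.05, and the iteration count are made-up choices:

```python
# batch gradient descent for h(x) = theta0 + theta1 * x
def gradient_step(theta0, theta1, xs, ys, alpha):
    m = len(xs)
    errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
    # partial derivatives of J with respect to theta0 and theta1
    d0 = sum(errors) / m
    d1 = sum(e * x for e, x in zip(errors, xs)) / m
    # simultaneous update: both thetas use the old values
    return theta0 - alpha * d0, theta1 - alpha * d1

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # generated from y = 2x + 1

theta0, theta1 = 0.0, 0.0
for _ in range(5000):       # "repeat until convergence"
    theta0, theta1 = gradient_step(theta0, theta1, xs, ys, 0.05)

print(round(theta0, 2), round(theta1, 2))  # → 1.0 2.0
```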
naive bayes

it’s all about conditional probability and joint probability. P(A|B) = P(AB) / P(B)
=> P(A|B) = P(A)P(B|A) / P(B)
=> P(A|B) = P(A)P(B|A) / ( P(A)P(B|A) + P(A')P(B|A') )   (expanding P(B) by the law of total probability)
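A quick numeric check of the last line; the three input probabilities are made-up values:

```python
# Bayes' rule with the denominator expanded by total probability
p_a = 0.01              # P(A), the prior
p_b_given_a = 0.9       # P(B|A)
p_b_given_not_a = 0.05  # P(B|A')

# P(B) = P(A)P(B|A) + P(A')P(B|A')
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
p_a_given_b = p_a * p_b_given_a / p_b

print(round(p_a_given_b, 3))  # → 0.154
```

Note how a strong test (P(B|A) = 0.9) still yields a small posterior because the prior P(A) is small.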

joint probability
assumption: the events (words) are independent.
E1 = p(s|w1) * p(s|w2) * p(s)
E2 = (1 − p(s|w1)) * (1 − p(s|w2)) * (1 − p(s))
P(S|w1,w2) = E1 / (E1 + E2)
           = P(S)P(S|w1)P(S|w2) / ( P(S)P(S|w1)P(S|w2) + P(~S)P(~S|w1)P(~S|w2) )
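The E1/E2 combination can be checked numerically; the prior and the two word probabilities are made-up values:

```python
# combine per-word spam probabilities under the independence assumption
def combined_spam_prob(p_s, p_s_w1, p_s_w2):
    e1 = p_s_w1 * p_s_w2 * p_s
    e2 = (1 - p_s_w1) * (1 - p_s_w2) * (1 - p_s)
    return e1 / (e1 + e2)

# two words that each suggest spam, with a 50/50 prior
print(round(combined_spam_prob(0.5, 0.8, 0.9), 3))  # → 0.973
```

Two moderately spammy words combine into a much stronger verdict than either gives alone.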

bayesian inference

usage example

1. spam mail filter
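A toy end-to-end sketch in the spirit of the Paul Graham article in the references: estimate p(s|w) per word from counts, then combine with the formula above. The corpora, the 50/50 prior, and the 0.01/0.99 clamp are all made-up choices:

```python
from collections import Counter

spam_msgs = ["win money now", "free money offer", "win a free prize"]
ham_msgs = ["meeting at noon", "lunch money tomorrow", "see you at the meeting"]

def word_probs(spam, ham):
    """P(S|w) for each word, assuming a 50/50 prior over spam/ham."""
    spam_counts = Counter(w for m in spam for w in m.split())
    ham_counts = Counter(w for m in ham for w in m.split())
    probs = {}
    for w in set(spam_counts) | set(ham_counts):
        p_w_s = spam_counts[w] / len(spam)   # frequency of w in spam
        p_w_h = ham_counts[w] / len(ham)     # frequency of w in ham
        probs[w] = p_w_s / (p_w_s + p_w_h)
    return probs

def spam_score(message, probs, prior=0.5):
    """Combine per-word probabilities with the E1/E2 formula."""
    e1, e2 = prior, 1 - prior
    for w in message.split():
        if w in probs:
            # clamp so a single unseen-in-one-corpus word can't force 0 or 1
            p = min(max(probs[w], 0.01), 0.99)
            e1 *= p
            e2 *= 1 - p
    return e1 / (e1 + e2)

probs = word_probs(spam_msgs, ham_msgs)
print(spam_score("free money", probs) > 0.9)       # clearly spammy
print(spam_score("meeting at noon", probs) < 0.1)  # clearly not spam
```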

reference:

http://www.ruanyifeng.com/blog/2011/08/bayesian_inference_part_one.html

http://www.paulgraham.com/spam.html