CS 181 (Machine Learning)

These are my lecture notes from CS 181: Machine Learning (Canvas), taught by Ryan Adams at Harvard. I am an enrolled student, and these notes are in no way official; they're reproduced by me from Prof. Adams's lectures in Spring 2015, and are licensed Creative Commons Attribution-NonCommercial-ShareAlike 4.0, which supersedes any other site-wide licenses.


W1: Supervised Learning, Linear Regression, Maximum Likelihood (1)

Some Math

  • An input space $\mathcal{X}$.
  • An output space $\mathcal{Y}$.
  • An induction function $y : \mathcal{X} \to \mathcal{Y}$.

Notation: $y : \mathcal{X} \to \mathcal{Y}$ means "$y$ is a function from $\mathcal{X}$ to $\mathcal{Y}$."

Examples, by Y

  • Regression (predicting some scalar): $\mathcal{Y} = \mathbb{R}$.
  • Classification (sorting into categories): $\mathcal{Y} = \{y_1, y_2, \ldots\}$.
  • Ordinal regression (putting things in order): $\mathcal{Y} = \mathbb{N}$.
  • Structured prediction (e.g. segmentation, ranking): $\mathcal{Y}$ is some other combinatorially-defined space.

Sample Method: Nearest Neighbors

  • Works for regression, classification.
  • Given an unknown thing, take a "vote" over the k "nearest neighbors", i.e. the things it's most like (sketched in code after this list).
  • If k is too large, you’re just defaulting to the average.
  • If k is too small, you’ve got a bad problem with noise.
  • Maybe weighting all neighbors by ‘nearness’ solves this problem?
  • Nonparametric, because it's not building an explicit parametrization of $y$ (maybe this is a good thing?).
  • Assumes that the desired task looks a lot like the training data
  • (this'll be an important point throughout the class)
  • (similarly, it assumes that you've got enough training data…)
  • Slow/fat, since you need to keep all of your training data around to compute metrics on.
  • Generally good as an exploratory, thrown-against-the-wall method.
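
As promised above, a minimal numpy sketch of the "vote over the k nearest neighbors" idea. The function name and interface are my own, not from lecture:

import numpy as np

def knn_predict(X_train, t_train, x_query, k=5):
    # Note the "slow/fat" point above: we keep all training data around.
    dists = np.linalg.norm(X_train - x_query, axis=1)  # distance to every training point
    nearest = np.argsort(dists)[:k]                    # indices of the k nearest neighbors
    return t_train[nearest].mean()  # mean "vote" (regression); use a majority vote for classes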

What does it mean to be “good at” the problem?

  • Usually, we'll talk about a loss function—a quantitative concept of what it means to do badly at a prediction.
  • e.g. 0/1 loss (for binary classification)—a 0 if you're right, and a 1 if you're not.
  • But be careful: sometimes misclassifications aren't symmetrically bad, even in the binary case.
  • (e.g. false-pos vs. false-neg in cancer diagnosis)
  • So in general, a loss function is a function $(\text{prediction}, \text{actual}) \to \mathbb{R}$. (A few of these are sketched in code after this list.)
  • e.g. for regression, squared error $(y(\vec x_0) - y_0)^2$—we like this a lot because it's differentiable at zero, the minimizer is the mean, &c.
    Read more elsewhere: Ben Kuhn | Why Squared Error?
  • Alternatively, absolute error $|y(\vec x_0) - y_0|$.
  • Maybe this makes sense for money?
  • In reinforcement learning, we'll often talk about reward functions instead.
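
The losses above, as a quick Python sketch (function names are mine):

def zero_one_loss(pred, actual):
    return 0.0 if pred == actual else 1.0    # symmetric 0/1 loss

def squared_error(pred, actual):
    return (pred - actual) ** 2              # differentiable; minimized in expectation by the mean

def absolute_error(pred, actual):
    return abs(pred - actual)                # minimized in expectation by the median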

A parable: Choosing the wrong loss/reward function.​ In grad school, I [RA] was trying to get a robot (a trashcan with wheels) to roll down a hallway without bumping into things. So I gave it a function that rewarded it when it moved and punished it when it ran into things.

So of course, it learned to turn in tight little circles.

Linear Regression

  • Input format: $\vec x \in \mathbb{R}^D$.
  • Output format: $t \in \mathbb{R}$.
  • $\displaystyle y(\vec x, \vec w) = w_0 + \sum_{i=1}^D w_i x_i$.
  • Data: $\{(\vec x_n, t_n)\}_{n=1}^N$.
  • Game: Find a good $\vec w$. (we'll use least-squares)
    $$E_D(\vec w) := \frac{1}{2}\sum_{n=1}^N \left(t_n - \vec w^\top \vec x_n\right)^2 = \frac{1}{2}\left\|\vec t - X\vec w\right\|^2.$$

Notation: $\vec x^\top$ means "the transpose of $\vec x$", i.e. flip it about the diagonal. For a vector, this means turn it sideways.

  • …with the convention that $x_0 := 1$, for convenience.
  • So $\vec w^* := \operatorname{argmin}_{\vec w} E_D(\vec w)$.

Notation: $\operatorname{argmin}_x f(x)$ means "the $x$ that minimizes $f(x)$".

But we can solve (minimize) this analytically!

  • Find the gradient $\nabla_{\vec w} E_D$.

Notation: $\nabla : (\mathbb{R}^n \to \mathbb{R}) \to (\mathbb{R}^n \to \mathbb{R}^n)$
$$\nabla f := \begin{pmatrix} \frac{\partial f}{\partial x_1} \\ \frac{\partial f}{\partial x_2} \\ \vdots \\ \frac{\partial f}{\partial x_D} \end{pmatrix} \qquad \text{i.e.} \qquad \nabla f(\vec x) := \begin{pmatrix} \frac{\partial f}{\partial x_1}(\vec x) \\ \frac{\partial f}{\partial x_2}(\vec x) \\ \vdots \\ \frac{\partial f}{\partial x_D}(\vec x) \end{pmatrix}$$

Basically, \grad takes a function and gives you a vector of its partial derivatives in every direction.

  • Set $\nabla_{\vec w} E_D(\vec w) = 0$.
  • See $\nabla_{\vec w} E_D(\vec w) = -\sum_{n=1}^N \left(t_n - \vec w^\top \vec x_n\right)\vec x_n$.
  • We can write this with a design matrix $X$:
    $$X := \begin{pmatrix} \vec x_1^\top \\ \vec x_2^\top \\ \vdots \\ \vec x_N^\top \end{pmatrix} = \begin{pmatrix} (\vec x_1)_1 & (\vec x_1)_2 & \ldots & (\vec x_1)_D \\ (\vec x_2)_1 & (\vec x_2)_2 & \ldots & (\vec x_2)_D \\ \vdots & \vdots & \ddots & \vdots \\ (\vec x_N)_1 & (\vec x_N)_2 & \ldots & (\vec x_N)_D \end{pmatrix} \qquad \vec t = \begin{pmatrix} t_1 \\ t_2 \\ \vdots \\ t_N \end{pmatrix}.$$
  • So then we have
    $$E_D(\vec w) = \frac{1}{2}\left(X\vec w - \vec t\right)^\top \left(X\vec w - \vec t\right)$$
    $$\nabla_{\vec w} E_D(\vec w) = X^\top \left(X\vec w - \vec t\right) = 0$$
    $$X^\top X \vec w = X^\top \vec t$$
    $$\vec w^* = \left(X^\top X\right)^{-1} X^\top \vec t.$$
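
In code, the closed-form solution is a one-liner; here's a minimal numpy sketch (my naming; it assumes the $x_0 := 1$ bias column is already in $X$):

import numpy as np

def fit_least_squares(X, t):
    # Solve the normal equations X^T X w = X^T t directly;
    # solve() is preferred over explicitly forming (X^T X)^{-1}.
    return np.linalg.solve(X.T @ X, X.T @ t)

# hypothetical usage: X = np.hstack([np.ones((N, 1)), features]); w = fit_least_squares(X, t)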

Alternative Mindset: Maximum-Likelihood

  • Goodness of param = $P(\text{data} \mid \text{param is true})$.
  • …to be continued…

Quote:​ We’re living in a novel age: not limited by our computing power; not limited by our data resources. We’re really only limited by our understanding.


M2

Linear regression, a review of last week

Digression:​ Matrix calculus looks a lot more intimidating than it is—when in doubt, you can just write out everything, properly indexed, and bash through it that way. But one thing that helps, when I’m dealing with some particularly thorny matrix calculation, is to write out rectangles, label what each axis corresponds to (or even just how long they are), and make sure everything matches up.

Elsewhere in your undergraduate CS training, you've done discrete math and continuous math in isolation from each other. Warning: all of a sudden, you're going to have to remember all sorts of calculus you haven't had to deal with since high school, and if you feel rusty, that's normal.

  • Data: $\{(\vec x_n, t_n) : \vec x_n \in \mathbb{R}^D\}_{n=1}^N$, with $(\vec x_n)_0 = c$ (a constant; conventionally $1$, as in W1)
  • Data Matrix: $X \in \mathbb{R}^{N \times D} := \begin{pmatrix} \vec x_1^\top \\ \vec x_2^\top \\ \vdots \\ \vec x_N^\top \end{pmatrix}$, $\quad \vec t := \begin{pmatrix} t_1 \\ t_2 \\ \vdots \\ t_N \end{pmatrix}$
  • Regression Function: $y(\vec w, \vec x) := \vec w^\top \vec x$
  • Error: $E_D(\vec w) = \frac{1}{2}\sum_{n=1}^N \left(t_n - y(\vec w, \vec x_n)\right)^2 = \frac{1}{2}\sum_{n=1}^N \left(t_n - \vec w^\top \vec x_n\right)^2 = \frac{1}{2}\left(\vec t - X\vec w\right)^\top\left(\vec t - X\vec w\right)$
  • (figure 2)

Reminder: $\nabla f(\vec x) = 0 \iff \vec x$ is a critical point, similarly to $f'(x) = 0 \iff x$ is a critical point in one dimension.

$$\nabla_{\vec w} E_D(\vec w) = \frac{1}{2}\nabla_{\vec w}\left(\vec t^\top \vec t - 2\vec w^\top X^\top \vec t + \vec w^\top X^\top X \vec w\right)$$
$$= \frac{1}{2}\nabla_{\vec w}\, \vec t^\top \vec t - \nabla_{\vec w}\, \vec w^\top X^\top \vec t + \frac{1}{2}\nabla_{\vec w}\, \vec w^\top X^\top X \vec w$$

Some identities: $\nabla_{\vec z}\, \vec z^\top \vec a = \nabla_{\vec z}\, \vec a^\top \vec z = \vec a$, and $\nabla_{\vec z}\, \vec z^\top A \vec z = \left(A + A^\top\right)\vec z$.

$$\nabla_{\vec w} E_D(\vec w) = 0 - X^\top \vec t + X^\top X \vec w \quad \text{…and solve for} = 0.$$

  • (figure 2)

Question: How do we know that we have a global minimum, and not some other criticum?

Answer: This problem is globally convex (uniformly bowl-shaped). Other, later problems are going to be almost always nonconvex.

Question: What if $X^\top X$ isn't invertible?

Answer: That'll happen if you have more dimensions than datapoints…which might give you problems for reasons which may be obvious.

Likelihood

  • $t_n = y(\vec w, \vec x_n) + \epsilon_n$
  • $\epsilon_n \sim \mathcal{N}(0, \beta^{-1})$
  • $\beta^{-1}$: the variance
  • $\beta$: the precision
  • Likelihood for the $n$th datum: $p(t_n \mid \vec x_n, \vec w, \beta) = \mathcal{N}(t_n \mid \vec x_n^\top \vec w, \beta^{-1})$
  • Likelihood for all $N$ data: $p(\vec t \mid X, \vec w, \beta) = \prod_{n=1}^N \mathcal{N}(t_n \mid \vec x_n^\top \vec w, \beta^{-1})$
  • So we take the log…why? Because products suck. (Specifically, in the context of floating-point arithmetic: multiplying many small densities underflows.)
  • Log-likelihood for all $N$ data: $\log p(\vec t \mid X, \vec w, \beta) = \sum_{n=1}^N \log \mathcal{N}(t_n \mid \vec x_n^\top \vec w, \beta^{-1})$
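
A minimal numpy sketch of this log-likelihood under the model above (function name mine). Summing logs instead of multiplying densities is exactly the floating-point point:

import numpy as np

def log_likelihood(t, X, w, beta):
    # sum_n log N(t_n | x_n^T w, beta^{-1})
    resid = t - X @ w
    N = len(t)
    return 0.5 * N * (np.log(beta) - np.log(2 * np.pi)) - 0.5 * beta * np.sum(resid ** 2)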

W3

Bayesian Inference

(see also An Intuitive Explanation of Bayes’ Theorem​, and A Technical Explanation of Technical Explanation​, by E. Yudkowsky)

"A hypothesis-processing machine":
$$p(\theta)\, p(\text{DATA} \mid \theta) \propto p(\theta \mid \text{DATA})$$
Then new data (DATA') arrive…
$$p(\theta \mid \text{DATA}, \text{DATA}') \propto p(\theta \mid \text{DATA})\, p(\text{DATA}' \mid \theta)$$
$$\propto p(\theta)\, p(\text{DATA} \mid \theta)\, p(\text{DATA}' \mid \theta)$$

As it turns out, "$\theta$" / "DATA" is a terrible naming convention if you're ever going to say it out loud ("theta" and "data" sound nearly identical).

A conjugate prior has the property that, when exposed to data (with some $p(\text{DATA} \mid \theta)$), the posterior has the same functional form as the prior (up to, e.g., concentration). In general, these will be the exponential family distributions:
$$p(\text{DATA} \mid \theta) \propto \exp\left(\theta^\top T(\text{DATA})\right)$$
$$p(\theta) \propto \exp\left(\psi^\top \theta\right)$$
$$p(\theta \mid \text{DATA}) \propto \exp\left(\left(T(\text{DATA}) + \psi\right)^\top \theta\right)$$

(example: beta-bernoulli)

Other examples:

  • Beta prior → Bernoulli likelihood
  • Normal prior → Normal likelihood
  • Dirichlet prior → Multinomial likelihood
  • Gamma prior → Poisson likelihood
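
For instance, the Beta–Bernoulli pair from this list, as a minimal Python sketch (naming mine):

def beta_bernoulli_update(a, b, data):
    # Prior Beta(a, b); data is a list of 0/1 outcomes.
    # Posterior is Beta(a + #ones, b + #zeros): the same functional
    # form as the prior, which is exactly the conjugacy property.
    ones = sum(data)
    return a + ones, b + (len(data) - ones)

# beta_bernoulli_update(1, 1, [1, 0, 1, 1]) -> (4, 2)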

Multivariate Gaussian in $D$ dimensions

  • Unknown mean $\vec\mu \sim \mathcal{N}(\vec m_0, S_0)$ (prior on $\vec\mu$)
  • Known covariance $\Sigma$
  • Data: $\{\vec x_n\}_{n=1}^N$, $\vec x_n \in \mathbb{R}^D$
    $$\mathcal{N}(\vec x \mid \vec\mu, \Sigma) = (2\pi)^{-D/2}\, |\Sigma|^{-1/2} \exp\left(-\frac{1}{2}\left(\vec x - \vec\mu\right)^\top \Sigma^{-1} \left(\vec x - \vec\mu\right)\right)$$
    $$p(\vec\mu \mid \vec m_0, S_0, \{\vec x_n\}_{n=1}^N) \propto \mathcal{N}(\vec\mu \mid \vec m_0, S_0) \prod_n \mathcal{N}(\vec x_n \mid \vec\mu, \Sigma)$$
    $$= C \exp\left(-\frac{1}{2}\left(\vec\mu^\top S_0^{-1} \vec\mu - 2\vec\mu^\top S_0^{-1} \vec m_0 - 2\vec\mu^\top \Sigma^{-1} \sum_{n=1}^N \vec x_n + N \vec\mu^\top \Sigma^{-1} \vec\mu\right)\right)$$
    and so:
    $$S_N = \left(S_0^{-1} + N\Sigma^{-1}\right)^{-1}$$
    $$\vec m_N = S_N \left(S_0^{-1} \vec m_0 + N \Sigma^{-1} \bar{\vec x}\right), \qquad \bar{\vec x} := \frac{1}{N}\sum_{n=1}^N \vec x_n$$
    Thus
    $$\log p(\vec\mu \mid \vec m_0, S_0, \Sigma, \{\vec x_n\}_{n=1}^N) = -\frac{1}{2}\left(\vec\mu - \vec m_N\right)^\top S_N^{-1} \left(\vec\mu - \vec m_N\right) + C$$
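
A numpy sketch of these update equations (naming and interface are mine):

import numpy as np

def gaussian_mean_posterior(m0, S0, Sigma, X):
    # Posterior N(m_N, S_N) over the unknown mean, per the equations above.
    # m0: (D,) prior mean; S0, Sigma: (D, D); X: (N, D) data.
    N = X.shape[0]
    xbar = X.mean(axis=0)                        # sample mean of the data
    S0_inv = np.linalg.inv(S0)
    Sigma_inv = np.linalg.inv(Sigma)
    SN = np.linalg.inv(S0_inv + N * Sigma_inv)   # S_N = (S_0^{-1} + N Sigma^{-1})^{-1}
    mN = SN @ (S0_inv @ m0 + N * Sigma_inv @ xbar)
    return mN, SN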

W4: Classification

  • Features: $\vec x \mapsto \vec\phi(\vec x)$.
  • $k$ classes: $c_1, \ldots, c_k$
  • In particular cases, these may be (partially, well) ordered, but in general, they need not be.
  • Data: $\{(\vec x_n, t_n)\}_{n=1}^N$
  • Binary ($k = 2$) is a common case: $t_n$ takes one of two values, often notated $t_n \in \{-1, +1\}$
  • Multi-class often given as one-hot vectors $t_n \in \{0, 1\}^k$, with $\vec t_n = (\ldots, 0, 0, 1, 0, 0, \ldots)^\top$.

Linear Classification

Decision boundary (planar function): $\vec w^\top \vec x + w_0 = 0$

  • Note that $\vec w$ will be normal to the plane (and that the origin is $\frac{w_0}{\|\vec w\|}$ from the plane).

Linear separability​: the property that the data can be separated by straight lines (planes)

Caution: Least-squares-ing the output (as if classes were integers) is dumb, and dumb in the stupidest possible way—it chokes even on obvious examples.

Perceptron

(see also Wikipedia​)

Activation function:
$$f(a) = \begin{cases} +1 & \text{if } a \geq 0 \\ -1 & \text{if } a < 0 \end{cases}$$
$$f\left(\vec w^\top \vec\phi(\vec x)\right) = y(\vec x)$$

"Ideal" Loss:
$$E(\vec w) := -\sum_{n=1}^N f\left(\vec w^\top \vec\phi(\vec x_n)\right) t_n$$

  • Not differentiable, so we can’t do a 0-gradient analysis.

Perceptron Loss:
$$E_P(\vec w) := -\sum_{n=1}^N t_n\, \vec w^\top \vec\phi(\vec x_n)$$

  • Guaranteed to find a perfectly separating hyperplane if one exists (but not necessarily a maximally-separating one).

Perceptron Learning:

initialize w arbitrarily
while w is wrong:
    for n in [1, N]:
        if predict(n) is wrong:
            w ← w + t_n φ(x_n)
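
The same loop, filled out as runnable numpy under my own interface (Phi is the feature matrix; max_epochs is my own guard for non-separable data, not part of the algorithm):

import numpy as np

def train_perceptron(Phi, t, max_epochs=1000):
    # Phi: (N, D) features; t: labels in {-1, +1}
    w = np.zeros(Phi.shape[1])               # initialize w (here: zeros)
    for _ in range(max_epochs):
        mistakes = 0
        for phi_n, t_n in zip(Phi, t):
            if t_n * (w @ phi_n) <= 0:       # predict(n) is wrong
                w = w + t_n * phi_n          # w <- w + t_n * phi(x_n)
                mistakes += 1
        if mistakes == 0:                    # w is no longer wrong
            break
    return w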

Better algorithm: Gradient descent = steps in the steepest direction; stochastic gradient descent = GD with noise in the gradient estimate.

  • So long as your step direction has a positive inner product with the steepest one, you'll converge all the same.
  • But you can often get noisy approximations much more cheaply than perfect gradient calculations.
  • Mini-batch size is usually something like 256—divisible by 16, since that's the number of threads on a GPU. (See the sketch below.)
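
As promised, a minimal mini-batch SGD sketch on the least-squares loss from earlier lectures (naming mine; hyperparameters illustrative):

import numpy as np

def sgd_linreg(X, t, lr=1e-3, batch_size=256, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    w = np.zeros(D)
    for _ in range(epochs):
        perm = rng.permutation(N)            # reshuffle each epoch
        for start in range(0, N, batch_size):
            idx = perm[start:start + batch_size]
            # each batch gradient is a cheap, noisy estimate of the full gradient
            grad = X[idx].T @ (X[idx] @ w - t[idx]) / len(idx)
            w -= lr * grad                   # step downhill
    return w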

(some things about Fisher’s Linear Discriminant Analysis​)


M5: Classification

Bayes: $p(C_1 \mid \vec x) = \dfrac{p(\vec x \mid C_1)\, p(C_1)}{p(\vec x \mid C_1)\, p(C_1) + p(\vec x \mid C_2)\, p(C_2)}$

Bishop: given $a := \log \dfrac{p(\vec x \mid C_1)\, p(C_1)}{p(\vec x \mid C_2)\, p(C_2)}$,
$$p(C_1 \mid \vec x) = \frac{1}{1 + e^{-a}} =: \sigma(a)$$

Fun fact:
$$\sigma'(a) = \sigma(a)\left(1 - \sigma(a)\right)$$

Naive Bayes: dimensions are uncorrelated, conditioned on the class. So
$$p(\vec x \mid C_k) = \prod_{i=1}^D p(x_i \mid C_k),$$
e.g., for binary features (a per-dimension Bernoulli model):
$$p(\vec x \mid C_k) = \prod_{i=1}^D \mu_{ki}^{x_i}\left(1 - \mu_{ki}\right)^{1 - x_i}$$
(much uninteresting math)
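
A minimal sketch of classifying with this model (naming mine; mu and log_prior are assumed already estimated, e.g. from smoothed counts so no entry of mu is exactly 0 or 1):

import numpy as np

def naive_bayes_log_posterior(x, mu, log_prior):
    # Unnormalized log p(C_k | x) for binary features, per the math above.
    # x: (D,) in {0, 1}; mu[k, i] = mu_{ki} = p(x_i = 1 | C_k); log_prior[k] = log p(C_k).
    # The sum over dimensions is the "naive" class-conditional independence assumption.
    loglik = x @ np.log(mu).T + (1 - x) @ np.log(1 - mu).T
    return log_prior + loglik                # argmax over k gives the class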


W5: Neural Networks

  • Data: $\{(\vec x_n, t_n)\}_{n=1}^N \subset \mathbb{R}^D \times \mathbb{R}$
  • Vanilla LinReg: $y(\vec x, \vec w) = \vec w^\top \vec x$
  • Basis functions: $\phi_j : \mathcal{X} \to \mathbb{R} \implies y(\vec x, \vec w) = \vec w^\top \vec\phi(\vec x)$—but how to pick the $\phi_j$?
  • Neural net = "deep learning" = adaptive basis function regression
  • Add parameters to $\{\phi_j\}$, learned from the data
  • Typically, we'll use $\phi_j(\vec x) = \sigma(\vec w_j^\top \vec x)$, where e.g. $\sigma$ can be tanh, logistic, or a re(ctified) l(inear) u(nit): "hard" $\sigma(z) = \max\{0, z\}$, "soft" $\sigma(z) = \log(1 + e^z)$. (Sketched in code below.)
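
A quick sketch of these activations and one layer of adaptive basis functions (naming mine):

import numpy as np

def hard_relu(z):
    return np.maximum(0.0, z)        # "hard": max{0, z}

def soft_relu(z):
    return np.log1p(np.exp(z))       # "soft": log(1 + e^z)

def layer(x, W, sigma=np.tanh):
    # One layer of adaptive basis functions: phi(x) = sigma(W x),
    # where row j of W is the learned w_j.
    return sigma(W @ x)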

Q:​ At a high level, I’m having difficulty understanding why I’d want to choose one over the other.

A:​ Me too, man. ReLUs are neat for vision, because you’d like classifications to be invariant under “make everything brighter”, but opinions in other cases are very, very mixed.

Q:​ Can you fit the activation function?

A:​ Sure, that’s some of my research…but it gets kinda hairy.

Q:​ Why do you use activation functions at all?

A:​ Well, if you only used linear (or identity) activations, you’d only recover LinReg…

  • Input layer → hidden layer → output layer
  • In general, it's a particular connectivity DAG

Q:​ Why would you ever need more​ layers?

A:​ Well, the whole schtick of deep learning is that the early layers perform adaptive feature extraction, later layers reason about elementary conjunctions of those features, and later-still layers then do abstract reasoning on that.

Q:​ Are there heuristics about how many layers to use?

A:​ No. Or maybe: three layers, and then make them as big as you can fit in your GPU.

So, we need a gradient… We could

  • Do calculus
  • Symbolic differentiation
  • Finite differences
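
Of these, finite differences is the easiest to sketch, and makes a handy check on the other two (naming mine; assumes w is a float array):

import numpy as np

def finite_diff_grad(f, w, eps=1e-6):
    # Central-difference approximation to the gradient of f at w.
    # Costs two evaluations of f per dimension -- far too slow for
    # training, but a useful sanity check on a hand-derived gradient.
    grad = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        grad[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return grad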