Shen Shen
Feb 21, 2025
11am, Room 10-250
[Diagram: Supervised Learning. An Algorithm 🧠⚙️ turns training data \(\mathcal{D}_\text{train}\) into a model; the design choices are the hypothesis class, hyperparameters, objective (loss) function, and regularization.]
Recap: regressor
"Learn" a model: train, optimize, learn, tune; i.e., adjusting/updating model parameters; gradient based.
"Use" a model: predict, test, evaluate, infer; i.e., applying the learned model; no gradients involved.
Today: classifier. The training data \(\mathcal{D}_\text{train}\) now has labels from a discrete set, e.g. {"Fish", "Grizzly", "Chameleon", ...}, \(\{+1,0\}\), or \(\{😍, 🥺\}\).
Today: given a new feature, the learned classifier makes a new prediction from that same set, e.g. "Fish" from {"Fish", "Grizzly", "Chameleon", ...}.
image adapted from Phillip Isola
| | linear regressor | linear binary classifier |
|---|---|---|
| features | \(x \in \mathbb{R}^d\) | \(x \in \mathbb{R}^d\) |
| parameters | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) |
| linear combination | \(\theta^T x +\theta_0 = z\) | \(\theta^T x +\theta_0 = z\) |
| predict | \(z\) | \(1\) if \(z > 0\), \(0\) otherwise |
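A minimal numpy sketch contrasting the two prediction rules above; the feature and parameter values are made up for illustration.

```python
import numpy as np

# Hypothetical feature vector and parameters (d = 3); values are made up.
x = np.array([1.0, -2.0, 0.5])
theta = np.array([0.4, 0.1, -0.3])
theta_0 = 0.2

# Shared linear combination theta^T x + theta_0.
z = theta @ x + theta_0

# Linear regressor: predict the real number z itself.
regressor_prediction = z

# Linear binary classifier: predict 1 if z > 0, otherwise 0.
classifier_prediction = 1 if z > 0 else 0

print(regressor_prediction, classifier_prediction)
```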
| | linear binary classifier | linear logistic binary classifier |
|---|---|---|
| features | \(x \in \mathbb{R}^d\) | \(x \in \mathbb{R}^d\) |
| parameters | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) |
| linear combination | \(\theta^T x +\theta_0 = z\) | \(\theta^T x +\theta_0 = z\) |
| predict | \(1\) if \(z > 0\), \(0\) otherwise | \(1\) if \(\sigma(z) > 0.5\), \(0\) otherwise |
Sigmoid \(\sigma(z) = \frac{1}{1+e^{-z}}\): a smooth step function.
The logistic classifier predicts \(1\) if \(\sigma(z) > 0.5\) and \(0\) otherwise; the sign-based classifier predicts \(1\) if \(z > 0\) and \(0\) otherwise. The two rules agree, since \(\sigma(z) > 0.5\) exactly when \(z > 0\).
\(\sigma\left(\cdot\right)\) is squashed between \((0,1)\) vertically; it is monotonic, with a very elegant gradient \(\sigma'(z)=\sigma(z)\left(1-\sigma(z)\right)\) (see hw/rec).
Sigmoid \(\sigma\left(\cdot\right)\) outputs the probability or confidence that feature \(x\) has positive label.
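A small numpy sketch of the sigmoid, assuming the standard form \(\sigma(z)=1/(1+e^{-z})\); it checks that thresholding \(\sigma(z)\) at 0.5 agrees with thresholding \(z\) at 0, and verifies the gradient identity numerically.

```python
import numpy as np

def sigmoid(z):
    """Smooth step function sigma(z) = 1 / (1 + e^{-z}), with outputs in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 11)

# The two prediction rules agree: sigma(z) > 0.5 exactly when z > 0.
assert np.array_equal(sigmoid(z) > 0.5, z > 0)

# The "elegant gradient": sigma'(z) = sigma(z) * (1 - sigma(z)),
# checked against a central finite-difference approximation.
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
analytic = sigmoid(z) * (1 - sigmoid(z))
assert np.allclose(numeric, analytic)
```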
If \(\sigma(z) > 0.5\), predict positive label; otherwise, predict negative label.
e.g., to predict whether to bike to school using a given logistic classifier:
[Plots: with 1 feature; with 2 features. Image credit: Tamara Broderick]
A linear logistic classifier still results in a linear separator: \(\theta^T x+\theta_0=0\)
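A quick numerical check with a hypothetical 2-feature logistic classifier (the parameter values below are invented, not taken from the bike-to-school plots): points on the separator \(\theta^T x+\theta_0=0\) get confidence exactly 0.5, and points on either side land above or below it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 2-feature logistic classifier; parameter values made up.
theta = np.array([0.8, -1.5])
theta_0 = -2.0

x_on_boundary = np.array([2.5, 0.0])    # 0.8*2.5 - 1.5*0.0 - 2.0 = 0
x_positive_side = np.array([5.0, 0.0])  # z = 2.0  -> sigma(z) > 0.5
x_negative_side = np.array([0.0, 1.0])  # z = -3.5 -> sigma(z) < 0.5

for x in (x_on_boundary, x_positive_side, x_negative_side):
    z = theta @ x + theta_0
    print(x, z, sigmoid(z))
```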
Training data: 😍 (positive) and 🥺 (negative) examples. Recall, the labels \(y \in \{+1,0\}\).
[Plots: the training data, considering separately the cases \(y = 1\) and \(y = 0\).]
| | linear regressor | linear binary classifier | linear logistic binary classifier |
|---|---|---|---|
| features | \(x \in \mathbb{R}^d\) | \(x \in \mathbb{R}^d\) | \(x \in \mathbb{R}^d\) |
| parameters | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) |
| linear combo | \(\theta^T x +\theta_0 = z\) | \(\theta^T x +\theta_0 = z\) | \(\theta^T x +\theta_0 = z\) |
| predict | \(z\) | \(1\) if \(z > 0\), \(0\) otherwise | \(1\) if \(\sigma(z) > 0.5\), \(0\) otherwise |
| loss | \((g - y)^2\) | 0/1 loss | (negative log-likelihood, below) |
| optimize via | closed-form or gradient descent | NP-hard to learn | gradient descent |
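A small sketch of why the 0/1 loss resists gradient-based learning: on a tiny made-up dataset, small perturbations of the parameters leave every predicted sign, and hence the loss, unchanged, so the gradient is zero almost everywhere.

```python
import numpy as np

# Tiny made-up dataset: 4 points in R^2 with 0/1 labels.
X = np.array([[1.0, 2.0], [2.0, 0.5], [-1.0, 1.0], [-2.0, -1.0]])
y = np.array([1, 1, 0, 0])

def zero_one_loss(theta, theta_0):
    """Average 0/1 loss of the sign-based linear classifier."""
    predictions = (X @ theta + theta_0 > 0).astype(int)
    return np.mean(predictions != y)

theta = np.array([-1.0, 0.5])
theta_0 = 0.2

# Small perturbations of theta leave every sign unchanged, so the loss is
# locally flat and its gradient is zero almost everywhere.
base = zero_one_loss(theta, theta_0)
for delta in (1e-4, 1e-3, 1e-2):
    print(delta, zero_one_loss(theta + delta, theta_0) - base)
```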
Video edited from: HBO, Silicon Valley
🌭 \(x\) → \(\theta^T x +\theta_0 = z \in \mathbb{R}\)
\(\sigma(z)\): the model's confidence the input \(x\) is a hot-dog (a learned scalar "summary" of "hot-dog-ness").
\(1-\sigma(z)\): the model's confidence the input \(x\) is not a hot-dog (a fixed baseline of "non-hot-dog-ness").
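A tiny sketch of the binary case with a made-up score \(z\): \(\sigma(z)\) and \(1-\sigma(z)\) together form a two-way distribution over {hot-dog, not hot-dog}.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 1.3                        # hypothetical "hot-dog-ness" score for some input x

p_hotdog = sigmoid(z)          # model's confidence that x is a hot-dog
p_not_hotdog = 1 - sigmoid(z)  # model's confidence that x is not a hot-dog

# The two confidences form a distribution over {hot-dog, not hot-dog}.
assert np.isclose(p_hotdog + p_not_hotdog, 1.0)
print(p_hotdog, p_not_hotdog)
```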
🌭 \(x\) → \(\theta^T x +\theta_0 = z \in \mathbb{R}\): but what if we want to predict \(\{\)hot-dog, pizza, pasta, salad\(\}\)? ❓
We would need \(z \in \mathbb{R}^4\): 4 scalars, each one as a "summary" of a food category, turned into a distribution over these 4 categories.
🌭 \(x\) → \(\theta^T x +\theta_0 = z \in \mathbb{R}^4\) → softmax\((z)\): a distribution over these 4 categories. For example,
\(\text{softmax}\left( \begin{bmatrix} -0.23 \\ 3.67 \\ 1.47 \\ 0.44 \end{bmatrix} \right) = \begin{bmatrix} 0.0173 \\ 0.8543 \\ 0.0947 \\ 0.0338 \end{bmatrix}\)
with entries between \((0,1)\) that sum up to 1.
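A minimal numpy sketch of softmax, assuming the standard form \(\text{softmax}(z)_k = e^{z_k}/\sum_j e^{z_j}\) (not spelled out on the slide); it reproduces the numbers above.

```python
import numpy as np

def softmax(z):
    """Map a score vector to a distribution: entries in (0, 1) that sum to 1."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-0.23, 3.67, 1.47, 0.44])   # score vector from the slide
g = softmax(z)

print(np.round(g, 4))                     # -> [0.0173 0.8543 0.0947 0.0338]
assert np.isclose(g.sum(), 1.0)
```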
| | linear logistic binary classifier | one-out-of-\(K\) classifier |
|---|---|---|
| training data | \(x \in \mathbb{R}^d,\) \(y \in \{0,1\}\) | \(x \in \mathbb{R}^d,\) \(y: K\)-dimensional one-hot |
| parameters | \(\theta \in \mathbb{R}^d, \theta_0 \in \mathbb{R}\) | \(\theta \in \mathbb{R}^{d \times K},\) \(\theta_0 \in \mathbb{R}^{K}\) |
| linear combo | \(\theta^T x +\theta_0 = z \in \mathbb{R}\) | \(\theta^T x +\theta_0 = z \in \mathbb{R}^{K}\) |
| predict | positive if \(\sigma(z)>0.5\) | category corresponding to the largest entry in softmax\((z)\) |
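A sketch of prediction with a one-out-of-\(K\) classifier; the sizes and parameter values below are random placeholders, only the shapes match the table.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

d, K = 3, 4                      # hypothetical sizes; K categories as in the food example
rng = np.random.default_rng(0)

theta = rng.normal(size=(d, K))  # theta in R^{d x K}
theta_0 = rng.normal(size=K)     # theta_0 in R^K

x = rng.normal(size=d)           # a feature vector in R^d
z = theta.T @ x + theta_0        # theta^T x + theta_0, a vector in R^K
g = softmax(z)                   # distribution over the K categories

categories = ["hot-dog", "pizza", "pasta", "salad"]
print(g, "predict:", categories[int(np.argmax(g))])
```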
One-hot encoding: the figure shows feature \(x\), its true label \(y\), and the current prediction \(g=\text{softmax}(\cdot)\). (image adapted from Phillip Isola)
loss \(\mathcal{L}_{\mathrm{nllm}}({g}, y)=-\sum_{\mathrm{k}=1}^{\mathrm{K}}y_{\mathrm{k}} \cdot \log \left({g}_{\mathrm{k}}\right)\)
Negative log-likelihood \(K\)-class loss (aka, cross-entropy)
\(y:\) one-hot encoded label
\(y_{{k}}:\) either 0 or 1
\(g:\) softmax output
\(g_{{k}}:\) probability or confidence in class \(k\)
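A short numpy sketch of this loss, reusing the softmax output from the earlier food example and a hypothetical one-hot label; only the true class's entry contributes.

```python
import numpy as np

def nll_loss(g, y):
    """Negative log-likelihood (cross-entropy): -sum_k y_k * log(g_k)."""
    return -np.sum(y * np.log(g))

g = np.array([0.0173, 0.8543, 0.0947, 0.0338])  # softmax output from the slide
y = np.array([0, 1, 0, 0])                      # hypothetical one-hot true label

# Since y is one-hot, the loss reduces to -log(g_k) for the correct class k.
print(nll_loss(g, y))       # == -log(0.8543), about 0.157
```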
Image classification played a pivotal role in kicking off the current wave of AI enthusiasm.
We'd love to hear your thoughts.
The gradient issue is caused by both the 0/1 loss and the sign function nested inside it.