Shen Shen
Sept 20, 2024
💻 Compute / optimize / train
Recap:
Hypothesis class
Hyperparameters
Objective (loss) functions
Regularization
(image adapted from Phillip Isola)
new feature \(x\) \(\to\) new prediction \(y \in \mathbb{R}\) (aka inferencing, or predicting)
Recap:
(image adapted from Phillip Isola)
"Fish"
{"Fish", "Grizzly", "Chameleon", ...}
\(\in \)
A discrete set.
(image adapted from Phillip Isola)
new feature \(x\) \(\to\) new prediction
The gradient issue is caused by both the 0/1 loss and the sign function nested inside it: both are piecewise constant, so the gradient is zero almost everywhere and offers no useful descent direction.
As before, let's first look at how to make predictions with a given linear logistic classifier: predict the positive label if \(\sigma(\theta^\top x + \theta_0) > 0.5\); otherwise, the negative label.
Sigmoid \(\sigma(z) = \frac{1}{1+e^{-z}}\): a smooth step function
"sandwiched" between 0 and 1 vertically (never 0 or 1 mathematically)
monotonic, very nice/elegant gradient (see recitation/hw, and the sketch below)
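A minimal NumPy sketch of these properties; the helper names here are our own, for illustration:

```python
import numpy as np

def sigmoid(z):
    # smooth step function: output strictly between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    # the elegant gradient: sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)

z = np.linspace(-6, 6, 5)
print(sigmoid(z))       # monotonically increasing, never exactly 0 or 1
print(sigmoid_grad(z))  # strictly positive, peaks at z = 0
```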
\(\sigma\left(\cdot\right)\) is interpreted as the probability/confidence that feature \(x\) has the positive label. Predict positive if \(\sigma(\theta^\top x + \theta_0) > 0.5\).
e.g., suppose we want to predict whether to bike to school. With given parameters, how do we make a prediction? (Both the 1- and 2-feature cases are sketched in code below.)
1 feature:
2 features:
(image credit: Tamara Broderick)
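A minimal sketch of prediction with a given linear logistic classifier; the feature meanings and parameter values below are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, theta, theta_0):
    # linear logistic classifier: positive label iff sigma(theta . x + theta_0) > 0.5
    g = sigmoid(theta @ x + theta_0)
    return g > 0.5, g

# 1 feature (say, temperature in Celsius); illustrative parameters
print(predict(np.array([20.0]), np.array([0.5]), -8.0))

# 2 features (say, temperature and chance of rain)
print(predict(np.array([20.0, 0.7]), np.array([0.5, -6.0]), -8.0))
```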
training data: 😍 (positive labels), 🥺 (negative labels)
If \(y^{(i)} = 1\), the loss is \(-\log g^{(i)}\): small when the guess \(g^{(i)}\) is close to 1 (😍), large when it is close to 0 (🥺).
If \(y^{(i)} = 0\), the loss is \(-\log\left(1 - g^{(i)}\right)\): small when \(g^{(i)}\) is close to 0 (😍), large when it is close to 1 (🥺).
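Putting the two cases together gives the binary negative log-likelihood loss, \(-y \log g - (1-y)\log(1-g)\). A minimal sketch, with made-up guesses and labels:

```python
import numpy as np

def nll_loss(g, y):
    # per-example negative log-likelihood:
    #   -log(g)      if y = 1
    #   -log(1 - g)  if y = 0
    return -(y * np.log(g) + (1 - y) * np.log(1 - g))

guesses = np.array([0.9, 0.6, 0.2])  # predicted probabilities (illustrative)
labels  = np.array([1,   0,   0])    # true labels (illustrative)
print(nll_loss(guesses, labels))         # small when the guess matches the label
print(nll_loss(guesses, labels).mean())  # average loss over the training data
```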
(image adapted from Phillip Isola)
Two classes vs. \(K\) classes:
- true label \(y\): a scalar for two classes; a \(K\)-by-1 one-hot vector (\(\in \mathbb{R}^{K}\)) for \(K\) classes
- prediction \(g\): a scalar for two classes; a \(K\)-by-1 vector (\(\in \mathbb{R}^{K}\)) for \(K\) classes
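A minimal sketch of the \(K\)-by-1 one-hot label encoding; the class set and helper name are illustrative:

```python
import numpy as np

def one_hot(k, K):
    # K-by-1 label encoding: 1 at the true class index, 0 elsewhere
    y = np.zeros(K)
    y[k] = 1.0
    return y

classes = ["Fish", "Grizzly", "Chameleon"]           # illustrative class set, K = 3
print(one_hot(classes.index("Fish"), len(classes)))  # -> [1. 0. 0.], a vector in R^K
```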
(image adapted from Phillip Isola)
feature \(x\) \(\to\) current prediction \(g=\operatorname{softmax}(\cdot)\), compared against the true label \(y\) via the loss \(\mathcal{L}_{\text{nllm}}(g, y)=-\sum_{k=1}^{K} y_{k} \log \left(g_{k}\right)\)
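A minimal sketch of softmax and this multi-class negative log-likelihood loss, with made-up scores:

```python
import numpy as np

def softmax(z):
    # shift by max(z) for numerical stability; output is a distribution over K classes
    e = np.exp(z - np.max(z))
    return e / e.sum()

def nllm_loss(g, y):
    # multi-class negative log-likelihood: -sum_k y_k * log(g_k)
    return -np.sum(y * np.log(g))

z = np.array([2.0, 0.5, -1.0])  # illustrative scores for K = 3 classes
g = softmax(z)                  # current prediction, sums to 1
y = np.array([1.0, 0.0, 0.0])   # one-hot true label
print(g, nllm_loss(g, y))       # with one-hot y, loss reduces to -log(g_k) at the true class
```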
Image classification played a pivotal role in kicking off the current wave of AI enthusiasm.
We'd love to hear your thoughts.