linear regressor
Recall:
the regressor is linear in the feature \(x\)
\(y = \theta^{\top} x+\theta_0\)
[plot: \(y\) vs. features \(x_1\), \(x_2\); e.g. \(d=2\) features]
linear binary classifier (step-function based)
Recall:
separator
the separator is linear in the feature \(x\)
linear logistic classifier
\(g(x)=\sigma(z)=\sigma\left(\theta^{\top} x+\theta_0\right)\)
Recall:
separator
the separator is linear in the feature \(x\)
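To make the three recalled predictors concrete, here is a minimal NumPy sketch (the parameters \(\theta, \theta_0\) are assumed to be already learned; the function names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def linear_regressor(x, theta, theta_0):
    # y = theta^T x + theta_0
    return theta @ x + theta_0

def linear_classifier_step(x, theta, theta_0):
    # predict +1 on one side of the separator theta^T x + theta_0 = 0, -1 on the other
    return 1 if theta @ x + theta_0 > 0 else -1

def linear_logistic_classifier(x, theta, theta_0):
    # g(x) = sigma(theta^T x + theta_0), a value in (0, 1)
    return sigmoid(theta @ x + theta_0)
```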
Linear classification played a pivotal role in kicking off the first wave of AI enthusiasm.
Not linearly separable.
Linear tools cannot solve interesting tasks; more precisely, linear tools cannot, by themselves, solve interesting tasks.
XOR dataset
feature engineering 👉
👈 neural networks
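A quick sketch of the XOR point above (our own illustration, not from the slides): the four points are not linearly separable in the original coordinates, but one hand-engineered feature, the product \(x_1 x_2\), makes them separable.

```python
import numpy as np

# The four XOR points: label +1 iff exactly one coordinate is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, +1, +1, -1])

# No line theta^T x + theta_0 = 0 separates these labels in the original space.
# One hand-engineered fix: add the product feature phi_3 = x1 * x2.
Phi = np.column_stack([X, X[:, 0] * X[:, 1]])

# In the new space, e.g. theta = (1, 1, -2), theta_0 = -0.5 separates the classes:
theta, theta_0 = np.array([1.0, 1.0, -2.0]), -0.5
print(np.sign(Phi @ theta + theta_0))   # -> [-1.  1.  1. -1.], matching y
```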
original features \(x \in \mathbb{R}^{d}\)
new features \(\phi(x) \in \mathbb{R}^{d^{\prime}}\)
non-linear in \(x\)
linear in \(\phi\)
non-linear transformation \(\phi\)
Linearly separable in \(\phi(x) = x^2\) space
Not linearly separable in \(x\) space
transform via \(\phi(x) = x^2\)
Linearly separable in \(\phi(x) = x^2\) space, e.g. predict positive if \(\phi \geq 3\)
Non-linearly separated in \(x\) space, e.g. predict positive if \(x^2 \geq 3\)
transform via \(\phi(x) = x^2\)
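A tiny numeric sketch of this classifier; the sample inputs are ours, chosen only for illustration:

```python
def phi(x):
    return x ** 2          # the 1-D feature transformation above

def predict(x):
    # linear in phi (threshold phi >= 3), hence non-linear in x
    return +1 if phi(x) >= 3 else -1

for x in [-3.0, -1.0, 0.5, 2.0]:      # illustrative inputs
    print(x, predict(x))               # positive exactly when |x| >= sqrt(3)
```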
non-linear classification
training data
point | \(x\) | \(y\) |
p1 | -2 | 5 |
p2 | 1 | 2 |
p3 | 3 | 10 |
transform via \(\phi(x)=x^2\)
training data
point | \(\phi(x)=x^2\) | \(y\) |
p1 | 4 | 5 |
p2 | 1 | 2 |
p3 | 9 | 10 |
non-linear regression
\(y=\phi+1\)
training data
point | \(\phi(x)=x^2\) | \(y\) |
p1 | 4 | 5 |
p2 | 1 | 2 |
p3 | 9 | 10 |
\(y=\phi+1=x^{2}+1\)
training data
point | \(x\) | \(y\) |
p1 | -2 | 5 |
p2 | 1 | 2 |
p3 | 3 | 10 |
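A minimal sketch of this regression example, assuming an ordinary least-squares fit in \(\phi\) space on the three training points above:

```python
import numpy as np

# The three training points from the slide, in the original x space.
x = np.array([-2.0, 1.0, 3.0])
y = np.array([5.0, 2.0, 10.0])

# Transform via phi(x) = x^2, then fit an ordinary linear regressor in phi space.
phi = x ** 2
A = np.column_stack([phi, np.ones_like(phi)])       # [phi, 1] for slope and intercept
(theta, theta_0), *_ = np.linalg.lstsq(A, y, rcond=None)

print(theta, theta_0)    # -> 1.0, 1.0  i.e.  y = phi + 1 = x^2 + 1
```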
systematic polynomial feature construction
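A minimal sketch of such a construction for a 1-D input (for \(d\)-dimensional inputs the systematic construction also includes cross terms, as in, e.g., sklearn's PolynomialFeatures):

```python
import numpy as np

def polynomial_features(x, order):
    """Map a scalar x to (1, x, x^2, ..., x^order).

    A minimal 1-D sketch; the multi-dimensional case adds all cross terms
    up to the given order.
    """
    return np.array([x ** k for k in range(order + 1)])

print(polynomial_features(2.0, 3))   # -> [1. 2. 4. 8.]
```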
Underfitting
Appropriate model
Overfitting
high error on train set
high error on test set
low error on train set
low error on test set
low error on train set
high error on test set
Underfitting
Appropriate model
Overfitting
Similar overfitting can happen in classification
Using polynomial features of order 3
Quick summary:
Previously:
🧠⚙️
hypothesis class
loss function
hyperparameters
\(\left\{\left(x^{(i)}, y^{(i)}\right)\right\}_{i=1}^{n}\)
Linear Learning Algorithm
\(\left\{\left(\phi(x^{(i)}), y^{(i)}\right)\right\}_{i=1}^{n}\)
Linear Learning Algorithm
🧠⚙️
hypothesis class
loss function
hyperparameters
today, so far:
🧠⚙️
feature transformation \(\phi(x)\)
can we automate 👆?
i.e. fold it into the learning algorithm?
\(\left\{\left(x^{(i)}, y^{(i)}\right)\right\}_{i=1}^{n}\)
Outlined the fundamental concepts of neural networks:
expressiveness
efficient learning
layered structure
importantly, linear in \(\phi\), non-linear in \(x\)
transform via some appropriately weighted sum
👋 heads-up:
all neural network diagrams focus on a single data point
A neuron:
\(w\): what the algorithm learns
A neuron:
\(f\): what we engineers choose
\(z\): scalar
\(a\): scalar
Choose activation \(f(z)=z\)
learnable parameters (weights)
e.g. linear regressor represented as a computation graph
Choose activation \(f(z)=\sigma(z)\)
learnable parameters (weights)
e.g. linear logistic classifier represented as a computation graph
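Putting the two computation-graph examples together, a single neuron can be sketched as follows (illustrative values for \(x\), \(w\), \(w_0\); with the identity activation it is a linear regressor, with \(\sigma\) a linear logistic classifier):

```python
import numpy as np

def neuron(x, w, w0, f):
    # z: scalar pre-activation (built from the learnable weights), a = f(z): scalar output
    z = w @ x + w0
    return f(z)

identity = lambda z: z
sigmoid  = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, -2.0])          # a single data point (illustrative values)
w, w0 = np.array([0.5, 0.3]), 0.1  # learnable weights

print(neuron(x, w, w0, identity))  # linear regressor output
print(neuron(x, w, w0, sigmoid))   # linear logistic classifier output
```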
A layer:
learnable weights
A layer:
layer
linear combo
activations
A (fully-connected, feed-forward) neural network:
layer
input
neuron
learnable weights
Engineers choose:
hidden
output
some appropriately weighted sum
recall this example
\(f =\sigma(\cdot)\)
\(f(\cdot)\): identity function
\(-3(\sigma_1 +\sigma_2)\)
can be represented as:
e.g. forward-pass of a linear regressor
e.g. forward-pass of a linear logistic classifier
\(\dots\)
Forward pass: evaluate, given the current parameters
linear combination
loss function
(nonlinear) activation
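A minimal sketch of such a forward pass for a fully-connected, feed-forward network on a single data point (layer sizes and parameter values here are arbitrary, chosen only for illustration):

```python
import numpy as np

def forward_pass(x, weights, biases, activations):
    """Evaluate a fully-connected feed-forward net on a single data point x.

    weights[l], biases[l], activations[l] describe layer l; all names here
    are our own, chosen for the sketch.
    """
    a = x
    for W, b, f in zip(weights, biases, activations):
        z = W @ a + b        # linear combination with the current parameters
        a = f(z)             # (nonlinear) activation
    return a                 # the output; a loss would then compare it to y

relu = lambda z: np.maximum(z, 0.0)
identity = lambda z: z

# A tiny 2 -> 3 -> 1 network with arbitrary illustrative parameters.
weights = [np.ones((3, 2)), np.ones((1, 3))]
biases = [np.zeros(3), np.zeros(1)]
print(forward_pass(np.array([1.0, -0.5]), weights, biases, [relu, identity]))
```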
Hidden layer activation function \(f\) choices
\(\sigma\) used to be the most popular
very simple functional form (and so is its gradient).
nowadays, the default choice:
compositions of ReLU(s) can be quite expressive
asymptotically, can approximate any function! (for regression)
therefore can also give arbitrary decision boundaries! (for classification)
compositions of ReLU(s) can be quite expressive
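A small sketch of the two activations and of why composing ReLUs is expressive: weighted sums of shifted ReLUs are piecewise linear, and with enough pieces they can approximate a function arbitrarily well (an illustration, not a proof).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # gradient: sigmoid(z) * (1 - sigmoid(z))

def relu(z):
    return np.maximum(z, 0.0)          # gradient: 0 for z < 0, 1 for z > 0

# A weighted sum of shifted ReLUs is piecewise linear; stacking such pieces
# lets compositions of ReLUs trace out quite general shapes.
z = np.linspace(-2, 2, 5)
bump = relu(z + 1) - 2 * relu(z) + relu(z - 1)   # a little "tent" centered at 0
print(bump)   # -> [0. 0. 1. 0. 0.]
```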
output layer design choices
e.g., say \(K=5\) classes
input \(x\)
hidden layer(s)
output layer
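The slide only names the \(K\)-class setting; a common output-layer choice (our assumption here) is \(K\) units passed through a softmax, so the outputs form a distribution over classes:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; outputs are positive and sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

K = 5                                        # e.g. 5 classes, as above
z = np.array([2.0, 1.0, 0.1, -1.0, 0.5])     # illustrative output-layer pre-activations
p = softmax(z)
print(p, p.sum())                            # a probability distribution over the K classes
print(np.argmax(p))                          # predicted class: 0
```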
However, for neural networks, the precise relationship between model complexity and generalization remains an active area of research; see, for example, phenomena like the double-descent curve and scaling laws.
We'd love to hear your thoughts.