Cristóbal Silva
Encoder
Decoder
Low Dimensional
Representation
Input
Output
Just an MLP where
input size = output size
Encoder
Transposed Encoder
Low Dimensional
Representation
Input
Output
Half the parameters!
import tensorflow as tf
w_encode_1 = tf.Variable(w_encode_1_init, dtype=tf.float32)
w_encode_2 = tf.Variable(w_encode_2_init, dtype=tf.float32)
w_decode_2 = tf.transpose(w_encode_2) # tied weights
w_decode_1 = tf.transpose(w_encode_1) # tied weights
TensorFlow
import torch

# torch.autograd.Variable is deprecated; plain tensors with
# requires_grad=True serve the same purpose
w_encode_1 = torch.tensor(w_encode_1_init, requires_grad=True)
w_encode_2 = torch.tensor(w_encode_2_init, requires_grad=True)
w_decode_2 = w_encode_2.t()  # tied weights
w_decode_1 = w_encode_1.t()  # tied weights
# alternative: reuse the encoder layers' weights inside forward()
def forward(self, x):
    x = self.encode_1(x)
    x = self.encode_2(x)
    x = F.linear(x, weight=self.encode_1.weight.t())  # tied weights
    x = F.linear(x, weight=self.encode_2.weight.t())  # tied weights
    return x
PyTorch
Note: biases are never tied or regularized
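As a framework-agnostic illustration (a minimal NumPy sketch, not from the slides; layer sizes are made up), the decoder can simply reuse the transposed encoder matrices, halving the weight count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoder weights: input (8) -> hidden (4) -> code (2)
w_encode_1 = rng.normal(size=(8, 4))
w_encode_2 = rng.normal(size=(4, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Encoder
    h = sigmoid(x @ w_encode_1)
    code = sigmoid(h @ w_encode_2)
    # Decoder reuses the transposed encoder weights (tied weights)
    h_dec = sigmoid(code @ w_encode_2.T)
    out = sigmoid(h_dec @ w_encode_1.T)
    return out

x = rng.normal(size=(1, 8))
y = forward(x)
assert y.shape == x.shape  # input size = output size
```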
Encoder
Decoder
Low Dimensional
Representation
Corrupted Input
Clean Output
Better reconstruction!
Adding noise
Shutting down nodes
A denoising autoencoder prevents neurons from colluding with each other: each neuron (or small group of neurons) is forced to do its best at reconstructing the input
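The two corruption strategies above can be sketched in NumPy (a minimal illustration, not from the slides; the noise level and drop probability are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # a small batch of clean inputs

# Strategy 1: adding noise (Gaussian) to the input
noise_level = 0.1
x_noisy = x + noise_level * rng.normal(size=x.shape)

# Strategy 2: shutting down nodes (dropout-style masking of input units)
drop_prob = 0.3
mask = rng.random(x.shape) >= drop_prob
x_masked = x * mask

# In training, the autoencoder receives x_noisy or x_masked as input
# but is penalized for reconstruction error against the CLEAN x
```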
Encoder
Decoder
Low Dimensional
Representation
Input
Output
Efficient representation!
(*) Avoid small batches, or the estimated mean activation will not be accurate
def kl_divergence(p, q):
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X)) # MSE
sparsity_loss = tf.reduce_sum(kl_divergence(sparsity_target, hidden_mean))
loss = reconstruction_loss + sparsity_weight * sparsity_loss
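To see what the sparsity penalty measures, here is a NumPy mirror of the KL term (a sketch with made-up activation values, not from the slides): the penalty is zero when a hidden unit's mean activation hits the target and grows as it drifts away.

```python
import numpy as np

def kl_divergence(p, q):
    # KL divergence between two Bernoulli distributions with means p and q
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

sparsity_target = 0.1
# Assumed per-unit mean activations over a batch (illustrative values)
hidden_mean = np.array([0.1, 0.3, 0.05])

sparsity_loss = np.sum(kl_divergence(sparsity_target, hidden_mean))
# The first unit matches the target and contributes zero;
# the other two are penalized for being too active / too quiet
assert kl_divergence(0.1, 0.1) == 0.0
assert sparsity_loss > 0.0
```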
Encoder
Decoder
Input
Output
Generative model!
Low Dimensional
Representation
Penalizing this term ensures that the codings are close to a unit Gaussian.
This is useful because we only need to sample from \( \mathcal{N}(0, 1) \) and pass the samples through the decoder network to generate from \( P(X \mid z) \)
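Generation then reduces to the following recipe (a toy NumPy sketch, not from the slides; the decoder weights here are random placeholders standing in for a trained decoder):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy decoder: code (2) -> hidden (4) -> output (8); weights are placeholders
w_dec_1 = rng.normal(size=(2, 4))
w_dec_2 = rng.normal(size=(4, 8))

def decode(z):
    h = sigmoid(z @ w_dec_1)
    return sigmoid(h @ w_dec_2)

# Generate: sample codes z ~ N(0, 1), then pass them through the decoder
z = rng.standard_normal(size=(5, 2))
samples = decode(z)
assert samples.shape == (5, 8)
```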
Stacked Autoencoder
https://github.com/ageron/handson-ml/blob/master/15_autoencoders.ipynb
https://github.com/GunhoChoi/Kind-PyTorch-Tutorial