But first, a detour
Algorithmically making "content"
Similar to the input in the distribution of "features"
https://github.com/mxgmn/WaveFunctionCollapse
Input:
Algorithm:
https://github.com/mxgmn/WaveFunctionCollapse
Catalog each tile's frequency of appearance and its compatibility with neighbouring tiles
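A minimal sketch of that cataloguing step, assuming the sample has already been cut into a 2-D grid of tile IDs (the implementation linked above also handles overlapping patterns, rotations and reflections):

from collections import Counter, defaultdict

def catalog(grid):
    # Count how often each tile appears and record which tiles are seen
    # next to which, per direction.
    freq = Counter()
    compat = defaultdict(set)
    h, w = len(grid), len(grid[0])
    for y in range(h):
        for x in range(w):
            tile = grid[y][x]
            freq[tile] += 1
            for dy, dx, direction in ((0, 1, "right"), (1, 0, "down")):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    compat[(tile, direction)].add(grid[ny][nx])
    return freq, compat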
https://github.com/mxgmn/WaveFunctionCollapse
Last week we talked about tensors, operations on tensors and autograd
Let's talk about some higher-level functionality now
A Module is the "Base class for all neural network modules"
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
They provide a bit of "magic" behind the scenes (registration of parameters)
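For example, the parameters created by the two nn.Conv2d layers above are registered automatically and can be enumerated (shapes shown assuming the Model class above):

model = Model()
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
# conv1.weight (20, 1, 5, 5)
# conv1.bias (20,)
# conv2.weight (20, 20, 5, 5)
# conv2.bias (20,)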
Generator from DC-GAN (Deep Convolutional GAN)
Loss functions model the problem you're trying to solve
import torch.nn as nn
import torch.nn.functional as F

model = Model(...)
predicted = model(inputs)
loss_fn = nn.MSELoss()   # pick whichever loss models your problem
loss = loss_fn(predicted, expected)
loss.backward()
The network's parameters hold their computed gradients after the backward() call.
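A hedged end-to-end example (toy shapes, and nn.MSELoss picked arbitrarily) showing that the gradients land on the parameters:

import torch
import torch.nn as nn

model = nn.Linear(4, 1)            # stand-in for a real model
inputs = torch.randn(8, 4)
expected = torch.randn(8, 1)

loss = nn.MSELoss()(model(inputs), expected)
loss.backward()
print(model.weight.grad.shape)     # torch.Size([1, 4]), filled in by backward()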
Optimizers optimize....
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # lr and friends are the optimizer's own parameters
... code to compute loss ...
loss.backward()
optimizer.step()
These handle the heavy lifting of optimization: you set up the gradients, and the optimizer adjusts the parameters.
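Putting Module, loss and optimizer together, a minimal training-step sketch (the toy model, data and learning rate are all made up for illustration):

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    inputs, expected = torch.randn(8, 4), torch.randn(8, 1)  # stand-in batch
    optimizer.zero_grad()              # clear gradients from the previous step
    loss = loss_fn(model(inputs), expected)
    loss.backward()                    # populate .grad on every parameter
    optimizer.step()                   # the optimizer adjusts the parameters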
Can we reproduce WFC-quality results with a generative network?
Generator (one per format; a sketch follows below)
  95 unique tiles: Convolutional, 128 -> (64, 64, 95)
  Flowers: Convolutional, 128 -> (64, 64, 3)
Discriminator (one per format)
  Reverse of the Generator, followed by a Dense layer
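The slides use the stock DC-GAN generator; the exact layer stack isn't reproduced here, so this is a hedged sketch of a DC-GAN-style generator mapping a 128-dimensional latent vector to a 64x64 output with a configurable channel count (95 for the tile format, 3 for Flowers). Note PyTorch is channels-first, so the output tensor is (N, C, 64, 64):

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=128, out_channels=95):
        super().__init__()
        def up(in_c, out_c):
            # Strided transposed convolution that doubles the spatial resolution
            return [nn.ConvTranspose2d(in_c, out_c, 4, stride=2, padding=1),
                    nn.BatchNorm2d(out_c),
                    nn.ReLU(inplace=True)]
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 512, 4, stride=1, padding=0),  # 1x1 -> 4x4
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            *up(512, 256),   # 4x4 -> 8x8
            *up(256, 128),   # 8x8 -> 16x16
            *up(128, 64),    # 16x16 -> 32x32
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),  # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))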
Generated 10,000 examples in both formats.
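A hedged sketch of how those pre-generated examples could be wrapped for training; TensorDataset and DataLoader are real PyTorch utilities, but the tensor shapes and batch size here are assumptions (the real dataset holds 10,000 samples per format):

import torch
from torch.utils.data import TensorDataset, DataLoader

# Placeholder tensors standing in for the WFC-generated samples,
# scaled to [-1, 1] to match a Tanh generator output.
tile_examples = torch.rand(256, 95, 64, 64) * 2 - 1
dataset = TensorDataset(tile_examples)
dataloader = DataLoader(dataset, batch_size=64, shuffle=True)

for data in dataloader:
    real_imgs = data[0]   # same data[0] indexing as in the training loop below
    break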
import torch
import torch.nn as nn

# Initialize generator and discriminator
generator = Generator()
discriminator = Discriminator()
generator.cuda()
discriminator.cuda()
lr = 0.0001
b1 = 0.5
b2 = 0.999
# Optimizers
optimizer_G = torch.optim.Adam(generator.parameters(), lr=lr, betas=(b1, b2))
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(b1, b2))
# Loss function
loss_fn = nn.BCELoss()
Generator from DC-GAN (Deep Convolutional GAN)
Discriminator from DC-GAN (Deep Convolutional GAN)
for epoch in range(epochs):
    for i, data in enumerate(dataloader, 0):
        real_imgs = data[0].float().to(0)
        # Train generator
        optimizer_G.zero_grad()
        # Sample noise as generator input
        z = torch.randn(real_imgs.shape[0], latent_dim, device=real_imgs.device)
        real_labels = torch.ones((real_imgs.shape[0],), requires_grad=False).to(0)
        fake_labels = torch.zeros((real_imgs.shape[0],), requires_grad=False).to(0)
        # Generate a batch of images
        gen_imgs = generator(z)
        # Loss measures the generator's ability to fool the discriminator
        g_loss = loss_fn(discriminator(gen_imgs).view(-1), real_labels)
        g_loss.backward()
        optimizer_G.step()
        # ...
for epoch in range(epochs):
    for i, data in enumerate(dataloader, 0):
        # ... previous slide, train generator
        # Train Discriminator
        optimizer_D.zero_grad()
        # Measure discriminator's ability to classify real from generated samples
        real_loss = loss_fn(discriminator(real_imgs).view(-1), real_labels)
        fake_loss = loss_fn(discriminator(gen_imgs.detach()).view(-1), fake_labels)
        d_loss = (real_loss + fake_loss) / 2
        d_loss.backward()
        optimizer_D.step()
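To compare against the WFC output you would sample the trained generator; a hedged sketch (the batch size of 16 and the argmax decoding for the tile format are assumptions):

with torch.no_grad():
    z = torch.randn(16, latent_dim).to(0)
    samples = generator(z)            # (16, C, 64, 64)
    # For the 95-channel tile format, take the most likely tile per cell:
    tile_ids = samples.argmax(dim=1)  # (16, 64, 64)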
Definitely worse...