Domain Adaptation

Daniel Yukimura

Nov 7th, 2019

through adversarial training

Domain Adaptation

Has deep learning solved all our vision problems?

Domain Adaptation

Different Domains | Same Task

Domain Adaptation

Source data: (\mathcal{X}_s, P_s), \qquad D_s = \{ (x_i, y_i)\}_{i = 1}^{N_s}

Target data: (\mathcal{X}_t, P_t), \qquad D_t = \{ (\tilde{x}_j, \cdot)\}_{j = 1}^{N_t}

Domain Adaptation (DA)

Scenarios:

  • Homogeneous DA: \mathcal{X}_s = \mathcal{X}_t (with P_s \neq P_t) vs. Heterogeneous DA: \mathcal{X}_s \neq \mathcal{X}_t

  • Supervised, Semi-Supervised, or Unsupervised: target labels are fully, partially, or never observed, e.g. D_t = \{ (\tilde{x}_j, \cdot)\}_{j = 1}^{N_t} in the unsupervised case

  • One-step or Multi-step DA


Today's approach: Adversarial-based Domain Adaptation

A Review on GANs & Adversarial Training

Generator Functions:

A generator maps a known, easy-to-sample distribution to the data distribution on the feature space:

g:\mathcal{Z} \rightarrow \mathcal{X}, \qquad X \overset{d}{=} g(Z)
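As a small numerical sketch (not from the slides), the inverse-CDF transform is a classical generator: pushing a uniform variable Z through the inverse CDF of a target distribution yields samples from that distribution. Here the target is Exponential(1), whose inverse CDF is g(z) = -log(1 - z).

```python
import numpy as np

rng = np.random.default_rng(0)

# Known base distribution: Z ~ Uniform(0, 1).
z = rng.uniform(size=100_000)

# Generator g = inverse CDF of Exponential(1), so X = g(Z) ~ Exponential(1).
def g(z):
    return -np.log1p(-z)

x = g(z)

# Exponential(1) has mean 1 and variance 1, which the samples should match.
sample_mean, sample_var = x.mean(), x.var()
```

Deep generators replace this closed-form g with a trained neural network, but the principle is the same: sample from something simple, transform into something complex.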

A Review on GANs and Adversarial Training

Adversarial Training: 

Design a game between machines where the equilibrium solves a learning problem.

 

GANs:

  • Generator:
  • Discriminator: ​
  • Game:

 

G: \mathcal{Z} \rightarrow \mathcal{X}
D: \mathcal{X} \rightarrow [0,1]
\arg\!\min_{G}\max_{D} \mathbb{E}_{X\sim p_{data}} [\log{D(X)}] + \mathbb{E}_{Z\sim p_Z}[\log\left(1-D(G(Z))\right)]
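A quick numerical sanity check of this objective (my addition, assuming NumPy): at the equilibrium of the game the generator matches p_data, the optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)) is constant 1/2, and the value of the game is -log 4.

```python
import numpy as np

rng = np.random.default_rng(0)

def gan_value(x_data, x_gen, D):
    """Monte Carlo estimate of E[log D(X)] + E[log(1 - D(G(Z)))]."""
    return np.mean(np.log(D(x_data))) + np.mean(np.log(1.0 - D(x_gen)))

# Generator already matches the data distribution (both standard normal).
x_data = rng.normal(size=10_000)
x_gen = rng.normal(size=10_000)

# Optimal discriminator is constant 1/2 when p_g = p_data.
D_opt = lambda x: np.full_like(x, 0.5)

v = gan_value(x_data, x_gen, D_opt)   # log(1/2) + log(1/2) = -log 4
```

Because D is constant here, the estimate equals -log 4 exactly, independent of the samples.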

Adversarial Domain Adaptation

Ref: Adversarial Discriminative Domain Adaptation - Tzeng et al. 2017

Adversarial Domain Adaptation

Idea: Consider an intermediate feature space \mathcal{R}, a common representation for both domains.

 

 

Train a classifier on the labeled source data, passing through the representation space

 

 

Play a game between the maps and a discriminator

M_s: \mathcal{X}_s \rightarrow \mathcal{R}
M_t: \mathcal{X}_t \rightarrow \mathcal{R}
C: \mathcal{R} \rightarrow \mathcal{Y}
D: \mathcal{R} \rightarrow [0,1]
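To make the four maps concrete, here is a shape-level sketch in NumPy (my illustration; the dimensions are hypothetical, and in ADDA each map would be a deep network rather than a single linear layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: source input, target input, shared representation, classes.
d_s, d_t, r, K = 64, 64, 32, 10

# Linear stand-ins for the four maps.
W_ms = rng.normal(scale=0.1, size=(d_s, r))   # M_s: X_s -> R
W_mt = rng.normal(scale=0.1, size=(d_t, r))   # M_t: X_t -> R
W_c  = rng.normal(scale=0.1, size=(r, K))     # C:   R   -> Y
w_d  = rng.normal(scale=0.1, size=(r,))       # D:   R   -> [0, 1]

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

x_s = rng.normal(size=(8, d_s))               # a source batch
x_t = rng.normal(size=(8, d_t))               # a target batch

rep_s, rep_t = x_s @ W_ms, x_t @ W_mt         # both land in the shared space R
probs = softmax(rep_s @ W_c)                  # classifier: source labels available
d_out = sigmoid(rep_t @ w_d)                  # discriminator: P(came from source)
```

The key structural point is that M_s and M_t map different inputs into the same space \mathcal{R}, so C and D can operate on either domain's representations.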

Adversarial Domain Adaptation

Pre-training:

\min\limits_{M_s, C} - \mathbb{E}_{P_s} \left[ \sum\limits_{k=1}^K \mathbb{1}_{[y_s = k]} \log C_k\left( M_s(x_s) \right)\right]
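This is the standard cross-entropy loss: the indicator selects the predicted probability of the true class. A minimal NumPy version over a batch (my sketch, assuming the classifier already outputs probabilities):

```python
import numpy as np

def pretrain_loss(probs, y):
    """Cross-entropy  -E[ sum_k 1[y_s = k] log C(M_s(x_s)) ]  over a batch.

    probs: (N, K) classifier outputs C(M_s(x_s)); y: (N,) integer labels.
    """
    N, K = probs.shape
    one_hot = np.eye(K)[y]                      # the indicator 1[y_s = k]
    return -np.mean(np.sum(one_hot * np.log(probs), axis=1))

# A classifier putting probability 0.9 on the true class has loss -log 0.9.
probs = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
y = np.array([0, 1])
loss = pretrain_loss(probs, y)
```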

Adversarial Domain Adaptation

Adversarial Adaptation (First turn)

\min\limits_{D} - \mathbb{E}_{P_s} \left[ \log D\left( M_s(x_s) \right) \right] - \mathbb{E}_{P_t} \left[ \log\left(1 - D\left( M_t(x_t)\right) \right) \right]
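In this turn the discriminator is trained, with M_s and M_t held fixed, to label source representations 1 and target representations 0. A NumPy sketch of the loss (my illustration):

```python
import numpy as np

def disc_loss(d_src, d_tgt):
    """Discriminator objective: -E[log D(M_s(x_s))] - E[log(1 - D(M_t(x_t)))].

    d_src / d_tgt: discriminator outputs in (0, 1) on source / target batches.
    """
    return -np.mean(np.log(d_src)) - np.mean(np.log(1.0 - d_tgt))

# A maximally confused discriminator outputs 1/2 everywhere, giving 2 log 2.
d = np.full(16, 0.5)
loss_confused = disc_loss(d, d)
```

Note the role reversal relative to a GAN: here the "generator" being discriminated against is the target mapping M_t, and the "real" samples are source representations.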

Adversarial Domain Adaptation

Adversarial Adaptation (Second turn)

\min\limits_{M_t} - \mathbb{E}_{P_t} \left[ \log D\left( M_t(x_t) \right) \right]
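In the second turn only M_t is updated, using the inverted-label (non-saturating) GAN loss: rather than maximizing the discriminator's error on the standard objective, M_t directly minimizes -E[log D(M_t(x_t))], i.e. it pushes the discriminator to call target representations "source". A NumPy sketch (my illustration):

```python
import numpy as np

def target_map_loss(d_tgt):
    """Target-mapping objective: -E[log D(M_t(x_t))].

    d_tgt: discriminator outputs in (0, 1) on target representations.
    The loss falls as the discriminator is fooled into outputting 1.
    """
    return -np.mean(np.log(d_tgt))

# When the discriminator is fooled (outputs near 1), the loss is near 0;
# when it confidently detects the target domain, the loss is large.
loss_fooled = target_map_loss(np.full(16, 0.99))
loss_detected = target_map_loss(np.full(16, 0.01))
```

Alternating these two turns drives the target representations M_t(x_t) toward the source representation distribution, after which the pre-trained classifier C can be applied to target data.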