[Figure: Knowledge distillation. A pretrained Teacher Net and a Student Net both run on the Dataset; the Student's result (S's Result) is trained to match the Teacher's result (T's Result).]
[Figure: Distillation with labels. Same setup, but the Student learns from both the Teacher's result and the Ground Truth.]
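The two-term loss in the figures above can be sketched as follows. This is a minimal numpy sketch of Hinton-style distillation; the temperature `T`, the weight `alpha`, and the toy logits are illustrative choices, not values from the slides.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax, numerically stabilized
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft term: cross-entropy between the teacher's and the student's
    # temperature-softened distributions, scaled by T^2
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean() * T * T
    # Hard term: ordinary cross-entropy against the ground-truth labels
    p = softmax(student_logits, 1.0)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

With `alpha = 0` this reduces to plain supervised cross-entropy; with `alpha = 1` the student learns only from the teacher's soft targets.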
[Figure: Data-free distillation. The real Dataset is unavailable (???), so the pretrained Teacher and the Student are distilled on a Fake Dataset of synthetic, overlapped images.]
[Figure: Motivating example. A ResNet101 pretrained on a Large Dataset (ImageNet) is to be distilled into a ResNet18, but the original large dataset has been lost or cannot be shared for privacy reasons, so distillation must proceed without it.]
[Figure: Constructing transfer data from the teacher. A pretrained Teacher Net and an untrained copy Teacher Net' both process candidate inputs under a constraint (e.g. their batch-norm statistics must be the same); inputs on which the two results (T1's Result, T2's Result) satisfy the constraint are then used to distill knowledge into the Student Net.]
[Figure: Generator-based data-free distillation. A Generator produces fake images x' (standing in for real images x); the pretrained Teacher and the Student are trained on x' with both Feature Distillation (intermediate features) and Logits Distillation (T's Result vs. S's Result) losses.]
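The two distillation terms named in the figure can be sketched as below. This is a hedged numpy sketch: `teacher` and `student` are assumed to be callables returning `(features, logits)` on the generated batch, the L2 form of the logits term and the weight `beta` are illustrative, and the shape-matching adapter usually needed between student and teacher features is assumed away.

```python
import numpy as np

def logits_distillation(s_logits, t_logits):
    # Match the student's outputs to the teacher's (L2 here; a KL
    # divergence over softened softmaxes is the common alternative)
    return ((s_logits - t_logits) ** 2).mean()

def feature_distillation(s_feat, t_feat):
    # Match intermediate feature maps; in practice a 1x1 adapter is
    # needed when student/teacher channel counts differ
    return ((s_feat - t_feat) ** 2).mean()

def data_free_kd_loss(x_fake, teacher, student, beta=1.0):
    # Both distillation terms evaluated on generated images x'
    t_feat, t_logits = teacher(x_fake)
    s_feat, s_logits = student(x_fake)
    return logits_distillation(s_logits, t_logits) \
        + beta * feature_distillation(s_feat, t_feat)
```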
[Figure: Generator with channel selecting. The generated image x' passes through Conv 1 and Conv 2; a Channel Selecting module chooses which channels the network wants to know about (the "Question"/Query at each stage). Target: how to design the loss so the model learns about the residual? Each stage yields a Feature/Query pair (Feature1/Query1, Feature2/Query2) feeding Classification 1 and Classification 2, combined by addition: Dense(Feature 1 + Feature 2).]
[Figure: Batch discrimination. The Generator emits a batch of images x1, x2, ...; the per-stage queries (Query1, Query2) are again combined as Dense(Feature 1 + Feature 2), and a batch-discrimination term compares samples within the batch to keep the generated images diverse.]
[Figure: Multi-level questions. The generated image x' is classified at three feature levels (Feature lv 1 → Classification 1, Feature lv 2 → Classification 2, Feature lv 3 → Classification 3), with Identity connections between levels and separate Questions (2, 3) querying the deeper levels.]
[Figure: Two datasets, two tasks. Triangles form Dataset A and circles form Dataset B; fill color (blue / orange) defines Task A while border color (red / green) defines Task B. Classifier A separates blue from orange triangles; Classifier B separates red from green circles. The open question (?) is how each classifier transfers across distributions: Task A on Data B, and Task B on Data A.]
[Figure: Prototypical few-shot classification. For each class (e.g. Class 1, 5 shots), a shared Feature Extractor embeds every Support image into a latent, and the Mean of these latents is the class Prototype. A Query image is embedded by the same extractor (LatentQ) and assigned to the nearest prototype. Variant: an Attention Network scores each support latent against the query, and the Prototype Latent is the attention-weighted combination instead of the plain mean.]
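Both prototype variants from the figure fit in a few lines of numpy. This is a sketch under assumptions: distances are Euclidean, and the slides' Attention Network is replaced here by simple dot-product attention between query and support latents.

```python
import numpy as np

def prototype(support_latents):
    # Plain prototypical networks: prototype = mean of support embeddings
    return support_latents.mean(axis=0)

def attention_prototype(support_latents, query_latent):
    # Variant: weight each support latent by a softmax attention score
    # against the query (stand-in for the slides' Attention Network)
    scores = support_latents @ query_latent
    scores = scores - scores.max()
    w = np.exp(scores) / np.exp(scores).sum()
    return (w[:, None] * support_latents).sum(axis=0)

def classify(query_latent, protos):
    # Assign the query to the nearest prototype (Euclidean distance)
    d = ((protos - query_latent) ** 2).sum(axis=1)
    return int(d.argmin())
```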
[Figure: Evaluation protocol. A Feature Extractor pretrained on Mini-ImageNet / CIFAR-100 is reused; for each sampled task (e.g. classes 2, 0, 3, 5, 1, 4) only a New Classifier is trained for a few epochs (10 epochs per task), compared against training a model from scratch for 50 epochs.]
[Figure: Adversarial domain adaptation, GAN-style. A shared Feature Extractor embeds the labeled Source Domain and the unlabeled Target Domain (no label). A Label Predictor maps source features to logits and labels, while a Domain Classifier scores each feature as source (want → 1) or target (want → 0). Training alternates like a GAN: Step 1 trains the generator (feature extractor), Step 2 trains the discriminator (domain classifier). Can this be collapsed into a single step (?)]
Gradient Reversal Layer
[Figure: GRL mechanics. Forward pass: the GRL is the identity (input → ... → Output → Loss). Backward pass: it multiplies the incoming gradient by -1. Placed between the Feature Extractor and the Domain Classifier, the classifier above the GRL wants the domain loss to be small (classify source or target), while the extractor below it receives -grad and effectively wants that loss to be big (obfuscate source or target).]
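The sign flip is easiest to see in a hand-computed toy chain. This sketch uses a one-weight "feature extractor" and "domain classifier" with a squared loss; all values are illustrative, and the backward pass is written out manually to show where the GRL negates the gradient.

```python
def grl_forward(x):
    # Forward pass: the GRL is the identity
    return x

def grl_backward(grad, lam=1.0):
    # Backward pass: flip (and optionally scale) the gradient
    return -lam * grad

# Toy chain: feature extractor w1 -> GRL -> domain classifier w2
x, w1, w2 = 2.0, 0.5, 3.0
f = w1 * x           # feature
h = grl_forward(f)   # GRL: identity in the forward pass
y = w2 * h           # domain-classifier output
# loss = 0.5 * y**2; backpropagate by hand
dy = y
dh = w2 * dy              # gradient reaching the GRL from above
df = grl_backward(dh)     # sign flipped below the GRL
dw1 = df * x              # feature extractor *ascends* the domain loss
dw2 = dy * h              # domain classifier descends it as usual
```

Both modules then take an ordinary gradient-descent step, yet the feature extractor moves to increase the domain loss, which is exactly the one-step adversarial behavior.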
[Figure: DANN in one step. Same source/target architecture, but the Domain Classifier now sits behind a Gradient Reversal Layer (GRL + Domain Classifier). Label prediction (source score want → 1, target score want → 0) and adversarial domain confusion are trained jointly: only 1 step!]
[Figure: Robust steganography pipeline. An Encoder embeds a text Secret into an image; the image with secret is printed and corrupted by real-world noise; a Decoder then recovers the Secret from the re-captured real-world image for Identification.]
[Figure: Analogy with an autoencoder. Steganography: (image, text Secret) → Encoder → image with secret → Decoder → Secret. Autoencoder: image → Encoder → low-dimension latent → Decoder → reconstructed image. The image with secret plays the role of the latent code, except that it must also stay visually close to the cover image.]
[Figure: Training losses. As with cycle consistency in style transfer (image → Encoder → Decoder → reconstructed image, L1 loss), the stego pipeline uses an L1 consistency loss between the cover image and the image with secret, a cross-entropy consistency loss between the input Secret and the decoded Secret, and a Discriminator that tries to tell stego images from clean ones.]
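The two consistency terms can be sketched directly; the adversarial (discriminator) term is omitted here. Assumptions in this sketch: the secret is a bit string, the decoder outputs one logit per bit, and the binary cross-entropy form stands in for whatever cross-entropy variant the slides intend.

```python
import numpy as np

def image_consistency(cover, stego):
    # L1 loss: the stego image should stay visually close to the cover
    return np.abs(cover - stego).mean()

def secret_consistency(decoded_logits, secret_bits):
    # Per-bit binary cross-entropy between the decoder's logits and the
    # ground-truth secret bits (the "cross entropy" box in the figure)
    p = 1.0 / (1.0 + np.exp(-np.asarray(decoded_logits, dtype=float)))
    b = np.asarray(secret_bits, dtype=float)
    return -(b * np.log(p + 1e-12) + (1 - b) * np.log(1 - p + 1e-12)).mean()
```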
[Figure: Residual embedding. From (image, text Secret) the Encoder predicts a residual image; adding the residual to the cover image gives the image with secret.]
[Figure: Detection before decoding. After the stego image is printed and corrupted by real-world noise, a Detection stage first localizes the stamped region in the real-world image; the Decoder then runs Identification on the detected region.]
[Figure: Iterative encoding. The Encoder/Decoder pair is applied repeatedly: each output image' is fed back as the next input, iteratively, 10 times.]
[Figure: Strategy Network. Given the image, the environment, and the hyper-parameters, a Strategy Network chooses among classical embedding schemes (LSB, DCT, DWT, IWT) to produce the image with secret.]
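Of the classical schemes listed, LSB is the simplest to show concretely. A minimal sketch for `uint8` images (sequential embedding into the first pixels; real LSB schemes usually permute the embedding positions with a key):

```python
import numpy as np

def lsb_embed(image, bits):
    # Overwrite the least-significant bit of the first len(bits) pixels
    # with the secret bits; each pixel changes by at most 1
    flat = image.flatten()  # flatten() returns a copy, safe to mutate
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(image.shape)

def lsb_extract(stego, n_bits):
    # Read the secret back from the least-significant bits
    return (stego.flatten()[:n_bits] & 1).tolist()
```

The DCT/DWT/IWT schemes instead embed in transform coefficients, which is what makes a learned strategy over them interesting: robustness and visual impact differ per image and per environment.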
[Figure: Multiple decoders. One Encoder hides two secrets, secret text 1 ("OWO") and secret text 2 ("UMU"), in the same image; from the one image with secret, Decoder A recovers secret text 1 while Decoder B recovers secret text 2.]
[Figure: Strategy Network, objective. Among the candidate schemes (LSB, DCT, DWT, IWT), conditioned on the environment and hyper-parameters, find a strategy that least affects the original image.]
[Figure: Channel-wise multi-secret embedding. Encoders A, B, and C hide three secrets ("OWO", "QAQ", "UMU") as residuals of the R, G, and B channels of the image; combining the residuals with the cover yields a single image with secret.]
[Figure: Deployment in an online system. The Encoder stamps each image with a text Secret before it enters the Online System; every Query / Download returns an image with secret, so each circulated copy carries its own secret ("OWO", "UMU", "QAQ").]
[Figure: Joint optimization. Collected (image with secret, secret message) pairs are used to optimize the Encoder together with a Low-complexity Decoder.]
[Figure: Decoder complexity vs. forgery. With a Low-complexity Decoder, the watermark "Ah-Neng®" is recovered from the image and copyright is verified (Copyright OK!), but a Low-complexity Fake Decoder only yields garbage ("asda???ajta"). With a High-complexity Decoder the true watermark "Ah-Neng®" is still recovered, while a High-complexity Fake Decoder produces "127®"; whether this resolves the dispute is left open (?????). (Watermark "Ah-Neng®", by Ah-Neng®.)]
[Figure: Multi-task feature distillation. Teacher: a ResNet BackBone (Init Conv → Conv1 → Conv2 → Conv3 → Conv4, with downsampling) feeds per-task decoders (Dec → Bottleneck → Map) for Depth (L1 loss) and Semantic Segmentation (softmax). Student: the same backbone, but its intermediate blocks Conv2', Conv3', Conv4' are trained with feature-wise Distillation against the teacher's Conv2-Conv4 maps, in addition to the per-task decoder heads.]
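The student's total objective combines the per-task losses with the feature-matching term. A hedged numpy sketch, assuming matched feature-map shapes and using plain MSE for the feature-wise term (loss weights and the exact matching layers are choices, not slide values):

```python
import numpy as np

def feature_distillation_loss(student_feats, teacher_feats):
    # Sum of MSEs between matched intermediate maps
    # (Conv2' vs Conv2, Conv3' vs Conv3, Conv4' vs Conv4)
    return float(sum(((s - t) ** 2).mean()
                     for s, t in zip(student_feats, teacher_feats)))

def task_loss(depth_pred, depth_gt, seg_logits, seg_labels):
    # Depth head: L1 loss; segmentation head: softmax cross-entropy
    l1 = np.abs(depth_pred - depth_gt).mean()
    z = seg_logits - seg_logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    ce = -logp[np.arange(len(seg_labels)), seg_labels].mean()
    return float(l1 + ce)
```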
[Figure: Message-relay comic. 1. "Please pass this to the kitty: 'HI!'" 2. "HI!" 3. 4.]