What are adversarial examples?
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/7077078/circle-cropped.png)
Tiffany Souterre
@tiffanysouterre
Developer Relations @Microsoft
WTM Ambassador @WTM
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5442483/wtm-tagline-logo-hdr.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5442490/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/8921493/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5335424/panda.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5335426/noise.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5335429/gibbon.png)
(Ian J. Goodfellow, et al. 2015)
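The panda-to-gibbon example above comes from the fast gradient sign method (FGSM) introduced in that paper: nudge every pixel a tiny step ε in the direction that hurts the correct class most. A minimal sketch of a one-step FGSM-style attack in the Keras style used later in this deck; here `model`, the preprocessed `original_image` in [-1, 1], and the ImageNet index 388 for "giant panda" are all assumptions:

```python
import numpy as np
from keras import backend as K

true_class = 388  # assumed ImageNet index for "giant panda"

# Gradient of the true-class confidence with respect to the input pixels
confidence = model.output[0, true_class]
gradient = K.gradients(confidence, model.input)[0]
get_gradient = K.function([model.input, K.learning_phase()], [gradient])

epsilon = 0.007  # perturbation size: invisible to humans, fatal to the model
grad = get_gradient([original_image, 0])[0]
# Step *against* the panda class and keep pixels in the valid [-1, 1] range
adversarial_image = np.clip(original_image - epsilon * np.sign(grad), -1., 1.)
```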
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5439720/pasted-from-clipboard.png)
(Christian Szegedy, et al. 2016)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5439893/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5439895/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5442557/pasted-from-clipboard.png)
(Kurakin A., et al. 2017)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5439598/Selection_055.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5439599/Selection_056.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5439601/Selection_057.png)
(Mahmood Sharif, et al. 2016)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5442569/pasted-from-clipboard.png)
(Anish Athalye, et al. 2018)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5438210/Selection_053.png)
A 28 × 28 px image of a handwritten digit: each pixel holds an intensity between 0.00 and 1.00 (for example 0.38). The 784 pixel values feed the 784-neuron input layer, flow through the hidden layers, and reach an output layer of 10 neurons, one per digit (0–9).
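In Keras, the network sketched above is only a few lines. A minimal sketch; the hidden-layer sizes are assumptions, only the 784 inputs and 10 outputs come from the slide:

```python
from keras.models import Sequential
from keras.layers import Flatten, Dense

model = Sequential([
    Flatten(input_shape=(28, 28)),    # 28 x 28 pixels -> 784 input neurons
    Dense(128, activation="relu"),    # hidden layers (sizes assumed)
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),  # one output neuron per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```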
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5453259/spiral.1-2.2-2-2-2-2-2.gif)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5453306/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5453311/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5453315/pasted-from-clipboard.png)
Decision boundaries on the spiral dataset with increasing depth:
1 input layer + 1 output layer
1 input layer + 1 hidden layer + 1 output layer
1 input layer + 4 hidden layers + 1 output layer
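A hedged sketch of how those three models differ in Keras, only by the number of hidden layers; the width of 8 and the tanh activation are assumptions:

```python
from keras.layers import Dense, Input
from keras.models import Model

def spiral_classifier(n_hidden, width=8):
    inputs = Input(shape=(2,))        # input layer: the (x, y) coordinates
    x = inputs
    for _ in range(n_hidden):         # hidden layers, if any
        x = Dense(width, activation="tanh")(x)
    outputs = Dense(1, activation="sigmoid")(x)  # output layer: which spiral arm
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

shallow = spiral_classifier(0)  # 1 input layer, 1 output layer
medium = spiral_classifier(1)   # + 1 hidden layer
deep = spiral_classifier(4)     # + 4 hidden layers: enough to carve the spiral
```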
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5442710/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5362246/Comparison_of_models_top-5_error_rate_on_the_ImageNet_Challenge.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5362392/andrej_blog.png)
https://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5361335/inceptionv3onc--oview.png)
Inception-v3 Architecture
Trained for the ImageNet database (1000 classes)
Input: 299 × 299 × 3
import numpy as np
from keras.preprocessing import image
from keras.applications import inception_v3

# Load the image, resized to the 299 x 299 input Inception v3 expects
img = image.load_img("katoun.png", target_size=(299, 299))
input_image = image.img_to_array(img)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5361171/reduced-image.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5361148/katoun.png)
img.show()
# Scale pixel values from [0, 255] to the [-1, 1] range Inception v3 expects
input_image /= 255.
input_image -= 0.5
input_image *= 2.

# Add a batch dimension: (299, 299, 3) -> (1, 299, 299, 3)
input_image = np.expand_dims(input_image, axis=0)

# Run the pre-trained model and decode the top prediction
model = inception_v3.InceptionV3()
predictions = model.predict(input_image)
predicted_classes = inception_v3.decode_predictions(predictions, top=1)
imagenet_id, name, confidence = predicted_classes[0][0]
print("This is a {} with {:.4}% confidence!".format(name, confidence * 100))
[The image as three 299 × 299 matrices of pixel values, rows and columns indexed 0 … 298, one matrix per RGB channel.]
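After `np.expand_dims`, `input_image` carries exactly those three matrices plus a leading batch axis; a quick sanity check:

```python
print(input_image.shape)  # (1, 299, 299, 3): batch, rows, columns, RGB channels
```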
This is a tabby with 86.86% confidence!
Inception V3
import numpy as np
from keras.preprocessing import image
from keras.applications import inception_v3
from keras import backend as K
from PIL import Image

model = inception_v3.InceptionV3()
model_input_layer = model.layers[0].input
model_output_layer = model.layers[-1].output

object_type_to_fake = 910  # ImageNet class index to target ("wooden spoon" is assumed here)

# The model's confidence in the target class, and its gradient w.r.t. the input image
confidence_function = model_output_layer[0, object_type_to_fake]
gradient_function = K.gradients(confidence_function, model_input_layer)[0]
grab_confidence_and_gradients_from_model = K.function([model_input_layer, K.learning_phase()], [confidence_function, gradient_function])

# Start from the preprocessed cat image built in the prediction code above
original_image = input_image
hacked_image = np.copy(original_image)
learning_rate = 0.1

spoon_confidence = 0
while spoon_confidence < 0.98:
    spoon_confidence, gradients = grab_confidence_and_gradients_from_model([hacked_image, 0])
    # Gradient ascent on the target-class confidence...
    hacked_image += gradients * learning_rate
    # ...while keeping every pixel within ±0.1 of the original and inside [-1, 1]
    hacked_image = np.clip(hacked_image, original_image - 0.1, original_image + 0.1)
    hacked_image = np.clip(hacked_image, -1.0, 1.0)

# Undo the [-1, 1] preprocessing back to [0, 255] before saving
img = hacked_image[0]
img /= 2.
img += 0.5
img *= 255.
img = Image.fromarray(img.astype(np.uint8))
img.save("hacked-image.png")
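To confirm the attack, the saved file can be run back through the same prediction pipeline; a short sketch reusing the earlier loading and preprocessing steps:

```python
# Reload the adversarial image and classify it exactly like the original
img = image.load_img("hacked-image.png", target_size=(299, 299))
hacked = image.img_to_array(img)
hacked = np.expand_dims((hacked / 255. - 0.5) * 2., axis=0)

predictions = model.predict(hacked)
_, name, confidence = inception_v3.decode_predictions(predictions, top=1)[0][0]
print("This is a {} with {:.4}% confidence!".format(name, confidence * 100))
```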
Inception V3 confidence on the original image: tabby 0.87, spoon 0.00
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5361171/reduced-image.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5363307/katoun_spoon.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5361171/reduced-image.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5363309/katoun-pineapple.png)
This is a tabby with 86.86% confidence!
This is a spoon with 98.65% confidence!
This is a pineapple with 98.93% confidence!
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5443734/hacked-with-boundaries-white-to-pineapple.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5443744/hacked-with-boundaries-white-to-pineapple-saturated.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5443755/white.png)
Original white image
Hacked white image
Hacked white image (saturation boosted to reveal the perturbation)
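The same gradient-ascent loop produces the hacked white image above; only the starting point changes. A hedged sketch of that one change from the attack code:

```python
# A uniform white image: 255 in pixel space becomes 1.0 after [-1, 1] preprocessing
original_image = np.ones((1, 299, 299, 3), dtype=np.float32)
hacked_image = np.copy(original_image)
# ...then run the same while-loop as above until the pineapple confidence passes 0.98
```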
Generative Adversarial Networks (GANs)
[Diagram: a Generator produces fake images, while a Discriminator receives real images from the database and fakes from the generator, and must output 1 for real and 0 for fake.]
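A minimal sketch of that loop in Keras, on flattened 28 × 28 images; every layer size, learning rate, and the latent dimension here are assumptions:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU
from keras.optimizers import Adam

latent_dim = 100  # size of the random noise fed to the generator (assumed)

# Generator: noise -> fake image (flattened 28 x 28 pixels in [-1, 1])
generator = Sequential([
    Dense(256, input_dim=latent_dim), LeakyReLU(0.2),
    Dense(784, activation="tanh"),
])

# Discriminator: image -> probability that it is real (1 = real, 0 = fake)
discriminator = Sequential([
    Dense(256, input_dim=784), LeakyReLU(0.2),
    Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer=Adam(0.0002), loss="binary_crossentropy")

# Stacked model: trains the generator to make the discriminator answer "real"
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer=Adam(0.0002), loss="binary_crossentropy")

def train_step(real_images, batch_size=32):
    noise = np.random.normal(size=(batch_size, latent_dim))
    fake_images = generator.predict(noise)
    # Teach the discriminator: real images are labeled 1, generated ones 0
    discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # Teach the generator: it wins when its fakes are labeled 1 ("real")
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```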
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5444531/Selection_068.png)
(Goodfellow 2016)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5454089/pasted-from-clipboard.png)
Model-based optimization
4.5 years of GAN progress on face generation.
— Ian Goodfellow (@goodfellow_ian) January 15, 2019
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809750/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809636/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809668/1.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809695/20.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809694/19.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809693/18.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809691/16.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809692/17.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809690/15.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809689/14.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809688/13.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809687/12.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809686/11.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809685/10.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809683/9.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809682/8.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809680/7.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809678/6.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5809672/2.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5811040/27.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5811039/26.jpeg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5812549/pasted-from-clipboard.png)
(Egor Zakharov, et al. 2019)
Thank you!
@tiffanysouterre
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5442490/pasted-from-clipboard.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5439814/pasted-from-clipboard.png)
(Papernot, et al. 2016)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/842956/images/5442681/pasted-from-clipboard.png)
(Ian J. Goodfellow, 2016)
TensorFlow, there is no spoon
By Tiffany Souterre