Homomorphic encryption and applications to deep learning

featuring Alice, Bob, and Eve

People want to share information...

...but not with everybody

Generals most of all!

Caesar cipher

These Romans are crazy! (plaintext)

Qebpb Oljxkp xob zoxwv! (ciphertext)
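For concreteness, a minimal Python sketch of the cipher above; note the slide encrypts by shifting letters back by 3 (the classical Caesar cipher shifts forward):

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions, preserving case; leave other characters alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

cipher = caesar("These Romans are crazy!", -3)
print(cipher)               # Qebpb Oljxkp xob zoxwv!
print(caesar(cipher, 3))    # These Romans are crazy!
```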

We can use a shared key (Symmetric encryption)


Caveat: If Eve figures out the shared key, communication is compromised!

But using two keys is safer (Asymmetric encryption)

[Diagram: Alice encrypts with Bob's public key (PK); only Bob's secret key (SK) can decrypt.]

Rivest-Shamir-Adleman (RSA)

MIT, 1978
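A textbook-sized sketch of RSA key generation and use, with deliberately tiny, insecure primes (real deployments use 2048-bit moduli and padding; requires Python ≥ 3.8 for `pow(e, -1, phi)`):

```python
from math import gcd

# Toy key generation (textbook RSA; parameters far too small to be secure)
p, q = 61, 53
n = p * q                   # public modulus: 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent; PK = (n, e)
assert gcd(e, phi) == 1
d = pow(e, -1, phi)         # secret exponent; SK = d = 2753

def encrypt(m):             # anyone holding PK can encrypt
    return pow(m, e, n)

def decrypt(c):             # only the holder of SK can decrypt
    return pow(c, d, n)

m = 65
c = encrypt(m)
assert decrypt(c) == m
```

Incidentally, textbook RSA is itself multiplicatively homomorphic: \( enc(m_1) \cdot enc(m_2) \bmod n = enc(m_1 m_2) \). That property is exactly the theme of the next slides.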

Often we do not just want to send data, but also to process it

Local computing is not always viable...

  • No money for GPUs.
  • Too much noise.
  • Electricity bill.
  • No space for servers.

...but cloud computing raises privacy concerns

  • Bob sees my data.
  • Is Bob careful enough?
  • Is the channel secure?


Quick math (and Greek) course: homomorphism

   ὁμός (homós) = "same"    (watch the accent!)

+ μορφή (morphé) = "form", "shape"

In plain English: "structure-preserving"

\( f(x*y) = f(x)*f(y) \)
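A familiar non-cryptographic instance: \( \exp \) maps \( (\mathbb{R}, +) \) to \( (\mathbb{R}_{>0}, \times) \) and preserves the structure, \( e^{x+y} = e^x \cdot e^y \):

```python
import math

x, y = 1.5, 2.25
assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y))  # f(x+y) == f(x)*f(y)
```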

Homomorphic Encryption (HE)

[Diagram: Alice encrypts her data with \( enc(\cdot) \); Bob applies \( f(\cdot) \) directly to the ciphertext; Alice decrypts the result with \( dec(\cdot) \) and obtains \( f(x) \), as if \( f \) had been computed on the plaintext.]

HE types

  • Fully Homomorphic Encryption (FHE)
    • Supports arbitrary number of operations (bootstrapping).
    • Very slow.
  • Leveled Homomorphic Encryption (LHE)
    • Faster.
    • Must know depth/complexity of circuit in advance.
  • Partially Homomorphic Encryption (PHE)
    • Faster.
    • More secure than FHE for equal runtime budget.
    • Supports only one operation, addition or multiplication (e.g., Paillier encryption is additively homomorphic; see the sketch after this list).
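A minimal, deliberately insecure sketch of Paillier with the common \( g = n + 1 \) simplification (Python ≥ 3.9 for `math.lcm` and modular inverses via `pow`), showing the additive homomorphism: a product of ciphertexts decrypts to the sum of the plaintexts.

```python
import random
from math import gcd, lcm

# Toy keypair: n is public, lam and mu are private; primes far too small to be secure
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                # with g = n + 1, mu = lam^{-1} mod n

def encrypt(m):
    r = random.randrange(1, n)      # fresh randomness per encryption
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (1 + m * n) * pow(r, n, n2) % n2   # (n+1)^m = 1 + m*n (mod n^2)

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(7), encrypt(35)
assert decrypt(c1 * c2 % n2) == 42        # ciphertext product -> plaintext sum
assert decrypt(pow(c1, 6, n2)) == 42      # scalar multiply: enc(m)^k = enc(k*m)
```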

That's cool, but can I do deep learning with it?

Not all operations are supported

1. Convolution = addition and multiplication

2. Max-pooling replaced by average pooling

3. Non-linearity (ReLU, tanh): polynomial approximation (see the fit sketched after this list):

\( ReLU(x) \approx \sum_{i=1}^{N} c_i P_i(x) \)      Slow for degree > 2

4. Other operations?
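For intuition about point 3, a small numpy sketch that least-squares-fits a low-degree polynomial to ReLU; the degree and the interval \([-4, 4]\) are illustrative assumptions:

```python
import numpy as np

xs = np.linspace(-4, 4, 1000)
relu = np.maximum(xs, 0)
coeffs = np.polyfit(xs, relu, deg=2)    # degree-2 fit: c2*x^2 + c1*x + c0
approx = np.polyval(coeffs, xs)

print("coefficients:", coeffs)
print("max abs error:", np.abs(approx - relu).max())
# Higher degrees fit better but deepen the multiplication circuit -- hence "slow for degree > 2".
```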

Cryptosystems support integers

Solution: approximate model parameters (and inputs) with integers, e.g., via fixed-point scaling, before encryption; decode back to floats after decryption.
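A minimal fixed-point sketch of that encode/decode round trip; the scale factor \( 2^8 \) is an illustrative assumption:

```python
SCALE = 2 ** 8                          # fixed-point precision (illustrative)

def encode(x):                          # float -> integer, done before encryption
    return round(x * SCALE)

w, x = 0.3712, -1.25
prod_int = encode(w) * encode(x)        # the integer product a server would compute
prod = prod_int / SCALE ** 2            # decode after decryption: one SCALE per factor
print(prod, w * x)                      # ~-0.4639 vs -0.4640
```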

Only inference is supported

  • (Pre)Training on plaintext data.
  • Training is unstable because of approximation errors.

We may want to keep the model private too!

GELU-Net highlights

  1. Paillier encryption

  2. Split into linear and non-linear components and distribute computations to non-colluding parties.

  3. No approximation of ReLUs.

  4. Privacy-preserving backpropagation.

[Diagram: the inputs stay globally encrypted on the server side as \( enc(x_1), enc(x_2), enc(x_3) \), where the linear part \( w \cdot enc(x_i) \) is computed on ciphertexts; the client side is locally unencrypted, holding \( x_1, x_2, x_3 \), decrypting the pre-activations, and applying the exact non-linearity \( ReLU(dec(w \cdot enc(x_i))) \).]

Rinse and repeat for next layer
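Putting the pieces together, a toy single-layer simulation of the globally-encrypted / locally-unencrypted split, reusing the insecure Paillier sketch from above (all parameters illustrative; Python ≥ 3.9):

```python
import random
from math import gcd, lcm

# Toy Paillier, g = n + 1 (insecure parameters, illustration only)
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (1 + (m % n) * n) * pow(r, n, n2) % n2

def decrypt(c):
    m = (pow(c, lam, n2) - 1) // n * mu % n
    return m - n if m > n // 2 else m   # centered decoding recovers negative values

# Data owner: inputs stay globally encrypted
xs = [3, -2, 5]
cts = [encrypt(x) for x in xs]

# Model owner: linear part on ciphertexts only -- enc(x)^w = enc(w*x),
# so the plaintext inputs are never seen here
w = -4
lin_cts = [pow(c, w % n, n2) for c in cts]

# Data owner: locally unencrypted -- decrypt and apply the exact ReLU
activations = [max(decrypt(c), 0) for c in lin_cts]
print(activations)                      # ReLU(w * x_i) = [0, 8, 0]
```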

Not as slow as it sounds

Scheme       Communication   Crypto   Activation   Total
Square       0               0        90.6         90.6
5-th order   0               0        1619.6       1619.6
GELU-Net     5               3.7      0.2          8.9

Computation time of activation (ms)

Architecture   Time (s)                 Accuracy
GELU-Net       126.7 (15 ms/image)      0.989
CryptoNets     3009.6 (367 ms/image)    0.967

Computation time for LeNet on MNIST (8192-image batch)

Privacy-preserving training

Potential contributions

  1. Better encryption schemes.

  2. Improve performance for single (non-batched) inputs.

  3. Exploit advances on binary neural networks (BNNs) [1].

  4. Optimize privacy-preserving training.

Resources

Homomorphic Encryption

By Stavros Tsogkas


Homomorphic Encryption and applications to Neural Networks
