On the Random
Subset Sum Problem
and Neural Networks
Emanuele Natale jointly with A. Da Cunha & L. Viennot
23 March 2023






Deep Learning on the Edge
[Timeline: Turing test (1950), 1st AI winter (1974–1980), 2nd AI winter (late 1980s–early 1990s), use of GPUs in AI (2011), today; image caption: "A hand lifts a cup"]

Today, most AI heavy lifting is done in the cloud due to the concentration of large data sets and dedicated compute, especially when it comes to the training of machine learning (ML) models. But when it comes to the application of those models in real-world inferencing near the point where a decision is needed, a cloud-centric AI model struggles. [...] When time is of the essence, it makes sense to distribute the intelligence from the cloud to the edge.
Roadmap
1. ANN pruning
2. The Strong Lottery Ticket Hypothesis
3. SLTH for CNNs
4. Neuromorphic hardware
Dense ANNs
Feed-forward homogeneous dense ANN $f$:
application of $\ell$ layers $N_i$ with
$N_i(x) = \sigma(W_i x)$,
ReLU activation $\sigma(x) = \max(0, x)$,
and weight matrices $W_i$, so that
$f(x) = \sigma(W_\ell\, \sigma(W_{\ell-1} \cdots \sigma(W_1 x)))$.
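A minimal NumPy sketch of this definition (the layer widths and the random weights are illustrative assumptions, not part of the talk):

import numpy as np

def relu(x):
    # ReLU activation sigma(x) = max(0, x), applied entrywise
    return np.maximum(0.0, x)

def dense_ann(x, weights):
    # f(x) = sigma(W_l sigma(W_{l-1} ... sigma(W_1 x)))
    for W in weights:
        x = relu(W @ x)
    return x

# Illustrative example: 3 layers with random Gaussian weights
rng = np.random.default_rng(0)
widths = [8, 16, 16, 4]   # input dim 8, output dim 4
Ws = [rng.standard_normal((widths[i + 1], widths[i])) for i in range(3)]
print(dense_ann(rng.standard_normal(8), Ws).shape)   # (4,)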

Compressing ANN
Matrix techniques
Quantization techniques
Pruning techniques
Neural Network Pruning
Blalock et al. (2020): iterative magnitude pruning is still the state-of-the-art pruning technique.
[Diagram: iterative magnitude pruning alternates training and pruning rounds: train → prune → train → prune → train]
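A schematic NumPy sketch of this train → prune → train loop; the train step is a placeholder and the 50% pruning fraction per round is an illustrative assumption:

import numpy as np

def magnitude_prune(W, mask, frac):
    # Zero out a further `frac` of the smallest-magnitude surviving weights.
    alive = W[mask]
    k = int(len(alive) * frac)
    if k > 0:
        threshold = np.sort(np.abs(alive))[k - 1]
        mask = mask & (np.abs(W) > threshold)
    return mask

def train(W, mask):
    # Placeholder for gradient-based training restricted to unpruned weights.
    return W * mask

# Illustrative loop: alternate training and pruning, as in the diagram above.
rng = np.random.default_rng(0)
W = rng.standard_normal((100, 100))
mask = np.ones_like(W, dtype=bool)
for _ in range(3):
    W = train(W, mask)
    mask = magnitude_prune(W, mask, frac=0.5)
print("density:", mask.mean())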
The Lottery Ticket Hypothesis
Frankle & Carbin (ICLR 2019):
Large random networks contain sub-networks that reach comparable accuracy when trained in isolation.
[Diagram: a large random network, after train & prune, yields a sparse good network; rewinding the surviving weights to their initial values gives the sparse "ticket" network, which trains to a good network, whereas a sparse random network trains to a bad one]
Roadmap
1. ANN pruning
2. The Strong Lottery Ticket Hypothesis
3. SLTH for CNNs
4. Neuromorphic hardware
The Strong LTH

Ramanujan et al. (CVPR 2020) find a good subnetwork without changing weights (train by pruning!)
A network with random weights contains sub-networks that approximate any given sufficiently smaller neural network, without any training.
Formalizing the SLTH
Random network $N_0$ with $h \cdot d$ parameters.
Target network $N_T$ with $d$ parameters that solves the task.
Lottery ticket: a sub-network $N_L \subseteq N_0$ that approximates $N_T$.
Proving the SLTH
Malach et al. (ICML 2020):
find a random weight close to $w$.
Idea: find patterns in the random network that are equivalent to sampling a weight until you get lucky.
Q: How many Uniform(−1,1) samples are needed to approximate $z$ up to $\epsilon$?
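A single Uniform(−1,1) sample falls within $\epsilon$ of $z$ with probability about $\epsilon$, so roughly $1/\epsilon$ samples are needed. A small simulation of this (the target z and the accuracy eps are illustrative):

import numpy as np

rng = np.random.default_rng(0)
eps, z = 0.01, 0.37          # illustrative target weight and accuracy
trials = []
for _ in range(2000):
    count = 0
    while True:
        count += 1
        if abs(rng.uniform(-1, 1) - z) <= eps:
            break
    trials.append(count)
# Each sample lands in the eps-window with probability ~eps, so ~1/eps samples.
print(np.mean(trials))       # roughly 1/eps = 100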
Malach et al.'s Idea
Suppose $x$ and all the $w_i'$ are positive; then
$y = \sum_i w_i \sigma(w_i' x) = \sum_i w_i w_i' x$.
For general $x$, use the ReLU trick $x = \sigma(x) - \sigma(-x)$:
$y = \sum_{i:\, w_i' \ge 0} w_i \sigma(w_i' x) + \sum_{i:\, w_i' < 0} w_i \sigma(w_i' x)$
$\;= \sum_{i:\, w_i' \ge 0} w_i w_i' x \,\mathbb{1}_{x \ge 0} + \sum_{i:\, w_i' < 0} w_i w_i' x \,\mathbb{1}_{x < 0}$.
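A quick NumPy check of this sign-splitting identity (the weights are illustrative random draws):

import numpy as np

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(0.0, t)

w  = rng.uniform(-1, 1, 10)     # outer weights w_i
wp = rng.uniform(-1, 1, 10)     # inner weights w'_i
for x in (-0.7, 0.4):
    lhs = np.sum(w * relu(wp * x))
    pos, neg = wp >= 0, wp < 0
    rhs = (np.sum(w[pos] * wp[pos]) * x * (x >= 0)
           + np.sum(w[neg] * wp[neg]) * x * (x < 0))
    print(np.isclose(lhs, rhs))   # True: the sums split by the sign of w'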
Better Bound for SLTH
(assume $x$ and the $w_i'$ are positive)
$y = \sum_i w_i w_i' x$
Pensia et al. (NeurIPS 2020)
Find a combination of random weights close to $w$
(an alternative approach appears in Orseau et al., NeurIPS 2020).
RSSP. Given $X_1, \dots, X_n$ i.i.d. random variables, with probability $1-\epsilon$, for each $z \in [-1, 1]$ find a subset $S \subseteq \{1, \dots, n\}$ such that $|z - \sum_{i \in S} X_i| \le \epsilon$.
Lueker '98. A solution exists with probability $1-\epsilon$ if $n = O(\log \frac{1}{\epsilon})$.
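A small experiment illustrating Lueker's result: with n proportional to log(1/eps) samples, subset sums already cover [−1, 1] up to eps (the choices n = 18 and eps = 0.05 are illustrative):

import numpy as np

rng = np.random.default_rng(0)
eps = 0.05
n = 18                      # ~ C * log(1/eps) samples, C an absolute constant
X = rng.uniform(-1, 1, n)

# Build all 2^n subset sums incrementally.
sums = np.zeros(1)
for x in X:
    sums = np.concatenate([sums, sums + x])

# Worst-case approximation error over a grid of targets z in [-1, 1].
zs = np.linspace(-1, 1, 1001)
worst = max(np.min(np.abs(sums - z)) for z in zs)
print(f"worst error = {worst:.4f} (target eps = {eps})")  # typically well below eps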
RSS - Proof Idea 1/2
If $n = O(\log \frac{1}{\epsilon})$, given $X_1, \dots, X_n$ i.i.d. random variables, with probability $1-\epsilon$, for each $z \in [-\frac{1}{2}, \frac{1}{2}]$ there is $S \subseteq \{1, \dots, n\}$ such that $|z - \sum_{i \in S} X_i| \le \epsilon$.
Let $f_t(z) = \mathbb{1}\big(z \in (-\frac{1}{2}, \frac{1}{2}) \text{ and } \exists S \subseteq \{1, \dots, t\} : |z - \sum_{i \in S} X_i| \le \epsilon\big)$;
then $f_t(z) = f_{t-1}(z) + (1 - f_{t-1}(z))\, f_{t-1}(z - X_t)$.
Observation: if we can approximate any $z \in (a, b)$ and we add $X'$ to the sample, then we can approximate any $z \in (a, b) \cup (a + X', b + X')$.

RSS - Proof Idea 2/2
Let $v_t = \int_{-1/2}^{1/2} f_t(z)\, dz$. For $z \in (-\frac{1}{2}, \frac{1}{2})$, $f_t(z) = f_{t-1}(z) + (1 - f_{t-1}(z))\, f_{t-1}(z - X_t)$, hence
$\mathbb{E}[v_t \mid X_1, \dots, X_{t-1}] = \int_{-1/2}^{1/2} f_{t-1}(z)\, dz + \mathbb{E}\Big[\int_{-1/2}^{1/2} (1 - f_{t-1}(z))\, f_{t-1}(z - X_t)\, dz \,\Big|\, X_1, \dots, X_{t-1}\Big]$
$= v_{t-1} + \frac{1}{2} \int_{-1}^{1} \Big[\int_{-1/2}^{1/2} (1 - f_{t-1}(z))\, f_{t-1}(z - x)\, dz\Big] dx$
$= v_{t-1} + \frac{1}{2} \int_{-1/2}^{1/2} (1 - f_{t-1}(z)) \Big[\int_{-1}^{1} f_{t-1}(z - x)\, dx\Big] dz$
$= v_{t-1} + \frac{1}{2} \int_{-1/2}^{1/2} (1 - f_{t-1}(z)) \Big[\int_{z-1}^{z+1} f_{t-1}(s)\, ds\Big] dz$
$= v_{t-1} + \frac{1}{2} \int_{-1/2}^{1/2} (1 - f_{t-1}(z)) \Big[\int_{-1/2}^{1/2} f_{t-1}(s)\, ds\Big] dz$
$= v_{t-1} + \frac{1}{2} (1 - v_{t-1})\, v_{t-1}$.
"Revisiting the Random Subset Sum problem" https://hal.science/hal-03654720/

Roadmap
1. ANN pruning
2. The Strong Lottery Ticket Hypothesis
3. SLTH for CNNs
4. Neuromorphic hardware
Convolutional Neural Network

The convolution of $K \in \mathbb{R}^{d \times d \times c}$ and $X \in \mathbb{R}^{D \times D \times c}$ is
$(K * X)_{i,j} = \sum_{i', j' \in [d],\, k \in [c]} K_{i', j', k} \cdot X_{i - i' + 1,\, j - j' + 1,\, k}$, for $i, j \in [D]$,
where $X$ is zero-padded.
A simple CNN $N : [0,1]^{D \times D \times c_0} \to \mathbb{R}^{D \times D \times c_\ell}$ is defined as
$N(X) = \sigma(K^{(\ell)} * \sigma(K^{(\ell-1)} * \sigma(\cdots * \sigma(K^{(1)} * X))))$,
where $K^{(i)} \in \mathbb{R}^{d_i \times d_i \times c_{i-1} \times c_i}$.
2D Discrete Convolution
If $K \in \mathbb{R}^{d \times d \times c_0 \times c_1}$ and $X \in \mathbb{R}^{D \times D \times c_0}$,
$(K * X)_{i, j, \ell} = \sum_{i', j' \in [d],\, k \in [c_0]} K_{i', j', k, \ell} \cdot X_{i - i' + 1,\, j - j' + 1,\, k}$, for $i, j \in [D]$, $\ell \in [c_1]$.
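A direct NumPy implementation of this formula, looping over output positions exactly as written (the shapes in the example are illustrative):

import numpy as np

def conv2d(K, X):
    # (K * X)[i, j, l] = sum_{i', j', k} K[i', j', k, l] * X[i - i', j - j', k]
    # with zero padding (out-of-range X entries count as 0), as in the slides.
    d, _, c0, c1 = K.shape
    D = X.shape[0]
    Xp = np.zeros((D + d - 1, D + d - 1, c0))
    Xp[d - 1:, d - 1:, :] = X          # pad "before" so that i - i' never underflows
    out = np.zeros((D, D, c1))
    for i in range(D):
        for j in range(D):
            # window of entries X[i - i', j - j', :] for i', j' = 0..d-1
            patch = Xp[i:i + d, j:j + d, :][::-1, ::-1, :]
            out[i, j, :] = np.tensordot(patch, K, axes=([0, 1, 2], [0, 1, 2]))
    return out

# Illustrative shapes: D = 5, d = 3, c0 = 2 input channels, c1 = 4 output channels
rng = np.random.default_rng(0)
K = rng.uniform(-1, 1, (3, 3, 2, 4))
X = rng.uniform(0, 1, (5, 5, 2))
print(conv2d(K, X).shape)              # (5, 5, 4)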

SLTH for Convolutional Neural Networks
Theorem (da Cunha et al., ICLR 2022).
Given $\epsilon, \delta > 0$, any CNN with $k$ parameters and $\ell$ layers, whose kernels have $\ell_1$ norm at most 1, can be approximated within error $\epsilon$ by pruning a random CNN with $O\big(k \log \frac{k \ell}{\min\{\epsilon, \delta\}}\big)$ parameters and $2\ell$ layers, with probability at least $1 - \delta$.
Proof Idea 1/2
For any $K \in [-1, 1]^{d \times d \times c \times 1}$ with $\|K\|_1 \le 1$ and $X \in [0, 1]^{D \times D \times c}$, we want to approximate $K * X$ with $V * \sigma(U * X)$, where $U$ and $V$ are tensors with i.i.d. Uniform(−1,1) entries.
Let $U$ be $d \times d \times c \times n$ and $V$ be $1 \times 1 \times n \times 1$.


Proof Idea 2/2
Prune the negative entries of $U$ so that $\sigma(U * X) = U * X$ (recall $X \ge 0$). Then
$V * \sigma(U * X) = V * (U * X) = L * X$, where $L_{i,j,k,1} = \sum_{t=1}^{n} V_{1,1,t,1} \cdot U_{i,j,k,t}$,
so each entry of $L$ is a subset sum of random products, and by the RSS theorem it can be pruned to approximate the corresponding entry of $K$.
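A small numeric check of this collapse, reusing the convolution sketch above: with the negative entries of U pruned and X ≥ 0, the two layers V and U compose into the single kernel L (the shapes are illustrative):

import numpy as np

def conv(K, X):
    # Zero-padded convolution as defined earlier: output has shape (D, D, c_out).
    d, _, c_in, c_out = K.shape
    D = X.shape[0]
    Xp = np.zeros((D + d - 1, D + d - 1, c_in))
    Xp[d - 1:, d - 1:, :] = X
    out = np.zeros((D, D, c_out))
    for i in range(D):
        for j in range(D):
            patch = Xp[i:i + d, j:j + d, :][::-1, ::-1, :]
            out[i, j, :] = np.tensordot(patch, K, axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
d, c, D, n = 3, 2, 6, 50
X = rng.uniform(0, 1, (D, D, c))            # non-negative input
U = rng.uniform(-1, 1, (d, d, c, n))
V = rng.uniform(-1, 1, (1, 1, n, 1))

U = np.where(U >= 0, U, 0.0)                # prune the negative entries of U
relu = lambda t: np.maximum(0.0, t)

lhs = conv(V, relu(conv(U, X)))             # V * sigma(U * X); sigma is inactive here
L = np.einsum('t,ijkt->ijk', V[0, 0, :, 0], U)[..., None]   # L[i,j,k,0] = sum_t V_t U[i,j,k,t]
rhs = conv(L, X)
print(np.allclose(lhs, rhs))                # True: the two layers collapse into kernel L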

Roadmap
1. ANN pruning
2. The Strong Lottery Ticket Hypothesis
3. SLTH for CNNs
4. Neuromorphic hardware
Reducing Energy: the Hardware Way




Letting Physics Do The Math
Ohm's law
To multiply $w$ and $x$, set $V_{\mathrm{in}} = x$ and $R = \frac{1}{w}$; then $I_{\mathrm{out}} = \frac{V_{\mathrm{in}}}{R} = w x$.
Resistive Crossbar Device
Analog matrix-vector multiplication (MVM) via crossbars of programmable resistances (sketch below).

Problem: Making precise programmable resistances is hard
Cf. ~10k FLOPs for a digital 100×100 MVM.
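In an ideal crossbar, the input is applied as row voltages and the weights are programmed as conductances, so the current collected on column $j$ is $I_j = \sum_i G_{i,j} V_i$ (Ohm + Kirchhoff). A minimal NumPy sketch of this analog MVM, with an illustrative ±10% device imprecision to show why precision is the bottleneck:

import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(0, 1, (100, 100))        # target weights, programmed as conductances
x = rng.uniform(0, 1, 100)               # input vector, applied as row voltages

ideal = W.T @ x                          # I_j = sum_i G[i, j] * V[i]

# Imprecise devices: each programmed conductance is off by a random factor.
noisy_G = W * rng.uniform(0.9, 1.1, W.shape)
noisy = noisy_G.T @ x
print(np.max(np.abs(noisy - ideal) / np.abs(ideal)))   # roughly a percent of relative error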
"Résistance équivalente modulable
à partir de résistances imprécises"
INRIA Patent deposit FR2210217
Leverage the noise itself to increase precision:
the RSS theorem yields a programmable effective resistance.

RSS in Practice


[Plot: bits of precision for any target value grow linearly with the number of resistances (worst case over 2.5k instances)]
Conclusions
RSS theory provides a fundamental tool for understanding the relation between noise and sparsification.
Open problem. SLTH for neural pruning, i.e. removing entire neurons in dense ANNs.
Thank you
GSSI 2023
By Emanuele Natale