For a function \( f : \mathcal{D} \to \mathbb{R}^k \)
on datasets \( D_1, D_2 \) differing in at most one element,
the sensitivity is \( \Delta f = \max_{D_1, D_2} \|f(D_1) - f(D_2)\| \)
The sensitivity of the function g (the pre-noise layers) is the maximum change in output (measured in the q-norm) that can be produced by a bounded change in the input (measured in the p-norm).
A larger bound L means robustness against larger attacks, which in turn requires a larger noise standard deviation.
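As a concrete illustration of this trade-off (a sketch using the classical Gaussian mechanism of Dwork & Roth, not a construction from these slides): the noise standard deviation is calibrated as \( \sigma = \Delta \sqrt{2\ln(1.25/\delta)}/\epsilon \), so a larger sensitivity \( \Delta \) directly forces a larger \( \sigma \).

```python
import math

def gaussian_mechanism_sigma(sensitivity: float, epsilon: float, delta: float) -> float:
    """Noise std for the classical (epsilon, delta)-DP Gaussian mechanism
    (Dwork & Roth): sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

# Doubling the sensitivity (covering larger input changes)
# doubles the required noise standard deviation.
sigma_small = gaussian_mechanism_sigma(sensitivity=1.0, epsilon=1.0, delta=1e-5)
sigma_large = gaussian_mechanism_sigma(sensitivity=2.0, epsilon=1.0, delta=1e-5)
```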
On measures of entropy and information.
Rényi, A.
\( D_{\lambda}(P\|Q) := \dfrac{1}{\lambda-1}\log \mathbb{E}_{x \sim Q}\left[\left(\dfrac{P(x)}{Q(x)}\right)^{\lambda}\right] \)
\( D_1(P\|Q) = \mathbb{E}_{x \sim P}\log\dfrac{P(x)}{Q(x)} \)
\( D_{\infty}(P\|Q) = \sup\limits_{x \in \operatorname{supp} Q}\log\dfrac{P(x)}{Q(x)} \)
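These definitions can be checked numerically on discrete distributions. The sketch below (my own illustration, with hypothetical distributions p, q) also exhibits the standard facts that \( D_{\lambda} \) is non-decreasing in \( \lambda \) and bounded by \( D_{\infty} \).

```python
import math

def renyi_divergence(p, q, lam):
    """D_lambda(P||Q) = log( E_{x~Q} (p(x)/q(x))^lam ) / (lam - 1) for lam > 1;
    lam = 1 is the KL-divergence limit."""
    if lam == 1:
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    s = sum(qi * (pi / qi) ** lam for pi, qi in zip(p, q) if qi > 0)
    return math.log(s) / (lam - 1)

def max_divergence(p, q):
    """D_infinity(P||Q) = sup over supp Q of log(p(x)/q(x))."""
    return max(math.log(pi / qi) for pi, qi in zip(p, q) if qi > 0 and pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
# Non-decreasing in lambda, approaching D_infinity as lambda grows.
divs = [renyi_divergence(p, q, lam) for lam in (1, 2, 10, 100)]
```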
\( Let\ \epsilon > 0,\ \mathcal{X}\ a\ space\ of\ databases,\ \mathcal{Y}\ an\ output\ space,\ and\\ \sim_h\ denote\ that\ two\ databases\ from\ \mathcal{X}\ differ\ in\ at\ most\ one\ row.\\ A\ probabilistic\ mapping\ \mathcal{M}\ from\ \mathcal{X}\ to\ \mathcal{Y}\ is\ called\\ \epsilon\)-\(differentially\ private\ if\ for\ any\ x,x' \in \mathcal{X}\ s.t.\ x \sim_h x'\ and\\ for\ any\ Y \in \sigma(\mathcal{Y})\ one\ has\ \mathcal{M}(x)(Y) \le \exp(\epsilon)\,\mathcal{M}(x')(Y). \)
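A minimal sketch of a mechanism meeting this definition (the standard Laplace mechanism from Dwork & Roth; the counting query and parameters below are illustrative): adding Laplace noise with scale sensitivity/ε to a real-valued query gives ε-differential privacy.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: random.Random) -> float:
    """Release true_value + Laplace(0, sensitivity/epsilon) noise; the standard
    argument (Dwork & Roth) shows this is epsilon-differentially private."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                      # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_value + noise

# Illustrative counting query (sensitivity 1): the noisy releases
# are centered on the true count.
rng = random.Random(0)
true_count = 42.0
releases = [laplace_mechanism(true_count, 1.0, 0.5, rng) for _ in range(20000)]
mean_release = sum(releases) / len(releases)
```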
The algorithmic foundations of differential privacy.
Dwork, C., Roth, A.
\( Let\ \epsilon > 0,\ (\mathcal{X},d_{\mathcal{X}})\ an\ arbitrary\ (input)\ metric\ space,\ and\\ \mathcal{Y}\ an\ output\ space.\ A\ probabilistic\ mapping\ \mathcal{M}\ from\ \mathcal{X}\ to\ \mathcal{Y}\\ is\ called\ (\epsilon,\alpha)\)-\(d_{\mathcal{X}}\ private\ if\ for\ any\ x,x'\\ s.t.\ d_{\mathcal{X}}(x,x') \le \alpha,\ one\ has\ D_{\infty}(\mathcal{M}(x),\mathcal{M}(x')) \le \epsilon. \)
Broadening the scope of differential privacy using metrics.
Chatzikokolakis, K., Andrés, M.E., Bordenabe, N.E., Palamidessi, C.
\( Let\ \epsilon > 0,\ (\mathcal{X},d_{\mathcal{X}})\ an\ arbitrary\ (input)\ metric\ space,\ and\\ \mathcal{Y}\ an\ output\ space.\ A\ probabilistic\ mapping\ \mathcal{M}\ from\ \mathcal{X}\ to\ \mathcal{Y}\\ is\ called\ (\lambda,\epsilon,\alpha)\)-\(d_{\mathcal{X}}\ Renyi\)-\(private\ if\ for\ any\\ x,x'\ s.t.\ d_{\mathcal{X}}(x,x') \le \alpha,\ one\ has\ D_{\lambda}(\mathcal{M}(x),\mathcal{M}(x')) \le \epsilon. \)
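A standard instance of this definition (see e.g. Mironov): for Gaussians with equal covariance, \( D_{\lambda}(\mathcal{N}(x,\sigma^2 I), \mathcal{N}(x',\sigma^2 I)) = \lambda\|x-x'\|_2^2/(2\sigma^2) \), so adding \( \mathcal{N}(0,\sigma^2 I) \) noise yields a \( (\lambda, \lambda\alpha^2/(2\sigma^2), \alpha) \)-\( \ell_2 \) Rényi-private mapping. The Monte-Carlo check below (my own sketch, in one dimension) verifies the closed form numerically.

```python
import math
import random

def renyi_gaussian_mc(delta: float, sigma: float, lam: float,
                      n: int = 200_000, seed: int = 0) -> float:
    """Monte-Carlo estimate of D_lambda(N(0, sigma^2) || N(delta, sigma^2)):
    sample x ~ Q = N(delta, sigma^2), average (p(x)/q(x))^lam, take log/(lam-1)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(delta, sigma)
        # log p(x) - log q(x) for two equal-variance Gaussians
        log_ratio = ((x - delta) ** 2 - x ** 2) / (2.0 * sigma ** 2)
        acc += math.exp(lam * log_ratio)
    return math.log(acc / n) / (lam - 1.0)

# Closed form: lam * delta^2 / (2 sigma^2) = 2 * 0.25 / 2 = 0.25
closed_form = 2.0 * 0.5 ** 2 / (2.0 * 1.0 ** 2)
estimate = renyi_gaussian_mc(delta=0.5, sigma=1.0, lam=2.0)
```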
Rényi differential privacy.
Mironov, I.
\( A\ classifier\ h\ is\ said\ to\ be\ (\alpha,\gamma)\)-\(robust\\ if\ \mathbb{P}_{x \sim D_{\mathcal{X}}}[\exists x' \in B(x,\alpha)\ s.t.\ h(x') \ne h(x)] \le \gamma \)
\( Let\ D_{\mathcal{P}(\mathcal{Y})}\ be\ a\ metric/divergence\ on\ \mathcal{P}(\mathcal{Y}).\\ A\ randomized\ classifier\ \mathcal{M}\ is\ said\ to\ be\\ D_{\mathcal{P}(\mathcal{Y})}\)-\( (\alpha,\epsilon,\gamma)\)-\( robust\ if\\ \mathbb{P}_{x \sim D_{\mathcal{X}}}[\exists x' \in B(x,\alpha)\ s.t.\ D_{\mathcal{P}(\mathcal{Y})}(\mathcal{M}(x'),\mathcal{M}(x)) > \epsilon] \le \gamma \)
Theoretical evidence for adversarial robustness through randomization.
Pinot, R., Meunier, L., Araujo, A., Kashima, H., Yger, F., Gouy-Pailler, C., Atif, J.
\( D_{\lambda}\)-\((\alpha,\epsilon,0)\)-\(robust\\ \Leftrightarrow\\ D_{\mathcal{X}}\)-\(almost\ surely\\ (\lambda,\epsilon,\alpha)\)-\(d_{\mathcal{X}}\ Renyi\)-\(differentially\ private \)
\( \forall x,x'\ s.t.\ d_{\mathcal{X}}(x,x') \le \alpha,\\ D_{\lambda}(\mathcal{M}(x),\mathcal{M}(x')) \le \epsilon \)
\( \mathbb{P}_{x \sim D_{\mathcal{X}}}[\exists x' \in B(x,\alpha)\\ s.t.\ D_{\mathcal{P}(\mathcal{Y})}(\mathcal{M}(x'),\mathcal{M}(x)) > \epsilon] \le \gamma \)
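The equivalence is essentially a restatement of the two definitions; a sketch in my own wording, taking \( D_{\mathcal{P}(\mathcal{Y})} = D_{\lambda} \) (the privacy definition quantifies over all ordered pairs \( x, x' \), so both orders of the divergence are bounded):

```latex
\begin{align*}
\mathcal{M}\ \text{$D_{\mathcal{X}}$-a.s.\ } (\lambda,\epsilon,\alpha)\text{-}d_{\mathcal{X}}\ \text{R\'enyi-private}
  &\iff \mathbb{P}_{x \sim D_{\mathcal{X}}}\bigl[\forall x' \in B(x,\alpha):\,
        D_{\lambda}(\mathcal{M}(x),\mathcal{M}(x')) \le \epsilon\bigr] = 1 \\
  &\iff \mathbb{P}_{x \sim D_{\mathcal{X}}}\bigl[\exists x' \in B(x,\alpha)\ \text{s.t.\ }
        D_{\lambda}(\mathcal{M}(x'),\mathcal{M}(x)) > \epsilon\bigr] = 0 \\
  &\iff \mathcal{M}\ \text{is}\ D_{\lambda}\text{-}(\alpha,\epsilon,0)\text{-robust}.
\end{align*}
```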