Error Exponent Analysis in Quantum Source and Channel Coding

1. [arXiv:1803.07505]: Cheng (NTU&UTS), Hanson (Cambridge), Datta (Cambridge), MH

2. [arXiv:1704.05703]: Cheng (NTU&UTS), MH, Tomamichel (UTS)

3. [arXiv:1701.03195]: Cheng (NTU&UTS), MH

In Source/Channel Coding,

  • Error Probability: \varepsilon
  • Transmission Rate: R
  • Code Length: n
[Figure: Channel Coding Trade-offs. Error probability \varepsilon versus rate R, with curves from n=1 to n=\infty and the capacity C marked. Three regimes are indicated: Small Deviation (R\to C, \varepsilon>0), Large Deviation (\varepsilon\to 0, R\neq C), Moderate Deviation (R\to C, \varepsilon\to 0).]
[Figure: Source Coding Trade-offs. Error probability \varepsilon versus rate R, with curves from n=1 to n=\infty and the limit C marked. Three regimes are indicated: Small Deviation (R\to C, \varepsilon>0), Large Deviation (\varepsilon\to 0, R\neq C), Moderate Deviation (R\to C, \varepsilon\to 0).]

Three Regimes:

Small Deviation: R\to C,\ \varepsilon \neq 0

Large Deviation: \varepsilon\to 0,\ R \neq C

Moderate Deviation: \varepsilon\to 0,\ R\to C

Small Deviation:

a.k.a. Second-Order Analysis

Channel coding:

R_n(\varepsilon) = C_C + \sqrt{\frac{V_C}{n}}\Phi^{-1}(\varepsilon) + O(\frac{\log n}{n})

Strassen, Transactions of the Third Prague Conference on Information Theory, pp. 689–723, 1962.

Tomamichel and Tan, CMP 338(1):103–137, 2015.

Source coding:

R_n(\varepsilon) = C_S - \sqrt{\frac{V_S}{n}}\Phi^{-1}(\varepsilon) + O(\frac{\log n}{n})

Tomamichel and Hayashi, IEEE TIT 59(11):7693–7710, 2013.

Nomura and Han, IEEE TIT 60(9):5553–5572, 2014.
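To make the small-deviation expansion concrete, here is a minimal numerical sketch for the classical special case of a binary symmetric channel BSC(p), whose capacity and dispersion have the standard closed forms C = 1 - h_2(p) and V = p(1-p)\log_2^2((1-p)/p) (in bits). The crossover probability, blocklength, and target error below are illustrative choices, not values from the talk.

```python
# Minimal sketch: second-order (small-deviation) rate approximation for BSC(p),
#   R_n(eps) ~ C + sqrt(V/n) * Phi^{-1}(eps),   everything in bits per channel use.
import numpy as np
from scipy.stats import norm

def bsc_capacity_dispersion(p):
    """Capacity C = 1 - h2(p) and dispersion V = p(1-p) log2((1-p)/p)^2 of BSC(p)."""
    h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    C = 1.0 - h2
    V = p * (1 - p) * np.log2((1 - p) / p) ** 2
    return C, V

def second_order_rate(p, n, eps):
    """Second-order approximation of the best coding rate at blocklength n, error eps."""
    C, V = bsc_capacity_dispersion(p)
    return C + np.sqrt(V / n) * norm.ppf(eps)   # Phi^{-1}(eps) < 0 for eps < 1/2

if __name__ == "__main__":
    p, eps = 0.11, 1e-3                          # illustrative parameters
    for n in (100, 1000, 10000):
        print(f"n={n:6d}  R_n(eps) ~ {second_order_rate(p, n, eps):.4f} bits "
              f"(capacity {bsc_capacity_dispersion(p)[0]:.4f})")
```

As n grows, the computed rate approaches capacity at the \sqrt{V/n} speed predicted by the expansion.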

Large Deviation:

a.k.a. Error Exponent Analysis

\varepsilon_n(R) = e^{-\Theta(n)}

Shannon, Bell System Technical Journal, 38(3):611–656, 1959.

Burnashev and Holevo, Problems of information transmission, 34(2):97–107, 1998.

Moderate Deviation:

\varepsilon_n(R_n) = e^{-\Theta(na_n^2)}

Altug and Wagner. IEEE TIT 60(8):4417–4426, 2014.

Chubb, Tomamichel and Tan, arXiv: 1701.03114.

\{a_n\}: a_n\to 0,\ a_n\sqrt{n} \to \infty.
R_n= C- a_n

Cheng and Hsieh, arXiv: 1701.03195.
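For concreteness, one admissible moderate-deviation sequence (an illustrative choice, not one fixed in the talk) is a_n = n^{-1/3}:

$$a_n = n^{-1/3}:\quad a_n \to 0,\quad a_n\sqrt{n} = n^{1/6} \to \infty,\quad R_n = C - n^{-1/3},\quad \varepsilon_n(R_n) = e^{-\Theta(n a_n^2)} = e^{-\Theta(n^{1/3})}.$$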

Outline

Part I

Channel Coding 

[Diagram: message M_n \to encoder \mathcal{E}_n \to codeword X^n \to channel {W}^{\otimes n} \to output B^n \to decoder \mathcal{D}_n \to estimate \hat{M}_n.]

\varepsilon_n(R) = \text{Pr}(M_n\neq \hat{M}_n),\quad |M_n| = 2^{nR}

Classical-Quantum Channels

W: \mathcal{X} \to \mathbb{C}^{d\times d}
\lim_{n\to\infty} \frac{-1}{n} \log\varepsilon_n(R) = E(R)
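The first-order benchmark in this limit is the capacity C = \max_P I(P,W), with I(P,W) the Holevo information. The sketch below evaluates I(P,W) = S(\sum_x P(x)W_x) - \sum_x P(x)S(W_x) for a toy two-input qubit c-q channel; the states and the uniform input distribution are illustrative choices, and the maximization over P is not carried out.

```python
# Minimal sketch: Holevo information I(P,W) = S(sum_x P(x) W_x) - sum_x P(x) S(W_x)
# for a toy two-input classical-quantum channel (capacity would require max over P).
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in nats, ignoring zero eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

# Two pure-state outputs: W_0 = |0><0|, W_1 = |+><+| (non-commuting), illustrative.
W0 = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
W1 = np.outer(plus, plus.conj())
P = np.array([0.5, 0.5])                      # illustrative input distribution

sigma_P = P[0] * W0 + P[1] * W1               # average output state
holevo = von_neumann_entropy(sigma_P) - (P[0] * von_neumann_entropy(W0)
                                         + P[1] * von_neumann_entropy(W1))
print(f"I(P,W) = {holevo:.4f} nats")          # pure outputs contribute zero entropy
```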

Error Exponent Analysis

E_{\text{rc}}(R) \leq E(R) \leq E_{\text{sp}}(R)

Shannon, Bell System Technical Journal, 38(3):611–656, 1959.

Classical Sphere-Packing Bounds

E_{\text{sp}}(R):=\sup_{s\geq 0} \left\{ \max_P E_0(s,P) -sR\right\}
\tilde{E}_{\text{sp}}(R):=\max_P\min_V\{D(V\|W|P):I(P,V)\leq R\}

Shannon, Gallager, and Berlekamp.  Information and Control, 10(1):65–103, 1967.

Haroutunian. Problemy Peredachi Informatsii, 4(4):37–48, 1968, (in Russian).

Blahut. IEEE TIT, 20(4):405–417, 1974.

{E}_{\text{sp}}(R)= \tilde{E}_{\text{sp}}(R)
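As a numerical illustration of the classical exponent, the sketch below approximates E_{\text{sp}}(R) = \sup_{s\geq 0}\{E_0(s,P)-sR\} for a BSC using Gallager's E_0(s,P) = -\log\sum_y(\sum_x P(x)W(y|x)^{1/(1+s)})^{1+s}, with the uniform input (optimal for the BSC by symmetry) and a grid search over s. The crossover probability, rate, and grid are illustrative choices.

```python
# Minimal sketch: classical sphere-packing exponent for BSC(p),
#   E_sp(R) ~ max over a grid of s >= 0 of { E_0(s,P) - s R }   (in nats).
import numpy as np

def gallager_E0(s, P, W):
    """W[x, y] = W(y|x); returns Gallager's E_0(s,P) in nats."""
    inner = (P[:, None] * W ** (1.0 / (1.0 + s))).sum(axis=0)   # sum over x, per y
    return -np.log(np.sum(inner ** (1.0 + s)))

p = 0.11
W = np.array([[1 - p, p], [p, 1 - p]])
P = np.array([0.5, 0.5])                       # uniform input, optimal by symmetry
R = 0.3                                        # rate in nats, below capacity ~0.347

s_grid = np.linspace(0.0, 20.0, 4001)
E_sp = max(gallager_E0(s, P, W) - s * R for s in s_grid)
print(f"E_sp(R={R}) ~ {E_sp:.4f} nats for BSC(p={p}) (grid approximation)")
```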

Quantum Sphere-Packing Bounds

E_{\text{sp}}(R):=\sup_{s\geq 0} \left\{ \max_P E_0(s,P) -sR\right\}
\tilde{E}_{\text{sp}}(R):=\max_P\min_V\{D(V\|W|P):I(P,V)\leq R\}

Dalai. IEEE TIT, 59(12):8027–8056, 2013.

Winter. PhD Thesis, Universität Bielefeld, 1999.

Cheng, Hsieh, and Tomamichel. arXiv: 1704.05703

{E}_{\text{sp}}(R)\leq \tilde{E}_{\text{sp}}(R)
E_0(s,P)=-\log \text{Tr} \left[\left(\sum_{x}P(x)W_x^{\frac{1}{1+s}}\right)^{1+s}\right]
D(V\|W|P) = \sum_x P(x) D(V_x\|W_x)
\tilde{E}_{\text{sp}}(R,P)=\sup_{0<\alpha\leq 1}\min_\sigma\{\frac{1-\alpha}{\alpha}\left(D^\flat_\alpha(W\|\sigma|P)-R\right)\}
{E}_{\text{sp}}(R,P)\leq\sup_{0<\alpha\leq 1}\min_\sigma\{\frac{1-\alpha}{\alpha}\left(D_\alpha(W\|\sigma|P)-R\right)\}
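The quantum E_0(s,P) above can be evaluated directly once W_x is given as matrices. Below is a minimal sketch that computes E_0(s,P) for a toy qubit c-q channel and approximates the fixed-P quantity \sup_{s\geq 0}\{E_0(s,P)-sR\} by a grid over s; the channel, input distribution, rate, and grid are illustrative choices (the full E_{\text{sp}}(R) would also maximize over P).

```python
# Minimal sketch: E_0(s,P) = -log Tr[(sum_x P(x) W_x^{1/(1+s)})^{1+s}] for a toy
# qubit c-q channel, plus a grid approximation of sup_{s>=0} {E_0(s,P) - s R}.
import numpy as np

def mat_power(rho, p):
    """Fractional power of a PSD matrix via eigendecomposition."""
    ev, U = np.linalg.eigh(rho)
    ev = np.clip(ev, 0.0, None)
    return (U * ev**p) @ U.conj().T

def E0(s, P, W):
    avg = sum(Px * mat_power(Wx, 1.0 / (1.0 + s)) for Px, Wx in zip(P, W))
    return -np.log(np.real(np.trace(mat_power(avg, 1.0 + s))))

# Toy channel: two mixed, non-commuting qubit outputs (illustrative).
W0 = np.array([[0.9, 0.0], [0.0, 0.1]], dtype=complex)
W1 = np.array([[0.5, 0.4], [0.4, 0.5]], dtype=complex)
P = [0.5, 0.5]
R = 0.05                                         # rate in nats, illustrative

s_grid = np.linspace(0.0, 10.0, 2001)
E_sp_approx = max(E0(s, P, (W0, W1)) - s * R for s in s_grid)
print(f"sup_s {{E_0(s,P) - sR}} ~ {E_sp_approx:.4f} nats (grid approximation)")
```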

Theorem:

{E}_{\text{sp}}(R)\leq \tilde{E}_{\text{sp}}(R)

Proof:

D_\alpha(\rho\|\sigma):= \frac{1}{\alpha-1}\log \text{Tr}\left[\rho^\alpha \sigma^{1-\alpha}\right] \leq D^\flat_\alpha(\rho\|\sigma):=\frac{1}{\alpha-1}\log\text{Tr}\left[e^{\alpha\log\rho+(1-\alpha)\log\sigma}\right]

Cheng, Hsieh, and Tomamichel. arXiv: 1704.05703
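The inequality D_\alpha \leq D^\flat_\alpha for \alpha\in(0,1) follows from the Golden-Thompson inequality \text{Tr}[e^{A+B}]\leq\text{Tr}[e^A e^B] after dividing by the negative factor \alpha-1. A hedged numerical sanity check on random full-rank qubit states (not part of the talk's proof) is sketched below.

```python
# Minimal sketch: check D_alpha(rho||sigma) <= D^flat_alpha(rho||sigma) for
# alpha in (0,1) on random full-rank qubit states (Golden-Thompson sanity check).
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

def random_state(d, rng):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def D_petz(rho, sigma, a):
    return np.log(np.real(np.trace(
        fractional_matrix_power(rho, a) @ fractional_matrix_power(sigma, 1 - a)))) / (a - 1)

def D_flat(rho, sigma, a):
    return np.log(np.real(np.trace(expm(a * logm(rho) + (1 - a) * logm(sigma))))) / (a - 1)

rng = np.random.default_rng(0)
for _ in range(5):
    rho, sigma = random_state(2, rng), random_state(2, rng)
    for a in (0.3, 0.6, 0.9):
        assert D_petz(rho, sigma, a) <= D_flat(rho, sigma, a) + 1e-9
print("D_alpha <= D^flat_alpha held on all sampled cases.")
```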

Dalai's Sphere-Packing Bound

\log \frac{1}{\varepsilon_n(R)} \leq n E_{\text{sp}}(R) + O(\sqrt{n})

Dalai. IEEE TIT, 59(12):8027–8056, 2013.

Shannon, Gallager, and Berlekamp.  Information and Control, 10(1):65–103, 1967.

Theorem:

\log \frac{1}{\varepsilon_n(R)} \leq n E_{\text{sp}}(R) + O(\log{n})

Cheng, Hsieh, and Tomamichel. arXiv: 1704.05703, 2017.

Altug and Wagner, IEEE TIT, 60(3): 1592–1614, 2014.

Proof:

Step 1:

$$\varepsilon_{\max}(\mathcal{C}_n) \geq \max_\sigma\min_{\mathbf{x}^n\in \mathcal{C}_n} \tilde{\alpha}_{\frac{1}{|\mathcal{C}_n|}}(W_{\mathbf{x}^n}\|\sigma).$$

Step 2:

Two one-shot converse Hoeffding bounds for \(\tilde{\alpha}_\mu(\cdot|\cdot)\).

\tilde{\alpha}_\mu(\rho\|\sigma) = \min_\Pi\{\alpha(\Pi,\rho): \beta(\Pi,\sigma)\leq \mu\}
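This one-shot quantity can be approximated numerically by scanning Holevo-Helstrom-type threshold tests T_t = \{\rho - t\sigma > 0\}, which are optimal up to randomization on the boundary eigenspace. The sketch below does exactly that for a pair of qubit states; the states, the constraint \mu, and the threshold grid are illustrative choices.

```python
# Minimal sketch: approximate
#   alpha~_mu(rho||sigma) = min_T { Tr[(I-T)rho] : Tr[T sigma] <= mu, 0 <= T <= I }
# by scanning projective threshold tests T_t = {rho - t*sigma > 0}.
import numpy as np

def threshold_test(rho, sigma, t):
    ev, U = np.linalg.eigh(rho - t * sigma)
    proj = (U * (ev > 0)) @ U.conj().T                            # positive-part projector
    alpha = np.real(np.trace((np.eye(len(rho)) - proj) @ rho))    # type-I error
    beta = np.real(np.trace(proj @ sigma))                        # type-II error
    return alpha, beta

rho = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)   # illustrative states
sigma = np.array([[0.4, -0.2], [-0.2, 0.6]], dtype=complex)
mu = 0.1

best_alpha = 1.0
for t in np.linspace(0.0, 50.0, 5001):
    a, b = threshold_test(rho, sigma, t)
    if b <= mu:
        best_alpha = min(best_alpha, a)
print(f"alpha~_mu(rho||sigma) <~ {best_alpha:.4f}  (threshold-test approximation, mu={mu})")
```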

For Bad Codewords, Use Weak Converse Hoeffding Bound.

\tilde{\alpha}_{e^{-nR}}(\rho^n\|\sigma^n) \geq \kappa_1 e^{-\kappa_2\sqrt{n}-n\phi_n(R'|\rho^n\|\sigma^n)}
\phi_n(r|\rho^n\|\sigma^n):= \sup_{\alpha\in(0,1]} \left\{\frac{1-\alpha}{\alpha}(\frac{1}{n}D_\alpha(\rho^n\|\sigma^n)-r)\right\}

\(H_0:\rho^n=\rho_1\otimes\cdots\otimes\rho_n\);  \(H_1:\sigma^n=\sigma_1\otimes\cdots\otimes\sigma_n\)

Blahut. IEEE TIT, 20(4):405–417, 1974.

Audenaert et al., PRL 98:160501, 2007.

For Good Codewords, Use Sharp Converse Hoeffding Bound.

\tilde{\alpha}_{e^{-nR}}(\rho^n\|\sigma^n) \geq A\, n^{-t}\, e^{-n\phi_n(R'|\rho^n\|\sigma^n)}

\(H_0:\rho^n=\rho_1\otimes\cdots\otimes\rho_n\),  \(H_1:\sigma^n=\sigma_1\otimes\cdots\otimes\sigma_n\)

Bahadur and Rao, The Annals of Mathematical Statistics, 31(4):1015–1027, 1960.

Altug and Wagner, IEEE TIT, 60(3): 1592–1614, 2014.

t> 1/2

If the channel is symmetric, i.e.

W_x = V^{x-1}W_1 (V^\dagger)^{x-1},

then the sphere-packing bound is exact:

\log \frac{1}{\varepsilon_n(R)} \leq n E_{\text{sp}}(R) + \frac{1}{2}\left(1+|E'_{\text{sp}}(R)|\right)\log{n}+o(1)

Properties of \(I^{(\cdot)}_{\alpha}(P,W)\):

I_{\alpha}^{(1)}(P,W):= \inf_\sigma D_\alpha(P\circ W\|P\otimes\sigma)
I_{\alpha}^{(2)}(P,W):= \inf_\sigma D_\alpha(W\|\sigma|P)

(a) The map \((\alpha,P)\to I_\alpha\) is continuous on \([0,1]\times\mathcal{P}(\mathcal{X})\).

(b) The map \(\alpha\to I_\alpha\) is monotone increasing on \([0,1]\).

(c) The map \(\alpha\to \frac{1-\alpha}{\alpha}I_\alpha\) is strictly concave on \((0,1]\).

Properties of \(E^{(\cdot)}_{\text{sp}}(R)\):

E^{(1)}_{\text{sp}}(R,P):= \sup_{0<\alpha\leq 1} \frac{1-\alpha}{\alpha} \left(I_{\alpha}^{(1)}(P,W)-R\right)

(a) The map \(R\to E^{(\cdot)}_{\text{sp}}\) is convex, continuous, and non-increasing.

(b) \(E^{(\cdot)}_{\text{sp}}\) is differentiable w.r.t. \(R\).

(c) \({E'}^{(\cdot)}_{\text{sp}}\) is continuous.

E^{(2)}_{\text{sp}}(R,P):= \sup_{0<\alpha\leq 1} \frac{1-\alpha}{\alpha} \left(I_{\alpha}^{(2)}(P,W)-R\right)
E_{\text{sp}}(R,P):= \sup_{s>0} \left(E_{0}(s,P)-sR\right)

Part I.B

Moderate Deviation

Moderate Deviation:

\varepsilon_n(R_n) = e^{-\Theta(na_n^2)}
\{a_n\}: a_n\to 0,\ a_n\sqrt{n} \to \infty.
R_n= C- a_n,

Cheng and Hsieh, arXiv: 1701.03195.

[Achievability] \(\limsup_{n\to\infty}\frac{1}{na_n^2}\log\varepsilon_n(R_n)\leq -\frac{1}{2V_W}\)

[Converse] \(\liminf_{n\to\infty}\frac{1}{na_n^2}\log\varepsilon_n(R_n)\geq -\frac{1}{2V_W}\)

Chubb, Tomamichel and Tan, arXiv: 1701.03114.

Achievability:

Step 1:

$$\varepsilon_n(R_n) \leq 4\exp\left(-n\max_{0\leq s\leq 1}\left\{ \tilde{E}_0(s,P)-sR_n\right\}\right)$$

Hayashi, PRA 76(6):062301, 2007.

\frac{1}{n a_n^2}\log\varepsilon_n(R_n)\leq \frac{\log 4}{n a_n^2} - \frac{1}{a_n^2}\max_{0\leq s\leq 1} \left\{ \tilde{E}_0(s,P)-sR_n\right\}

Achievability:

Step 2:

Apply Taylor Expansion to \(\tilde{E}_0(s,P)\) at \(s=0\). 

\tilde{E}_0(s,P) = s C_W - \frac{s^2}{2} V_W + \frac{s^3}{6} \frac{\partial^3 \tilde{E}_0(s,P)}{\partial s^3}|_{s=\bar{s}}

Property of \(\tilde{E}_0(s,P)\):

\tilde{E}_0(s,P):= -\log \sum_x P_x\text{Tr} W_x^{1-s}(PW)^s

(a) Partial derivatives of \(\tilde{E}_0\) are continuous.

(b) \(\tilde{E}_0\) is concave in \(s\geq 0 \).

\text{(c)}\left.\frac{\partial}{\partial s}\tilde{E}_0(s,P)\right|_{s=0} = I(P,W).
\text{(d)}\left.\frac{\partial^2}{\partial s^2}\tilde{E}_0(s,P)\right|_{s=0} = -V(P,W).
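Property (c) can be checked numerically by a finite difference of \tilde{E}_0(s,P) at s=0 against the Holevo information I(P,W). Below is a minimal sketch on a toy qubit c-q channel; the channel, input distribution, and step size are illustrative choices.

```python
# Minimal sketch: finite-difference check of d/ds E~_0(s,P)|_{s=0} = I(P,W), with
#   E~_0(s,P) = -log sum_x P(x) Tr[ W_x^{1-s} (PW)^s ],   PW = sum_x P(x) W_x.
import numpy as np

def mat_power(rho, p):
    ev, U = np.linalg.eigh(rho)
    ev = np.clip(ev, 1e-15, None)
    return (U * ev**p) @ U.conj().T

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def E0_tilde(s, P, W):
    sigma = sum(Px * Wx for Px, Wx in zip(P, W))
    f = sum(Px * np.real(np.trace(mat_power(Wx, 1 - s) @ mat_power(sigma, s)))
            for Px, Wx in zip(P, W))
    return -np.log(f)

W0 = np.array([[0.9, 0.0], [0.0, 0.1]], dtype=complex)   # illustrative channel
W1 = np.array([[0.5, 0.4], [0.4, 0.5]], dtype=complex)
P = [0.5, 0.5]

sigma_P = P[0] * W0 + P[1] * W1
holevo = entropy(sigma_P) - P[0] * entropy(W0) - P[1] * entropy(W1)   # I(P,W)
h = 1e-5
slope = (E0_tilde(h, P, (W0, W1)) - E0_tilde(0.0, P, (W0, W1))) / h
print(f"finite-difference slope at s=0: {slope:.5f},  I(P,W): {holevo:.5f}")
```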

Converse

Similar to Quantum SP Bounds

1. A New Sharp Hoeffding Bound.

2. Weak Hoeffding Bound needs special attention.

New Sharp Converse Hoeffding Bound

\tilde{\alpha}_{e^{-nR_n}}(\rho^n\|\sigma^n) \geq \frac{A}{s_n^\star \sqrt{n}}\, e^{-n\phi_n(R_n'|\rho^n\|\sigma^n)}

\(H_0:\rho^n=\rho_1\otimes\cdots\otimes\rho_n\),  \(H_1:\sigma^n=\sigma_1\otimes\cdots\otimes\sigma_n\)

Chaganty and Sethuraman, The Annals of Probability, 21(3):1671–1690, 1993.

\tilde{\alpha}_{e^{-nR}}(\rho^n\|\sigma^n) \geq A\, n^{-t}\, e^{-n\phi_n(R'|\rho^n\|\sigma^n)}, \quad t>1/2

Summary

Part II

Source Coding (with quantum side information)

[Diagram: source X^n \to encoder \mathcal{E}_n \to compressed index W_n \to decoder \mathcal{D}_n, which also receives quantum side information B^n \to estimate \hat{X}^n.]

\varepsilon_n(R) = \text{Pr}(X^n\neq \hat{X}^n),\quad |W_n| = 2^{nR}
\lim_{n\to\infty} \frac{-1}{n} \log\varepsilon_n(R) = E(R)
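The first-order compression limit in this setting is the conditional entropy C_S = H(X|B)_\rho = H(XB)_\rho - H(B)_\rho. A minimal sketch computing it for a toy classical-quantum state \rho_{XB} = \sum_x p(x)|x\rangle\langle x|\otimes\rho_B^x is given below; the state is an illustrative choice.

```python
# Minimal sketch: conditional entropy H(X|B)_rho = H(XB) - H(B) for a toy
# classical-quantum state rho_XB = sum_x p(x) |x><x| (x) rho_B^x.
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

p = np.array([0.5, 0.5])
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_B = [np.array([[1, 0], [0, 0]], dtype=complex),   # side information when X=0
         np.outer(plus, plus.conj())]                 # side information when X=1

# Block-diagonal rho_XB in the classical basis of X.
rho_XB = np.zeros((4, 4), dtype=complex)
rho_XB[0:2, 0:2] = p[0] * rho_B[0]
rho_XB[2:4, 2:4] = p[1] * rho_B[1]
rho_Bavg = p[0] * rho_B[0] + p[1] * rho_B[1]

H_X_given_B = entropy(rho_XB) - entropy(rho_Bavg)
print(f"H(X|B)_rho = {H_X_given_B:.4f} nats  (C_S for this toy source)")
```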

Error Exponent Analysis

E_{\text{rc}}(R) \leq E(R) \leq E_{\text{sp}}(R)

Shannon, Bell System Technical Journal, 38(3):611–656, 1959.

E_{\text{rc}}(R)=\sup_{0.5\leq \alpha<1} \frac{1-\alpha}{\alpha}\left(R-H_{2-\frac{1}{\alpha}}(X|B)_\rho\right)

Theorem 

H_\alpha(X|B)_\rho = -D_{\alpha}(\rho_{XB}\|\mathbb{I}_X\otimes \rho_B)
E_{\text{sp}}(R)=\sup_{0\leq \alpha\leq1} \frac{1-\alpha}{\alpha}\left(R-H^*_{\alpha}(X|B)_\rho\right)
H^*_\alpha(X|B)_\rho = \max_{\sigma_B}-D_{\alpha}(\rho_{XB}\|\mathbb{I}_X\otimes \sigma_B)

Theorem

R< C_S
\lim_{n\to\infty} \frac{-1}{n} \log\left(1-\varepsilon_n(R)\right) = E_{\text{sc}}(R)
E_{\text{sc}}(R)=\sup_{ \alpha>1} \frac{1-\alpha}{\alpha}\left(R-H^s_{\alpha}(X|B)_\rho\right)
H^s_\alpha(X|B)_\rho = \max_{\sigma_B}-D^s_{\alpha}(\rho_{XB}\|\mathbb{I}_X\otimes \sigma_B)

Theorem 

D^s_{\alpha}(\rho\|\sigma)= \frac{1}{\alpha-1} \log\text{Tr}\left[\left(\rho^{\frac{1}{2}}\sigma^{\frac{1-\alpha}{\alpha}}\rho^{\frac{1}{2}}\right)^\alpha\right]
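Reading D^s_\alpha as the sandwiched Rényi divergence (written above in an equivalent \rho^{1/2}-sandwich form), the sketch below evaluates it in the more common \sigma-sandwich form and checks numerically that D^s_\alpha \leq D_\alpha, the Petz form, as the Araki-Lieb-Thirring inequality predicts. The qubit states and \alpha values are illustrative choices.

```python
# Minimal sketch: sandwiched Renyi divergence
#   D^s_a(rho||sigma) = 1/(a-1) log Tr[(sigma^{(1-a)/(2a)} rho sigma^{(1-a)/(2a)})^a]
# compared with the Petz divergence D_a; expect D^s_a <= D_a.
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def D_sandwiched(rho, sigma, a):
    S = mpow(sigma, (1 - a) / (2 * a))
    return np.log(np.real(np.trace(mpow(S @ rho @ S, a)))) / (a - 1)

def D_petz(rho, sigma, a):
    return np.log(np.real(np.trace(mpow(rho, a) @ mpow(sigma, 1 - a)))) / (a - 1)

rho = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)    # illustrative states
sigma = np.array([[0.4, -0.2], [-0.2, 0.6]], dtype=complex)
for a in (1.5, 2.0, 3.0):
    print(f"alpha={a}:  D^s = {D_sandwiched(rho, sigma, a):.4f}  "
          f"<=  D_Petz = {D_petz(rho, sigma, a):.4f}")
```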

Moderate Deviation:

\lim_{n\to\infty} \frac{1}{na_n^2}\log\varepsilon_n(R_n) = - \frac{1}{2V(X|B)_\rho}
\{a_n\}: a_n\to 0,\ a_n\sqrt{n} \to \infty.
R_n= C_S+ a_n
R_n(\varepsilon_n) = C_S + \sqrt{2V(X|B)_\rho}\, a_n + o(a_n)
\varepsilon_n= e^{-na_n^2}
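Equivalently, inverting the error target \varepsilon_n = e^{-na_n^2} (a direct restatement of the two lines above):

$$a_n=\sqrt{\frac{1}{n}\log\frac{1}{\varepsilon_n}},\qquad R_n(\varepsilon_n) = C_S + \sqrt{\frac{2V(X|B)_\rho\,\log\frac{1}{\varepsilon_n}}{n}} + o(a_n).$$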

Duality

E_{\text{sp,source}}(R,Q_X) = E_{\text{sp,channel}}(H(Q_X)-R,Q_X)

Open Questions

1. Beyond C-Q Channel?

2. Entanglement-Assisted (EA) Channel?

3. Duality?

Thank you!

Error Exponent Analysis in Quantum Source and Channel Coding

By Lawrence Min-Hsiu Hsieh
