Jan Korbel & David Wolpert
WOST IV
Slides are available at: https://slides.com/jankorbel
Reference: https://arxiv.org/abs/2210.05249
[Diagram: repeated trials — in each trial the apparatus temperature \(T_i\) is drawn from \(P(T)\) and the quantity \(X_i\) is measured; e.g. the 3rd trial yields \(T_3\), \(X_3\), ...]
Measure a quantity \(X\) and assume that the temperature \(T\) can be measured with infinite precision
Temperature is measured with limited precision, can change between experiments
\(T=?\)
Effective scenario: we do not know which apparatus is coupled to our system, but we can repeat the experiment many times with the same apparatus
Phenomenological scenario: the apparatus is randomly re-chosen each time we run the experiment
The average work as a function of the standard deviation of the distribution \(p(k)\)
Effective value over the apparatuses can be defined as
$$ \overline{X}:= \int \mathrm{d} P^\alpha X^\alpha$$
Effective distribution \(\bar{p}_x(t)\) fulfills the equation
\( \dot{\bar{p}}_x(t) = \sum_{x'} \int \mathrm{d} P^\alpha K^\alpha_{xx'} p^\alpha_{x'}(t) \) which is generally non-Markovian
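A minimal numeric sketch of this non-Markovianity, using two hypothetical two-state apparatuses that each relax exponentially (all rates and weights below are illustrative, not from the paper): the apparatus-averaged occupation \(\bar{p}_0(t)\) decays with a drifting apparent rate, so no single time-independent master equation generates it.

```python
import numpy as np

# Two apparatuses alpha, each a two-state system relaxing exponentially
# toward its own stationary distribution (illustrative numbers):
#   p_0^alpha(t) = pss_alpha + (1 - pss_alpha) * exp(-r_alpha * t)
pss = np.array([2 / 3, 3 / 4])   # stationary prob. of state 0 per apparatus
r = np.array([3.0, 0.4])         # relaxation rate per apparatus
w = np.array([0.5, 0.5])         # prior P(alpha) over apparatuses

def pbar0(t):
    """Effective (apparatus-averaged) probability of state 0 at time t."""
    return np.sum(w * (pss + (1 - pss) * np.exp(-r * t)))

def apparent_rate(t, dt=1e-5):
    """Instantaneous decay rate of pbar0 toward its stationary value."""
    ss = np.sum(w * pss)                       # effective stationary value
    f = lambda s: pbar0(s) - ss                # distance from stationarity
    return -(np.log(f(t + dt)) - np.log(f(t))) / dt

# The rate drifts from near the fast apparatus rate toward the slow one,
# i.e. the effective dynamics carries memory of which apparatus is coupled.
print(apparent_rate(0.1), apparent_rate(10.0))
```

Early on the fast apparatus dominates the decay; at late times only the slow one is still relaxing, so the apparent rate approaches 0.4.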
$$\dot{\bar{U}} = \dot{\bar{Q}} + \dot{\bar{W}}$$
$$\dot{\bar{S}} = \dot{\bar{\Sigma}} + \dot{\bar{\mathcal{E}}}$$
Phenomenological EP \(\Phi = D_{KL}(\overline{\pmb{P}}(\pmb{x})||\overline{\pmb{P}}^{\dag}(\pmb{x}^{\dag})) = \int \mathcal{D} \pmb{x}\, \overline{\pmb{P}}(\pmb{x}) \color{pink}{\ln \frac{\overline{\pmb{P}}(\pmb{x})}{\overline{\pmb{P}}^\dag({\pmb{x}}^\dag)}}\)
Phenomenological trajectory EP is \(\color{pink}{\phi(\pmb{x}) = \ln \frac{\overline{\pmb{P}}(\pmb{x})}{\overline{\pmb{P}}^\dag(\pmb{x}^{\dag})}}\)
It is straightforward to show that \(\phi\) fulfills the detailed fluctuation theorem
$$ \frac{P(\phi)}{P^\dag(-\phi)} = e^{\phi}$$
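A quick Monte Carlo sanity check of the integral FT \(\langle e^{-\phi} \rangle = 1\) implied by this detailed FT, using a Gaussian toy model of \(\phi\) (for a time-symmetric protocol, a Gaussian EP satisfies the detailed FT iff its variance equals twice its mean; the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian phi with variance = 2 * mean satisfies the detailed FT,
# hence the integral FT <exp(-phi)> = 1 (toy model, not from the paper).
mean = 1.0
phi = rng.normal(mean, np.sqrt(2 * mean), size=1_000_000)

print(np.exp(-phi).mean())        # close to 1, as the integral FT requires
print(np.exp(-phi + 0.3).mean())  # shifted samples break the FT: not 1
```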
Phenomenological EP describes thermodynamics for the case of expected probability
It is a lower bound for the effective EP: \(\overline{\Sigma} \geq \Phi\)
Likelihood EP \(\Lambda = D_{KL}(\pmb{P}(\alpha|\pmb{x})||{\pmb{P}^{\dag}}(\alpha|\pmb{x}^\dag)) = \int \mathrm{d} P^\alpha \pmb{P}(\alpha|\pmb{x}) {\color{pink} \ln \frac{\pmb{P}(\alpha|\pmb{x})}{\pmb{P}^\dag(\alpha|\pmb{x}^\dag)}}\)
Likelihood trajectory EP as \({\color{pink}\lambda(\alpha|\pmb{x})}:= \sigma(\pmb{x}|\alpha) - \phi(\pmb{x}) = {\color{pink}\ln \frac{\pmb{P}(\alpha|\pmb{x})}{\pmb{P}^\dag(\alpha|\pmb{x}^\dag)}}\)
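Averaging this decomposition over the joint distribution of apparatuses and trajectories makes the hierarchy of entropy productions explicit (a one-line derivation from the definitions above):

$$\overline{\Sigma} = \big\langle \sigma(\pmb{x}|\alpha) \big\rangle = \big\langle \phi(\pmb{x}) \big\rangle + \big\langle \lambda(\alpha|\pmb{x}) \big\rangle = \Phi + \Lambda \geq \Phi,$$

since \(\Lambda \geq 0\) by nonnegativity of the KL divergence.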
We can also show that \(\lambda\) fulfills the detailed FT:
$$\frac{P(\lambda_{\pmb{x}})}{\tilde{P}(-\lambda_{\pmb{x}^\dag})} = e^{\lambda_{\pmb{x}}} $$
From Integrated FT, we obtain that \(\Lambda_{\pmb{x}} = \langle \lambda_{\pmb{x}} \rangle_{P(\lambda_{\pmb{x}})} \geq 0\)
By averaging over all trajectories \(\Lambda = \langle \Lambda_{\pmb{x}} \rangle_{\pmb{P}(\pmb{x})} \geq 0\)
Likelihood EP tells us how much more one can learn about the apparatus by observing a forward trajectory versus a backward trajectory
Further topics included in the paper
Future steps:
Thanks!