Nonequilibrium thermodynamics of uncertain stochastic processes

 

 

Jan Korbel & David Wolpert

WOST IV

Slides are available at: https://slides.com/jankorbel

Reference: https://arxiv.org/abs/2210.05249

Idealized experiment

[Figure: a quantity \(X\) is measured for a system coupled to a bath at temperature \(T\); the experiment is repeated over many trials (e.g., the 3rd trial yields \(T_3\) and \(X_3\), and so on), with the temperature effectively drawn from a distribution \(P(T)\)]

In reality

In the idealized experiment, we measure a quantity \(X\) and assume that the temperature \(T\) is known with infinite precision.

In reality, the temperature is measured with limited precision and can change between experiments, so effectively \(T = ?\)

In many experiments

  • We do not know the exact values of:
    • number of heat baths
    • temperatures
    • chemical potentials
    • energy spectrum
    • control protocol
    • transition rates
    • initial distribution

In real experiments, there is always some uncertainty about the system and its environment

Example: 3-state system

  • Consider a simple example of a 3-state system coupled to one of three apparatuses. 
  • We want to measure the distribution \(p_{t_f}(E)\) at final time \(t_f\)
  • There are two possible scenarios:

Example: 3-state system

Effective scenario: we do not know which apparatus is coupled to our system but we can repeat the experiment many times with the same apparatus

Phenomenological scenario: the apparatus is randomly rechosen each time we run the experiment

Consequences for the experiment

  • Whether the experimenter can adjust the protocol to the actual apparatus has significant consequences for the outcome of the experiment.

 

  • Let us illustrate the difference with a simple example of a moving optical tweezer with uncertain stiffness.
  • We consider a particle in a laser-trap potential described by the overdamped Langevin equation \(\dot{x} = -\frac{\partial V}{\partial x} + \xi\) with the quadratic potential \(V_k(x,t)  = \frac{k}{2} (x - \lambda(t))^2\), where \(k\) is the stiffness.
  • We want to move the trap from \(\lambda_i=0\) at \(t_i = 0\) to \(\lambda_f\) at time \(t_f\).
  • It is possible to calculate the optimal protocol that minimizes the average work (a simulation sketch follows below).
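As a rough illustration (my own sketch, not from the paper or slides), the following Python snippet simulates the overdamped Langevin dynamics above with unit mobility and \(k_B T = 1\) (both assumptions) and estimates the average work for a given dragging protocol. All parameter values are invented for illustration.

```python
# Minimal sketch: overdamped Langevin particle in a moving harmonic trap,
# with mobility and k_B T set to 1 (assumptions). Work is accumulated from
# the change of the potential at fixed x whenever lambda(t) is moved.
import numpy as np

rng = np.random.default_rng(0)

def simulate_work(k, protocol, t_f=1.0, dt=1e-2, n_traj=20_000):
    """Estimate the average work to drag the trap along `protocol` at stiffness k."""
    n_steps = int(round(t_f / dt))
    # equilibrium initial condition at lambda = 0: Gaussian with variance k_B T / k = 1/k
    x = rng.normal(0.0, 1.0 / np.sqrt(k), size=n_traj)
    work = np.zeros(n_traj)
    lam = protocol(0.0)
    for i in range(n_steps):
        # Euler-Maruyama step of dx/dt = -k (x - lambda) + xi
        x += -k * (x - lam) * dt + np.sqrt(2.0 * dt) * rng.normal(size=n_traj)
        lam_next = protocol((i + 1) * dt)
        # work increment = change of the potential at fixed x when the trap moves
        work += 0.5 * k * ((x - lam_next) ** 2 - (x - lam) ** 2)
        lam = lam_next
    return work.mean()

def linear_protocol(lam_f, t_f=1.0):
    return lambda t: lam_f * min(t, t_f) / t_f

print(simulate_work(k=2.0, protocol=linear_protocol(1.0)))
```

The same routine is reused below to compare the adapted and unadapted scenarios.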

 

 

Consequences for the experiment

  • Let us suppose that the stiffness can change from run to run due to imprecisions in the experimental setup; we assume the stiffness is sampled from a distribution \(p(k)\).
  • We compare two scenarios (a numerical comparison is sketched below the figure):
  1. Adapted: the experimenter can measure \(k\) for each run and adapt the protocol accordingly.
  2. Unadapted: the experimenter can only measure the average stiffness \(\bar{k}\) and uses the protocol optimized for that single stiffness \(\bar{k}\) in all runs.

[Figure: the average work as a function of the standard deviation of the distribution \(p(k)\)]
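Continuing the sketch above (and reusing its `simulate_work` and `rng`), the snippet below compares the two scenarios numerically. The \(k\)-dependent protocol is of the Schmiedl-Seifert form for the moving harmonic trap, taken here as an assumption, and the distribution \(p(k)\) and all parameters are again invented.

```python
# Sketch continuing the previous one: adapted vs. unadapted dragging protocols.
# The protocol is of the Schmiedl-Seifert form (assumed here, with mobility and
# k_B T equal to 1); the jumps at the endpoints arise from evaluating it on the
# open interval (0, t_f).
def trap_protocol(k, lam_f=1.0, t_f=1.0):
    def lam(t):
        if t <= 0.0:
            return 0.0
        if t >= t_f:
            return lam_f
        return lam_f * (k * t + 1.0) / (k * t_f + 2.0)
    return lam

ks = rng.gamma(shape=2.0, scale=1.0, size=50)    # assumed p(k), mean stiffness 2
k_bar = ks.mean()

W_adapted   = np.mean([simulate_work(k, trap_protocol(k))     for k in ks])
W_unadapted = np.mean([simulate_work(k, trap_protocol(k_bar)) for k in ks])
# With enough trajectories, the adapted average work should not exceed the unadapted one.
print(W_adapted, W_unadapted)
```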

Thermodynamics of systems coupled to uncertain environment

  • Consider a set of apparatuses \(\mathcal{A}\).
  • For each apparatus \(\alpha \in \mathcal{A}\), we have a system with precisely specified numbers of baths, temperatures, chemical potentials, etc., satisfying local detailed balance
  • We consider a probability distribution \(P^\alpha\) over the apparatuses

The effective value of a quantity \(X\) over the apparatuses is defined as

$$ \overline{X}:= \int \mathrm{d} P^\alpha X^\alpha$$

The effective distribution \(\bar{p}_x(t)\) fulfills the equation

  \( \dot{\bar{p}}_x(t) = \sum_{x'} \int \mathrm{d} P^\alpha K^\alpha_{xx'} p^\alpha_{x'}(t) \)   which is generally non-Markovian
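As a toy illustration of this averaging (my own example, not from the paper), one can mix the solutions of two candidate master equations:

```python
import numpy as np
from scipy.linalg import expm

# Two candidate rate matrices K^alpha (columns sum to zero) for a 2-state system,
# mixed with probabilities P^alpha; p^alpha(t) = expm(K^alpha t) p(0).
K = [np.array([[-1.0,  2.0],
               [ 1.0, -2.0]]),     # apparatus alpha = 1
     np.array([[-3.0,  0.5],
               [ 3.0, -0.5]])]     # apparatus alpha = 2
P_alpha = [0.7, 0.3]
p0 = np.array([1.0, 0.0])

def p_effective(t):
    """Effective distribution: the P^alpha-average of the individually propagated p^alpha(t)."""
    return sum(w * (expm(Ka * t) @ p0) for w, Ka in zip(P_alpha, K))

print(p_effective(0.5))
```

The evolution of \(\bar{p}_x(t)\) cannot in general be written as a closed Markovian master equation in \(\bar{p}\) alone, which is the sense in which the effective dynamics is non-Markovian.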

Effective ensemble stochastic thermodynamics

  • Expected internal energy is \(\bar{U} = \int \mathrm{d} P^\alpha \sum_x p^\alpha_t(x) u^\alpha(x)\)
  • Expected first law of thermodynamics

$$\dot{\bar{U}} = \dot{\bar{Q}} + \dot{\bar{W}}$$

  • Expected ensemble entropy \(\bar{S} =  - \sum_x \int \mathrm{d} P^\alpha p_t^\alpha(x) \ln p_t^\alpha(x)\)
  • Expected second law of thermodynamics

$$\dot{\bar{S}} = \dot{\bar{\Sigma}} + \dot{\bar{\mathcal{E}}}$$

 

  • where \(\dot{\bar{\Sigma}} \geq 0\) is the expected entropy production rate
  • and \(\dot{\bar{\mathcal{E}}} = \overline{\beta \dot{Q}}\) is the expected entropy flow; note that there is no explicit relation between \(\dot{\bar{\mathcal{E}}}\) and \(\dot{\bar{Q}}\)
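For concreteness, a minimal sketch of the expected ensemble quantities for a discrete set of apparatuses (all numbers invented for illustration):

```python
import numpy as np

P_alpha = np.array([0.7, 0.3])                       # probabilities of the apparatuses
p = np.array([[0.5, 0.3, 0.2],                       # p^alpha_t(x) for alpha = 1
              [0.2, 0.2, 0.6]])                      # p^alpha_t(x) for alpha = 2
u = np.array([[0.0, 1.0, 2.0],                       # u^alpha(x)   for alpha = 1
              [0.0, 1.5, 2.5]])                      # u^alpha(x)   for alpha = 2

U_bar = np.sum(P_alpha[:, None] * p * u)             # expected internal energy
S_bar = -np.sum(P_alpha[:, None] * p * np.log(p))    # expected ensemble entropy
print(U_bar, S_bar)
```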

Two types of optimal work

  • For each scenario, we can define the minimum dissipated work
  1. Adapted: For each apparatus \(\alpha\), we can choose the optimal protocol \(\lambda_\alpha(t)\) minimizing the work. The effective optimal work is $$W^{ad}_{\min} = \int \mathrm{d} P^\alpha \min_{\lambda_\alpha(t)}(W^\alpha[\lambda_\alpha(t)]) $$
  2. Unadapted: Since we cannot adapt the protocol to each apparatus separately, we find a single protocol that minimizes the effective work $$W^{unad}_{\min} = \min_{\lambda(t)} \left( \int \mathrm{d} P^\alpha W^{\alpha}[\lambda(t)] \right) $$
  • In both cases, the effective dissipated work is the difference between the average work, obtained either from the given set of protocols (adapted scenario) or from the single protocol (unadapted scenario), and the corresponding minimum work (a toy illustration of the two minimizations follows below).
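A minimal sketch of the two minimizations over a finite family of candidate protocols (work values invented for illustration):

```python
import numpy as np

P_alpha = np.array([0.7, 0.3])              # probabilities of the apparatuses
W = np.array([[1.2, 0.9, 1.5],              # W^alpha[lambda_j] for apparatus alpha = 1
              [0.8, 1.1, 0.7]])             # W^alpha[lambda_j] for apparatus alpha = 2

W_ad_min   = np.sum(P_alpha * W.min(axis=1))    # best protocol chosen per apparatus
W_unad_min = (P_alpha @ W).min()                # a single protocol for all apparatuses
print(W_ad_min, W_unad_min)                     # W_ad_min <= W_unad_min always holds
```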

Uncertainty in initial distribution

 

  • We consider that the apparatus is fixed, except for the initial distribution \(p_{t_i}\). We denote the set of possible initial distributions by \(p_{t_i}^\alpha\), where each distribution appears with probability \( p(\alpha) \).
  • We consider the distribution \(q_{t_i}\) that minimizes the expected work.
  • For any other distribution, it is possible to express the dissipated work as $$W_{diss}(p_{t_i}^\alpha) = D_{KL}(p_{t_i}^\alpha || q_{t_i}) - D_{KL}(p_{t_f}^\alpha || q_{t_f}) + W_{diss}(q_{t_i})$$
  • If \(p_{t_f}^\alpha = q_{t_f}\) for all \(\alpha\), i.e., the process has the same ending distribution regardless of the initial distribution, we obtain $$ \overline{W}_{diss} =\int \mathrm{d} P^\alpha W_{diss}(p_{t_i}^\alpha)  \geq \int \mathrm{d} P^\alpha  D_{KL}(p_{t_i}^\alpha || \bar{p}_{t_i}) = D_{JS}(\{p_{t_i}^\alpha\}^\alpha) $$
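A minimal numerical sketch of this Jensen-Shannon lower bound (toy distributions of my own choosing; work assumed to be measured in units of \(k_B T\)):

```python
import numpy as np

def kl(p, q):
    return np.sum(p * np.log(p / q))

P_alpha = np.array([0.5, 0.5])              # probabilities of the initial distributions
p_init = np.array([[0.9, 0.1],              # p_{t_i}^alpha for alpha = 1
                   [0.3, 0.7]])             # p_{t_i}^alpha for alpha = 2
p_bar = P_alpha @ p_init                    # effective initial distribution

D_JS = sum(w * kl(p, p_bar) for w, p in zip(P_alpha, p_init))
print(D_JS)   # the average dissipated work is at least this, under the stated condition
```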

Phenomenological EP and FTs

  • Let us now focus on the case where the apparatus changes after each stochastic trajectory \(\pmb{x}\) is generated. We are not able to measure \(\pmb{P}(\pmb{x}|\alpha)\) but only \(\overline{\pmb{P}}(\pmb{x})\)
  • Denote \(\pmb{P}^\alpha(\pmb{x}) \equiv \pmb{P} (\pmb{x}|\alpha)\) and \(\pmb{P}(\pmb{x},\alpha) = \pmb{P}(\pmb{x}|\alpha) P^\alpha\). \(^{\dag}\) denotes time-reversal
  • Effective ensemble EP \(\bar{\Sigma}= D_{KL}(\pmb{P}(\pmb{x},\alpha)||\pmb{P}^\dag(\pmb{x}^{\dag},\alpha)) \)
  • By using the chain rule for the KL divergence $$\underbrace{D_{KL}(\pmb{P}(\pmb{x},\alpha)||\pmb{P}^\dag (\pmb{x}^{\dag},\alpha))}_{\bar{\Sigma}} =  \underbrace{D_{KL}(\overline{\pmb{P}}(\pmb{x})||\overline{\pmb{P}}^{\dag}(\pmb{x}^{\dag}))}_{\Phi} + \underbrace{D_{KL}(\pmb{P}(\alpha|\pmb{x})||\pmb{P}^\dag(\alpha|\pmb{x}^\dag))}_{\Lambda}$$
  • Here \(\pmb{P}(\alpha|\pmb{x}) = \frac{\pmb{P}(\pmb{x}|\alpha) P^\alpha}{\overline{\pmb{P}}(\pmb{x})}\)
  • We have three types of EP
  1. Effective EP \(\bar{\Sigma}\)
  2. Phenomenological EP \(\Phi\)
  3. Likelihood EP \(\Lambda\)
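The decomposition is just the chain rule for the KL divergence, which can be checked numerically on toy distributions (here the time-reversed process is represented abstractly by a second arbitrary joint distribution over the same indices, an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
P  = rng.random((4, 2)); P  /= P.sum()      # forward  P(x, alpha): 4 trajectories, 2 apparatuses
Pr = rng.random((4, 2)); Pr /= Pr.sum()     # reversed P^dag(x^dag, alpha), same index set

def kl(p, q):
    return np.sum(p * np.log(p / q))

Sigma_bar = kl(P, Pr)                       # effective EP
Phi       = kl(P.sum(1), Pr.sum(1))         # phenomenological EP (marginals over alpha)
cond_P  = P  / P.sum(1, keepdims=True)      # P(alpha | x)
cond_Pr = Pr / Pr.sum(1, keepdims=True)     # P^dag(alpha | x^dag)
Lambda_ = np.sum(P * np.log(cond_P / cond_Pr))   # likelihood EP
print(Sigma_bar, Phi + Lambda_)             # chain rule: the two numbers coincide
print(Phi >= 0.0, Lambda_ >= 0.0)           # both contributions are non-negative
```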

Phenomenological EP

 

Phenomenological EP \(\Phi = D_{KL}(\overline{\pmb{P}}(\pmb{x})||\overline{\pmb{P}}^{\dag}(\pmb{x}^{\dag})) = \int \mathcal{D} \pmb{x}\, \overline{\pmb{P}}(\pmb{x}) \color{pink}{\ln \frac{\overline{\pmb{P}}(\pmb{x})}{{\overline{\pmb{P}}^\dag}({\pmb{x}}^\dag)}}\)

Phenomenological trajectory EP is \(\color{pink}{\phi(\pmb{x}) = \ln \frac{\overline{\pmb{P}}(\pmb{x})}{\overline{\pmb{P}}^\dag(\pmb{x}^{\dag})}}\)

It is straightforward to show that \(\phi\) fulfills the detailed fluctuation theorem

$$ \frac{P(\phi)}{P^\dag(-\phi)} = e^{\phi}$$

The phenomenological EP describes the thermodynamics built from the expected (effective) trajectory probability \(\overline{\pmb{P}}(\pmb{x})\)

It is a lower bound for the effective EP:   \(\overline{\Sigma} \geq \Phi\)
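Continuing the toy distributions from the decomposition sketch above, the integral version \(\langle e^{-\phi} \rangle = 1\) of this fluctuation theorem holds by construction whenever \(\overline{\pmb{P}}^\dag\) is normalized and trajectory reversal is a bijection (represented here by the identity on the toy index set):

```python
# Phenomenological trajectory EP from the same toy distributions.
P_bar, P_bar_rev = P.sum(1), Pr.sum(1)
phi = np.log(P_bar / P_bar_rev)
print(np.sum(P_bar * np.exp(-phi)))    # integral FT: <exp(-phi)> = 1
```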

Likelihood EP

 

Likelihood EP \(\Lambda = D_{KL}(\pmb{P}(\alpha|\pmb{x})||{\pmb{P}^{\dag}}(\alpha|\pmb{x}^\dag)) = \int \mathcal{D} \pmb{x} \int \mathrm{d}\alpha\, \pmb{P}(\pmb{x},\alpha) {\color{pink} \ln \frac{\pmb{P}(\alpha|\pmb{x})}{\pmb{P}^\dag(\alpha|\pmb{x}^\dag)}}\)

The likelihood trajectory EP is defined as \({\color{pink}\lambda(\alpha|\pmb{x})}:= \sigma(\pmb{x}|\alpha) - \phi(\pmb{x}) =  {\color{pink}\ln \frac{\pmb{P}(\alpha|\pmb{x})}{\pmb{P}^\dag(\alpha|\pmb{x}^\dag)}}\), where \(\sigma(\pmb{x}|\alpha)\) is the trajectory EP for a fixed apparatus \(\alpha\)

We can also show that \(\lambda\) fulfills the detailed FT:

$$\frac{P(\lambda_{\pmb{x}})}{P^\dag(-\lambda_{\pmb{x}^\dag})} = e^{\lambda_{\pmb{x}}}  $$

 

From the integral FT (together with Jensen's inequality), we obtain that \(\Lambda_{\pmb{x}} = \langle \lambda_{\pmb{x}} \rangle_{P(\lambda_{\pmb{x}})} \geq 0\)

 

By averaging over all trajectories, \(\Lambda = \langle \Lambda_{\pmb{x}} \rangle_{\overline{\pmb{P}}(\pmb{x})} \geq 0\)
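Continuing the same toy distributions, the likelihood trajectory EP and its integral FT:

```python
# Likelihood trajectory EP from the same toy distributions.
lam = np.log(cond_P / cond_Pr)                 # lambda(alpha | x)
Lambda_x = np.sum(cond_P * lam, axis=1)        # one conditional KL per trajectory, each >= 0
print(Lambda_x.min() >= 0.0)
print(np.sum(P * np.exp(-lam)))                # integral FT: <exp(-lambda)> = 1
```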

Likelihood EP tells us how much more one can learn about the apparatus by observing a forward trajectory versus a backward trajectory

Example: 2-state system with uncertain temperature
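As a concrete stand-in for this example (my own toy parameters and a simple partial-resampling dynamics, not the paper's model), the following sketch enumerates all discrete-time trajectories of a 2-state system whose inverse temperature takes one of two values and evaluates the three entropy productions exactly; the ordering \(\bar{\Sigma} \geq \Phi \geq 0\) can be read off from the output.

```python
import itertools
import numpy as np

eps = 1.0                                    # energy gap of the 2-state system (k_B = 1)
betas   = [0.5, 2.0]                         # the two possible inverse temperatures
P_alpha = [0.5, 0.5]                         # their probabilities
p0 = np.array([0.5, 0.5])                    # known initial distribution
n_steps = 3

def transition_matrix(beta, a=0.3):
    """Partial-resampling dynamics: with probability a, jump to a Boltzmann sample.
    This satisfies detailed balance at inverse temperature beta."""
    pi = np.exp(-beta * np.array([0.0, eps])); pi /= pi.sum()
    return (1.0 - a) * np.eye(2) + a * np.tile(pi, (2, 1))    # T[x, x'] = P(x -> x')

def path_probs(beta):
    T = transition_matrix(beta)
    p_final = p0 @ np.linalg.matrix_power(T, n_steps)
    fwd, rev = {}, {}
    for x in itertools.product([0, 1], repeat=n_steps + 1):
        weight = np.prod([T[x[t], x[t + 1]] for t in range(n_steps)])
        fwd[x] = p0[x[0]] * weight          # forward path probability
        rev[x] = p_final[x[0]] * weight     # reverse process: starts from p_final, same rates
    return fwd, rev

paths = list(itertools.product([0, 1], repeat=n_steps + 1))
procs = {a: path_probs(b) for a, b in enumerate(betas)}

# Effective EP: average over apparatuses of the per-apparatus trajectory EP.
Sigma_bar = sum(P_alpha[a] * procs[a][0][x] * np.log(procs[a][0][x] / procs[a][1][x[::-1]])
                for a in (0, 1) for x in paths)
# Phenomenological EP: built from the apparatus-averaged path probabilities only.
P_bar     = {x: sum(P_alpha[a] * procs[a][0][x]         for a in (0, 1)) for x in paths}
P_bar_rev = {x: sum(P_alpha[a] * procs[a][1][x[::-1]]   for a in (0, 1)) for x in paths}
Phi = sum(P_bar[x] * np.log(P_bar[x] / P_bar_rev[x]) for x in paths)
print(Sigma_bar, Phi, Sigma_bar - Phi)    # effective, phenomenological, likelihood EP
```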

Other topics

Further topics included in the paper

  • Maximal work extraction with uncertain temperatures
  • Dynamics of the thermodynamic value of information when there are uncertain thermodynamic parameters                    

 

 

Future steps:

  • Systems with uncertain energy spectra
  • Experiments with uncertain control protocols
  • Complete analysis of maximal work extraction 
  • Extension to time-dependent apparatuses
  • Extension to Hidden Markov models

Thanks!