Jan Korbel
David Wolpert
22nd International Symposium on "Disordered Systems:
Theory and Its Applications" (DSS-2022)
slides can be found at: slides.com/jankorbel
Microscopic systems
Classical mechanics (QM,...)
Mesoscopic systems
Stochastic thermodynamics
Macroscopic systems
Thermodynamics
Trajectory TD
Ensemble TD
Stochastic Thermodynamics is a thermodynamic theory
for mesoscopic, non-equilibrium physical systems
interacting with equilibrium thermal (and/or chemical)
reservoirs
Statistical mechanics
- Maxwell, Boltzmann, Planck, Clausius, Gibbs...
- Macroscopic systems (N→∞) in equilibrium (no time dependence of measurable quantities - thermoSTATICS)
- General structure of thermodynamics
- Applications: engines, refrigerators, air-conditioning,...
Carnot bound: efficiency $\eta \le 1 - \frac{T_1}{T_2}$
Car engines: 30-50%
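As a quick numeric illustration of the Carnot bound (the reservoir temperatures below are assumed, not from the slides):

```python
# Carnot bound on heat-engine efficiency: eta <= 1 - T_cold / T_hot.
# Illustrative temperatures: combustion around ~1500 K, exhaust near ~350 K.

def carnot_efficiency(t_cold: float, t_hot: float) -> float:
    """Upper bound on the efficiency of any engine between two reservoirs."""
    return 1.0 - t_cold / t_hot

eta_max = carnot_efficiency(350.0, 1500.0)
print(f"Carnot bound: {eta_max:.2f}")  # well above the 30-50% real engines achieve
```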
Zeroth law:
Temperature can be measured. $T_A = T_B$ if $A$ and $B$ are in equilibrium.
First law (Clausius 1850, Helmholtz 1847):
Energy is conserved. $\mathrm{d}U = \delta Q - \delta W$
Second law (Carnot 1824, Clausius 1854, Kelvin):
Heat cannot be fully transformed into work. $\mathrm{d}S \ge \frac{\delta Q}{T}$
Third law:
We cannot bring the system to the absolute zero temperature in a finite number of steps. $\lim_{T \to 0} S(T) = 0$
- Onsager, Rayleigh...
- Systems close to equilibrium - linear response theory
- Local equilibrium: subsystems $a, b, c, \dots$ are each in equilibrium
Total entropy $S \approx S_a + S_b + S_c + \dots$
Entropy production $\sigma^a = \frac{\mathrm{d}S_a}{\mathrm{d}t} = \sum_i Y_i^a J_i^a$
$Y_i^a$ - thermodynamic forces; $J_i^a$ - thermodynamic currents
4th law of thermodynamics (Onsager 1931): $\sigma = \sum_{ij} L_{ij} \Gamma_i \Gamma_j$
$\Gamma_i = Y_i^a - Y_i^b$ - affinity, $L_{ij}$ - symmetric
efficiency ≲1
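A minimal numeric sketch of the 4th law, assuming an illustrative symmetric, positive-definite Onsager matrix $L$ (all values below are assumed):

```python
import numpy as np

# Entropy production in the linear regime: sigma = sum_ij L_ij Gamma_i Gamma_j.
# L must be symmetric (Onsager reciprocity) and positive semi-definite so that
# sigma >= 0 for every affinity vector Gamma.
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # symmetric: L_ij = L_ji
assert np.allclose(L, L.T)

rng = np.random.default_rng(0)
for _ in range(1000):
    Gamma = rng.normal(size=2)        # random thermodynamic affinities
    sigma = Gamma @ L @ Gamma         # quadratic form sum_ij L_ij Gamma_i Gamma_j
    assert sigma >= 0.0               # second law in the linear regime
print("sigma >= 0 for all sampled affinities")
```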
1.) Consider a linear Markov (= memoryless) system with distribution $p_i(t)$.
Its evolution is described by the master equation
$\dot{p}_i(t) = \sum_j \left[ w_{ij} p_j(t) - w_{ji} p_i(t) \right]$
where $w_{ij}$ is the transition rate from state $j$ to state $i$.
2.) Entropy of the system - Shannon entropy $S(P) = -\sum_i p_i \log p_i$. The equilibrium distribution is obtained by maximizing $S(P)$ under the constraint of fixed average energy $U(P) = \sum_i p_i \epsilon_i$:
$p_i^{eq} = \frac{1}{Z} \exp(-\beta \epsilon_i)$, where $\beta = \frac{1}{k_B T}$, $Z = \sum_j \exp(-\beta \epsilon_j)$
3.) Detailed balance - the stationary state ($\dot{p}_i = 0$) coincides with the equilibrium state ($p_i^{eq}$). We obtain
$\frac{w_{ij}}{w_{ji}} = \frac{p_i^{eq}}{p_j^{eq}} = e^{\beta(\epsilon_j - \epsilon_i)}$
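A minimal sketch (energies and temperature are assumed values) that integrates the master equation for rates obeying detailed balance and checks relaxation to the Boltzmann distribution:

```python
import numpy as np

# Three-state system; energies and inverse temperature are assumed values.
eps = np.array([0.0, 1.0, 2.0])
beta = 1.0

# Rates obeying detailed balance: w[i, j] is the rate j -> i, and
# w_ij / w_ji = exp(beta * (eps_j - eps_i)).  One (assumed) such choice:
w = np.exp(beta * (eps[None, :] - eps[:, None]) / 2.0)
np.fill_diagonal(w, 0.0)

# Euler integration of the master equation p_i' = sum_j (w_ij p_j - w_ji p_i)
p = np.array([1.0, 0.0, 0.0])          # start far from equilibrium
dt = 0.001
for _ in range(20000):
    p = p + dt * (w @ p - w.sum(axis=0) * p)

p_eq = np.exp(-beta * eps) / np.exp(-beta * eps).sum()
print(p, p_eq)                         # p has relaxed to the Boltzmann distribution
```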
4.) Second law of thermodynamics:
$\dot{S} = -\sum_i \dot{p}_i \log p_i = \frac{1}{2} \sum_{ij} (w_{ij} p_j - w_{ji} p_i) \log \frac{p_j}{p_i}$
$= \underbrace{\frac{1}{2} \sum_{ij} (w_{ij} p_j - w_{ji} p_i) \log \frac{w_{ij} p_j}{w_{ji} p_i}}_{\dot{S}_i} + \underbrace{\frac{1}{2} \sum_{ij} (w_{ij} p_j - w_{ji} p_i) \log \frac{w_{ji}}{w_{ij}}}_{\dot{S}_e}$
$\dot{S}_i \ge 0$ - entropy production rate (2nd law of TD)
$\dot{S}_e = \beta \dot{Q}$ - entropy flow rate
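The decomposition above can be checked numerically; the rate matrix and distribution below are assumed, arbitrary values:

```python
import numpy as np

# Numerical check of dS/dt = S_i' + S_e' for assumed, arbitrary rates and
# an arbitrary non-stationary distribution (diagonal terms cancel everywhere).
rng = np.random.default_rng(1)
n = 4
w = rng.uniform(0.5, 2.0, size=(n, n))   # w[i, j]: rate j -> i
p = rng.uniform(size=n)
p /= p.sum()

J = w * p[None, :]                       # J[i, j] = w_ij p_j, flux j -> i
flux = J - J.T                           # w_ij p_j - w_ji p_i

S_dot = 0.5 * np.sum(flux * np.log(p[None, :] / p[:, None]))
S_i = 0.5 * np.sum(flux * np.log(J / J.T))      # entropy production rate
S_e = 0.5 * np.sum(flux * np.log(w.T / w))      # entropy flow rate

assert S_i >= 0.0                        # 2nd law
assert abs(S_dot - (S_i + S_e)) < 1e-9   # exact decomposition
print(S_dot, S_i, S_e)
```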
5.) Trajectory thermodynamics - consider stochastic trajectory
$x(t) = (x_0, t_0; x_1, t_1; \dots)$. Energy $E_x = E_x(\lambda(t))$, $\lambda(t)$ - control protocol
Probability of observing $x(t)$: $P(x(t))$
Time reversal: $\tilde{x}(t) = x(T - t)$
Reversed protocol: $\tilde{\lambda}(t) = \lambda(T - t)$
Probability of observing the reversed trajectory under the reversed protocol: $\tilde{P}(\tilde{x}(t))$
6.) Fluctuation theorems
Trajectory entropy: $s(t) = -\log p_{x(t)}$
Trajectory 2nd law: $\Delta s = \Delta s_i + \Delta s_e$
Relation to the trajectory probabilities:
$\log \frac{P(x(t))}{\tilde{P}(\tilde{x}(t))} = \Delta s_i$
Detailed fluctuation theorem:
$\frac{P(\Delta s_i)}{\tilde{P}(-\Delta s_i)} = e^{\Delta s_i}$
Integrated fluctuation theorem: $\langle e^{-\Delta s_i} \rangle = 1 \Rightarrow \langle \Delta s_i \rangle = \Delta S_i \ge 0$
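A Monte Carlo sketch of the integrated fluctuation theorem for a two-state system relaxing from a non-equilibrium initial distribution; the rates, initial distribution, and helper functions are assumed, illustrative choices, not from the slides:

```python
import math
import random

# Two-state system, constant rates: a = rate 0 -> 1, b = rate 1 -> 0 (assumed).
a, b = 1.0, 2.0
T = 1.0                       # trajectory length
p0 = [0.9, 0.1]               # assumed non-equilibrium initial distribution

def p1_of_t(t):
    """Exact master-equation solution for the occupation of state 1."""
    pi1 = a / (a + b)                         # stationary value
    return pi1 + (p0[1] - pi1) * math.exp(-(a + b) * t)

def sample_delta_si(rng):
    """One Gillespie trajectory; returns its entropy production Delta s_i."""
    x = 0 if rng.random() < p0[0] else 1
    s0 = -math.log(p0[x])                     # initial stochastic entropy
    t, ds_medium = 0.0, 0.0
    while True:
        t += rng.expovariate(a if x == 0 else b)
        if t >= T:
            break
        # medium entropy per jump: log(forward rate / backward rate)
        ds_medium += math.log(a / b) if x == 0 else math.log(b / a)
        x = 1 - x
    p1 = p1_of_t(T)
    sT = -math.log([1.0 - p1, p1][x])         # final stochastic entropy
    return (sT - s0) + ds_medium              # Delta s_i = Delta s + Delta s_e

rng = random.Random(42)
samples = [sample_delta_si(rng) for _ in range(50000)]
ift = sum(math.exp(-ds) for ds in samples) / len(samples)
mean = sum(samples) / len(samples)
print(ift, mean)   # <exp(-Delta s_i)> close to 1, <Delta s_i> >= 0
```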
(with D. Wolpert - New J. Phys. doi:10.1088/1367-2630/abea46)
Requirements:
Studied in information theory since the 1960s,
used in physics since the 1990s.
Main aim: study thermodynamics of systems with non-Boltzmannian equilibrium distributions (due to correlations, long-range interactions, ...)
We consider a sum-class form of entropy:
$S(P) = f\left(\sum_m g(p_m)\right)$
Maximum entropy principle: maximize $S(p)$ subject to the constraints that $p$ is normalized and the expected energy has a given value.
Solution - MaxEnt distribution: $p_m^\star = (g')^{-1}\left(\frac{\alpha + \beta \epsilon_m}{C_f}\right)$,
$C_f = f'\left(\sum_m g(p_m)\right)$
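A sketch of the MaxEnt construction for one assumed sum-class example: $f$ = identity (so $C_f = 1$) and a Tsallis-type $g(p) = (p - p^q)/(q-1)$; the multiplier $\alpha$ is fixed by bisection, and maximality is checked by perturbing within the constraints. All parameter values are assumed:

```python
import numpy as np

# Sum-class entropy with f = identity (C_f = 1) and the Tsallis-type
# g(p) = (p - p**q) / (q - 1); q, energies, and beta are assumed values.
q = 1.5
eps = np.array([0.0, 1.0, 2.0])
beta = 0.2

def g(p):
    return (p - p**q) / (q - 1.0)

def g_prime_inv(y):
    # g'(p) = (1 - q * p**(q-1)) / (q - 1); invert g'(p) = y
    return ((1.0 - (q - 1.0) * y) / q) ** (1.0 / (q - 1.0))

def p_star(alpha):
    # MaxEnt distribution p_m = (g')^{-1}((alpha + beta * eps_m) / C_f)
    return g_prime_inv(alpha + beta * eps)

# Fix the Lagrange multiplier alpha by bisection so that p_star sums to 1.
lo, hi = -2.0, 1.5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if p_star(mid).sum() > 1.0:
        lo = mid
    else:
        hi = mid
p = p_star(0.5 * (lo + hi))

def S(pp):
    return g(pp).sum()

# Maximality: perturbations preserving normalization and mean energy
# (for 3 states there is a single such direction) must lower the entropy.
v = np.array([eps[1] - eps[2], eps[2] - eps[0], eps[0] - eps[1]])
for t in (1e-3, -1e-3):
    assert S(p + t * v) < S(p)
print(p, S(p))
```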
Theorem: Requirements 1-3 imply
1) $\Omega(p_m) = \exp(-g'(p_m))$
2) $C(p) = f'\left(\sum_m g(p_m)\right)$
Corollary:
a) $S(p) = f\left(-\sum_m \int_0^{p_m} \log \Omega(z) \, \mathrm{d}z\right)$
b) $\frac{w_{mn}}{w_{nm}} = \frac{\epsilon_m - \epsilon_n}{T}$
$\dot{S} = C_f \sum_m \dot{p}_m g'(p_m)$
$= \frac{C_f}{2} \sum_{mn} (J_{mn} - J_{nm}) (g'(p_m) - g'(p_n))$
$= \frac{C_f}{2} \sum_{mn} (J_{mn} - J_{nm}) (\Phi_{mn} - \Phi_{nm})$
$\quad + \frac{C_f}{2} \sum_{mn} (J_{mn} - J_{nm}) (g'(p_m) + \Phi_{nm} - g'(p_n) - \Phi_{mn})$
$= \underbrace{\frac{C_f}{2} \sum_{mn} (J_{mn} - J_{nm}) (\phi(J_{mn}) - \phi(J_{nm}))}_{\dot{S}_i}$
$\quad + \underbrace{\frac{C_f}{2} \sum_{mn} (J_{mn} - J_{nm}) (g'(p_m) + \phi(J_{nm}) - g'(p_n) - \phi(J_{mn}))}_{\dot{S}_e}$
$\dot{S}_i \ge 0 \Rightarrow \phi$ - increasing
$\dot{S}_e = \beta \dot{Q} \Rightarrow C_f \left[ g'(p_m) + \phi(J_{nm}) - g'(p_n) - \phi(J_{mn}) \right] = \frac{\epsilon_n - \epsilon_m}{T}$
$\Rightarrow \phi(J_{mn}) = j(w_{mn}) - g'(p_n)$
$\Rightarrow J_{mn} = \psi(j(w_{mn}) - g'(p_n))$, $\psi = \phi^{-1}$ - increasing. □
$j(w_{mn}) - j(w_{nm}) = \frac{\epsilon_n - \epsilon_m}{C_f T}$
$\beta = \frac{1}{T}$
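The positivity of $\dot{S}_i$ above reduces (up to the positive factor $C_f/2$) to the elementary fact that $(x - y)(\phi(x) - \phi(y)) \ge 0$ for increasing $\phi$; a quick check with $\mathrm{arcsinh}$ as an assumed, illustrative increasing $\phi$ and assumed random fluxes:

```python
import numpy as np

# The sign of S_i' rests on (x - y) * (phi(x) - phi(y)) >= 0 for increasing phi.
phi = np.arcsinh                              # one assumed increasing choice

rng = np.random.default_rng(7)
J = rng.uniform(0.0, 3.0, size=(5, 5))        # assumed one-way fluxes J_mn
flux = J - J.T                                # J_mn - J_nm
S_i_terms = flux * (phi(J) - phi(J.T))        # term-by-term contributions to S_i'
assert np.all(S_i_terms >= 0.0)               # each term is nonnegative
print("S_i' >= 0 term by term, total:", 0.5 * S_i_terms.sum())
```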
Stochastic entropy: $s(t) = \log \frac{1}{\Omega(p_{x(t)})} = g'(p_{x(t)})$
The detailed fluctuation theorem holds:
$\frac{P(\Delta s_i)}{\tilde{P}(-\Delta s_i)} = e^{\Delta s_i}$