Modeling Topological Polymers

Clayton Shonkwiler

Colorado State University

http://shonkwiler.org

11.08.18

Collaborators

Jason Cantarella

U. of Georgia

Tetsuo Deguchi

Ochanomizu U.

Erica Uehara

Ochanomizu U.

Funding: Simons Foundation

Linear polymers

A linear polymer is a chain of molecular units with free ends.

Polyethylene

Nicole Gordine [CC BY 3.0] from Wikimedia Commons

Shape of linear polymers

In solution, linear polymers become crumpled:

Protonated P2VP

Roiter–Minko, J. Am. Chem. Soc. 127 (2005), 15688–15689

[CC BY-SA 3.0], from Wikimedia Commons

Ring polymers

Octamethylcyclotetrasiloxane

(Common in cosmetics, bad for fish)

Ring biopolymers

Most known cyclic polymers are biological

Material properties

Ring polymers have weird properties; e.g.,

Thermus aquaticus

Uses cyclic archaeol, a heat-resistant lipid

Grand Prismatic Spring

Home of T. aquaticus; 170 °F

Topological polymers

A topological polymer joins monomers in some graph type:

Petersen graph

In biology

Topological biopolymers have graph types that are extremely complicated (and thought to be random):

Wood-based nanofibrillated cellulose

Qspheroid4 [CC BY-SA 4.0], from Wikimedia Commons

Synthetic topological polymers

The Tezuka lab in Tokyo can now synthesize many topological polymers in usable quantities

Y. Tezuka, Acc. Chem. Res. 50 (2017), 2661–2672

Main Question

What is the probability distribution on the shapes of topological polymers in solution?

Ansatz

  • Linear polymers: random walks with independent steps
  • Ring polymers: random walks with steps conditioned on closure
  • Topological polymers: random walks with steps conditioned on ???

Functions and vector fields

Suppose \(\mathfrak{G}\) is a directed graph with \(\mathfrak{V}\) vertices and \(\mathfrak{E}\) edges.

Definition. A function on \(\mathfrak{G}\) is a map \(f:\{v_1,\dots , v_\mathfrak{V}\} \to \mathbb{R}\). Functions are vectors in \(\mathbb{R}^\mathfrak{V}\).

Definition. A vector field on \(\mathfrak{G}\) is a map \(w:\{e_1,\dots , e_\mathfrak{E}\} \to \mathbb{R}\). Vector fields are vectors in \(\mathbb{R}^\mathfrak{E}\).

Gradient and divergence

By analogy with vector calculus:

Definition. The gradient of a function \(f\) is the vector field

(\operatorname{grad} f)(e_i) = f(\operatorname{head} e_i) - f(\operatorname{tail} e_i).

Definition. The divergence of a vector field \(w\) is the function

(\operatorname{div} w)(v_i) = \sum_{j=1}^\mathfrak{E} \begin{cases} +w(e_j) & v_i = \operatorname{head} e_j \\ -w(e_j) & v_i = \operatorname{tail} e_j \\ 0 & \text{else} \end{cases}

Gradient and divergence as matrices

\operatorname{grad}_{ij} = \begin{cases} +1 & v_j = \operatorname{head} e_i \\ -1 & v_j = \operatorname{tail} e_i \\ 0 & \text{else} \end{cases}
\operatorname{div}_{ij} = \begin{cases} +1 & v_i = \operatorname{head} e_j \\ -1 & v_i = \operatorname{tail} e_j \\ 0 & \text{else} \end{cases}

So if \(B = \operatorname{div}\), which is \(\mathfrak{V} \times \mathfrak{E}\), then \(\operatorname{grad} = B^T\).
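To make the conventions concrete, here is a minimal NumPy sketch (the particular graph, edge orientations, and variable names are illustrative choices, not from the talk) that builds the divergence matrix \(B\), takes \(\operatorname{grad} = B^T\), and checks that \(BB^T\) matches the degree-minus-adjacency description of the graph Laplacian used later.

```python
import numpy as np

# A small directed graph: 4 vertices, edges listed as (tail, head) pairs.
# The graph and orientations are arbitrary illustrative choices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
V, E = 4, len(edges)

# B = div is the V x E matrix with +1 at the head and -1 at the tail of each edge.
B = np.zeros((V, E))
for j, (tail, head) in enumerate(edges):
    B[head, j] = 1.0
    B[tail, j] = -1.0

grad = B.T          # grad is the E x V transpose of div
L = B @ B.T         # div grad, the graph Laplacian

# Sanity check against the degree/adjacency description of the Laplacian.
A = np.zeros((V, V))
for tail, head in edges:
    A[tail, head] = A[head, tail] = 1.0
D = np.diag(A.sum(axis=1))
assert np.allclose(L, D - A)
```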

Helmholtz’s Theorem

Fact.

The space \(\mathbb{R}^\mathfrak{E}\) of vector fields on \(\mathfrak{G}\) has an orthogonal decomposition

\mathbb{R}^\mathfrak{E} = (\text{gradient fields}) \oplus (\text{divergence-free fields})

Corollary.

A vector field \(w\) is a gradient (conservative field) if and only if the (signed) sum of \(w\) around every loop in \(\mathfrak{G}\) vanishes.

Gaussian embeddings

Definition.

A function \(f:\{v_i\} \to \mathbb{R}^d\) determines an embedding of \(\mathfrak{G}\) into \(\mathbb{R}^d\). The displacement vectors between adjacent vertices are given by \(\operatorname{grad}f\).

A Gaussian random embedding of \(\mathfrak{G}\) has displacements sampled from a standard multivariate Gaussian on \((\text{gradient fields})^d\subset \left(\mathbb{R}^\mathfrak{E}\right)^d\).

Useful lemma

Lemma. The projections of a Gaussian random embedding of \(\mathfrak{G}\) in \(\mathbb{R}^d\) onto each coordinate axis are independent Gaussian random embeddings of \(\mathfrak{G}\) into \(\mathbb{R}\).

So we can restrict to Gaussian embeddings of \(\mathfrak{G}\) in \(\mathbb{R}\).

Main Theorem

Since \(\operatorname{grad}f=0 \,\Longleftrightarrow f\) is a constant function, \(\operatorname{grad}f\) only determines \(f\) up to a constant. So assume our random embeddings are centered; i.e., \(\sum f(v_i) = 0\).

Theorem [w/ CDU; cf. James, 1947]

The distribution of vertex positions on the \((\mathfrak{V}-1)\)-dimensional subspace of centered embeddings is

\mathcal{N}(0,(BB^T)^+) = \mathcal{N}(0,L^+)

\(BB^T = \operatorname{div}\operatorname{grad} = L \) is the graph Laplacian

L_{ij} = \begin{cases} \operatorname{deg}(v_i) & i=j \\ -1 & v_i,v_j \text{ joined by an edge} \\ 0 & \text{else} \end{cases}

Pseudoinverse

For symmetric matrices like \(L \), with eigenvalues \(\lambda_i\) and eigenvectors \(v_i\), the pseudoinverse \(L^+\) is the symmetric matrix defined by:

  • The eigenvalues \(\lambda_i'\) of \(L^+\) are
\lambda_i' = \begin{cases} \frac{1}{\lambda_i} & \lambda_i \neq 0 \\ 0 & \lambda_i = 0 \end{cases}
  • The eigenvectors \(v_i'\) of \(L^+\) are \(v_i' = v_i\).
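A minimal sketch of this definition in NumPy (the function name and the path-graph example are mine): build the pseudoinverse from the eigendecomposition by inverting only the nonzero eigenvalues, then compare with numpy.linalg.pinv.

```python
import numpy as np

def symmetric_pseudoinverse(M, tol=1e-12):
    """Pseudoinverse of a symmetric matrix: keep the eigenvectors,
    invert the nonzero eigenvalues, send zero eigenvalues to zero."""
    eigenvalues, eigenvectors = np.linalg.eigh(M)
    inverted = np.zeros_like(eigenvalues)
    nonzero = np.abs(eigenvalues) > tol
    inverted[nonzero] = 1.0 / eigenvalues[nonzero]
    return eigenvectors @ np.diag(inverted) @ eigenvectors.T

# Example: Laplacian of a path on 4 vertices (an illustrative choice).
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
assert np.allclose(symmetric_pseudoinverse(L), np.linalg.pinv(L))
```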

Sampling algorithm

Algorithm.

  • Generate \(w\) from \(\mathcal{N}(0,I_\mathfrak{E})\)
  • Let \(f = {B^T}^+ w\), which is \(\mathcal{N}(0,L^+)\)
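A hedged NumPy sketch of this algorithm (the example graph and helper names are mine, not from the talk): draw \(w \sim \mathcal{N}(0, I_\mathfrak{E})\), set \(f = (B^T)^+ w\), and check empirically that the sample covariance of the vertex positions approaches \(L^+\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Divergence matrix B for a small illustrative graph (a 4-cycle with one chord).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
V, E = 4, len(edges)
B = np.zeros((V, E))
for j, (tail, head) in enumerate(edges):
    B[head, j], B[tail, j] = 1.0, -1.0

grad_pinv = np.linalg.pinv(B.T)  # (B^T)^+, maps edge data to centered vertex data

def sample_embedding(d=3):
    """One Gaussian random embedding in R^d; column k is the k-th coordinate."""
    w = rng.standard_normal((E, d))   # w ~ N(0, I_E), independently per coordinate
    return grad_pinv @ w              # f = (B^T)^+ w, automatically centered

# Empirical check in one dimension: the sample covariance should approach
# L^+ = (B B^T)^+.
samples = np.stack([sample_embedding(d=1)[:, 0] for _ in range(20000)])
print(np.allclose(np.cov(samples.T), np.linalg.pinv(B @ B.T), atol=0.05))
```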

Expected distances

Theorem [w/ CDU; cf. James, 1947]

The distribution of vertex positions on the \((\mathfrak{V}-1)\)-dimensional subspace of centered embeddings is

\mathcal{N}(0,(BB^T)^+) = \mathcal{N}(0,L^+)

Corollary.

The expected squared distance between vertex \(i\) and vertex \(j\) is

E[(f_i-f_j)^2] = E[f_i^2+f_j^2-2f_if_j] = L_{ii}^+ + L_{jj}^+ - L_{ij}^+-L_{ji}^+
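As a minimal illustration (the path graph is my choice, not from the talk), the corollary can be evaluated directly from \(L^+\); for a path the answer is the graph distance \(|i-j|\), since the increments along a tree are independent standard Gaussians and their variances add.

```python
import numpy as np

# Illustrative graph: a path on 4 vertices.
edges = [(0, 1), (1, 2), (2, 3)]
V, E = 4, len(edges)
B = np.zeros((V, E))
for j, (tail, head) in enumerate(edges):
    B[head, j], B[tail, j] = 1.0, -1.0

Lp = np.linalg.pinv(B @ B.T)     # L^+

# Expected squared distances (per coordinate, d = 1) from the corollary.
expected = np.array([[Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for j in range(V)]
                     for i in range(V)])
print(np.round(expected, 6))     # for a path, the (i, j) entry equals |i - j|
```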

Example: multitheta graphs

Definition.

An \((m,n)\)-theta graph consists of \(m\) arcs of \(n\) edges connecting two junctions.

\((5,20)\)-theta graph

Theorem [Deguchi–Uehara, 2017]

The expected squared distance between junctions is \(\frac{dn}{m}\).

Amazing fact about \(L^+\)

Proposition [Nash–Williams (resistors, 1960s), James (springs, 1947)]

The expression \(L_{ii}^++L_{jj}^+-L_{ij}^+-L_{ji}^+\) is:

  • the resistance between \(v_i\) and \(v_j\) if all edges of \(\mathfrak{G}\) are unit resistors;
  • the reciprocal of the force between \(v_i\) and \(v_j\) if all edges are unit springs and \(v_i\) and \(v_j\) are one unit apart.

Randall Munroe, [CC BY-NC 2.5], from xkcd

Deguchi–Uehara result, redux

R=R_1+R_2+\dots +R_n
\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2} + \dots + \frac{1}{R_n}
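Filling in the step these series/parallel rules point at (my reconstruction, using the proposition above): each arc of the \((m,n)\)-theta graph is \(n\) unit resistors in series, and the \(m\) arcs join the two junctions in parallel,

R_{\text{arc}} = \underbrace{1 + \dots + 1}_{n} = n, \qquad \frac{1}{R_{\text{junctions}}} = \underbrace{\tfrac{1}{n} + \dots + \tfrac{1}{n}}_{m} = \frac{m}{n}

so the effective resistance between the junctions is \(n/m\). By the proposition, this is the expected squared distance per coordinate, and summing over the \(d\) independent coordinates recovers the Deguchi–Uehara value \(\frac{dn}{m}\).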

Expected radius of gyration

Theorem [w/ CDU; cf. Estrada–Hatano, 2010]

If \(\lambda_i\) are the eigenvalues of \(L\), the expected squared radius of gyration of a Gaussian random embedding of \(\mathfrak{G}\) in \(\mathbb{R}^d\) is

\frac{d}{\mathfrak{V}} \sum \frac{1}{\lambda_i} = \frac{d}{\mathfrak{V}} \operatorname{tr} L^+

This quantity is proportional to the Kirchhoff index of \(\mathfrak{G}\).
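A minimal numerical sketch (the \(n\)-cycle and the standard spectral identity \(\operatorname{tr}L^+ = (n^2-1)/12\) for it are my illustrative choices, not from the talk): compute \(\frac{d}{\mathfrak{V}}\operatorname{tr}L^+\) directly and compare with the closed form.

```python
import numpy as np

def expected_rg_squared(L, d=3):
    """Expected squared radius of gyration (d / V) * tr(L^+)."""
    V = L.shape[0]
    return d * np.trace(np.linalg.pinv(L)) / V

def cycle_laplacian(n):
    """Graph Laplacian of the n-cycle."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] -= 1.0
        L[(i + 1) % n, i] -= 1.0
    return L

for n in (10, 100, 1000):
    numeric = expected_rg_squared(cycle_laplacian(n), d=3)
    closed_form = 3 * (n**2 - 1) / (12 * n)   # tr L^+ = (n^2 - 1)/12 for the n-cycle
    print(n, round(numeric, 4), round(closed_form, 4))
```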

Distinguishing graph types

Y. Tezuka, Acc. Chem. Res. 50 (2017), 2661–2672

The suspects

T. Suzuki et al., J. Am. Chem. Soc. 136 (2014), 10148–10155

\(K_{3,3}\) (subdivided)

ladder graph (subdivided)

Topological polymers

Size exclusion chromatography

Distinguishing graph types in the lab

T. Suzuki et al., J. Am. Chem. Soc. 136 (2014), 10148–10155

larger molecule

smaller molecule

Different sizes

Proposition [with Cantarella, Deguchi, & Uehara]

If each edge is subdivided equally to make \(\mathfrak{V}\) vertices total:

E[R_g^2(K_{3,3})] = \frac{108 - 261 \mathfrak{V} + 60 \mathfrak{V}^2 + 17 \mathfrak{V}^3}{486 \mathfrak{V}^2} \sim 0.12 + 0.035 \mathfrak{V}
E[R_g^2(\text{ladder})] = \frac{540 - 1305 \mathfrak{V} + 372 \mathfrak{V}^2 + 109 \mathfrak{V}^3}{2430 \mathfrak{V}^2} \sim 0.15 + 0.045 \mathfrak{V}

So the smaller molecule is predicted to be \(K_{3,3}\)!
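A quick numerical check of the comparison, evaluating only the two closed forms printed above (the \(\mathfrak{V}\) values are arbitrary examples):

```python
# Evaluate the two closed-form expectations from the proposition above.
def rg2_k33(V):
    return (108 - 261 * V + 60 * V**2 + 17 * V**3) / (486 * V**2)

def rg2_ladder(V):
    return (540 - 1305 * V + 372 * V**2 + 109 * V**3) / (2430 * V**2)

for V in (60, 300, 3000):
    print(V, round(rg2_k33(V), 2), round(rg2_ladder(V), 2), rg2_k33(V) < rg2_ladder(V))
# Their difference is (24 V + 72) / 2430 > 0, so the K_{3,3} value is always the
# smaller one, consistent with the leading slopes 17/486 ≈ 0.035 vs 109/2430 ≈ 0.045.
```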

Open questions

  • Expectations are computable by computer algebra; are there analytic formulae for graph subdivisions?
  • Topological type of graph embedding?
  • What if the graph is a random graph?

Thank you!
