Introduction to neural networks

Department of Quantitative Theory and Methods

Jeremy Jacobson


  1. Mathematical intuition

  2. Definition

Kolmogorov

Every continuous function of several variables defined on the unit cube can be represented as a superposition of continuous functions of one variable and the operation of addition (1957).

f(x_1, x_2, \ldots, x_n) = \sum\limits_{i=1}^{2n+1} f_i\left(\sum\limits_{j=1}^{n} \phi_{i,j}(x_j)\right)

Thus, it is as if there are no functions of several variables at all. There are only simple combinations of functions of one variable.
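As a toy illustration of this spirit (not Kolmogorov's actual construction, which is far subtler): on the interior of the positive unit cube, a product of n variables already reduces to one-variable functions and addition,

x_1 x_2 \cdots x_n = \exp\left(\sum\limits_{j=1}^{n} \ln x_j\right),

with \exp and \ln playing the roles of the outer and inner functions.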

[Figure: the superposition drawn as a network. Inputs x_1, \ldots, x_n feed the inner functions \phi_{i,j}; each sum \sum_j \phi_{i,j}(x_j) passes through an outer function f_i, i = 1, \ldots, 2n+1; adding the results gives f.]
f(x_1, x_2, \ldots, x_n) = \phi\left(\sum\limits_{i=1}^{n} w_i x_i + \theta\right)
[Figure: a single neuron. Inputs x_1, \ldots, x_n enter with weights w_1, \ldots, w_n; the weighted sum plus \theta passes through \phi, giving f : \mathbb{R}^n \to \mathbb{R}^1.]
  • one "hidden layer"

  • one "node"

  • "activation" \phi

  • "threshold" \theta
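A minimal sketch of this single-node network in Python with NumPy; the sigmoid activation and the example numbers are assumptions for illustration, not part of the definition:

    import numpy as np

    def sigmoid(z):
        # a common choice for the activation phi
        return 1.0 / (1.0 + np.exp(-z))

    def neuron(x, w, theta, phi=sigmoid):
        # f(x_1, ..., x_n) = phi(sum_i w_i x_i + theta)
        return phi(np.dot(w, x) + theta)

    x = np.array([0.5, -1.0, 2.0])   # n = 3 inputs
    w = np.array([0.1, 0.4, -0.3])   # weights w_i (chosen arbitrarily)
    theta = 0.2                      # threshold
    print(neuron(x, w, theta))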

Definition of a feedforward neural network

f(x_1, x_2, \ldots, x_n) = \sum\limits_{i=1}^{2} W_i \phi_i\left(\sum\limits_{j=1}^{n} w_{i,j} x_j + \theta_i\right)
[Figure: a one-hidden-layer network with two nodes. Inputs x_1, \ldots, x_n reach node i through weights w_{i,1}, \ldots, w_{i,n}; the node outputs are combined with outer weights W_1, W_2 to give f : \mathbb{R}^n \to \mathbb{R}^1.]
  • one "hidden layer"

  • two "nodes"

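The same in code, extending the single-neuron sketch above (again with illustrative numbers and a shared sigmoid for both \phi_i):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def two_node_network(x, w, theta, W):
        # hidden layer: phi_i(sum_j w_{i,j} x_j + theta_i) for i = 1, 2
        hidden = sigmoid(w @ x + theta)
        # output: f = sum_i W_i * hidden_i
        return np.dot(W, hidden)

    x = np.array([0.5, -1.0, 2.0])       # n = 3 inputs
    w = np.array([[0.1, 0.4, -0.3],
                  [0.2, -0.5, 0.6]])     # hidden weights w_{i,j}
    theta = np.array([0.2, -0.1])        # thresholds theta_i
    W = np.array([1.0, -1.0])            # outer weights W_i
    print(two_node_network(x, w, theta, W))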

Definition of a feedforward neural network (vector notation)

f(\vec{x}) = \vec{W}^T \phi(\vec{w}^T \vec{x} + \vec{\theta}) + \eta
Here \vec{x} \in \mathbb{R}^n, f : \mathbb{R}^n \to \mathbb{R}^1, \vec{w} collects the hidden-layer weights w_{i,j}, \vec{W} the outer weights, and \eta plays the role of an output threshold.
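In code the vector form is a one-liner. Below, w is taken to be a matrix whose rows are the per-node weight vectors, which matches the two-node definition above:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, w, theta, W, eta=0.0):
        # f(x) = W^T phi(w x + theta) + eta
        return W @ sigmoid(w @ x + theta) + eta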

Google's TensorFlow and ML Workbench

  • Google Cloud Platform Datalab (https://cloud.google.com/datalab/)

  • TensorFlow and a high-level framework (ML Workbench)

import google.datalab.contrib.mlworkbench.commands
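For comparison, the two-node network above takes only a few lines in TensorFlow's Keras API (a sketch assuming TensorFlow 2.x; the layer sizes mirror the earlier example):

    import tensorflow as tf

    n = 3
    model = tf.keras.Sequential([
        # hidden layer: two nodes, sigmoid activation (weights w_{i,j}, thresholds theta_i)
        tf.keras.layers.Dense(2, activation="sigmoid", input_shape=(n,)),
        # linear output layer (outer weights W_i and bias eta)
        tf.keras.layers.Dense(1),
    ])
    print(model.predict(tf.constant([[0.5, -1.0, 2.0]])))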

Thank you!
