Conformal Geometric Algebra with Python

Part 2

Hugo Hadfield 2018

Under the hood

  • Clifford stores multivectors as flat, dense arrays of blade coefficients
  • The sign flips for the products are calculated in advance
  • The various product functions are then exposed as closures compiled with the Numba JIT (Just-In-Time) compiler
  • It is pretty fast! The sketch below pokes at these internals
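A minimal sketch of what this looks like from the Python side (assuming a recent version of clifford, where MultiVector.value exposes the flat coefficient array and layout.gmt_func is the JIT-compiled geometric product closure):

from clifford.g3c import *
import numpy as np

# Every multivector in CGA is stored as a flat array of 32 blade coefficients
print(e1.value.shape)   # (32,)
print(type(e1.value))   # <class 'numpy.ndarray'>

# The product functions operate directly on these arrays
a = e1 + 2*e2
b = e2 ^ e3
print(np.allclose(layout.gmt_func(a.value, b.value), (a*b).value))  # True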

Mixing GA and LA

  • The dense, flat representation is not the most efficient
  • It does, however, make interfacing with linear algebra systems very easy
  • Currently many GA algorithms still terminate in a linear algebra solution step
  • We need to be able to move easily between GA and LA, as the round-trip sketch below shows
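Moving between the two worlds is just a matter of unwrapping and rewrapping the coefficient array. A small sketch of the round trip:

from clifford.g3c import *
import numpy as np

X = 3*e1 + 2*(e2^e3)

# GA -> LA: the coefficients are already a flat numpy array
x_vec = X.value                  # shape (32,)

# ... any linear algebra step goes here ...
y_vec = np.eye(32) @ x_vec

# LA -> GA: wrap the coefficient vector back up as a multivector
Y = layout.MultiVector(value=y_vec)
assert Y == X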

Toy Problem

  • Given a set of lines in the world, what point lies closest to all of them in a least squares sense?
C = \min_P \left( -\sum^N_i P \cdot (L_i P L_i) \right)

subject to the constraint that P is a valid conformal point:

P^2 = 0

Setting up a Lagrange multiplier:

\therefore \mathcal{L} = -\sum^N_i P \cdot (L_i P L_i) + \lambda P^2

Differentiating and setting equal to 0 gives an eigen-point problem:

\sum^N_i L_i P L_i = 2\lambda P

How do we solve this? If anyone knows a nice GA way to do it let me know afterwards :)

Bring in the linear algebra

 

Define \underrightarrow{L_i} as the matrix that, when multiplied with a 32x1 coefficient vector, is equivalent to taking the left geometric product with L_i.

Define \underleftarrow{L_i} as the matrix that is equivalent to taking the right geometric product with L_i.

With our new notation we can construct the reflection as a linear matrix operation:

\left( \sum^N_i \underleftarrow{L_i} \hspace{0.1cm} \underrightarrow{L_i} \right) P = 2\lambda P

where P here is a 32x1 vector of coefficients.

We therefore need to find the eigenvectors of the matrix:

M = \left( \sum^N_i \underleftarrow{L_i} \hspace{0.1cm} \underrightarrow{L_i} \right)

The clifford Python package provides us with the tools to solve this easily:

from clifford.g3c import *
from clifford.tools.g3c import *
from pyganja import *
import numpy as np

# Make a load of lines
line_list = generate_random_object_cluster(10, random_line)

# Allocate some space for our finished matrix
accumulator_matrix = np.zeros((32,32))

# Loop over our lines and construct the matrix
for i in range(len(line_list)):
    # Get the line as a left gmt matrix
    L_i_l = get_left_gmt_matrix(line_list[i].value)
    # Get the line as a right gmt matrix
    L_i_r = get_right_gmt_matrix(line_list[i].value)
    # Multiply and add
    accumulator_matrix += L_i_r@L_i_l

# Raise the matrix to a very high power: this is power iteration, which
# projects any starting vector onto the dominant eigenvector. Dividing by
# the number of lines only rescales the eigenvalues and keeps the matrix
# powers numerically tame
power_mat = np.linalg.matrix_power(accumulator_matrix/10, 256)

# Get a point that lies on the first line as an approximation to the e-vector
p_start = normalise_n_minus_1(((line_list[0]|eo)*einf*(line_list[0]|eo))(1))

# Apply the matrix to the point's coefficient vector
p_end = layout.MultiVector(value=power_mat@p_start.value)(1)

# Remove any junk that has come along with it
final_point = normalise_n_minus_1((p_end*einf*p_end)(1))

# Visualise
draw(line_list+[final_point], scale=0.1)
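As a quick sanity check we can compare the cost of the recovered point against the starting guess, reusing the cost expression from the non-linear section below (a sketch; if the power iteration has converged, the inequality should hold):

def cost(P):
    # C(P) = -sum_i P . (L_i P L_i), the quantity we are minimising
    return -sum((P | (l*P*l))[0] for l in line_list)

print(cost(final_point) <= cost(p_start))  # expect True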

Non-linear optimisation

  • Python has a well developed scientific ecosystem
  • SciPy (Scientific Python) contains a set of non-linear optimisers
  • Clifford is designed to work well with these optimisers

 

Let's take our toy problem and try non-linear optimisation to solve it this time.

C= \min_P \left( -\sum^N_i P \cdot(L_iPL_i) \right)

To enforce that P is a valid conformal point, we optimise in the Euclidean domain and map up into the conformal space:

P = \frac{1}{2}p^2 e_\infty + p + e_0 = \text{up}(p)
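A quick check of this mapping (a small sketch; up and down here are the conformal up- and down-projections from clifford.tools.g3c):

from clifford.g3c import *
from clifford.tools.g3c import *

p = 1*e1 + 2*e2    # a euclidean vector
P = up(p)          # map up to a conformal point
print(P**2)        # ~0: valid conformal points are null vectors
print(down(P))     # maps back down to the euclidean vector

The full optimisation then looks like this: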
from clifford.g3c import *
from clifford.tools.g3 import *
from clifford.tools.g3c import *
from pyganja import *
import numpy as np

# Import scipy minimize
from scipy.optimize import minimize

# Define our cost function
def cost_function(p_numpy, line_list):
    p = np_to_euc_mv(p_numpy)
    P = up(p)
    return -sum((P|(l*P*l))[0] for l in line_list)

# Make a load of lines
line_list = generate_random_object_cluster(10, 
                                  random_line)

# We will start at the origin
x0 = np.zeros(3)

# Run a non-linear optimiser to find the result
opt_result = minimize(cost_function, x0,
                      args=(line_list,),
                      method='L-BFGS-B')

# Get our final conformal point
final_point = up(np_to_euc_mv(opt_result.x))

# Visualise
draw(line_list+[final_point], scale=0.1)
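SciPy's minimize returns an OptimizeResult object; it is worth inspecting it before trusting the answer:

# Check convergence and the final cost
print(opt_result.success)   # did L-BFGS-B report convergence?
print(opt_result.fun)       # final value of the cost function
print(opt_result.x)         # euclidean coordinates of the optimal point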

Optimal Sphere Fitting

  • Another fun toy problem; this one is a shameless knock-off of Leo Dorst's AGACSE 2018 tutorial
  • Given a set of points in 3D space, what is the best fitting sphere?
Defining a cost function:

\sum_i^N (P_i \cdot \sigma)^2 / \sigma^2

This leads to another eigenproblem: we want the minimal non-negative eigenvalue of:

\frac{1}{N}\sum_i^N P_i \left(P_i \cdot [\hspace{0.2cm}]\right)

Bringing in the linear algebra again, we define the matrix that performs projection to grade g as:

M_g
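One concrete way to build M_g is as a diagonal 0/1 mask over the 32 blade coefficients. This is a sketch assuming clifford's layout.gradeList, which lists the grade of each basis blade; the mask0 and mask1 used in the code below are exactly these matrices:

import numpy as np
from clifford.g3c import *

def grade_projection_matrix(g):
    # Diagonal matrix that keeps the coefficients of grade-g blades
    # and zeroes all the others
    return np.diag([1.0 if grade == g else 0.0 for grade in layout.gradeList])

mask0 = grade_projection_matrix(0)  # M_0, used for the sphere fit
mask1 = grade_projection_matrix(1)  # M_1, used later for the circle fit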

This leads to the final matrix:

\left( \frac{1}{N}\sum_i^N \underrightarrow{P_i} \hspace{0.1cm} M_0 \hspace{0.1cm} \underrightarrow{P_i} \right)
from clifford.g3c import *
from clifford.tools.g3 import *
from clifford.tools.g3c import *
from pyganja import *
import numpy as np

# Sample a load of points on a sphere with a bit of noise
radius = 2.5
center = 3*e1 + 2.3*e2 + 5*e3
sigma = 0.1
sphere_points = [up((random_euc_mv().normal()*radius + random_euc_mv()*sigma) + center) for i in range(50)]

# Loop over our points and construct the matrix
accumulator_matrix = np.zeros((32,32))
for i in range(len(sphere_points)):
    # Get the point as a left gmt matrix
    P_i_l = get_left_gmt_matrix(sphere_points[i].value)
    # Multiply and add, projecting to grade 0 in the middle
    # (mask0 is the grade-0 projection matrix M_0 built above)
    accumulator_matrix += P_i_l@mask0@P_i_l

# Find the eigenvalues of the matrix. It is real but not symmetric, so
# numpy may return complex eigenpairs; the ones we want are real
e_vals, e_vecs = np.linalg.eig(accumulator_matrix)
e_vals = np.real(e_vals)

# Find the smallest non negative eigenvalues
min_eval = np.inf
min_eval_index = -1
for i in range(len(e_vals)):
    if e_vals[i] < min_eval and e_vals[i] > 0:
        min_eval = e_vals[i]
        min_eval_index = i

# Get the associated eigenvector (real part) and visualise it
best_sphere = layout.MultiVector(value=np.real(e_vecs[:, min_eval_index])).normal()
draw([best_sphere]+sphere_points,static=False,scale=0.3)
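A simple way to judge the fit (a sketch): the inner product P \cdot \sigma of a point with the dual sphere measures incidence, so its mean squared value over the data should be small:

# Mean squared incidence error of the sample points with the fitted sphere
errors = [(P | best_sphere)[0]**2 for P in sphere_points]
print(np.mean(errors))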

Best fitting circle

  • The best fitting sphere minimises the spherical radial error in the data set
  • The second best fitting eigenvector is also a sphere that minimises the remaining error
  • The meet of the best sphere with the second best sphere is the best circle?
from clifford.g3c import *
from clifford.tools.g3 import *
from clifford.tools.g3c import *
from pyganja import *
import numpy as np

# Sample a load of points on a circle with a bit of noise
center = 3*e1 + 2.3*e2 + 5*e3
sphere_points = [up(((np.random.randn()*e1 + np.random.randn()*e2).normal()*2.5 + random_euc_mv()*0.01) + center) for i in range(50)]

# Loop over our points and construct the matrix
accumulator_matrix = np.zeros((32,32))
for i in range(len(sphere_points)):
    # Get the point as a left gmt matrix
    P_i_l = get_left_gmt_matrix(sphere_points[i].value)
    # Multiply and add
    accumulator_matrix += P_i_l@mask0@P_i_l

# Find the eigenvalues of the matrix, keeping only the real parts
# (see the sphere example above)
e_vals, e_vecs = np.linalg.eig(accumulator_matrix)
e_vals = np.real(e_vals)

# Find the smallest and second smallest non negative eigenvalues
min_eval = np.inf
min_eval2 = np.inf
min_eval_index = -1
min_eval_index2 = -1
for i in range(len(e_vals)):
    this_e_val = e_vals[i]
    if this_e_val > 0:
        if this_e_val < min_eval:
            min_eval2 = min_eval
            min_eval = this_e_val
            min_eval_index2 = min_eval_index
            min_eval_index = i
        elif this_e_val < min_eval2:
            min_eval2 = this_e_val
            min_eval_index2 = i

# Get the associated eigenvectors, project them onto grade 1
# (dual spheres are vectors) and construct the circle as the meet
# of the two spheres
best_sphere = layout.MultiVector(value=mask1@np.real(e_vecs[:, min_eval_index])).normal()
second_best_sphere = layout.MultiVector(value=mask1@np.real(e_vecs[:, min_eval_index2])).normal()
best_circle = meet(best_sphere*I5, second_best_sphere*I5)(3).normal()

draw([best_circle] + sphere_points, browser_window=True, scale=0.1)
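As a quick structural check, the meet of the two spheres should be a pure grade-3 object, a circle in CGA:

# The fitted circle should contain only grade-3 components
print(best_circle.grades())   # expect {3}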

Summary

  • Clifford allows CGA to mix freely with traditional linear algebra and non-linear optimisation
  • CGA can be used to solve interesting and difficult problems in an intuitive way