Arnau Quera-Bofarull
How to speed up JUNE by 10,000x
Currently a postdoc on the Large Agent Collider project.
Working on the calibration of ABMs.
PIs: Ani Calinescu, Doyne Farmer, and Michael Wooldridge.
Collaborating with Ayush Chopra (MIT).
In JUNE we write:

class Person:
    def __init__(self, age, sex):
        self.age = age
        self.sex = sex
        self.susceptibility = 1.0
        self.infectivity = 0.0

Problems:
Idea: Use ML frameworks (PyTorch) to code ABMs

import torch

# One tensor per attribute, with one entry per person
age = torch.tensor([10, 20, 30])
sex = torch.tensor([0, 1, 0])
susceptibility = torch.tensor([1., 1., 1.])
infectivity = torch.tensor([0., 0., 0.])

Advantages:
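One advantage can be made concrete: with one array per attribute, a state update becomes a single whole-population operation instead of a Python loop over `Person` objects. A minimal, stdlib-only sketch (the update rule and numbers are hypothetical, not JUNE's actual model; Torch JUNE would express this as tensor operations):

```python
# Struct-of-arrays layout: one list per attribute, one entry per person.
susceptibility = [1.0, 1.0, 1.0]
force_of_infection = [0.0, 0.5, 0.2]   # hypothetical per-person exposure

# The whole population is updated in one expression; with torch tensors
# this becomes a single vectorised (and GPU-friendly) kernel:
#   prob = susceptibility * force_of_infection
prob_infection = [s * f for s, f in zip(susceptibility, force_of_infection)]
```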
How do we implement interactions?
Idea: Represent JUNE as a heterogeneous graph
Use tools for Graph Neural Networks (PyTorch Geometric)
[Diagram: graph convolution. Each node sends a message along its edges; the messages are aggregated (an "average" message); node-update and edge-update functions then produce the updated node and edge states.]
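A hedged, dependency-free sketch of one message-passing (convolution) step on a person-location graph; the node names, values, and the mean aggregation are illustrative assumptions, and PyTorch Geometric would express this via its `MessagePassing` classes:

```python
# Two node types ("heterogeneous" graph): people and locations.
people = {"p1": 0.8, "p2": 0.0, "p3": 0.4}   # per-person infectivity
edges = [("p1", "school"), ("p2", "school"), ("p3", "pub")]

# 1. Message: each person sends its infectivity along its edges.
# 2. Aggregate: each location averages the messages it receives.
inbox = {}
for person, location in edges:
    inbox.setdefault(location, []).append(people[person])
location_state = {loc: sum(msgs) / len(msgs) for loc, msgs in inbox.items()}

# 3. Update: the aggregated value becomes the location's transmission
#    level, which would then be sent back to update each person's state.
```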
JUNE: O(100) CPU hours
Torch JUNE: O(10) CPU seconds
How to efficiently and reliably calculate the derivative of a program?
PyTorch supports automatic differentiation
Problem: ABMs have discrete behaviour, which breaks differentiability.
But ML people deal with this too!
Solution: Reparametrize discrete distributions with Gumbel-Softmax trick
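The trick, sketched with the standard library only (PyTorch ships a version of this as `torch.nn.functional.gumbel_softmax`; the logits below are made-up values):

```python
import math
import random

def gumbel_softmax(logits, tau=0.5):
    # Reparametrisation: add Gumbel(0, 1) noise g = -log(-log(U)) to the
    # logits, then take a temperature-controlled softmax. As tau -> 0 the
    # output approaches a one-hot sample, but it stays differentiable.
    gumbels = [-math.log(-math.log(random.random())) for _ in logits]
    scores = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# E.g. a soft sample over two outcomes (infected / not infected):
sample = gumbel_softmax([math.log(0.9), math.log(0.1)])
```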
We can now very efficiently calculate gradients of the type
So we can fit the model using gradient descent!
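A toy illustration of what "fit by gradient descent" means here, with a one-parameter stand-in for the simulator (the model, numbers, and hand-written gradient are all assumptions; in Torch JUNE the gradient would come from autodiff):

```python
# Toy "simulator": predicted new cases from a transmission parameter beta.
contacts = 4.0        # hypothetical average contacts per person
observed = 2.0        # hypothetical observed new cases

def model(beta):
    return beta * contacts

def loss(beta):
    return (model(beta) - observed) ** 2

def grad(beta):
    # d(loss)/d(beta), written by hand; autodiff would supply this.
    return 2.0 * (model(beta) - observed) * contacts

# Plain gradient descent on the calibration loss.
beta = 0.0
lr = 0.01
for _ in range(200):
    beta -= lr * grad(beta)
# beta approaches observed / contacts = 0.5
```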
Caution:
Idea: Gradients give you the sensitivity of model outputs to parameters
Run the simulation once, get the sensitivity for free!
Use this to study ABM dynamics
Gradients inform you about optimal (local) policy
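To make "sensitivity for free" concrete: autodiff returns d(output)/d(parameter) alongside the forward run, where finite differences would need one extra simulation per parameter. A toy stand-in model (hypothetical, not JUNE):

```python
# Toy model: case count as a function of a parameter beta.
def cases(beta):
    return 100.0 * beta ** 2

def cases_grad(beta):
    # What autodiff would return alongside the forward run:
    return 200.0 * beta

beta = 0.3
sensitivity = cases_grad(beta)    # d(cases)/d(beta) at beta = 0.3

# Finite differences need a second simulation run per parameter:
fd = (cases(beta + 1e-6) - cases(beta)) / 1e-6
```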
Papers:
(Chopra et al. 2022): https://arxiv.org/abs/2207.09714
(Quera-Bofarull et al. 2022): submitted