Last week, I argued that computer games can be viewed as interactive simulations.
We discussed the "mainloop" which governs, at the highest level, how such a simulation functions, and some aspects of the "rendering" process which controls how we get images onto the screen.
Next, we'll be looking into the simulation itself, and how that is organised.
The Rules
Firstly, it's worth clarifying that the term "simulation" doesn't, in itself, imply realism.
We certainly could be simulating something grounded in reality, but we could just as easily
be simulating an entirely abstract system where gravity behaves in weird unreal ways
(if it exists at all), momentum isn't conserved, and eating mushrooms turns you into a
giant for 30 seconds.
The rules can be crazy...
but there have to be rules, of some kind.
How do we simulate things?
If we look at the real world for reference, we can see that it's made up of
a bunch of "things" moving around in space, bumping into one another, merging together, breaking apart, making sounds, generating heat and so on.
"Physicists Say..."
If we look further, physicists tell us, what's actually going on here is just an immense number of tiny sub-atomic particles interacting with one another in accordance with The Laws of Physics.
However, for many practical purposes, it's enormously convenient to think at the higher level of big chunks of atoms which are grouped together into relatively stable "things".
We might be inclined to call these "objects".
Objects... Things... Entities
You've already learned a bit of Java, so I'm sure you know all about objects!
In programming parlance, the term "object" is used to describe a collection of related data representing some coherent "thing" and the functions which act on that data (we call such functions "methods").
When we use a software object to represent a simulated "thing", I sometimes use the term "entity" (to differentiate it from other, less tangible, kinds of object).
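To make that concrete, here's a tiny (entirely invented) Java example of the idea: some related data, plus the methods which act on it.

```java
// A tiny, made-up example of an "object": related data (the fields)
// plus the functions which act on that data (the methods).
class Lamp {
    private boolean on = false;   // the data describing this "thing"

    void toggle() {               // a method which acts on that data
        on = !on;
    }

    boolean isOn() {
        return on;
    }
}
```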
A Convenient Fiction
One way to simulate something then, is to decompose it into a set of somewhat autonomous "entities" which basically know how to look after themselves, and "understand" the laws which govern their own behaviour, but are also able to interact with each other when required.
This is, technically-speaking, an over-simplification based on our "convenient fiction" of a world of discrete entities (rather than a turbulent world of particles), but it's what everyone does in practice.
The Universe in a Glass of Wine
The "object boundary" between the wine and the air isn't as clear-cut as you might think. There's actually an ongoing process of molecular exchange, as the wine evaporates a little into the air, and the air mixes a little into the wine.
Moreover, apparently, we are actually turbulent collections ourselves...
In particular, my brain might be composed of an almost entirely different set of atoms from the ones it was made of when I first read that Feynman quote.
And, yet, I still remember it!
It's not about the atoms, it's about the information that they encode. It's not about the hardware; it's about the software.
(e.g. What about DNA, neurons & tooth enamel? Those do persist much longer... but the general point still stands.)
AKA "Trigger's Broom"
(or was it?)
A History of Objects...
Incidentally, Alan Kay developed the idea for software objects by thinking of them as being like biological cells, which contain "hidden" internal information, and then communicate with each other by "sending messages".
It probably helped that he had degrees in Molecular Biology (& Maths), and Elec Eng, in addition to his Comp Sci PhD!
Sometimes knowing about lots of different things, and being able to see the connections between them, is quite... useful.
Kay's ideas about "object orientation" became the core of the Smalltalk system developed during the 1970s, which was also the origin of our modern idea of a "Windows" (aka "WIMP") user-interface.
Smalltalk gave us the GUI, the MVC pattern, WYSIWYG, the IDE, edit-and-continue debugging, portable virtual machine images, etc., etc.
It was paid for by the Xerox Corporation.
Xerox
...could have been Apple.
(If they hadn't been so worried about photo-copiers.)
"Just One More Thing..."
Steve visits Xerox (from 6:11)
(source: "Triumph of The Nerds", 1996)
Back to Games!
OK, but let's return to our developing notion that we can simulate things by representing them as a set of somewhat autonomous, but interacting, "objects" or "entities".
Pac-Man Land
What are its entities?
If I wanted to simulate the peculiar universe that Pac-Man lives in, I would identify the main entities as being:
Pac-Man, the Ghosts, the standard pills, the power-pills, the bonus fruits and, importantly, the maze itself.
Each of these entities has a position in space, and some visual representation which can be used when we need to render it.
They can presumably be made to understand the rules which apply to them, e.g. pills get eaten when Pac-Man collides with them; ghosts chase Pac-Man (except when he's Powered Up); moving entities can only travel down corridors and are not able to pass through the walls of the maze, and so on.
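To illustrate (with names and details entirely of my own invention, not a real Pac-Man implementation), a couple of those entities might be sketched in Java something like this:

```java
// A rough sketch: each entity knows its own position, how to draw
// itself, and the rules which apply to it.
abstract class Entity {
    protected double x, y;   // position within the maze

    // Visual representation: each entity draws itself when the renderer asks.
    abstract void render(java.awt.Graphics2D g);
}

class PacMan extends Entity {
    @Override
    void render(java.awt.Graphics2D g) {
        // A simple "mouth open" wedge as a stand-in for the real sprite.
        g.fillArc((int) x - 8, (int) y - 8, 16, 16, 30, 300);
    }
}

class Pill extends Entity {
    boolean eaten = false;

    @Override
    void render(java.awt.Graphics2D g) {
        if (!eaten) {
            g.fillOval((int) x - 2, (int) y - 2, 4, 4);
        }
    }

    // Rule: a pill gets eaten when Pac-Man collides with it.
    void onCollision(Entity other) {
        if (other instanceof PacMan) {
            eaten = true;
        }
    }
}
```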
Flip-Books Again!
Also, we already know, from our previous look at the rendering process, that a real-time
interactive game (assuming that it has visual output) is really like a "flip-book", and that
our ultimate goal is to capture a series of snapshots in time, and to draw them in turn.
This tells us that we don't necessarily have to know the state of the simulation at all possible points in time, but we do have to know the state at some particular (and,
ideally, frequent) instants...
"Just A Tiny Amount"
The real problem, then, is how to compute the state at the next instant, based on our definite knowledge of it at the "current" instant.
The Passage of Time
In an object-based system we try to make each entity fairly autonomous about its own state.
This includes autonomy over how it handles the passage of time.
In short, each object
should have a public "update" method which takes in a "deltaTime" parameter and uses that to
compute the new state of the object (and possibly triggers some side-effects on other
objects, if necessary).
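As a minimal sketch (with invented names), such an update method might look like this in Java:

```java
// A minimal sketch of an entity with a public "update" method which
// advances its own state by deltaTime seconds.
class Ball {
    private double x, y;     // position, in world units
    private double vx, vy;   // velocity, in world units per second

    public void update(double deltaTime) {
        x += vx * deltaTime;
        y += vy * deltaTime;
    }
}
```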
How much time?
Because the process of updating and rendering the simulation may itself take a varying amount of time -- out there in the (allegedly) REAL WORLD that your computer lives in -- the ideal approach to creating a real-time system is to measure the elapsed time between each computed update and use that as the "deltaTime" parameter...
...surprisingly, perhaps, not everyone does this.
Lots of games simply assume that the time
between updates is some nominal value
(typically 1/60th or 1/30th of a second),
and make their computations based on that.
As a result, when those games experience "slow frames" the simulation itself gets slower.
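For contrast, here is a rough sketch of the measured-delta loop described above (the running flag, entities list and render() call are assumed to exist elsewhere in the game class); a fixed-delta game would simply pass a constant such as 1.0 / 60.0 instead of measuring.

```java
// A sketch of a main loop which measures the real elapsed time between
// updates and feeds it in as deltaTime.
void runLoop() {
    long previous = System.nanoTime();
    while (running) {
        long now = System.nanoTime();
        double deltaTime = (now - previous) / 1_000_000_000.0; // nanoseconds -> seconds
        previous = now;

        for (Entity e : entities) {
            e.update(deltaTime);
        }
        render();
    }
}
```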
Why use a fixed delta?
Well, sometimes I think it's honestly due to ignorance: the developers have made optimistic assumptions that the game will just magically achieve whatever frame-rate they originally targeted, and they've designed the code around that.
My experience suggests that there are a lot of games out there which were aiming for 60FPS but, in development, discovered that they could only hit 30FPS reliably...
...and then they had to go and change all their so-called "velocity" and "acceleration" parameters to compensate.
(If those parameters are implicitly expressed in units of "frame times", then they need to be changed if the nominal frame time changes).
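A purely illustrative comparison of the two conventions (numbers and names are invented):

```java
// Illustrative only: the same nominal speed expressed two ways.
class Mover {
    double x = 0.0;

    static final double SPEED_PER_FRAME  = 4.0;   // pixels per frame: 240 px/s at 60FPS, but only 120 px/s at 30FPS
    static final double SPEED_PER_SECOND = 240.0; // pixels per second: the same at any frame rate

    void updateFrameBased() {
        x += SPEED_PER_FRAME;               // silently assumes the nominal frame time
    }

    void updateTimeBased(double deltaTime) {
        x += SPEED_PER_SECOND * deltaTime;  // independent of the frame rate
    }
}
```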
WTF?
But there is a defence for this peculiar practice:
Sometimes a fixed delta is important for consistency, especially in networked games, which sometimes rely on all participants using the same time deltas.
Numerical Issues
Also, if you think about the nature of "numerical integration" (which is what simulations are generally doing),
you might realise that the value of the delta can have an impact on the outcome:
Basically, computing two steps at 1/60th of a second doesn't necessarily produce the
same result as a single big step of 1/30th of a second.
This is something we'll return to later.
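A tiny worked example (with made-up numbers) shows the effect, using the simplest kind of step-by-step integration:

```java
// Demonstration: under constant acceleration, two Euler-style steps of
// 1/60 s do not land in exactly the same place as one step of 1/30 s,
// even though the total simulated time is identical.
public class StepSizeDemo {
    static double x, v;
    static final double A = 10.0;   // constant acceleration (arbitrary units)

    static void step(double dt) {
        v += A * dt;   // integrate acceleration into velocity
        x += v * dt;   // integrate velocity into position
    }

    public static void main(String[] args) {
        x = 0; v = 0;
        step(1.0 / 60.0);
        step(1.0 / 60.0);
        System.out.println("Two small steps: x = " + x);   // ~0.00833

        x = 0; v = 0;
        step(1.0 / 30.0);
        System.out.println("One big step:    x = " + x);   // ~0.01111
    }
}
```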
Variable Deltas?
Fixed deltas are also a bit easier to test with and, on ancient hardware (where floating point calculations were slow), they were faster too.
Generally speaking though, I prefer to support variable time-deltas nowadays, especially when targeting PCs -- where the spread of performance is very wide, and being able to support that range (e.g. anything from 30FPS to 200FPS) is valuable.
Scaled Deltas
Another side-benefit of using variable deltas is that, because they are already non-constant, it doesn't really make things any more complicated if you also scale them...
...this makes implementing things like "slow motion" (or "fast forward") features relatively simple.
Indeed, there's a certain amount of fun to be had from a game-design perspective in exploring the idea of manipulating the passage of time.
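As a sketch (the timeScale field and entities list are assumptions of mine, belonging to the surrounding game class), the scaling itself is almost a one-liner:

```java
// Scale the measured delta before handing it to the entities.
double timeScale = 1.0;   // 1.0 = normal speed, 0.25 = slow motion, 2.0 = fast forward

void tick(double measuredDeltaTime) {
    double scaledDelta = measuredDeltaTime * timeScale;
    for (Entity e : entities) {
        e.update(scaledDelta);   // the entities never need to know the difference
    }
}
```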
Negative Deltas?
So, what would happen if you put a negative deltaTime into your simulation?
Would time run backwards?
Would broken shards of glass join back together?
Would ice un-melt?
Would the dead return to life?
Dead Island
No, of course not.
One problem with running time backwards is that there is more than one past state which could lead to the present one...
...so, in the absence of any further information, we don't know which one to take.
A ball (or body) on the ground could have just been there "forever"... or could have rolled to that position from the left, or from the right, or it could have fallen from above.
But, if you're willing to store some information about past states in the game, you can still sort-of do it....
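One hedged sketch of that idea: keep a rolling buffer of recent snapshots, and play them back when the player rewinds (what exactly a "snapshot" captures is entirely game-specific).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A rough sketch of the "store past states" trick: record a snapshot of
// the relevant state each frame, then pop snapshots back off to rewind.
class RewindBuffer<S> {
    private final Deque<S> history = new ArrayDeque<>();
    private final int capacity;

    RewindBuffer(int capacity) {
        this.capacity = capacity;
    }

    void record(S snapshot) {
        if (history.size() == capacity) {
            history.removeLast();        // forget the oldest state
        }
        history.addFirst(snapshot);
    }

    S rewind() {
        return history.pollFirst();      // most recent snapshot, or null if none left
    }
}
```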
Braid
Mario is, like, So Mainstream
Simultaneity
Another interesting subtlety about time is that it appears to be experienced by all entities simultaneously.
So how do we do that in a computer?
Well, the standard solution to the simultaneity problem is... to ignore it.
(And, yes, I'm also ignoring the complexities of Einstein's Theory of Relativity,
in which the concept of simultaneity is not even well-defined!).
If you think about it, achieving true simultaneity in a simulation is incredibly difficult.
What we actually do, instead, is just to update the entities one-at-a-time, with
sufficiently small time-deltas... and hope that no-one really notices.
The good news is that THIS WORKS!
Which is to say, "we've been getting away with it for years!"
It's worth noting that this has consequences though:
The order in which entities are updated actually has some (albeit usually small) impact on the outcome of the simulation.
It creates some potential "unfairness", e.g. if two entities are both trying to pick up a third, and would theoretically do so simultaneously.
In a practical simulation system, whichever of those entities is processed first, will "win". The effects are not usually noticeable but, nevertheless, they exist.
In theory, you could compensate for this unfairness by updating the entities in "backwards order" on every other (alternating) frame.
In fact, there may be other advantages to doing it that way ("ping-ponged" updates are potentially more cache-friendly), but I don't believe it's generally done, although I have done it myself in the past.
Even then, it's still not entirely fair: consider the case of a race between several (i.e. more than two) equal parties.
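A sketch of that "ping-ponged" order (the entities list and frameCount are assumed to live in the surrounding game class):

```java
// Walk the entity list forwards on even frames and backwards on odd ones,
// so neither end of the list is permanently favoured.
void updateAll(double deltaTime) {
    if (frameCount % 2 == 0) {
        for (int i = 0; i < entities.size(); i++) {
            entities.get(i).update(deltaTime);
        }
    } else {
        for (int i = entities.size() - 1; i >= 0; i--) {
            entities.get(i).update(deltaTime);
        }
    }
    frameCount++;
}
```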
Extra Credit?
You might also want to think about how the Universe itself handles this difficult problem!
If you succeed, I think you might be entitled to "special credit"...
...in the form of a Nobel Prize for Physics.
(Don't apply to me though; I don't have the authority to give them out).
Some Broad Hints...
(...on how to "create a universe from scratch", maybe).
There is a field called "Digital Physics", which explores the idea that the Universe itself is a kind of computer (or a computation).
One of the pioneers of this field was the German engineer Konrad Zuse, who actually built the first programmable computers back in the late 1930s (!).
Here is some info about Zuse and his wacky ideas of "Calculating Space"
Super Hot Game
"It's About Time"
Here is another game which plays with the concept of the flow of time, and with our notion of "time as an input".