A.I. is a huge, complex, fascinating, and philosophically profound area of intellectual inquiry, which may yet have a huge impact on the very future of our species (and of its computerised descendants).
And I, for one, welcome our new A.I. Overlords!
Sadly, these lofty matters have almost no relation to the feeble and simplistic rubbish that we put into computer games. :-(
But let's take a look at some of the Good Stuff anyway... :-)
Turing's 1950 paper, "Computing Machinery and Intelligence", is fairly accessible, rather brilliant, and still relevant.
It begins with the line:
"I propose to consider the question, Can machines think?"
(Turing was portrayed by Benedict Cumberbatch in 2014's "The Imitation Game").
(A more realistic version was probably Derek Jacobi's performance in 1996's "Breaking The Code").
The Imitation Game
Turing framed the question in terms of whether a machine could successfully imitate the intellectual capacities of a human being, as judged by another human.
There's a lot "wrong" with this formulation, as Turing well knew (it's all in his paper), but it does have some advantages as a practical definition...
It avoids quibbling too much over mechanism and, despite constraining intelligence to the human mould, it seems like a legitimate (and sufficient, but not necessary) A.I. goal.
The Turing Test
Ancestor of "The Voight-Kampff Empathy Test"?
Turing Says...
What is a machine?
How about: "Anything human engineers can create!"
...except by making babies ("in the usual manner"),
which would be cheating!
...yes, cloning in the lab is cheating too!
Grrr. Damn.
New definition: "Our machine will be a digital computer."
(In 1950, these are just new, and require explanation!)
Digital Computers
So, what are these new-fangled "digital computers"?
They are general purpose machines which can follow any set of (precisely defined) instructions that you give to them, and which execute those instructions reliably and slavishly.
They are like office clerks! (aka "human computers").
They have "memory" (e.g. notepads), and are capable of performing basic arithmetic, and making "if/then" decisions based on the results thereof.
Discrete State-Machines
...is what digital computers really are.
"...given the initial state of the machine and the input signals it is always possible to predict all future states."
(Just like a Replay System!)
"For instance, the number [of possible states] for the machine now working at Manchester is about 2^165000, i.e. about 10^50000"
That's a state of about 165,000 bits: just 20 KiloBytes, in our modern terms!
A Sound Of Thunder
"The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping."
This is the "Butterfly Effect", almost 20 years before Edward Lorenz coined the phrase.
Likewise "Chaos Theory" (named in 1975), although the foundational work goes back to Poincaré in the 1890s.
BTW: Ray Bradbury covered similar territory in his 1952 science-fiction short story "A Sound Of Thunder" (text)
Arguments Against A.I.
God, and our "immortal soul" *
"Heads in the Sand"
Maths, Gödel, and the limits of logic
Consciousness ("is magic"); solipsism
Machines can't do <X>
Lady Lovelace's Objection; originality
The Nervous System is not discrete
Informality of Behaviour ("rules")
ESP! ** -- Turing actually believed in this!
* including claims of "the Moslem view that women have no souls" -- this is an error on Turing's part, I believe!
** "the statistical evidence, at least for telepathy, is overwhelming" -- no it bloody isn't! Turing was wrong here.
What Turing Believed
"I believe that in about fifty years' time [i.e. in the year 2000] it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."
"A storage capacity of 10^9 [bits]" is about 120MB.
A Mac G4 Cube from 2000 had 128MB RAM (+20GB disk).
I couldn't find the transcripts from the contest in 2000, but here are some examples from the 2001 "winner":
JUDGE02[02:36:40] may I ask you some questions?
PROGRAM[02:36:40] Sure ask me anything.
JUDGE02[02:36:52] who composed Beethoven's piano concerto?
PROGRAM[02:36:52] Charlie Parker. :-)
JUDGE02[02:37:08] excellent. can you name another composer of music for me?
PROGRAM[02:37:08] Thanks for the compliment.
PROGRAM[02:37:08] Linus Torvalds. :-)
JUDGE02[02:37:21] have you heard of Caspar Hauser?
PROGRAM[02:37:21] No, tell me about him, her or it.
JUDGE02[02:37:40] later maybe. WHen did the 1820 settlers arrive?
PROGRAM[02:37:40] I understand.
PROGRAM[02:37:40] That is all water under the bridge.
JUDGE02[02:37:50] I agree. Are you a computer?
But We Were Promised This!
(from 02:21)
:-(
"Prediction Is Difficult..."
...especially about the future.
"I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." --- Alan Turing, 1950
JUDGE: I like Waiting for Godot. What is your favorite play?
ALICE2: I'm very patient. I'm not sure is I have a favorite play.
My favorite actress is Katie Holmes. Who is your favorite actress?
...
JUDGE: What emotions are you now feeling?
ALICE2: I am unable to establish a network connection.
The claim that a chat-bot (the "Eugene Goostman" program, in 2014) finally "passed" the Turing Test is controversial for a number of reasons, and is disputed by many observers. It's pretty clear that Eugene is not a radical breakthrough of any kind, but I don't think we can entirely dismiss the claim that it "passed" either.
Despite the many failings, there has been some progress in the field of chat-bots: weird, upsetting, progress...
(End of Part One)
Meanwhile, in Games...
...people were still writing "academic dissertations", in 2005, about such advanced and earth-shattering topics as: how to implement minor improvements to the "A.I." seen in 2D platform games such as 1986's "Bubble Bobble"!
It's all quite depressing, really.
Does this even count as A.I.? To me, that's very debatable.
It's certainly nowhere near the grand goal of "Thinking Machines" --- and, yet, it does represent the simulation of a primitive type of (hypothetical) brain...
More Intelligent Than A Rock
In games, "A.I." seems to be understood as referring to the simulation of entities which have any kind of apparent volition in their actions. i.e. if it has more apparent mental capacity than a bullet, or a lump of rock, it is "intelligent".
In fact, in some genres, the definition of "intelligence" is stretched so far as to include "football players", and the "soldiers" (or even the fish!) in "Call Of Duty" etc.
Simple Game Agents
In simple games, the so-called "A.I." belongs to the NPC (Non Player Character) "agents", whose goals and activities in the game are pretty trivial --- often taking a form such as:
"Keep walking alternately left-and-right along this platform, until the player character gets close, at which point you can then shoot at him ineptly until you are eventually killed."
Implementing this requires a tiny amount of state/memory (e.g. "which way am I going", "have I seen the player yet"), and access to a "spatialManager" for range-checking.
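That logic can be sketched in a few lines of Python. (All the names here are invented for illustration, and the simple distance test stands in for a real spatialManager range query.)

```python
# A minimal sketch of the "patrolling guard" behaviour described above.
# Names (Guard, update, etc.) are illustrative, not from any real engine.

class Guard:
    def __init__(self, x, patrol_min, patrol_max, sight_range=5):
        self.x = x
        self.dx = 1                    # current walking direction
        self.patrol_min = patrol_min
        self.patrol_max = patrol_max
        self.sight_range = sight_range
        self.state = "PATROL"          # the tiny bit of state/memory

    def update(self, player_x):
        # Crude distance check, standing in for a spatialManager query.
        if abs(player_x - self.x) <= self.sight_range:
            self.state = "ATTACK"      # "have I seen the player yet?"
        if self.state == "PATROL":
            self.x += self.dx
            # Turn around at the ends of the platform.
            if self.x <= self.patrol_min or self.x >= self.patrol_max:
                self.dx = -self.dx
            return "patrolling"
        return "shoot (ineptly) at player"
```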
Extensions
Trivial "intelligence" such as that described above can be augmented with a bit more complexity in the form of, say, simple tracking and/or avoidance behaviour, controlled jumping, use of line-of-sight and "cover" and maybe even basic tactics (e.g. weapon selection based on range).
There's nothing particularly fancy or interesting about most of this though --- it's just standard "entity update" logic.
You can sometimes get quite convincing results in game A.I. by following through on the idea that you are simulating simple brains and their connected sensory systems...
You can give your characters "sense organs" by using range queries to approximate notions of limited visibility, hearing and (potentially) smell.
You can give them "moods" in the form of "state machines", and even the ability to weigh-up competing goals (e.g. the aim of killing enemies vs the desire for self-preservation).
Wild Metal Country
In "WMC" (1999), we built a system where the AI creatures were configured with a set of miscellaneous goals and sub-goals, whose "strengths" would change over time, based on various pertinent factors (e.g. health level, distance from home, proximity of "friends" etc), and which could be weighed against each other in the manner of competing "emotional forces".
At any particular time, the dominant goal would be pursued, but with others being (re-)calculated "sub-consciously" (!), and switched over to if they became strong enough.
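A minimal sketch of those "competing emotional forces" (the goals and their strength formulas here are invented for illustration, and are not the actual WMC ones):

```python
# Each goal has a "strength" function over the agent's situation;
# at any given moment, the strongest goal wins and is pursued.

def choose_goal(situation, goals):
    # goals: dict of name -> strength function, all recomputed
    # "sub-consciously" every time we are asked to choose.
    scored = {name: strength(situation) for name, strength in goals.items()}
    return max(scored, key=scored.get)

# Illustrative goals: attack when healthy and near an enemy,
# flee when wounded, drift home otherwise.
goals = {
    "attack":  lambda s: s["enemy_proximity"] * s["health"],
    "flee":    lambda s: s["enemy_proximity"] * (1.0 - s["health"]),
    "go_home": lambda s: s["distance_from_home"] * 0.5,
}
```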
Active Engagement
The kind of "agent" systems that we've just been describing can work quite well for some situations, but they are perhaps best suited to representing multi-agent contexts in which the environment is sufficiently diverse and dynamic that the magic of "emergent behaviour" can take over and make things appear more "intelligent" than they really are.
One-on-one situations are much harder to pull off...
One-on-One Conversations
This, as the Loebner Prize illustrates, is actually rather difficult to achieve!
But there is a famous (and deliberately parodic) example from 1966 which was sometimes convincing (to the naive)...
It was called "ELIZA"*, and you can talk to it here.
This was the first "chat-bot", and it usually pretended to be a psychiatrist, because this behaviour can be simulated despite
"knowing almost nothing of the real world" (!) ;-)
Eliza used a simple form of sentence "parsing", in which it would look for pre-defined keywords, and then perform some corresponding transformation rules (including basic word inversions such as converting "I" to "you"), before essentially spitting the input back to the user in a slightly modified form.
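Here's a tiny ELIZA-flavoured sketch of that keyword-plus-inversion trick. (The rules below are invented, and vastly cruder than Weizenbaum's actual script.)

```python
import re

# Keyword spotting plus simple word inversion ("my" -> "your" etc.),
# then spit the input back in slightly modified form.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence):
    m = re.search(r"i feel (.+)", sentence, re.IGNORECASE)
    if m:
        return "Why do you feel " + reflect(m.group(1)) + "?"
    m = re.search(r"i am (.+)", sentence, re.IGNORECASE)
    if m:
        return "How long have you been " + reflect(m.group(1)) + "?"
    return "Please go on."   # the classic content-free fallback
```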
In his full paper, Weizenbaum suggests that an ELIZA-like system could be made more interesting and convincing as a conversationalist by allowing it to collect additional knowledge over time --- whether that is information about the external world as reported to it by the users, or inferred data about the users themselves (e.g. their opinions, interests, family relations, etc.). Capturing such data is tricky though!
I think it would also help to give the system an apparent "agenda" of its own e.g. curiosity about certain topics, or an objective of some kind...
From Conversations to Games
A Nice Game Of Chess?
"We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision.
Many people think that a very abstract activity, like the playing of chess, would be best."
--- Alan Turing, 1950
But let's start with something a little simpler than Chess...
Tic-Tac-Toe
The core concepts are basically the same as Chess:
Two players, alternate turns, "perfect information" (i.e. both players can see the full board), and clear win/lose/draw outcomes under fixed rules of an essentially "simple" character.
What's our strategy for playing/implementing this?
Strategy: "Use Brute Force"
(Sad, but true)
Just consider every possible move, and pick the "best" one.
But "best" depends on the potential counter-moves of the opponent... and on our potential replies to those.
So just look as far ahead as we can through the "tree" of possible moves, within the constraints of our memory (and time).
This is probably a crude approximation of what people do, but it's far more mechanistic, and isn't "intuition" driven.
A Game Tree
If we merge boards which are symmetrically equivalent, the tree for Tic-Tac-Toe is almost "practical" to store...
Minimax
We can then go to the "leaf" nodes of our game-tree and give them a score depending on whether they represent a win/draw/lose situation for us.
Next, work from the bottom nodes up, and assume that each player will pick the move which gives them the most favourable score: so, I pick the highest available score, my opponent picks the lowest (from my perspective).
This represents the idea of two (equally capable) players, each doing their best. These scores trickle up the tree until it is completely filled-in, and they will tell us what to do...
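Here's a minimal minimax for Tic-Tac-Toe (no symmetry-merging, just raw recursion; "X" is "us", the maximiser):

```python
# Board is a 9-element list of "X", "O" or None, indexed 0-8 row by row.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, to_move):
    w = winner(b)
    if w == "X": return +1                        # a win for us
    if w == "O": return -1                        # a loss
    if all(c is not None for c in b): return 0    # a draw
    scores = []
    for i in range(9):
        if b[i] is None:
            b[i] = to_move                        # try the move...
            scores.append(minimax(b, "O" if to_move == "X" else "X"))
            b[i] = None                           # ...then undo it
    # I pick the highest available score; my opponent picks the lowest.
    return max(scores) if to_move == "X" else min(scores)
```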
A Minimax Tree
Hopefully, you can see how the scores "bubble up", and inform the selection of moves at each level...
Tic-Tac-Toe Is Easy
The range of possibilities in Tic-Tac-Toe is actually small enough that the game can be formally analysed and "solved"
--- meaning that we can create a perfect strategy which, if applied correctly, ensures that we cannot lose.
Two such players will, of course, simply "draw" all the time.
(If the opponent makes a mistake though, you can win.)
Chess
The size of the game-tree in Chess is huge, so much so that even modern machines cannot explore it fully. (BTW: The tree for "Go" is even larger).
This means that the computer can't generally see to the end of the game, so it has to stop short: it is therefore forced to give a "static evaluation" score to those incomplete "leaf" positions, usually based on adding-up points for the remaining pieces.
Also, there are too many valid moves to consider, so the search tree must be "pruned" in various ways.
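The simplest possible "static evaluation" is just adding up material. The sketch below uses piece values reportedly similar to Turing's own (Pawn 1, Knight 3, Bishop 3.5, Rook 5, Queen 10); the position encoding is invented for illustration.

```python
# Static evaluation by material count: positive scores favour White.

PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.5, "R": 5.0, "Q": 10.0}

def static_eval(pieces):
    # pieces: list of (piece_letter, colour) tuples for everything on board
    score = 0.0
    for piece, colour in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0.0)   # King scores 0 here
        score += value if colour == "white" else -value
    return score
```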
These techniques of minimax and tree-pruning were successfully used to create basic Chess playing programs as early as the 1950s. Turing came up with one in 1948, but it was too big to implement on the "Manchester Machine", so he executed it himself "by hand", taking 20 minutes or more to compute each move. (Also, it wasn't very good!)
A working, but simplified, Chess program was developed for the Manchester Machine by Dietrich Prinz in 1951.
Claude Shannon also did some beautiful work at the same time, including writing the first paper on computer chess.
Ultimate Victory
From 1950 onward, the basic approach was simply to "throw more computing power at the problem".
By using faster machines with larger memories, and by improving some parts of the playing "strategy", great practical (but little theoretical) progress was made...
...reaching the point where, in 1997, IBM's "Deep Blue" computer defeated human World Champion Garry Kasparov.
...who promptly accused it of "cheating".
HAL Playing Chess "in 2001"
Other Competitive "Games"
The ideas behind computer chess can be, and have been, extended to other domains:
They're not making this up!
"WarGames" is (mostly) fiction,
of course, but it's well-informed fiction
inspired by real stuff!
For example, the U.S. "RAND Corporation"
really did do a lot of work on the "theory of games",
including its applications to the Game Of War.
What Is "Learning"?
I think "learning" is actually quite difficult to define but, for our purposes, I would say that it has something to do with the ability to react differently to a repeated situation or stimulus, as a consequence of prior experience, analysis... or "thought".
Compare this with the popular definition of "insanity":
"Doing the same thing over and over, and expecting a different result."
Well, according to my (imperfect) definition, the learning process has to be embodied in some kind of long-term persistent, but mutable (i.e. changeable) state.
It requires memory.
And the ability to change.
For example, a "learning" Chess program would be able to react to previous results and somehow adjust its strategy.
Turing suggested that it could do this by e.g. tweaking the piece values that it uses in the "static evaluation" function.
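One (hypothetical) way to realise that tweaking is crude hill-climbing: nudge a random piece value, and keep the change only if the measured results improve. The `play_match` function below is an assumed stand-in for actually playing lots of games and reporting a win rate.

```python
import random

# Tweak static-evaluation weights by trial and error, keeping only
# the changes which improve results.

def hill_climb(values, play_match, generations=10, rng=None):
    # values: dict of piece -> weight; play_match(values) -> score in [0,1]
    rng = rng or random.Random(0)
    best, best_score = dict(values), play_match(values)
    for _ in range(generations):
        candidate = dict(best)
        piece = rng.choice(list(candidate))
        candidate[piece] += rng.uniform(-0.5, 0.5)   # small random tweak
        score = play_match(candidate)
        if score > best_score:                       # keep only improvements
            best, best_score = candidate, score
    return best
```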
Reward and Punishment
Turing also claimed (in his 1950 paper), that generalised learning could be implemented via a form of "training", where the machine would be given feedback on its behaviour, and would adapt accordingly.
This is actually quite tricky, because it isn't generally clear what change should be made in response to simple feedback.
If you are "punished" for losing a game, that in itself doesn't tell you what to change in future!
Nevertheless...
Despite the potential limitations, it seems plausible that a program (perhaps one designed for a specialised domain), could be made to implement some primitive form of "learning" behaviour over time.
This has, in fact, already been done (arguably) in the form of so-called "genetic algorithms".
The idea of a learning-mechanism can be extended to perhaps its most general form, if we could somehow learn enough about neuroscience to...
SIMULATE A BRAIN
Our current attempts at this are embodied in the field of "Neural Networks". These were popular when I was at Uni (early 1990s), but fell out of favour because it was too hard to make them big or complex enough to be useful.
What's a Neural Network?
It is a bunch of simple processing elements called "neurons", connected to each other via a (weighted) graph, typically drawn as successive "layers" of neurons.
Realistic ones, naturally, have to be much more complex.
Neurons
The "neurons" of an ANN are modelled on the roughly equivalent structures which exist in biological brains.
In Artificial Neural Networks (ANNs), we implement them as simple combiner mechanisms which map multiple input signals into a single output signal.
They often do this by creating some kind of weighted sum of their inputs, and then modulating that further with a non-linear response function (e.g. a sigmoid).
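A single such "neuron" fits in a few lines of Python (the numbers fed to it are, of course, purely illustrative):

```python
import math

# One artificial neuron: a weighted sum of its inputs (plus a bias),
# squashed through a sigmoid so the output lies in (0, 1).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(activation)
```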
Training
The clever thing about an ANN is that it can be "trained" in such a way as to encourage it to obtain some desired set of outputs from a given set of inputs.
e.g. You could give it a mixture of pictures of cats and dogs, and ask it to output "cat" or "dog" accordingly for each.
Initially, it will be rubbish at this but, with repeated training, it will adjust the "magic numbers" inside the neurons until it gets better at the task. Eventually, given a fresh (untrained) image, it could (ideally) make a correct identification.
The secret to training a neural network is to work out how to change the weightings inside the network in such a way as to make it better approximate the desired output (without, at the same time, "forgetting" whatever else it has already learned).
One way of doing this is via a "Back Propagation" algorithm.
The basic idea here is to take the difference between the actual and expected outputs, and apply that "delta" back through each layer of the network, making minor corrective adjustments to the weightings as you go...
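Here is that "corrective delta" idea at its most minimal: a single neuron learning the OR function. (Real back-propagation chains this same step backwards through every layer; this one-layer version is just the classic "delta rule", with made-up training settings.)

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_or(epochs=2000, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    for _ in range(epochs):
        for x, target in data:
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # delta = (expected - actual), scaled by the sigmoid's slope
            delta = (target - out) * out * (1 - out)
            # apply minor corrective adjustments to the weightings
            w[0] += lr * delta * x[0]
            w[1] += lr * delta * x[1]
            b += lr * delta
    return w, b
```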
A Big Subject
ANNs are a very large, and somewhat difficult, subject area in their own right. I can't possibly go into all the details here, but there is a good online course by Geoffrey E. Hinton (a pioneer in the field) that you could take (for free)!
ANNs are currently being used to recognise speech (oops!), images and handwriting, to perform automatic translation of natural languages, and various other things that were previously considered to be tasks requiring real "intelligence".
Recent Developments
When I first ran this course in 2013, the new wave of Neural Nets was just beginning, and it hadn't really registered on most people's radar...
Since then, it has taken off massively!
Neural Nets are winning in almost every aspect of A.I. now, and can do amazing-looking things like writing automatic captions for images, or creating wonderful dog hallucinations...
Deep Dream
Deep Grocery Shopping
..or visualising strange poems
They Can Even Play Games!
This isn't (entirely) a custom "hack"
Don't Believe The Hype
...all very impressive... but...
The DeepMind thing is rubbish at games like Space Invaders, which is mentioned in NONE of the reporting that I've seen (and I've seen a lot).
ANNs are very clever and fancy, but I haven't got time to say more about them, and will now have to focus on some of the more fundamental and "traditional" stuff instead...
Route-Finding
One of the standard so-called "A.I." problems in games is route-finding: i.e. getting from A to B...
...well, getting from A to B is usually quite easy.
Getting from A to Z, however, is generally much harder.
There is, of course, a "naive" route-finding algorithm which works roughly as follows:
1. Point in the direction of 'Z'.
2. Keep going that way.
3. Hope you don't bump into anything.
Naive Route-Finding Fail
...Unfortunately, that one doesn't always work very well.
Even so, this basic strategy, augmented by a "wall hugging" rule for navigating around obstacles, sometimes works OK.
But one can easily construct (or discover) cases where it fails, and that's when we need something better...
Planning a route, in the general case, is a bit like a single-player game of chess: i.e. you just have to plan a bit ahead at each move in order to approach (or reach) a desirable end-goal.
A Route Tree
At each point on your journey, you have a range of options for where to go next.
...So you could build a "Route Tree", with a branch for each option, and then search through it just like any other "Game Tree", giving scores to the leaf positions (based on how near they are to the final destination) and then bubbling those back up to the starting position.
Unfortunately, even if you restrict yourself to a fairly simple set of fixed directions (e.g. 8 compass points), most routes are long enough that such a tree quickly becomes unwieldy.
Multiple Paths
Also, on a realistic map, there are multiple paths going through any particular point, but we're generally only interested in the shortest of these paths.
We can avoid exploring the redundant long-paths, by "growing" our solution along an expanding contour of shortest-paths-so-far.
After the target node has been found, it is easy to backtrack through the preceding "short paths" to find the whole route.
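Here's a sketch of that "expanding contour" on a uniform-cost grid, where Dijkstra's algorithm reduces to a breadth-first search. Each node records its predecessor, so the whole route can be backtracked at the end. (The grid encoding is invented for illustration.)

```python
from collections import deque

def find_route(grid, start, goal):
    # grid: list of equal-length strings, '#' = wall; start/goal: (row, col)
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}          # predecessor of each reached node
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()      # nodes come off in contour order
        if node == goal:
            break
        r, c = node
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = node
                frontier.append((nr, nc))
    if goal not in came_from:
        return None                    # unreachable
    path, node = [], goal
    while node is not None:            # backtrack through predecessors
        path.append(node)
        node = came_from[node]
    return path[::-1]                  # reverse: start -> goal
```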
Grids vs Graphs
We've just seen Dijkstra's algorithm processing a "grid" type map, but real ones are often represented as "graphs" instead.
A graph representation can be more economical, because it only stores important "landmarks". In games, this is called a "Navmesh".
Navigation Meshes
A Navmesh is a simplified representation of a map, which is designed to accelerate or simplify route-finding. They're usually pre-computed from the detailed map geometry.
A* Search
(Pronounced "A-Star Search")
Dijkstra's algorithm is nice but, if you look at how it behaves on the grid-based example, you'll see that it covers a lot of nodes which a human would say were obviously "off the beaten track" and very unlikely to be part of the route.
This is because it expands nodes purely in order of distance-so-far, in every direction equally: it has no sense of which way the goal lies.
A more efficient approach would be to preferentially explore the "most promising nodes".
Most Promising?
But if we somehow magically knew which nodes were actually the "most promising", we'd just follow those ones directly to begin with, and the problem would be solved!
The point is that our idea of "most promising" can only be a guess (aka a "heuristic") and, as such, it might be wrong.
But, if it's a reasonable guess, it could still be useful as a guide... in particular, it would encourage us to pursue some paths more eagerly than others.
A simple but effective heuristic for such a search is the "straight line distance" to the destination.
A* Is Born!
The A* Algorithm works by employing a guiding heuristic of this kind, and it results in a rather efficient search.
Note how it starts to take the "straight line" path (guided by the heuristic), but then explores the other paths when it has to.
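In code, A* is a small twist on a Dijkstra-style search: the frontier becomes a priority queue, ordered by cost-so-far PLUS the heuristic guess of the remaining distance. (This sketch uses Manhattan distance as a grid-friendly stand-in for "straight line distance"; the grid encoding is invented for illustration.)

```python
import heapq

def heuristic(a, b):
    # Manhattan distance: admissible on a 4-connected grid.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [(heuristic(start, goal), 0, start)]   # (guess, cost, node)
    came_from = {start: None}
    cost_so_far = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:       # backtrack through predecessors
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                new_cost = cost + 1       # uniform transit cost per step
                if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                    cost_so_far[nxt] = new_cost
                    came_from[nxt] = node
                    heapq.heappush(frontier,
                                   (new_cost + heuristic(nxt, goal),
                                    new_cost, nxt))
    return None
```

Varying the `new_cost` step (rather than always adding 1) is also where per-node "transit costs" for rough terrain would plug in.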
A* In Action
Sensory Search
...in fact, A* is sometimes a little bit too good.
It has "perfect knowledge" of the map (more than we might expect an in-game character to have, perhaps?)
A more "realistic" in-game search might impose some restrictions on this e.g. by only "knowing" about obstacles within some sensory range of the character.
BTW: All of these techniques also allow for different "terrain types" etc. This is done simply by modifying the "transit cost" of each node so that it is higher for rough terrain.
Wrapping It Up...
Turing In Context
It has been said that the question "Can machines think?" is as meaningless as asking "Can submarines swim?"
...or, perhaps, "Can aeroplanes fly?"
The point being that machines can, increasingly, solve the kinds of problems that we solve by "thinking", but they may do so using other means, more suited to how they operate.
But my own opinion is that, in principle, a machine could be made to "think", in our sense of the word. And I also think that, one day (unless we destroy ourselves first), they will.
Turing's View
"This is only a foretaste of what is to come, and only the shadow of what is going to be. We have to have some experience with the machine before we really know its capabilities.
It may take years before we settle down to the new possibilities, but I do not see why it should not enter any of the fields normally covered by the human intellect and eventually compete on equal terms."
--- Alan Turing, 1949
A Strange Game
Special Appearance by Iceland
Dark Star
(co-written by and co-starring the late great Dan O'Bannon --- the writer of "Alien" and "Total Recall")
PS: Some Words Of Caution
Cognitive Scientist Steven Pinker
says:
"Human-level AI is 15-25 years away (& always has been)",
in his Twitter summary of this amusing paper.