Game Theory I:
Simultaneous-Move Games

Christopher Makler

Stanford University Department of Economics

Econ 51: Lecture 9

  • Motivation: why game theory?
  • Notation and setup
    • Components of a game
    • The normal form
  • Optimal choice 
    • Dominant and dominated strategies
    • Best responses
  • Equilibrium
    • Equilibrium in dominant strategies
    • Iterated deletion of strictly dominated strategies
    • Best response (Nash) equilibrium
    • Multiple equilibria and belief formation

Today's Agenda

  • Up until now: agents only (really) interact with "the market" via prices
  • In real life, people, firms, countries ("players") interact with each other.
  • Our economic lives are interconnected: our well-being doesn't depend only on our own actions, but on the actions taken by others
  • Questions:
    • OPTIMIZATION: How do you operate in a world like this?
    • EQUILIBRIUM: What is our notion of "equilibrium" in a world like this, and how is it different from competitive equilibrium?
    • POLICY: Given how people behave in strategic settings, how can we design "mechanisms" to achieve policy goals?

Motivation

  • The branch of economics that studies strategic interactions between economic agents.
  • Everyone's payoffs depend on the actions chosen by all agents
  • To "play the game," each agent thinks strategically about how the other agents are playing

Game Theory

  • Industrial organization: situations where a few firms dominate the market,
    and each firm's decisions affect others
  • Political economy: campaigning, governing, international diplomacy,
    provision of public goods
  • Contract negotiations: incentive structures, credible threats, negotiating over price
  • Interpersonal relationships: team dynamics, division of chores within a family

Applications

  • Tuesday 11/1: Lecture 9 on simultaneous-move games with pure strategies

  • Thursday 11/3: Lecture 10 on sequential and repeated games

  • Sunday 11/6: Homework for Lectures 9 & 10 due.
     

  • Tuesday 11/8: Democracy Day, no class
     

  • Thursday 11/10: Lecture 11 on simultaneous-move games with mixed strategies

  • Tuesday 11/15: Lecture 12 on simultaneous-move games with incomplete information

  • Tuesday 11/15: Homework for Lectures 11 & 12 due.
     

  • Thursday 11/17: Midterm II

Plan for the next three weeks

  • Players: who is playing the game?
  • Actions: what can the players do at different points in the game?
  • Information: what do the players know when they act?
  • Outcomes: what happens, as a function of all players' choices?
  • Payoffs: what are players' preferences over outcomes?

Components of a Game

Two bikers approach each other on an unmarked bike path, and each must choose whether to veer Left or Right.

Outcomes (player 1 chooses the row, player 2 the column):

                Left        Right
    Left        Both OK     Crash
    Right       Crash       Both OK

Payoffs (player 1's payoff is listed first in each cell):

                Left        Right
    Left        1 , 1       0 , 0
    Right       0 , 0       1 , 1
Strategies and Strategy Spaces

A strategy is a complete, contingent plan of action for a player in a game.

This will take on more meaning when we look at games that aren't played simultaneously.

A strategy space is the set of all strategies available to a player.

Continuous: like Cournot; each agent chooses (e.g.) a real number (payoff function)

Discrete: each agent chooses one of a finite number of options (payoff matrix)

Normal-Form Game

List of players: \(i = 1, 2, ..., n\)

Strategy spaces for each player, \(S_i\)

Payoff functions for each player \(i\): \(u_i(s)\),
where \(s = (s_1, s_2, ..., s_n)\) is a strategy profile 
listing each player's chosen strategy.

Example: Two Player Game (Discrete Strategies)

Strategy for player \(i\): \(s_i\)
(here, \(s_1 = Z\) and \(s_2 = B\))

Strategy space for player \(i\): \(S_i\), the set of all possible strategies for player \(i\)
(here, \(S_1 = \{X,Y,Z\}\) and \(S_2 = \{A,B,C,D\}\))

Strategy profile: \((s_1, s_2)\), the list of strategies chosen by each player
(here, \((Z, B)\))

Payoffs for both players, as a function of what strategies are played, are displayed in a payoff matrix: player 1 chooses the row (\(X\), \(Y\), or \(Z\)), player 2 chooses the column (\(A\), \(B\), \(C\), or \(D\)), and each cell lists player 1's payoff followed by player 2's payoff for that strategy profile.
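To make the notation concrete, here is a minimal Python sketch (not from the original slides) that builds the strategy spaces above and enumerates every possible strategy profile; the payoff values themselves are left abstract.

    from itertools import product

    # Strategy spaces from the discrete example above
    S1 = ["X", "Y", "Z"]           # player 1's strategy space
    S2 = ["A", "B", "C", "D"]      # player 2's strategy space

    # A strategy profile picks one strategy per player, e.g. (Z, B)
    profile = ("Z", "B")

    # The set of all strategy profiles is the Cartesian product S1 x S2;
    # a payoff matrix assigns a pair of payoffs (u1, u2) to each of them.
    for s1, s2 in product(S1, S2):
        print((s1, s2))            # 3 x 4 = 12 possible profiles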

Example: Two Player Game (Continuous Strategies)

Strategy for player \(i\): \(s_i\)
(here, \(s_1 = 6\) and \(s_2 = 3\))

Strategy space for player \(i\): \(S_i\), the set of all possible strategies for player \(i\)
(here, \(S_1 = \mathbb{R}_+\) and \(S_2 = \mathbb{R}_+\))

Strategy profile: \((s_1, s_2)\), the list of strategies chosen by each player
(here, \((6, 3)\))

Payoffs for both players, as a function of what strategies are played:

\pi_1(s_1,s_2) = 12s_1 - s_1^2 - s_1s_2
\pi_2(s_1,s_2) = 12s_2 - s_2^2 - s_1s_2
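As a quick check on how these payoff functions work, evaluate them at the example profile \((s_1, s_2) = (6, 3)\):

\pi_1(6,3) = 12(6) - 6^2 - (6)(3) = 72 - 36 - 18 = 18
\pi_2(6,3) = 12(3) - 3^2 - (6)(3) = 36 - 9 - 18 = 9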
i = \text{Player }i
-i = \text{Player(s) other than }i
\text{Example: Consider a strategy profile for four players }s = (s_1, s_2, s_3, s_4)
\text{If we consider player 2, then }s_i = s_2 \text{ and } s_{-i}=(s_1, s_3, s_4)
u_i(s_i,s_{-i}) = \text{Player $i$'s utility from playing $s_i$ when others play $s_{-i}$}

Notation Convention

Optimal Choice


  • What strategy should a player choose?
  • It may depend on what the other player is doing, or it may not
    • Dominant strategy: best no matter what the other player does
    • Dominated strategy: some other strategy is better no matter what the other player does
  • If there is no dominant strategy, your best move will depend on what the other person is doing (best response).
  • Players: prisoners being interrogated in separate rooms.
  • Strategies: "cooperate" (don't rat out other)
    or "defect" (squeal like the little rat you are)
  • Payoffs:
    • if they both cooperate, the prosecutor doesn't have much to go on, so they each get a light sentence.
    • If one defects while the other cooperates, the defector goes free (and the lone cooperator gets the harshest sentence).
    • If they both defect, they both go to jail for a long time.

Prisoners' Dilemma

Player 1 chooses the row, player 2 the column; player 1's payoff is listed first in each cell.

                Cooperate    Defect
    Cooperate   2 , 2        0 , 3
    Defect      3 , 0        1 , 1

If you believe the other person will cooperate, what is your best response? Defect.

If you believe the other person will defect, what is your best response? Defect.

Because Defect always results in a strictly higher payoff than Cooperate, no matter what the other player does, we say that Defect strictly dominates Cooperate.


(C,C) Pareto dominates (D,D)

(D,D) is a dominant strategy equilibrium

The First Strategic Dilemma:

Everyone doing what's best for themselves can lead to a group loss.
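As a sanity check on the dominance claim, here is a minimal Python sketch (not from the slides) that encodes the payoff matrix above and verifies that Defect strictly dominates Cooperate for the row player; by symmetry, the same holds for the column player.

    # Prisoners' Dilemma payoffs: payoff[(row, col)] = (u1, u2)
    payoff = {
        ("C", "C"): (2, 2), ("C", "D"): (0, 3),
        ("D", "C"): (3, 0), ("D", "D"): (1, 1),
    }

    def row_strictly_dominates(a, b):
        """True if row strategy a gives player 1 a strictly higher payoff
        than row strategy b against every column strategy."""
        return all(payoff[(a, col)][0] > payoff[(b, col)][0] for col in ("C", "D"))

    print(row_strictly_dominates("D", "C"))  # True: Defect strictly dominates Cooperate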

Strict vs. Weak Dominance

Player 1 chooses Top or Bottom; player 2 chooses Left or Right. Player 1's payoff is listed first in each cell.

                Left      Right
    Top         2 , 5     5 , 5
    Bottom      1 , 0     4 , 1

Right weakly dominates Left.

Top strictly dominates Bottom.

Iterated Dominance

The process of repeatedly eliminating strictly dominated strategies, until no remaining strategy is strictly dominated.

Rationalizable Strategies

The set of strategies that survive iterated dominance.

Which strategy or strategies, if any, are strictly dominated for each player?

Player 1 chooses the row (Top, Middle, or Bottom); player 2 chooses the column (Left, Center, or Right). Player 1's payoff is listed first in each cell.

                Left      Center    Right
    Top         1 , 2     1 , 1     8 , 0
    Middle      4 , 3     1 , 4     2 , 1
    Bottom      3 , 0     3 , 2     8 , 0

Center strictly dominates Right.

If we know that player 2 will never play Right, is any strategy now dominated for player 1?

Bottom strictly dominates Top.

And with that off the board...

Center strictly dominates Left.

Can we eliminate anything else?

Bottom strictly dominates Middle.

Only (Bottom, Center) survives iterated dominance.
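Here is a minimal Python sketch (not from the slides) that runs iterated deletion of strictly dominated strategies on the payoff matrix above; it should finish with only Bottom and Center remaining.

    # Payoffs from the 3x3 matrix above: payoff[(row, col)] = (u1, u2)
    payoff = {
        ("Top", "Left"): (1, 2), ("Top", "Center"): (1, 1), ("Top", "Right"): (8, 0),
        ("Middle", "Left"): (4, 3), ("Middle", "Center"): (1, 4), ("Middle", "Right"): (2, 1),
        ("Bottom", "Left"): (3, 0), ("Bottom", "Center"): (3, 2), ("Bottom", "Right"): (8, 0),
    }
    rows = ["Top", "Middle", "Bottom"]   # player 1's surviving strategies
    cols = ["Left", "Center", "Right"]   # player 2's surviving strategies

    def dominated(strats, others, get_payoff):
        """Strategies in `strats` strictly dominated by some other strategy in `strats`,
        comparing payoffs against every surviving opponent strategy in `others`."""
        return {s for s in strats
                if any(all(get_payoff(d, o) > get_payoff(s, o) for o in others)
                       for d in strats if d != s)}

    changed = True
    while changed:
        bad1 = dominated(rows, cols, lambda r, c: payoff[(r, c)][0])
        bad2 = dominated(cols, rows, lambda c, r: payoff[(r, c)][1])
        changed = bool(bad1 or bad2)
        rows = [r for r in rows if r not in bad1]
        cols = [c for c in cols if c not in bad2]

    print(rows, cols)   # expected output: ['Bottom'] ['Center']

The strategies that survive, Bottom for player 1 and Center for player 2, are the rationalizable strategies defined above.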

Definition: Best Response

\text{Let }s_{-i}\text{ be the strategies being played by all players other than player }i
\text{We say }s_{i}^*\text{ is a \textbf{best response} to }s_{-i}\text{ if}
u_i(s_i^*,s_{-i}) \ge u_i(s'_i,s_{-i})
\text{ for every available strategy }s'_i \in S_i

In plain English: given what the other player(s) are doing,
a strategy is my "best response"
if there is no other strategy available to me
that would give me a higher payoff.

(On the left-hand side: player \(i\)'s payoff from \(s_i^*\); on the right-hand side: player \(i\)'s payoff from \(s'_i\).)
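To connect this definition to the continuous example from earlier, we can derive player 1's best response to any \(s_2\) from the payoff function \(\pi_1(s_1,s_2) = 12s_1 - s_1^2 - s_1s_2\) by taking the first-order condition:

\frac{\partial \pi_1}{\partial s_1} = 12 - 2s_1 - s_2 = 0 \quad\Rightarrow\quad BR_1(s_2) = \frac{12 - s_2}{2}

By symmetry, \(BR_2(s_1) = \frac{12 - s_1}{2}\).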

Consider again the game from the iterated-dominance example (player 1 chooses the row, player 2 the column; player 1's payoff is listed first in each cell):

                Left      Center    Right
    Top         1 , 2     1 , 1     8 , 0
    Middle      4 , 3     1 , 4     2 , 1
    Bottom      3 , 0     3 , 2     8 , 0

How should player 1 best respond to a belief that player 2 will play Left? What about Center or Right?

Believe Left => play Middle
Believe Center => play Bottom
Believe Right => play Top or Bottom

What about player 2?

Believe Top => play Left
Believe Middle => play Center
Believe Bottom => play Center
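Here is a minimal Python sketch (not from the slides) that computes each player's best response to every possible belief about the other player, reproducing the list above.

    # Same payoff matrix as above: payoff[(row, col)] = (u1, u2)
    payoff = {
        ("Top", "Left"): (1, 2), ("Top", "Center"): (1, 1), ("Top", "Right"): (8, 0),
        ("Middle", "Left"): (4, 3), ("Middle", "Center"): (1, 4), ("Middle", "Right"): (2, 1),
        ("Bottom", "Left"): (3, 0), ("Bottom", "Center"): (3, 2), ("Bottom", "Right"): (8, 0),
    }
    rows = ["Top", "Middle", "Bottom"]
    cols = ["Left", "Center", "Right"]

    # Player 1's best response(s) to each belief about player 2's strategy
    for c in cols:
        best = max(payoff[(r, c)][0] for r in rows)
        print(f"Believe {c} => play " + " or ".join(r for r in rows if payoff[(r, c)][0] == best))

    # Player 2's best response(s) to each belief about player 1's strategy
    for r in rows:
        best = max(payoff[(r, c)][1] for c in cols)
        print(f"Believe {r} => play " + " or ".join(c for c in cols if payoff[(r, c)][1] == best))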

Equilibrium

pollev.com/chrismakler

Definition: Best Response (Nash) Equilibrium

\text{A strategy profile }s^* = (s_1^*,s_2^*,...,s_n^*)\text{ is a \textbf{Nash Equilibrium} if}
u_i(s^*) \ge u_i(s'_i,s^*_{-i})
\text{ for every available strategy }s'_i \in S_i \text{, for all players } i=1,2,...,n

In plain English: in a Nash Equilibrium, every player is playing a best response to the strategies played by the other players.

(On the left-hand side: player \(i\)'s equilibrium payoff; on the right-hand side: player \(i\)'s payoff from some deviation \(s'_i\).)

In other words: there is no profitable unilateral deviation 
given the other players' equilibrium strategies.
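Applying this to the continuous example from earlier: a Nash equilibrium is a profile at which both best responses hold simultaneously. Using \(BR_1(s_2) = (12 - s_2)/2\) and \(BR_2(s_1) = (12 - s_1)/2\) derived above,

s_1^* = \frac{12 - s_2^*}{2} \quad\text{and}\quad s_2^* = \frac{12 - s_1^*}{2} \quad\Rightarrow\quad s_1^* = s_2^* = 4

with equilibrium payoffs \(\pi_1 = \pi_2 = 12(4) - 4^2 - (4)(4) = 16\).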

Stag Hunt Game

                Stag      Hare
    Stag        5 , 5     0 , 4
    Hare        4 , 0     4 , 4
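A quick check against the definition: at (Stag, Stag) each player earns 5 and would earn only 4 by unilaterally switching to Hare; at (Hare, Hare) each earns 4 and would earn only 0 by unilaterally switching to Stag. So both profiles are Nash equilibria, giving a first example of multiple equilibria, where beliefs about what the other player will do start to matter.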

Coordination Game

                Left      Right
    Top         2 , 1     0 , 0
    Bottom      0 , 0     1 , 2

Pareto Coordination Game

                Left      Right
    Top         2 , 2     0 , 0
    Bottom      0 , 0     1 , 1

Contribution to a Public Good

  • You have $12
  • You can contribute $1, $2, $3, $4, $5, or $6
  • Your payoff is the amount of money you have left, multiplied by the average donation in the class (see the sketch below).
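To see the strategic tension, here is a minimal Python sketch (not from the slides). It assumes, purely for illustration, a class of 30 students in which everyone else donates $3, and computes your payoff for each possible contribution.

    # Public-good contribution game: payoff = (money kept) * (average donation).
    # Illustrative assumptions (not from the slides): 30 students, everyone else gives $3.
    N_STUDENTS = 30
    OTHERS_TOTAL = 3 * (N_STUDENTS - 1)    # total donated by the other 29 students

    for c in range(1, 7):                  # your possible contributions: $1, ..., $6
        average = (OTHERS_TOTAL + c) / N_STUDENTS
        payoff = (12 - c) * average
        print(f"contribute ${c}: payoff = {payoff:.2f}")

    # In a large class, your own donation barely moves the average, so contributing
    # the minimum maximizes your own payoff -- even though everyone contributing
    # more would leave everyone better off. The same logic as the Prisoners' Dilemma.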
