Next: Bibliography Up: Models, Algorithms and Economics Previous: Game Theory

Nash Equilibrium

A short biography of John F. Nash, Jr. is available from the Nobel e-Museum.

First consider only pure strategies (no randomization or mixed strategies). A strategy profile $s=(s_1,\cdots, s_I)$ constitutes a Nash Equilibrium if for every $i\in \{1,\cdots,I\}$ $u_i(s_i, s_{-i}) \ge u_i(s'_i, s_{-i})$ for all $s'_i \in S_i$. Which strategy profile constitutes a Nash Equilibrium in the following game?

        y1    y2    y3
  x1   1,4   2,2   2,3
  x2   3,1   1,5   4,1
  x3   2,0   3,4   1,2

If Player 2 plays y2, then the best response of Player 1 is x3; and if Player 1 plays x3, then the best response of Player 2 is y2. Hence the strategy profile (x3,y2) is a Nash Equilibrium. What if Player 2 plays y1? Then Player 1's best response is x2, but against x2 Player 2's best response is y2, not y1. Similarly, if Player 2 plays y3, then Player 1's best response is x2, but against x2 Player 2's best response is y2. Therefore $\{(x_3,y_2)\}$ is the only pure strategy Nash Equilibrium.
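The case analysis above can be automated. Here is a minimal sketch (not part of the original notes) that brute-forces all pure strategy Nash Equilibria of the game by checking, in each cell, whether either player has a profitable deviation:

```python
# payoffs[i][j] = (u1, u2) when Player 1 plays x_{i+1} and Player 2 plays y_{j+1}.
payoffs = [
    [(1, 4), (2, 2), (2, 3)],  # x1
    [(3, 1), (1, 5), (4, 1)],  # x2
    [(2, 0), (3, 4), (1, 2)],  # x3
]

def pure_nash(payoffs):
    """Return all (row, col) cells where neither player gains by deviating."""
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            u1, u2 = payoffs[i][j]
            best_row = all(payoffs[k][j][0] <= u1 for k in range(rows))
            best_col = all(payoffs[i][l][1] <= u2 for l in range(cols))
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria

print(pure_nash(payoffs))  # [(2, 1)], i.e. the profile (x3, y2)
```

The search confirms that (x3,y2) is the unique pure strategy equilibrium.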

Consider the game of Matching Pennies.

           Heads    Tails
  Heads    -1,1     1,-1
  Tails    1,-1     -1,1

Observe that Matching Pennies has no pure strategy equilibrium.
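The same brute-force check as before (a sketch, not from the notes) verifies this claim: in every cell of Matching Pennies, one of the players has a profitable deviation.

```python
pennies = [
    [(-1, 1), (1, -1)],  # Player 1 plays Heads
    [(1, -1), (-1, 1)],  # Player 1 plays Tails
]

def has_pure_nash(payoffs):
    """True if some cell is a best response for both players simultaneously."""
    rows, cols = len(payoffs), len(payoffs[0])
    for i in range(rows):
        for j in range(cols):
            u1, u2 = payoffs[i][j]
            if all(payoffs[k][j][0] <= u1 for k in range(rows)) and \
               all(payoffs[i][l][1] <= u2 for l in range(cols)):
                return True
    return False

print(has_pure_nash(pennies))  # False
```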

When does a (pure strategy) Nash Equilibrium exist?
(We follow the treatment in the book [OR94].)

First we need a few definitions and Kakutani's fixed point theorem. For any player i, define the best response correspondence $b_i: S_{-i}\rightarrow S_i$ such that for each $s_{-i}\in S_{-i}$, the set

$b_i(s_{-i}) = \{s_i\in S_i : u_i(s_i, s_{-i})\ge u_i(s'_i, s_{-i}) \mbox{ for all } s'_i\in S_i\}.$

Observe that the strategy profile $(s_1,s_2,\cdots,s_I)$ is a Nash Equilibrium if and only if $s_i\in b_i(s_{-i})$ for each i. Define the correspondence $b:S\rightarrow S$ by

$b(s_1,s_2,\cdots,s_I)=b_1(s_{-1})\times b_2(s_{-2})\times \cdots \times b_I(s_{-I}),$

where $S=S_1\times S_2 \times \cdots \times S_I$.
A correspondence $f:A\rightarrow B$, where $A \subset R^n$ and the set $B\subset R^m$ is closed, has a closed graph if for any two converging sequences $a^k\rightarrow a\in A$ and $b^k\rightarrow b$, where $a^k\in A$ and $b^k\in f(a^k)$ for all k, we have $b\in f(a)$. The correspondence $f:A\rightarrow B$, where B is closed, is upper hemicontinuous if it has a closed graph and the images of compact sets are bounded. A function $f:A\rightarrow R$ defined on a convex set $A \subset R^n$ is quasi-concave if its upper contour set $\{x\in A: f(x)\ge t\}$ is convex for every value t.

Here is the fixed point theorem due to Kakutani; for an illustration in one dimension see Figure 3.

Figure 3: Illustration of Kakutani's Theorem in 1-dimension
\begin{figure}\epsfig{file=kaku.eps, height=6cm}\end{figure}

Theorem 3 (Kakutani's fixed point theorem)   Let $A \subset R^n$ be a nonempty, convex, compact set and $f: A \rightarrow A$ be an upper hemicontinuous correspondence from A to itself, such that $f(x)\subset A$ is nonempty and convex for each $x \in A$. Then f has a fixed point, i.e., $x\in f(x)$ for some $x \in A$.

Lemma 2   If the sets $S_1,\cdots, S_I$ are nonempty, each $S_i$ is compact and convex, and the utility function $u_i$ is continuous and quasi-concave in $s_i$, then the correspondence $b_i$ is nonempty-valued, convex-valued, and upper hemicontinuous.

Proof: By definition $b_i(s_{-i})$ is the set of maximizers of the continuous function $u_i(\cdot,s_{-i})$ over the compact set $S_i$; therefore $b_i(s_{-i})\not=\emptyset$. The set $b_i(s_{-i})$ is convex, since the set of maximizers of a quasi-concave function on a convex set is convex. For upper hemicontinuity, we need to show that for any sequence $(s^n_i,s^n_{-i})\rightarrow (s_i, s_{-i})$ such that $s^n_i\in b_i(s^n_{-i})$, we have $s_i\in b_i(s_{-i})$. For all n, $u_i(s^n_i,s^n_{-i})\ge u_i(s'_i,s^n_{-i})$ for all $s'_i \in S_i$. Taking the limit as $n\rightarrow\infty$ and using the continuity of $u_i$, we get $u_i(s_i, s_{-i}) \ge u_i(s'_i, s_{-i})$ for all $s'_i\in S_i$. This implies that $s_i\in b_i(s_{-i})$. $\Box$

Theorem 4   A game has a Nash Equilibrium if for all $i\in \{1,\cdots,I\}$, the set $S_i$ is a nonempty, convex and compact subset of Euclidean space, and the utility function $u_i$ is continuous and quasi-concave in $s_i$.

Proof: Recall the definition of the correspondence $b:S\rightarrow S$ given by $b(s_1,s_2,\cdots, s_I)=b_1(s_{-1})\times b_2(s_{-2})\times \cdots \times b_I(s_{-I}),$ where $S=S_1\times S_2 \times \cdots \times S_I$. From Lemma 2 we know that the $b_i$'s are nonempty-valued, convex-valued and upper hemicontinuous. Now all the conditions of Kakutani's fixed point theorem are satisfied by the correspondence b. Therefore b has a fixed point, say $s\in S$, with $s\in b(s)$. Clearly s is a Nash Equilibrium since $s_i\in b_i(s_{-i})$ for each i. $\Box$

Of course this theorem applies when the sets $S_i$ are convex, but in our examples like Matching Pennies the sets $S_i$ are finite! Before we do the proofs for this scenario, let's introduce mixed strategy Nash Equilibrium.

A mixed strategy profile $\sigma=(\sigma_1,\cdots,\sigma_I)$ constitutes a Nash Equilibrium if for every $i=1,\cdots,I$, $u_i(\sigma_i, \sigma_{-i}) \ge u_i(\sigma'_i, \sigma_{-i})$ for all $\sigma'_i\in \Delta(S_i)$. For example, consider the Matching Pennies game again. It has a mixed strategy equilibrium in which each player chooses Heads or Tails with equal probability. Although this game has no pure strategy Nash Equilibrium, it has a mixed strategy Nash Equilibrium! Here is the celebrated Nash Theorem.
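A small sketch (my own computation, not from the notes) confirms the Matching Pennies claim: against $\sigma_2=(1/2,1/2)$, both of Player 1's pure strategies give the same expected payoff, so no deviation helps, and by symmetry the same holds for Player 2.

```python
pennies = [
    [(-1, 1), (1, -1)],  # Player 1 plays Heads
    [(1, -1), (-1, 1)],  # Player 1 plays Tails
]
sigma2 = (0.5, 0.5)  # Player 2 mixes Heads/Tails equally

def expected_u1(row, sigma2):
    """Expected payoff to Player 1 from a pure row against Player 2's mix."""
    return sum(q * pennies[row][j][0] for j, q in enumerate(sigma2))

print(expected_u1(0, sigma2), expected_u1(1, sigma2))  # 0.0 0.0
```

Since every pure strategy earns 0 against the 50/50 mix, mixing 50/50 is a best response, and the profile is an equilibrium.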

Theorem 5 (Nash Equilibrium Theorem)   Every finite strategic game has a mixed strategy Nash Equilibrium.

Proof: Suppose player i has M pure strategies in the set $S_i$, i.e., $|S_i|=M$. Recall the simplex $\Delta(S_i)$, representing the probability vectors

$\Delta(S_i)=\{(p_1,\cdots,p_M): p_j\ge 0 \mbox{ for all } j, \mbox{ and } \sum_{j=1}^M p_j=1\}.$

Clearly this simplex is nonempty, compact and convex. Notice that the expected utility is linear in the probabilities, implying that the utility function under mixed strategies is continuous as well as quasi-concave. Now we have all the properties required to apply Kakutani's theorem (as in Theorem 4), so a mixed strategy Nash Equilibrium exists. $\Box$

In the following lemma we state an important property of mixed strategy Nash Equilibria. It says that each player, given the distribution of strategies played by the opponents, is indifferent among all the pure strategies that (s)he plays with positive probability. This lemma can be used to find mixed strategy equilibria.

Lemma 3   Strategy profile $\sigma=(\sigma_1,\sigma_2,\cdots,\sigma_I)$ is an equilibrium if and only if for every player $i\in I$ every pure strategy $s_i\in S_i$ in the support of $\sigma_i$ (i.e. having non-zero probability of being selected) is a best response to $\sigma_{-i}$.

Proof: Necessity: Suppose that there is a strategy $s_i\in S_i$ in the support of $\sigma_i$ that is not a best response to $\sigma_{-i}$. Then by the linearity of the utility function $u_i$, player i can increase h(is/er) utility by transferring the probability from $s_i$ to a strategy $s'_i$ that is a best response (i.e., player i would play $s'_i$ in place of $s_i$).
Sufficiency: Suppose that every pure strategy in the support of $\sigma_i$ is a best response to $\sigma_{-i}$, but $\sigma$ is not a Nash Equilibrium. Then there is a player i with a strategy $\sigma'_i$ such that $u_i(\sigma'_i,\sigma_{-i})> u_i(\sigma_i,\sigma_{-i})$. This implies that there is a pure strategy $s'_i \in S_i$ such that $u_i(s'_i,\sigma_{-i})>u_i(\sigma_i,\sigma_{-i})$. But $u_i(\sigma_i,\sigma_{-i})$ is a weighted average of $u_i(s_i,\sigma_{-i})$ over the strategies $s_i$ in the support of $\sigma_i$, and these are all best responses to $\sigma_{-i}$, so $u_i(\sigma_i,\sigma_{-i})=\max_{s_i\in S_i} u_i(s_i,\sigma_{-i})\ge u_i(s'_i,\sigma_{-i})$, a contradiction. $\Box$

What are the Nash Equilibria of the following game?

           Col-1    Col-2
  Row-1    2,2      3,4
  Row-2    1,3      5,1

Suppose Player 1 selects Row-1 and Row-2 with probability 1-p and p respectively, i.e., $\sigma_1=(1-p,p)$. Then the utility of Player 2 under strategies Col-1 and Col-2 should be the same, i.e.,

$2(1-p)+3p = 4(1-p)+1p,$ which gives $p=1/2.$

Similarly, if Player 2 selects Col-1 and Col-2 with probability $\sigma_2=(1-q,q)$, then

$2(1-q)+3q = 1(1-q)+5q,$ which gives $q=1/3.$

Therefore the Nash Equilibrium is $\sigma=(\sigma_1,\sigma_2)=((1/2, 1/2),(2/3, 1/3))$.
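The indifference conditions are linear in p and q, so they can be solved exactly. A sketch (variable names are my own) using exact rational arithmetic:

```python
from fractions import Fraction

# Player 2 is indifferent between Col-1 and Col-2 against sigma_1 = (1-p, p):
#   2(1-p) + 3p = 4(1-p) + 1p,  i.e.  2 + 1*p = 4 + (-3)*p.
# A linear equation a + b*p = c + d*p has solution p = (c - a) / (b - d).
a, b, c, d = 2, 1, 4, -3
p = Fraction(c - a, b - d)

# Player 1 is indifferent between Row-1 and Row-2 against sigma_2 = (1-q, q):
#   2(1-q) + 3q = 1(1-q) + 5q,  i.e.  2 + 1*q = 1 + 4*q.
a, b, c, d = 2, 1, 1, 4
q = Fraction(c - a, b - d)

print(p, q)  # 1/2 1/3
```

This reproduces the equilibrium $\sigma=((1/2,1/2),(2/3,1/3))$, since Player 2's mix is $(1-q,q)$.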

Here is a graphical way of computing Nash Equilibrium of $2\times 2$ games. In general let the game be given as follows:

           1-q       q
  1-p    a1,b1    a2,b2
  p      a3,b3    a4,b4

Expected utility of the row player (Player 1) will be

$u_1= (1-p)(1-q)a_1+ (1-p)q\,a_2+ p(1-q)a_3+ pq\,a_4,$ or

$u_1= (a_1-a_2-a_3+a_4)pq+ (a_3-a_1)p+ (a_2-a_1)q + a_1.$

Similarly, expected utility of the column player, Player 2, will be

$u_2= (1-p)(1-q)b_1+ (1-p)q\,b_2+ p(1-q)b_3+ pq\,b_4,$ or

$u_2= (b_1-b_2-b_3+b_4)pq+ (b_3-b_1)p+ (b_2-b_1)q + b_1.$

First consider Player 1 (the row player). We can rewrite the utility function $u_1$ as a function of p, representing the equation of a line $u_1=\mbox{(slope)}p+\mbox{(constant)}$, as follows:

$u_1=[(a_1-a_2-a_3+a_4)q+ (a_3-a_1)]p + [(a_2-a_1)q+a_1].$

The aim of Player 1 is to maximize $u_1$. In our example,

$u_1=[3q-1]p + [q+2].$

For $0\le q<1/3$, p=0. For q=1/3, $0\le p \le 1$. For $1/3<q\le 1$, p=1.

Similarly, the aim of Player 2 is to maximize $u_2$. We can rewrite

$u_2=[(b_1-b_2-b_3+b_4)p+ (b_2-b_1)]q + [(b_3-b_1)p+b_1],$

which in our example is

$u_2=[-4p+2]q + [p+2].$

For $0\le p<1/2$, q=1. For p=1/2, $0\le q \le 1$. For $1/2<p\le1$, q=0.

The plots are shown in Figure 4; the intersection of the two graphs, at p=1/2 and q=1/3, is the Nash Equilibrium in this case. These graphs can be viewed as ``no regrets graphs'' of the two players: given the strategy of one player, the best strategy of the other player is the corresponding point on the graph. For example, if Player 1 chooses p=0.2, then the best strategy for Player 2 is to choose q=1 (Player 2 has no regrets in choosing q=1).
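The two ``no regrets graphs'' can be computed directly from the slope signs. For this example's payoffs, the slope of $u_1$ in p works out to $3q-1$ and the slope of $u_2$ in q to $-4p+2$; a sketch (function names are my own):

```python
def best_p(q):
    """Row player's best responses: slope of u1 in p is 3q - 1."""
    slope = 3 * q - 1
    if slope > 0:
        return [1.0]
    if slope < 0:
        return [0.0]
    return [0.0, 1.0]  # indifferent: any p in [0, 1] is a best response

def best_q(p):
    """Column player's best responses: slope of u2 in q is -4p + 2."""
    slope = -4 * p + 2
    if slope > 0:
        return [1.0]
    if slope < 0:
        return [0.0]
    return [0.0, 1.0]  # indifferent: any q in [0, 1] is a best response

# If Player 1 chooses p = 0.2, Player 2's no-regret choice is q = 1:
print(best_q(0.2))  # [1.0]
```

The two graphs cross only where both players are on their best-response segments, namely at p=1/2 and q=1/3.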

Figure: Illustration of Nash Equilibrium in $2\times 2$ games.
\begin{figure}\epsfig{file=plot-nash.eps, height=6cm}\end{figure}

Anil Maheshwari