
Markov Monopoly

Analyzing the board game Monopoly using a Markov chain model.

Introduction

Most practitioners of numerical computation aren’t introduced to Markov chains until graduate school. But the basic concepts required to analyze Markov chains don’t require math beyond undergraduate matrix algebra. This article presents an analysis of the board game Monopoly as a Markov system. I have found that introducing Markov chains using this example helps to form an intuitive understanding of Markov chain models and their applications.

The Math

Suppose a system can be in a finite number of states. If the system is “one token on a Monopoly board,” the states are the board’s 40 spaces. Suppose that we know the probability for each state to transition into every other state. For example, if our token is in the “Mediterranean Avenue” state, the probability that it will transition to the “Baltic Avenue” state on the next turn is 1/36, because Baltic is two spaces ahead of Mediterranean, and the only way to advance those two spaces is to roll snake-eyes, which has a probability of 1/6 × 1/6 = 1/36. Let’s denote the probability of state a transitioning into state b as P_ba.

Given these one-turn transition probabilities, can we compute the probability for each state to transition into any other state in two turns? No problem! Since the transitions are independent, the probability of a transitioning into b and then b into c is just the product P_cb P_ba. The probability of a transitioning into c via any intermediate state b is just the sum of those products over all b:

P(2)_ca = Σ_b P_cb P_ba

Now, imagine that we arrange the transition probabilities into a matrix P:

    | P_11  P_12  ...  P_1n |
P = | P_21  P_22  ...  P_2n |
    |  ...   ...  ...   ... |
    | P_n1  P_n2  ...  P_nn |

where all the probabilities of transition from the same state are in the same column, and all the probabilities of transition to the same state are in the same row. If you look back at our equation for the two-step transition probability above, you will recognize the expression as just the matrix multiplication rule. That is, the equation is the same as the matrix equation:

P(2) = P * P

Each element of P(2) is formed from one row of the first factor and one column of the second: exactly the entries that appear in the sum for the corresponding two-step transition probability.

We can find the transition probabilities for any number of steps by chaining together additional matrix factors. For example, P(5) = P * P * P * P * P.
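In code, this chaining is literal matrix multiplication. Here is a minimal sketch using the SquareMatrix type from the Meta.Numerics library (which the article uses later), assuming P already holds a one-step transition matrix of dimension n:

C#
// two-step and five-step transition matrices, by chaining factors of P
SquareMatrix P2 = P * P;
SquareMatrix P5 = P * P * P * P * P;

// spot-check one element of P2 against the explicit sum over intermediate states b
double twoStep = 0.0;
for (int b = 0; b < n; b++) twoStep += P[2, b] * P[b, 0];
// twoStep should equal P2[2, 0]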

The probability interpretation has some immediate consequences for the matrix P. Since its entries are probabilities, they must all lie between 0 and 1. Since each transition must take us somewhere, the entries in each column must sum to 1. A matrix that fulfills these requirements is called stochastic. Every such matrix can be interpreted as a transition matrix between the states of some system. If each transition depends only on the current state, and not on the history of how the system got there, the system is said to be a Markov chain. The word “chain” in the name alludes to the act of chaining together factors of the transition matrix to obtain multi-step transition matrices.

It turns out that, given a Markov chain model of a system, we can use the eigenvector decomposition of the transition matrix to learn about the long-run behavior of the system, that is, the form of P(N) for large N. I’m not going to prove all the relevant theorems here, but I will give a short, heuristic picture of how this works. If you only care about the result, you’re welcome to skip the next paragraph.

Imagine that we represent the initial state of the system as a linear combination of eigenvectors of P. To find the probability distribution of states after N transitions, we multiply this vector by P, N times. Each application of P multiplies the coefficient of each eigenvector in the expansion by that eigenvector’s eigenvalue λ, so after N transitions, the coefficient of each eigenvector has gained a factor λ^N. Now, if λ were larger than 1, this factor would grow without bound as N increases, leading to entries larger than 1, which would ruin the probability interpretation of our state vector. So, there must not be any eigenvalues larger than one in magnitude. If λ < 1, this factor suppresses the contribution of the corresponding eigenvector as N increases. It’s fine if this happens to some eigenvectors, but if it happened to all of them, the probability interpretation of our state vector would again be ruined, because every entry in the vector would be driven to zero. So, there must be at least one eigenvalue λ = 1. The eigenvector corresponding to this eigenvalue becomes the only unsuppressed contribution to the state vector as N gets large. This eigenvector therefore represents the long-run state towards which the system tends.

If that heuristic picture is too sketchy or confusing, feel free to look up a rigorous treatment. For the time being, though, take the relevant assertions on faith: a stochastic matrix is guaranteed to have no eigenvalues with magnitude larger than one, and to have at least one eigenvalue equal to one. The components of the corresponding eigenvector, called the dominant eigenvector, are proportional to the relative occupation probabilities of the states in the long run.

The Code

Let’s use this technique to find the long-run probabilities for tokens to land on each Monopoly space.

We start by computing the one-turn transition matrix. The movement of a token in a Monopoly turn actually consists of two steps. First, a roll of the dice determines the number of spaces to move forward. Then, if that roll has placed the token on a special space, additional instructions may require a move to another space. The simplest example of this second step occurs when the token lands on the “Go To Jail” space. But more complex examples also occur when the token lands on a “Chance” or “Community Chest” space, in which case there are various probabilities for the token to move to various spaces prescribed by the drawn card.

The die-roll step gives a very simple transition matrix, which moves us ahead by two spaces with probability 1/36 (“snake-eyes”), three spaces with probability 2/36, and so on up to twelve spaces with probability 1/36 (“double-sixes”).

C#
// n is the number of board spaces; R is the dice-roll transition matrix
// (SquareMatrix comes from the Meta.Numerics.Matrices namespace)
int n = 40;
SquareMatrix R = new SquareMatrix(n);

// compute the roll matrix
for (int c = 0; c < n; c++) {
    R[(c + 2) % n, c] = 1.0 / 36.0;   // snake-eyes
    R[(c + 3) % n, c] = 2.0 / 36.0;
    R[(c + 4) % n, c] = 3.0 / 36.0;
    R[(c + 5) % n, c] = 4.0 / 36.0;
    R[(c + 6) % n, c] = 5.0 / 36.0;
    R[(c + 7) % n, c] = 6.0 / 36.0;   // seven is the most likely roll
    R[(c + 8) % n, c] = 5.0 / 36.0;
    R[(c + 9) % n, c] = 4.0 / 36.0;
    R[(c + 10) % n, c] = 3.0 / 36.0;
    R[(c + 11) % n, c] = 2.0 / 36.0;
    R[(c + 12) % n, c] = 1.0 / 36.0;  // double-sixes
}

Notice that we take the die roll and the current space number modulo the total number of spaces, so that a roll of 6 from space 38 takes us to space 4, not the non-existent space 44.

Next, we compute the special transition matrix. This is slightly more complicated than the die-roll transition matrix, because the various columns are constructed according to entirely different rules. The column for space 30, “Go To Jail”, transitions with certainty to space 10, the jail.

C#
// go to jail 
S[10, 30] = 1.0;
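For S to be stochastic, every other column must also do something: a token that ends its roll on an ordinary space simply stays there. Here is a minimal sketch of that bookkeeping (an assumption about how the non-special columns are handled, not code from the article’s download, and assuming S starts out as an all-zero SquareMatrix of dimension n); the special columns are then filled in by code like that shown above and below:

C#
// ordinary spaces: the token stays where the dice left it
int[] special = { 2, 7, 17, 22, 30, 33, 36 };   // community chest, chance, and go-to-jail spaces
for (int c = 0; c < n; c++) {
    if (Array.IndexOf(special, c) < 0) S[c, c] = 1.0;
}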

A token that lands on a community chest space sometimes transitions to Go, sometimes transitions to jail, but most often just stays put.

C#
// community chest (c is the column index, i.e. the space the token landed on)
if ((c == 2) || (c == 17) || (c == 33)) {
    // advance to go
    S[0, c] = 1.0 / 16.0;
    // go to jail
    S[10, c] = 1.0 / 16.0;
    // stay put (the other 14 of the 16 cards do not move the token)
    S[c, c] = 14.0 / 16.0;
}

The relative probabilities are determined by the relative numbers of the 16 different community chest cards. The more complicated instructions found on some “Chance” cards, such as “advance token to the nearest railroad” and “go back three spaces,” require slightly more elaborate code.
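As a rough illustration only (this is not the article’s actual code), the chance columns might be filled in along these lines, assuming the standard 16-card US Chance deck; NearestRailroad and NearestUtility are hypothetical helpers that return the next railroad (space 5, 15, 25, or 35) or utility (space 12 or 28) ahead of a given space:

C#
// chance (a sketch; card counts assume the standard US deck)
if ((c == 7) || (c == 22) || (c == 36)) {
    S[0, c] += 1.0 / 16.0;                     // advance to Go
    S[24, c] += 1.0 / 16.0;                    // advance to Illinois Avenue
    S[11, c] += 1.0 / 16.0;                    // advance to St. Charles Place
    S[5, c] += 1.0 / 16.0;                     // take a trip to Reading Railroad
    S[39, c] += 1.0 / 16.0;                    // take a walk on the Boardwalk
    S[10, c] += 1.0 / 16.0;                    // go directly to jail
    S[(c + n - 3) % n, c] += 1.0 / 16.0;       // go back three spaces
    S[NearestRailroad(c), c] += 2.0 / 16.0;    // two "advance to nearest railroad" cards
    S[NearestUtility(c), c] += 1.0 / 16.0;     // advance to nearest utility
    S[c, c] += 6.0 / 16.0;                     // the six remaining cards don't move the token
}

// hypothetical helpers: the next railroad or utility ahead of space c
static int NearestRailroad (int c) {
    if ((c < 5) || (c >= 35)) return 5;
    if (c < 15) return 15;
    if (c < 25) return 25;
    return 35;
}

static int NearestUtility (int c) {
    return ((c >= 12) && (c < 28)) ? 28 : 12;
}

A fully faithful model would also re-resolve the second special space that “go back three spaces” reaches from space 36 (the Community Chest at 33); that refinement is omitted from this sketch.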

Now, we have both the roll-transition matrix R and the special-transition matrix S. Since a full turn is a roll transition followed by a special transition, and since transitions in a Markov model are “chained” together by matrix multiplication, the full one-turn transition matrix can be determined by simple matrix multiplication.

C#
// compute the transition matrix 
SquareMatrix P = S * R;

You can easily verify that P fulfills the requirements for a stochastic matrix. (Alternatively, you can easily prove that the product of any two stochastic matrices is itself stochastic.)
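For instance, a minimal check along these lines (the helper name and the tolerance are my own choices, not part of the article’s code) can be applied to R, S, and P alike, passing d = n:

C#
// verify that a matrix is column-stochastic: entries in [0, 1], columns summing to 1
public static bool IsStochastic (SquareMatrix M, int d) {
    for (int c = 0; c < d; c++) {
        double sum = 0.0;
        for (int r = 0; r < d; r++) {
            if ((M[r, c] < 0.0) || (M[r, c] > 1.0)) return false;
            sum += M[r, c];
        }
        if (Math.Abs(sum - 1.0) > 1.0E-12) return false;
    }
    return true;
}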

Now that we have the transition matrix, we need to compute its dominant eigenvector. There are two ways to approach this problem.

One way is to compute P’s full eigenvector decomposition. This is a complex and difficult numerical procedure, but one for which the Meta.Numerics library offers a simple and efficient API.

C#
// compute the eigenvalues and eigenvectors of the transition matrix 
ComplexEigensystem E = P.Eigensystem();

An alternative is to apply a simpler, ad hoc method inspired by our heuristic argument above: begin with some initial state vector and repeatedly apply P until the resulting vector no longer changes. The resulting vector is the dominant eigenvector.
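Here is a minimal sketch of that iteration, using only the indexer access shown earlier; the uniform starting vector and the fixed iteration count (standing in for an explicit convergence test) are my own choices:

C#
// power iteration: start from a uniform distribution over the n spaces
double[] p = new double[n];
for (int i = 0; i < n; i++) p[i] = 1.0 / n;

// repeatedly apply the transition matrix until the vector stops changing;
// a fixed number of applications is more than enough for this chain
for (int k = 0; k < 1000; k++) {
    double[] q = new double[n];
    for (int r = 0; r < n; r++) {
        for (int c = 0; c < n; c++) q[r] += P[r, c] * p[c];
    }
    p = q;
}
// p now holds the long-run landing probability for each space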

The Strategy

Having found the dominant eigenvector, we can examine its components to find the long-run relative probabilities of a token landing on each space. These are shown in the column labeled “Prob.” in the following table, along with the expected rent per opponent turn (“Rent”) and that rent divided by the purchase price (“ROI”), both of which are discussed below.

Space                   Prob.     Rent ($/turn)   ROI
Go                      0.0311    0               0
Mediterranean Avenue    0.0216    0.0432          0.0007
Community Chest         0.0191    0               0
Baltic Avenue           0.0220    0.0881          0.0015
Income Tax              0.0237    0               0
Reading Railroad        0.0286    0.7149          0.0036
Oriental Avenue         0.0230    0.1378          0.0014
Chance                  0.0103    0               0
Vermont Avenue          0.0235    0.1409          0.0014
Connecticut Avenue      0.0233    0.1864          0.0016
Jail                    0.0588    0               0
St. Charles Place       0.0274    0.2737          0.0020
Electric Company        0.0263    0.6308          0.0042
States Avenue           0.0239    0.2393          0.0017
Virginia Avenue         0.0248    0.2977          0.0019
Pennsylvania Railroad   0.0278    0.6962          0.0035
St. James Place         0.0279    0.3902          0.0022
Community Chest         0.0258    0               0
Tennessee Avenue        0.0292    0.4086          0.0023
New York Avenue         0.0307    0.4904          0.0025
Free Parking            0.0286    0               0
Kentucky Avenue         0.0282    0.5069          0.0023
Chance                  0.0121    0               0
Indiana Avenue          0.0272    0.4902          0.0022
Illinois Avenue         0.0318    0.6356          0.0026
B & O Railroad          0.0289    0.7214          0.0036
Atlantic Avenue         0.0271    0.5962          0.0023
Ventnor Avenue          0.0268    0.5905          0.0023
Water Works             0.0282    0.6760          0.0045
Marvin Gardens          0.0260    0.6235          0.0022
Go To Jail              0         0               0
Pacific Avenue          0.0268    0.6957          0.0023
North Carolina Avenue   0.0262    0.6801          0.0023
Community Chest         0.0238    0               0
Pennsylvania Avenue     0.0250    0.6987          0.0022
Short Line              0.0243    0.6078          0.0030
Chance                  0.0101    0               0
Park Place              0.0219    0.7682          0.0022
Luxury Tax              0.0219    0               0
Boardwalk               0.0265    1.3236          0.0033

Examination of the results shows that by far the most probable place to find yourself at the end of a turn is in jail. That’s hardly surprising, given the large number of ways to get there.

More interesting, from a strategic perspective, is which rent-generating properties are most landed upon. From our table, we see the answers are Illinois Avenue (3.2%) and New York Avenue (3.1%), followed by Tennessee, the Reading, and the B & O (2.9% each). The least landed upon rent-generating properties are Mediterranean Avenue and Park Place (2.2% each).

Of course, a rent-seeking player would be willing to accept a slightly less frequently landed upon space, if it generates a much higher rent when landed upon. To know which spaces maximize rent per turn, we need to multiply the landing probability per turn by the rent per landing. Boardwalk ($1.32/turn) is far and away the most rent-generating property to own. Clustered below it are the other blue and green properties on the home stretch ($0.68/turn-$0.77/turn) and the B & O and the Reading ($0.72/turn). The worst rent generators are the purple and cyan properties along the first leg. This basic pattern is unsurprising, since the rents increase along the board much more than the landing probabilities vary.

Finally, a clever investor cares not only about cash flow, but also about the return on investment (ROI), that is, the rent generated per dollar paid. Dividing the rent per turn by the property price changes the picture yet again. By this metric, the best values on the board are the utilities, followed by the railways. The best normal property is Boardwalk, followed by Illinois Avenue and the other red and orange properties near “Free Parking.” That area, about halfway around the board, appears to offer the best balance between price and rent.
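As a concrete check of how the last two columns are computed (the base rents and purchase prices below are the standard US values, which are not listed in the table):

    Mediterranean Avenue:  0.0216 landings/turn × $2 rent  ≈ $0.04 per turn;  $0.0432 / $60  ≈ 0.0007
    Boardwalk:             0.0265 landings/turn × $50 rent ≈ $1.32 per turn;  $1.3236 / $400 ≈ 0.0033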

Further Explorations

You can use this basic framework to investigate other aspects of Monopoly. Remove or change chance and community chest cards to test your intuitions about how the probability distribution of spaces will change. Use Monopoly rents with houses to see how cash flow late in the game differs from cash flow early in the game. Add three states representing the first, second, and third turn in jail to better model the situation late in the game when players prefer to stay in jail (where they can collect rent without paying it).
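The house-rent variation, for example, only requires swapping the rent vector used in the cash-flow step. A minimal sketch, assuming p holds the long-run landing probabilities found above; the array names and the example figure are placeholders, not values from the article:

C#
// expected rent per opponent turn with developed properties
double[] rentWithHotel = new double[n];   // fill with the rent charged on each space
rentWithHotel[39] = 2000.0;               // e.g., Boardwalk with a hotel
// ... other spaces ...

double[] cashFlow = new double[n];
for (int i = 0; i < n; i++) cashFlow[i] = p[i] * rentWithHotel[i];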

Many of these enhancements are explored at this website, which is where I first saw an analysis of Monopoly as a Markov process.

Of course, Markov chains have been used to model many systems besides Monopoly. Perhaps the most famous application among I.T. geeks is Google’s PageRank system. This system treats each page as a state and each outbound link as an equally probable transition to the target page. The PageRank of each page is just its probability weight in the dominant eigenvector of this Markov system.
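To make the analogy concrete, here is a toy sketch with a made-up three-page link graph, using plain arrays and omitting the damping factor that real PageRank adds:

C#
// toy link graph: page 0 links to pages 1 and 2, page 1 links to page 2, page 2 links to page 0;
// column j holds the equally probable transitions out of page j
double[,] link = {
    { 0.0, 0.0, 1.0 },
    { 0.5, 0.0, 0.0 },
    { 0.5, 1.0, 0.0 }
};

// power-iterate toward the dominant eigenvector; its components are the PageRanks
double[] rank = { 1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0 };
for (int k = 0; k < 100; k++) {
    double[] next = new double[3];
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) next[i] += link[i, j] * rank[j];
    }
    rank = next;
}
// rank converges to { 0.4, 0.2, 0.4 }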

License

This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)


Written By
United States
I am a .NET developer who works daily on enterprise-scale applications using C#, SQL, XML, ASP.NET, and myriad other technologies. My academic background is in physics and economics.

I am the original architect of Sandcastle managed reference documentation engine and of the Meta.Numerics library for scientific computation.
