I must confess to feeling an almost obsessive fascination with intransitive games, dice, and other artifacts. The most famous intransitive game is rock, scissors, paper. Rock beats scissors. Scissors beats paper. Paper beats rock. Everyone older than 7 seems to know this, but very few people are aware that dice can exhibit this same behavior, at least in terms of expectation. Die A can beat die B more than half the time, die B can beat die C more than half the time, and die C can beat die A more than half the time.
How is this possible? Consider the following three dice, each with three sides. (For most of this post, and in my source code, I pretend to have 3-sided dice. If you prefer the regular 6-sided kind, just write each number on two faces; it makes no difference to the probabilities or outcomes.)
Die A: 1, 5, 9
Die B: 3, 4, 8
Die C: 2, 6, 7
Die A beats B [latex]5/9[/latex] of the time which beats C [latex]5/9[/latex] of the time which beats A [latex]5/9[/latex] of the time. Note that the ratios don’t all have to be the same. Here’s another intransitive trio:
Die A: 2, 4, 9
Die B: 1, 8, 7
Die C: 3, 5, 6
Take a moment to calculate the relative winning percentages, or just trust me that they are not all the same… Did you trust me? Will you trust me in the future?
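For those who would rather check than trust, here is a quick way to do it in R: `outer(d1, d2, ">")` builds the full grid of pairwise match-ups, and the mean of that logical matrix is the winning proportion. (The variable names below are mine, chosen for this sketch.)

```r
# Count pairwise wins for the second trio of dice.
a  <- c(2, 4, 9)
b  <- c(1, 8, 7)
cc <- c(3, 5, 6)

mean(outer(a, b, ">"))   # A beats B: 5/9
mean(outer(b, cc, ">"))  # B beats C: 6/9
mean(outer(cc, a, ">"))  # C beats A: 5/9
```

Sure enough: the cycle holds, but one of the three winning percentages differs from the others.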
In order to find these particular dice I wrote some code in R to automate the search. The following functions calculate the winning percentage for one die over another and check for intransitivity:
# Return the proportion of the time that d1 beats d2.
# Dice need to have the same number of sides.
calcWinP <- function(d1, d2) {
    sides = length(d1)
    d1Vd2 = 0
    for(i in 1:sides) {
        for(j in 1:sides) {
            if(d1[i] > d2[j]) {
                d1Vd2 = d1Vd2 + 1
            }
        }
    }
    return( d1Vd2/(sides^2) )
}
# Assumes dice have no ties.
# All dice must have the same number of sides.
# How many times do I have to tell you that?
checkIntransitivity <- function(d1, d2, d3) {
    d1beatsd2 = calcWinP(d1,d2)
    if(d1beatsd2 > 0.5) {
        # d1 beats d2, so check if d2 beats d3, and if so whether d3 beats d1.
        if(calcWinP(d2,d3) > 0.5) {
            if(calcWinP(d3,d1) > 0.5) {
                return(TRUE)
            }
        }
    } else {
        # d2 beats d1, so check if d1 beats d3, and if so whether d3 beats d2.
        if(calcWinP(d1,d3) > 0.5) {
            if(calcWinP(d3,d2) > 0.5) {
                return(TRUE)
            }
        }
    }
    # Regular old transitivity.
    return(FALSE)
}
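As a sanity check, the same logic can be written in a compact, vectorized form. The names `winP` and `isIntrans` are mine, chosen so as not to clobber the functions above; like them, this assumes no ties between dice.

```r
# winP: proportion of side pairings in which d1 shows the larger number.
winP <- function(d1, d2) mean(outer(d1, d2, ">"))

# A trio is intransitive if the wins form a cycle, in either direction.
isIntrans <- function(d1, d2, d3) {
    (winP(d1, d2) > 0.5 && winP(d2, d3) > 0.5 && winP(d3, d1) > 0.5) ||
    (winP(d2, d1) > 0.5 && winP(d3, d2) > 0.5 && winP(d1, d3) > 0.5)
}

isIntrans(c(1, 5, 9), c(3, 4, 8), c(2, 6, 7))  # TRUE
isIntrans(c(1, 2, 3), c(4, 5, 6), c(7, 8, 9))  # FALSE
```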
I then checked every possible combination. How many unique configurations are there? Each die has three numbers on it, and there are three dice, for a total of nine numbers. To keep things simple and avoid ties, no number can be used more than once. If the sides of each die were ordered and the dice themselves were ordered, you'd have [latex]9![/latex] different combinations, which is to say a whole mess of them. But our basic unit of interest here isn't the digits, it's the dice. So let's think about it like this: for die A you can choose 3 of the 9 numbers, for die B you can pick 3 of the remaining 6, and for die C you're stuck with whichever 3 are left. Multiply this all together:
choose(9,3)*choose(6,3)
and you get 1680 possibilities. But wait! What's that you say? You don't care which die is A, which is B, and which is C? Fantastic. That reduces the number of "unique" configurations by a factor of [latex]3![/latex], which is to say 6, at least if my back-of-the-envelope calculations are correct. Final tally? 280.
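R can do that envelope math for us:

```r
# Ways to split 1..9 into three unordered, unlabeled 3-sided dice.
choose(9, 3) * choose(6, 3) / factorial(3)  # 280
```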
Not bad. Unfortunately, there's no obvious way to enumerate each of these 280 combinations (at least not to me there isn't). So I ended up using a lot of scratch work and messing around in the R console until I had what I believed to be the right batch. Sorry, I no longer have the code to show you for that. After testing those 280 configurations, I found a total of 5 intransitive ones, including the 2 trios shown previously and the following 3 sets:
Die A: 2, 9, 3
Die B: 1, 6, 8
Die C: 4, 7, 5
Die A: 7, 1, 8
Die B: 5, 6, 4
Die C: 9, 3, 2
Die A: 7, 3, 5
Die B: 2, 9, 4
Die C: 8, 6, 1
Did I make a mistake? According to my calculations, [latex]5/280[/latex] of the combinations are intransitive. That represents about 1.786% of the total. How might I verify this? That's right, it's Monte Carlo time.
Using the following code, I created all [latex]9![/latex] permutations of dice and sides, then sampled from those 362,880 sets of dice many, many times:
library(e1071) # Makes making permutations easy
allPerms = permutations(9)
intransFound = 0
for(i in 1:dim(allPerms)[1]) {
    d1 = allPerms[i,1:3]
    d2 = allPerms[i,4:6]
    d3 = allPerms[i,7:9]
    if(checkIntransitivity(d1,d2,d3)) {
        intransFound = intransFound + 1
    }
}
print(intransFound)
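If the tally of 5 unique trios is right, that loop should print 6480. Each unordered configuration appears many times among the [latex]9![/latex] permutations: [latex]3![/latex] ways to label the dice, times [latex](3!)^3[/latex] ways to order the sides within each die:

```r
# Each of the 280 unordered configurations appears 3! * (3!)^3 = 1296 times
# among the 9! = 362880 ordered permutations.
perConfig <- factorial(3) * factorial(3)^3
stopifnot(280 * perConfig == factorial(9))

5 * perConfig                  # 6480 intransitive permutations expected
5 * perConfig / factorial(9)   # the same rate as 5/280, about 0.01786
```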
found = 0
tries = 100000
for(i in 1:tries) {
    one2nine = sample(1:9,9)
    d1 = one2nine[1:3]
    d2 = one2nine[4:6]
    d3 = one2nine[7:9]
    if( checkIntransitivity(d1,d2,d3)) {
        found = found + 1
        # Uncomment below if you want to see them.
        #print("found one")
        #print(d1)
        #print(d2)
        #print(d3)
        #flush.console()
    }
}
print(found/tries)
Final percentage: 1.807%. That’s pretty close to [latex]5/280[/latex], and much closer than it is to either [latex]4/280[/latex] or [latex]6/280[/latex], so I’m going to conclude that I got them all and got it right.
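One way to put a number on "pretty close", assuming the 100,000 draws behave like independent Bernoulli trials, is to compare the gap to the binomial standard error:

```r
# If the true intransitivity rate is 5/280, how far off is 1.807%?
p  <- 5 / 280
se <- sqrt(p * (1 - p) / 100000)
(0.01807 - p) / se  # about half a standard error: very consistent
```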
What happens if your dice have fewer, or more, sides? It turns out you need at least 3 sides to achieve intransitivity. Can you have it with 4 sides? What about 5, 6, or 7? To estimate the fraction of dice configurations that are intransitive for different numbers of sides, I wrote the following code. Note that this could take a while to run, depending on the number of "tries" you use:
# Intransitivity vs. number of sides.
results = rep(0,12)
tries = 100000
for(j in 4:12) {
    found = 0
    for(i in 1:tries) {
        one2max = sample(1:(3*j),(3*j))
        d1 = one2max[1:j]
        d2 = one2max[(j+1):(2*j)]
        d3 = one2max[(2*j+1):(3*j)]
        if( checkIntransitivity(d1,d2,d3)) {
            found = found + 1
        }
    }
    results[j] = found/tries
    print("Found:")
    print(results[j])
    flush.console()
}
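To see the pattern at a glance, one can plot the estimated rates against the number of sides. This is a self-contained sketch with a modest number of samples so it finishes quickly; `winP` and `isIntrans` are compact stand-ins of my own for the functions defined earlier.

```r
# Estimate the intransitivity rate for 3- to 9-sided dice, then plot.
winP <- function(d1, d2) mean(outer(d1, d2, ">"))
isIntrans <- function(d1, d2, d3) {
    (winP(d1, d2) > 0.5 && winP(d2, d3) > 0.5 && winP(d3, d1) > 0.5) ||
    (winP(d2, d1) > 0.5 && winP(d3, d2) > 0.5 && winP(d1, d3) > 0.5)
}

set.seed(7)
sides <- 3:9
tries <- 2000
rates <- sapply(sides, function(j) {
    mean(replicate(tries, {
        nums <- sample(1:(3 * j))
        isIntrans(nums[1:j], nums[(j + 1):(2 * j)], nums[(2 * j + 1):(3 * j)])
    }))
})
plot(sides, rates, type = "b", xlab = "Sides per die",
     ylab = "Estimated fraction intransitive")
```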
If you wait through all that, you might notice some interesting patterns emerge, which probably have explanations rooted in theory, but it's getting on nap time, so I'll wrap this post up.
I think what fascinates me the most about intransitive dice, and games like rock, scissors, paper, is that they represent breakdowns in what math folks like to call a "total order". Most of our calculations are done in this nice land of numbers where you can count on transitivity: [latex]A>B[/latex] and [latex]B>C[/latex], therefore [latex]A>C[/latex]. Every item has its place in the hierarchy, and "ties" only occur between an object and itself. "Total order" is a good name in that these are comfortable spaces to do calculations, where nothing all that unexpected happens (usually, ok?). For excitement and unexpected delight, you need to relax those orders; the more relaxing, the better. Incidentally, if instead your goal is frustration and dirty looks from your friends at a party, try pretending that you can apply the methods of a total order (like the calculus) to economics, consumer choice, and love.
One final note before drifting off… in statistics we have at least one delightfully unexpected instance of intransitivity: correlations. Just because [latex]X[/latex] is positively correlated with [latex]Y[/latex] and [latex]Y[/latex] is positively correlated with [latex]Z[/latex], doesn’t mean that [latex]X[/latex] and [latex]Z[/latex] are positively correlated. Strange, no? But you can prove it with an example covariance matrix. Have you got one to show me?
I don’t know much about the properties of covariance matrices–still got a lot to learn in this world of statistics!–but I sure can give a simple example of where X is positively correlated with Y, and Y with Z, but X is not correlated with Z.
Let X and Z be independent random variables with mean 0 and variance 1, and Y = X + Z.
cov(Y, X) = E((X + Z) * X) - E(X + Z) * E(X)
E(X) = 0 for standard normal, so the second term is zero. Then,
E(X^2 + Z * X) = E(X^2) + E(Z * X)
= E(X^2) + E(Z) * E(X)
= E(X^2) + 0 * 0
= Var(X) + [E(X)]^2 = Var(X) + 0 = 1.
Analogously, cov(Y, Z) = 1. But cov(X, Z) = 0, since X and Z are independent.
I obviously made much stronger assumptions than I needed to for this result to make the math easy, so I’m sure there’s a much more general way of finding these “intransitive” covariances using theory I haven’t learned yet.
Nothing like a little probability theory on a Saturday afternoon!
(Also, please ignore the mention above of “standard normal”–I forgot to delete that. I first wrote it using the standard normal for X and Y, then made it slightly more general to random variables of mean 0 and variance 1 because I wasn’t actually using anything about it being a normal distribution. The Endless Quest for Generality strikes again, even in arbitrary examples in blog comments!)
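The construction above is easy to check numerically. Note the comment works in covariances; on the correlation scale, cov(Y, X) = 1 becomes cor(Y, X) = 1/sqrt(2), since Var(Y) = 2. A quick simulation (sample correlations will only approximate the exact values):

```r
# Simulate the commenter's construction: X, Z independent, Y = X + Z.
set.seed(1)
x <- rnorm(100000)
z <- rnorm(100000)
y <- x + z

c(cor(y, x), cor(y, z), cor(x, z))  # first two near 1/sqrt(2), last near 0
```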
Hmm, off the top of my head
Let A,B,C be iid N(0,1)
X=(A+C)/sqrt(2),
Y = (A+B)/sqrt(2),
Z = (B-C)/sqrt(2)
X,Y,Z are also std normal
Cor(X,Y) = Cov(X,Y) = Cov(A+C,A+B)/2 = 1/2
Cor(Y,Z) = Cov(Y,Z) = Cov(A+B,B-C)/2 = 1/2
Cor(X,Z) = Cov(X,Z) = Cov(A+C,B-C)/2 = -1/2
Well done guys! Here's efrique's (X,Y,Z) covariance matrix for those who haven't seen it in that form:
[latex]\begin{pmatrix} 1 & 1/2 & -1/2 \\ 1/2 & 1 & 1/2 \\ -1/2 & 1/2 & 1 \end{pmatrix}[/latex]
Rock, paper, scissors is much more of a mental challenge when you open with the phrase "real men pick stone". I wonder if it is still intransitive…