

1
Dec 11

Wasting away again in Martingaleville

Alright, I better start with an apology for the title of this post. I know, it’s really bad. But let’s get on to the good stuff, or, perhaps more accurately, the really frightening stuff. The plot shown at the top of this post is a simulation of the martingale betting strategy. You’ll find code for it here. What is the martingale betting strategy? Imagine you go into a mythical casino that gives you perfectly fair odds on the toss of a mythically perfect coin. You can bet one dollar or a million. Heads you lose the amount you bet, tails you win that same amount. For your first bet, you wager $1. If you win, great! Bet again with a dollar. If you lose, double your wager to $2. Then if you win the next time, you’ve still won $1 overall (lost $1 then won $2). In general, continue to double your bet size until you get a win, then drop your bet size back down to a dollar. Because the probability of an infinite losing streak is infinitely small, sooner or later you’ll make $1 off the sequence of bets. Sound good?

The catch (you knew there had to be a catch, right?) is that the longer you use the martingale strategy, the more likely you are to go broke, unless you have an infinitely large bankroll. Sooner or later, a run of heads will wipe out your entire fortune. That’s what the plot at the beginning of this post shows. Our simulated gambler starts out with $1000, grows her pot to over $12,000 (with a few bumps along the way), then goes bankrupt during a single sequence of bad luck. In short, the martingale strategy worked spectacularly well for her (a 12-fold pot increase!) right up until the point where it went spectacularly wrong (bankruptcy!).
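
The code behind the plot is linked above. Purely to illustrate the mechanics, here’s a minimal sketch of the same kind of simulation (the starting bankroll, the bet cap when funds run low, and the number of flips are arbitrary choices here, not the settings used for the plot at the top):

# Minimal martingale sketch: fair coin, double the bet after every loss,
# reset to $1 after a win, stop if the bankroll hits zero
bankroll = 1000
bet = 1
maxFlips = 100000
history = rep(0, maxFlips)

for(i in 1:maxFlips) {
	bet = min(bet, bankroll)         # can never wager more than we have left
	if(runif(1) < 0.5) {
		bankroll = bankroll + bet    # win: drop back to the minimum bet
		bet = 1
	} else {
		bankroll = bankroll - bet    # loss: double down
		bet = 2 * bet
	}
	history[i] = bankroll
	if(bankroll == 0) break
}

plot(history[1:i], type="l", col="blue", xlab="Bet number", ylab="Bankroll")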

Pretty scary, no? But I haven’t even gotten to the really scary part. In an interview with financial analyst Karl Denninger, Max Keiser explains the martingale betting strategy then comments:

“This seems to be what every Wall Street firm is doing. They are continuously losing, but they are doubling down on every subsequent throw, because they know that they’ve got unlimited cash at their disposal from The Fed… Is this a correct way to describe what’s going on?”

Karl Denninger replies: “I think it probably is. I’m familiar with that strategy. It bankrupts everyone who tries it, eventually… and that’s the problem. Everyone says that this is an infinite sum of funds from the Federal Reserve, but in fact there is no such thing as an infinite amount of anything.”

Look at the plot at the beginning of this post again. Imagine the top banking executives in your country were paid huge bonuses based on their firm’s profits, and in the case of poor performance they got to walk away with a generous severance package. Now imagine that these companies could borrow unlimited funds at 0% interest, and if things really blew up they expected the taxpayers to cover the tab through bailouts or inflation. Do you think this might be a recipe for disaster?


20
Oct 11

Queueing up in R, continued

Shown above is a queueing simulation. Each diamond represents a person. The vertical line-up is the queue; at the bottom are 5 slots where the people are attended to. The size of each diamond is proportional to the log of the time it will take them to be attended. Color is used to tell one person from another and doesn’t have any other meaning. Code for this simulation, written in R, is here. This is my second post about queueing simulation; you can find the first one, including an earlier version of the code, here. Thanks as always to commenters for their suggestions.

A few notes about the simulation:

  • Creating an animation to go along with your simulation can take a while to program (unless, perhaps, you are coding in Flash), and it may seem like an extra, unnecessary step. But you can often learn a lot just by “watching”, and animations can help you spot bugs in the code. I noticed that sometimes smaller diamonds hung around for much longer than I expected, which led me to track down a tricky little error in the code.
  • As usual, I’ve put all of the configuration options at the beginning of the code. Try experimenting with different numbers of intervals and tellers/slots, or change the mean service time.
  • If you want to run the code, you’ll need to have ImageMagick installed. If you are on a PC, make sure to include the full path to “convert”, since Windows has a built-in convert tool that might take precedence. Also, note how the files that represent the individual animation cells are named: that’s so they are added to the animation in the right order, since naming them sequentially without leading zeros failed (see the sketch after this list).
  • I used Photoshop to interlace the animated GIF and resave it. This reduced the file size by over 90%.
  • The code is still a work in progress; it needs cleanup, and I still have some questions I want to “ask” of the simulation.
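
For reference, the general shape of that workflow looks something like the sketch below. This isn’t the code from the link above; the frame count, delay, and file names are placeholders, and on Windows you’d spell out the full path to ImageMagick’s convert.

# Write each animation cell to a zero-padded file name so that
# ImageMagick adds the frames to the GIF in the right order
for(i in 1:100) {
	png(sprintf("frame%04d.png", i))
	plot(runif(10), runif(10), pch=18, cex=3, col=rainbow(10)) # stand-in for the real frame
	dev.off()
}

# Stitch the frames together (on Windows, give the full path to ImageMagick's convert)
system("convert -delay 20 frame*.png queue.gif")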

13
Oct 11

Waiting in line, waiting on R

I should state right away that I know almost nothing about queuing theory. That’s one of the reasons I wanted to do some queuing simulations. Another reason: when I’m waiting in line at the bank, I tend to do mental calculations for how long it should take me to get served. I look at the number of tellers attending, pick an average teller session length (say one or two minutes), then come up with an average wait per person in line. For example, if there are 4 tellers and the average person takes 2 minutes to do her transaction, then new tellers should become available every 30 seconds. If I’m the 6th person in line, I should expect to wait 3 minutes before I’m attended.
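
That back-of-the-envelope calculation is simple enough to write out (a toy calculation of my own, separate from the simulation code at the end of this post):

# Naive expected wait: a teller frees up every (serviceMinutes / tellers) minutes
tellers = 4
serviceMinutes = 2
positionInLine = 6
positionInLine * serviceMinutes / tellers   # 3 minutes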

However, based on my experience (the much maligned anecdotal kind), it often takes much longer than expected. My suspicion is that over time the tellers get “clogged up” with the slowest people, so that even though an average person might take only 2 minutes, the people you actually see being attended right now are much more likely to be those who take a long time.

To explore this possibility, I set up a simulation in R (as usual, full source code provided at the end of the post). The first graph, at the beginning of this post, shows the length of queues over time for 4 runs of the simulator, all with the same configuration parameters. In many of the runs this graph was completely flat. Note though that when things get out of hand in terms of queue length, they can get way out of hand. To get a feel for how long you would have to wait, given that there is a line in front of you, I tracked how long the first person in line had to wait to be served. Given my configuration settings, this wait would be expected to last 5 intervals. It does seem to take longer than 5 intervals, though I want to tweak the model some and do more testing before I’m ready to quantify that wait.

There are, as with any model, things that make this one unrealistic. The biggest may be that people get in line with the same probability no matter how long the line is. What I need is some kind of tendency to abandon the line if it’s too long. That shouldn’t shorten the wait times for those already in line; it could make those worse. If you assume that slower people are prepared to wait longer in line, then the line is more likely to have slow people. Grandpa Jones is willing to spend an hour in line so he can chat for a while with the pretty young teller; but if the line is too long, that 50-year-old business guy will come back later to deposit his check. I would imagine that, from the bank’s perspective, this presents a tricky dilemma. The people whose time is worth the least are probably the most likely to be clogging up your tellers, upsetting the customers you care the most about (I know, I know, Bank of America cares about all of us equally, right?).

Code so far; note that run times can be very long for a large number of intervals if the queue length gets long:

#### Code by Matt Asher. Published at StatisticsBlog.com ####
#### CONFIG ####
# Number of slots to fill
numbSlots = 5

# Total time to track
intervals = 1000

# Probability that a new person will show up during an interval
# Note, a maximum of one new person can show up during an interval
p = 0.1

# Average time each person takes at the teller, discretized exponential distribution assumed
# Times will be augmented by one, so that everyone takes at least 1 interval to serve
meanServiceTime = 25

#### INITIALIZATION ####
queueLengths = rep(0, intervals)
slots = rep(0, numbSlots)
waitTimes = c()
leavingTimes = c()
queue = list()
arrivalTimes = c()
frontOfLineWaits = c()


#### Libraries ####
# Use the proto library to treat people like objects in traditional oop
library("proto")

#### Functions ####
# R is missing a nice way to do ++, so we use this
inc <- function(x) {
  eval.parent(substitute(x <- x + 1))
}

# Main object, really a "proto" function
person <- proto(
	intervalArrived = 0,
	intervalAttended = NULL,
	
	# How much teller time will this person demand?
	intervalsNeeded = floor(rexp(1, 1/meanServiceTime)) + 1,
	intervalsWaited = 0,
	intervalsWaitedAtHeadOfQueue = 0
)

#### Main loop ####
for(i in 1:intervals) {
	# Check if anyone is leaving the slots
	for(j in 1:numbSlots) {
		if(slots[j] == i) {
			# They are leaving the queue, slot to 0
			slots[j] = 0
			leavingTimes = c(leavingTimes, i)
		}
	}
	
	# See if a new person is to be added to the queue
	if(runif(1) < p) {
		newPerson = as.proto(person$as.list())
		newPerson$intervalArrived = i
		queue = c(queue, newPerson)
		arrivalTimes  = c(arrivalTimes, i)
	}
	
	# Can we place someone into a slot?
	for(j in 1:numbSlots) {
		# Is this slot free
		if(!slots[j]) {
			if(length(queue) > 0) {
				placedPerson = queue[[1]]
				slots[j] = i + placedPerson$intervalsNeeded
				waitTimes = c(waitTimes, placedPerson$intervalsWaited)
				# Only interested in these if person waited 1 or more intervals at front of line
				if(placedPerson$intervalsWaitedAtHeadOfQueue) {
					frontOfLineWaits = c(frontOfLineWaits, placedPerson$intervalsWaitedAtHeadOfQueue)
				}
				
				# Remove placed person from queue
				queue[[1]] = NULL
			}
		}
	}
	
	# Everyone left in the queue has now waited one more interval to be served
	if(length(queue)) {
		for(j in 1:length(queue)) {
			inc(queue[[j]]$intervalsWaited) # = queue[[j]]$intervalsWaited + 1
		}
		
		# The (possibly new) person at the front of the queue has had to wait there one more interval.
		inc(queue[[1]]$intervalsWaitedAtHeadOfQueue) # = queue[[1]]$intervalsWaitedAtHeadOfQueue + 1
	}
	
	# End of the interval, what is the state of things
	queueLengths[i] = length(queue);
}

#### Output ####
plot(queueLengths, type="o", col="blue", pch=20, main="Queue lengths over time", xlab="Interval", ylab="Queue length")
# plot(waitTimes, type="o", col="blue", pch=20, main="Wait times", xlab="Person", ylab="Wait time")
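
One quick way to start quantifying that front-of-the-line wait (my own rough check, using the vectors the simulation above already collects) would be:

# Average wait at the head of the queue, versus the naive expectation of
# 5 intervals (5 slots with a mean service time of 25 intervals)
mean(frontOfLineWaits)
summary(frontOfLineWaits)
hist(frontOfLineWaits, breaks=20, col="blue", xlab="Intervals waited at head of queue")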

12
Jan 11

R: Attack of the hair-trigger bees?

In their book “Complex Adaptive Systems”, authors Miller and Page create a theoretical model for bee attacks, based on the real, flying, honey-making, photogenic stingers. Suppose the hive is threatened by some external creature. Some initial group of guard bees sense the danger and fly off to attack. As they go, they lay down a scent. Other bees join in the attack if their scent sensitivity threshold (SST) is reached. When they join the attack, they send out their own warning scent, perhaps triggering an even bigger attack. The authors make the point that if the colony of bees were homogeneous, and every single one had the same attack threshold, then if that threshold was above the initial attack number, no one else would join in; if it were below, everyone would go all at once. Fortunately for the bees, they are a motley lot, which is to say a lot more diverse than you would imagine just from looking at the things. As a result, they exhibit much more complicated behavior. The authors describe a model with 100 bees and call their attack threshold “R”. By assuming a heterogeneous population of 100 with thresholds all equally spaced, they note:

“In the heterogeneous case, a full-scale attack ensues for [latex]R \geq 1[/latex]. This latter result is easy to see, because once at least one bee attacks, then the bee with threshold equal to one will join the fray, and this will trigger the bee with the next highest threshold to join in, and so on…. It is relatively difficult to get the homogeneous hive to react, while the heterogeneous one is on a hair trigger. Without explicitly incorporating the diversity of thresholds, it is difficult to make any kind of accurate prediction of how a given hive will behave.”

I think this last sentence is their way of noting that the exact distribution of sensitivities makes a huge difference in how the bees behave, which indeed it does. I decided to put the bees to the test, so I coded a simulation in the language R (code at the end of this post). I gave 100 virtual apis mellifera a random sensitivity level, chosen from a Uniform(1,100) distribution, then assumed 10 guards decided to attack. How would the others respond? Would a hair-trigger chain reaction occur? The chart at the top shows the results from 1000 trials. Looks pretty chaotic, so here’s a histogram of the results:

As you can see, most of the time the chain reaction dies out quickly, with no more than 20 new bees joining in the attack. However, occasionally the bees do go nuts, launching a full-on attack. This happened about 1 percent of the time. Perhaps most interestingly, all of the other attack levels were clearly possible as well, in the flat zone from about 30 to 95. You might want to try playing with the distribution of the sensitivities and see if you get any other interesting distributions. Meanwhile, if you decide to raid a hive, make sure to dip yourself in mud first; that way the bees will think you are nothing but an innocent black rain cloud.

Code in R:

trials = 1000

go = rep(0,trials)
initial = 10

for(i in 1:trials) {
  bees = sort(runif(100,1,100))
  
  # Everyone whose threshold is less than the initial amount goes.
  going = length(bees[bees < initial])
  if(going > initial) {
    more = length(bees[bees < going])
    while(more > going) {
      # Keep doing this until it stops
      going = more
      more = length(bees[bees < going])
    }
  }
  
  # (The tail of the original listing was cut off; recording and plotting
  # each trial's attack size below is an assumed reconstruction.)
  go[i] = going
}

plot(go, pch=20, col="blue", xlab="Trial", ylab="Number of bees attacking")
hist(go, breaks=50, col="blue", main="", xlab="Number of bees attacking")

5
Nov 10

Livin’ la Vida Poisson

Yes, I did just mix English, Spanish and French. And no, I am not living the “fishy” life, popular opinion to the contrary. Here’s the story. As someone who spends the majority of his time working online, with no oversight, I notice that I tend to drift a lot. I don’t play solitaire, or farm for virtual carrots, but I do wander over to Reddit more than I should, or poke around in this or that market in virtual assets to see if anything interesting has shown up. To some extent this can be justified. Many, perhaps all, of my profitable ventures have come from keeping my eyes open, poking around, doing my best to understand the digital world. On the other hand, at times I feel like I’ve been drifting aimlessly, that I’m all drift and no focus. My existing projects are gathering dust while I chase after shiny new things.

That’s the feeling, anyway. What does the evidence say? To keep track of what I was really doing, and perhaps nudge me towards more focus, I set a stopwatch to go off every 15 minutes. When it did, I would stop, write down what I was doing at that moment, and continue on. Perhaps you can see how these set intervals might provide an incentive to, shall we say, cheat? Especially right after the stopwatch chimed, I knew that whatever I did for the next few minutes was “free”, untracked. So I decided that I would have to write down everything I did during those 15-minute intervals, which worked sometimes, other times not so well.

My current solution? Set up a bell which chimes at random intervals, with an average time between chimes of 15 minutes. To hear what the bell sounds like, go ahead and try it out; I think you’ll find it makes a nice sound. Leave that page open while you read the rest of this post, and see how many times it rings.

At any rate, in order to randomize how long the wait was between chimes, I used a little something called a Poisson process. Actually, what I used was the Binomial approximation to the Poisson built from multiple Bernoulli trials, which results in wait times that are Exponential. Wait! Did you get all that? If so, then skip ahead until things look interesting. Otherwise, here’s more detail about how this works:

In order to determine the length of time between chimes, my computer generates a random number between 0 and 1. If this random number is less than 1/15, then the next chime is in just one minute. Otherwise, the computer generates another random number and adds one minute to the time between chimes. On average, it will take 15 tries to get a number below 1/15, so the average time between chimes will be 15 minutes. However, to call 15 minutes the average is somewhat misleading. Here are the frequencies of different wait times (source code in R at the end):


As you can see, the most common time between chimes is just one minute. Strange, no? What’s going on here is that each test to see if the random number is below 1/15 is a Bernoulli trial, which is basically Italian for “maybe it succeeds, maybe it fails”. In this case “success” has probability of 1/15, and failure happens the other 14 out of 15 times. In cases where the probability is small, and you end up doing a lot of trials, the total number of successes over a given time period will have the Poisson distribution. The “Poisson” here is a Frenchman, who may or may not have smelled like his surname, but who certainly understood The Calculus as well as anyone in the early 1800s. To get an even better approximation of the Poisson, I could have used trials with a probability of success of 1/900, then treated each failure as another second of waiting time. That would have made the graph above smoother.

But wait! I didn’t show you a graph of the Poisson. I showed you a graph of something that approximates the exponential distribution. The number of chimes per hour is (roughly) Poisson distributed, but the waiting time between each chime is exponential, which means shorter wait times are more frequent, but no length of time, no matter how long, can be ruled out. In fact, the exponential distribution is the only (continuous) distribution which is “memoryless”. If you have waited 15 minutes for a chime, your expected additional wait is still… 15 minutes. In fact, your expected wait is independent of how long you have waited so far. The exponential distribution is a “maximal entropy” distribution, where entropy in this case is related to how much you know. With the exponential, no matter how long you’ve waited, you still don’t know anything more than when you started waiting.
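
If you’d rather not take the formulas on faith, here is a quick empirical check of the memoryless property (my own addition; it uses rgeom, which draws from the same one-minute Bernoulli scheme as the code at the end of this post):

# Simulate many waits under the one-minute Bernoulli scheme (p = 1/15)
waits = rgeom(100000, 1/15) + 1
mean(waits)                 # about 15 minutes

# Condition on having already waited more than 10 minutes:
# the additional wait still averages about 15 minutes
longWaits = waits[waits > 10]
mean(longWaits - 10)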

If you’ve been tuning out and scanning this post, now would be a good time to tune back in. I promise new and interesting things ahead!

It’s one thing to understand the memoryless property of the exponential, even down to the mathematical nitty-gritty. It’s quite another to actually live with the exponential. No matter how well I know the formulas, I can’t shake the feeling that the longer I have waited in between bell rings, the sooner the next chime must be coming. Certainly, it should be due any time now! While I “know” that any given minute has exactly the same probability as the next to bring with it the bell, the longer I wait, the nearer I feel the next chime must be. After all, the back of my mind insists, once the page loads the wait time has been set into stone. However it was distributed before, it’s now a constant. Every minute you wait you are getting closer to the next bell, whenever it might have been set to come. I keep wanting to know more than I did a minute ago about when the next bell will arrive.

This isn’t the only way in which I find my psyche battling with my intellect. I would also swear that over time the distribution of short waits and long waits evens out. Now, by the law of large numbers, it’s true that the more chimes I sit through, the closer the mean wait time will approach 15 minutes. However, even if you’ve just heard three quick bells in a row, that has absolutely no bearing on how long the wait will be between the next three chimes. The expected wait times going forward are completely independent of the wait times in the past. The mean remains 15 minutes, the median remains 10.4 minutes. Yet that’s not what I feel is happening, and over the past two weeks of experimenting with this I would swear that on days when there are a number of unusually quick intervals, these have been followed, later that very same day, with unusually long intervals. And vice versa. It feels like things are evening out.
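
As an aside, that 10.4-minute figure is just the median of an exponential distribution with a mean of 15 minutes, which is easy to verify in R:

# Median wait for an exponential with a mean of 15 minutes
qexp(0.5, rate = 1/15)   # 15 * log(2), about 10.4 minutes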

It’s possible that when my computer wakes up from a sleep mode, my web browser doesn’t remember where it was in a countdown to refreshing the chime page. So I reload it. Now, in theory, if you “reload” an exponential wait time while in process, this has absolutely no effect on your eventual wait time until the next chime. Yet anytime I reload the page, I have a moment of doubt as to whether I’m “cheating” in some way, to make what would have been a long wait shorter. In this case, the back of my mind says the exact opposite of its previous bias: because I am reloading a page that has been waiting a long time, this means that the wait time would have been really long. By starting the process anew, I’m increasing the chances of a short chime time.

Before you call me a nut, try living for a while with the timer running in the background. Keep track of what you are doing if you want (and BTW I’ve found this to be very enlightening and more than a little sad), but mostly keep track of how you feel about the timing. Try reloading the page if you don’t hear a chime for a while. How does that feel? I suspect that in some ways humans are very well hard-wired to understand probabilities. Yet I also suspect our wiring hinders how we understand probability, a suspicion backed up by all those gamblers out there waiting for the lucky break that’s well overdue.

CODE:

iters = 1000
results = rep(0,iters)
for (i in 1:iters) {
	minutes = 1
	while(runif(1)>(1/15)){
		minutes = minutes + 1
	}
	
	results[i] = minutes
}

hist(results, breaks=40, col="blue", xlab="Minutes")

4
Sep 10

Weekend art in R (Part 4)

Computer creations are perfect by design. We put in numbers, and if all goes well we get out an exact result. If we want a line, we want it perfectly straight. If we want a circle, it should conform to the platonic ideal of a circle. From a mathematical standpoint, these perfect shapes and precisely computed numbers are ideal.

Someday, perhaps, we will have true fuzzy computation built right into our hardware. For now, it takes considerable effort to achieve just the right level of imperfection needed for simulating mistakes, or any organic processes.

I sent each of the circles shown above on a random walk. That part was easy; getting each circle to end up where it started (and close the loop) took a bit more effort. To vary the “wigglyness” of the lines, adjust the “sd” parameter in “rnorm”. To change how quickly the randomness tapers off, change the “4” in “i/4”. Here is my code:

# Circle lengths
j = seq(0.1,1.9,.08)

par(bg = "black")
plot(-2,-2,pch=".",xlim=c(-2,2),ylim=c(-2,2),col="white")

# How many dots around the circle?
dots = 1000

# Create an offkilter circle
rads = seq(0,2*pi,2*pi/dots)

for(aLength in j) {
	# Pick a random color
	myCol = paste("#",paste(sample(c(1:9,"A","B","C","D","E","F"),6,replace=T),collapse=""),collapse="",sep="")
	
	# Start at length = 1, then walk.
	myLength = rep(aLength,dots)
	
	for(i in 2:dots) {
		myLength[i] = myLength[(i-1)] + rnorm(1,0,sd=.005)
		
		# Closer we are to end, faster we return to where started so circle closes
		dist = aLength - myLength[i]
		myLength[i] = aLength - (dist*((dots-(i/4))/(dots)))
	}
	

	
	for(i in 1:dots) {
		cat(myLength[i]*cos(rads[i]),myLength[i]*sin(rads[i]),"\n")
		points(myLength[i]*cos(rads[i]),myLength[i]*sin(rads[i]),col=myCol,pch=20,cex=2)
	}
}

What do your circles look like?


30
Aug 10

The Chosen One

Toss one hundred different balls into your basket. Shuffle them up and select one with equal probability amongst the balls. That ball you just selected, it’s special. Before you put it back, increase its weight by 1/100th. Then put it back, mix up the balls and pick again. If you do this enough, at some point there will be a consistent winner which begins to stand out.

The graph above shows the results of 1000 iterations with 20 balls (each victory increases the weight of the winner by 5%). The more balls you have, the longer it takes before a clear winner appears. Here’s the graph for 200 balls (0.5% weight boost for each victory).

As you can see, in this simulation it took about 85,000 iterations before a clear winner appeared.

I contend that as the number of iterations grows, the probability of seeing a Chosen One approaches unity, no matter how many balls you use. In other words, for any number of balls, a single one of them will eventually see its relative weight, compared to the others, diverge. Can you prove this is true?

BTW this is a good Monte Carlo simulation of the Matthew Effect (no relation).

Here is the code in R to replicate:

numbItems = 200
items = 1:numbItems
itemWeights = rep(1/numbItems,numbItems) # Start out uniform
iterations = 100000
itemHistory = rep(0,iterations)

for(i in 1:iterations) {
	chosen = sample(items, 1, prob=itemWeights)
	itemWeights[chosen] = itemWeights[chosen] + (itemWeights[chosen] * (1/numbItems))
	itemWeights = itemWeights / sum(itemWeights) # re-normalize
	itemHistory[i] = chosen
}

plot(itemHistory, 1:iterations, pch=".", col="blue")

After many trials using a fixed large number of balls and iterations, I found that the moment of divergence was amazingly consistent. Do you get the same results?


21
Aug 10

Weekend art in R (Part 3)

I have a few posts nearing completion, but meanwhile a weekend break for art. Big thanks to Simon Urbanek and Jeffrey Horner, creators of Cairo, a graphics library for the programming language R. Have you noticed how R can’t anti-alias (a fancy way of saying smooth out lines and curves when creating a bit-mapped image)? Cairo can.

Make sure to click the image above for the full version. Here’s my code:

# The Cairo library produces nice, smooth graphics
library(Cairo)
Cairo(1200, 1200, file="D:/Your/Path/Here/Dots.png", type="png", bg="#FF6A00")

# How big should the grid for placing dots be?
myWidth=40
myHeight=40

dotsPlaced = myWidth*myHeight

# Optional default colors and sizes for dots
myColors = rep(c("#0000F0","#00F000"),dotsPlaced)
myCex = rep(3.2,dotsPlaced)

for(i in 1:dotsPlaced) {
	# Change this to allow more of the default color dots to survive
	if(runif(1)<1) {
		myColors[i] = paste("#",paste(sample(c(3:9,"A","B","C","D","E","F"),6,replace=T),collapse=""),collapse="",sep="")
	}
	myCex[i] = runif(1,3,6)
}

# Keeping this is marginal
par(oma=c(0,0,0,0))
par(mar=c(0,0,0,0))

# Start off with a blank plot. The white dot helps with cropping later
plot(0,0,pch=".",xlim=c(0,40),ylim=c(0,40),col="white", xaxt = "n", yaxt = "n")

for(m in 1:myWidth) {
	for(n in 1:myHeight) {
		if(runif(1) < .93) {
			points(n,m,pch=20,col=myColors[((m*n)+n)],cex=myCex[((m*n)+n)])
		}
	}
}

dev.off() # Tell Cairo to burn the plot to disk

19
Jul 10

R: Clash of the cannon cycles

Imagine a unit square. Every side has length 1, perfectly square. Now imagine this square was really a fence, and you picked two spots at random along the fence, with uniform probability over the length of the fence. At each of these two locations, set down a special kind of cannon. Just like the light cycles from Tron, these cannons leave trails of color. To aim each cannon, pick another random point from one of the three other sides of the fence, and aim for that point.

Sometimes there will be a collision within the square, other times no. The image at top shows the results of five trials. The red dots are where the trails from a pair of cannons collided. My burning question: What is the distribution for these dots? Before reading on, try to make a guess. Where will collisions be most likely to happen?

Somewhere in the world, there lives a probabilist who could come up with a formula for the exact distribution in an hour, but that person doesn’t live in my house, so I took the Monte Carlo approach, coded in R:

# Functions to pick two points, not on the same side:
m2pt <- function(m) {
	if(m <1) {
		myPoint = c(5,m,0)
	} else if (m < 2) {
		myPoint = c(6,1,m %% 1) 
	} else if (m < 3) {
		myPoint = c(7,1-(m %% 1),1)
	} else {
		myPoint = c(8,0,1-(m %% 1))
	}
	return(myPoint)
}		

get2pts <- function() {
	pt1 = m2pt(runif(1,0,4))
	pt2 = m2pt(runif(1,0,4))
	
	# Make sure the two points are not on the same side. If they are, keep trying
	while(pt1[1] == pt2[1]) {
		pt2 = m2pt(runif(1,0,4))
	}
	return(c(pt1[2:3],pt2[2:3]))
}

# Optional plot of every cannon fire line. Not a good idea for "iters" more than 100
#windows()
#plot(0,0,xlim=c(0,1),ylim=c(0,1),col="white")		
		
# How many times to run the experiment
iters = 10000

# Track where the intersections occur
interx = c()
intery = c()

for(i in 1:iters) {
	can1 = get2pts()
	can2 = get2pts()
	
	# Optional plot of every cannon fire line. Not a good idea for "iters" more than 100 
	#points(c(can1[1],can1[3]),c(can1[2],can1[4]),pch=20,col="yellow")
	#segments(can1[1],can1[2],can1[3],can1[4],pch=20,col="yellow",lwd=1.5)
	#points(c(can2[1],can2[3]),c(can2[2],can2[4]),pch=20,col="blue")
	#segments(can2[1],can2[2],can2[3],can2[4],pch=20,col="blue",lwd=1.5)

	# See if there is a point of intersection, find it. 
	toSolve = matrix(c( can1[3]-can1[1], can2[3]-can2[1], can1[4]-can1[2], can2[4]-can2[2]),byrow=T,ncol=2)
	paras = solve(toSolve, c( can2[1]-can1[1], can2[2]-can1[2]))
	
	solution = c(can1[1] + paras[1]*(can1[3]-can1[1]), can1[2] + paras[1]*(can1[4]-can1[2]))
	
	# Was the collision in the square
	if(min(solution) > 0 && max(solution) < 1) {
		# Optional plot of red dots
		# points(solution[1],solution[2],pch=20,col="red",cex=1.5)
		
		# if this intersection is in square, plot it, add it to list of intersections
		interx = c(interx, solution[1])
		intery = c(intery, solution[2])
	}
}

windows()
plot(interx, intery, pch=20,col="blue",xlim=c(0,1),ylim=c(0,1))

After carefully writing and debugging much more code than I expected, I ran a trial with several thousand cannon fires and plotted just the collisions. Here is what I saw:

Looks pretty uniform, doesn't it? If it is, I will have gone a very long way just to replicate the bi-variate uniform distribution. My own original guess was that most collisions, if they happened in the square, would be towards the middle. Clearly this wasn't the case. Looks can be deceiving, though, so I checked a histogram of the x's (no need to check the y's, by symmetry they have the same distribution):

Very interesting, no? The area near the edges appears more likely to have a collision, with an overall rounded bowl shape to the curve. The great thing about Monte Carlo simulations is that if something unexpected happens, you can always run it again with more trials. Here I changed "iters" to 100,000, ran the code again, and plotted the histogram.

hist(interx, breaks=100, col="blue",xlab="x",main="Histogram of the x's")

Now it's clear that the distribution spikes way up near the edges, and appears to be essentially flat for most of the middle area. It seems like it may even go up slightly at the very middle. Just to be sure, I ran a trial with one million iterations:

Now it definitely looks like a small upward bulge in the middle, though to be sure I would have to run some statistical tests or use an even larger Monte Carlo sample, and given how inefficient my code is, that could take the better part of a week to run. So for today I'll leave it at that.

One final statistic of note: During my run of one million iterations, 47.22% of all collisions happened inside the box. What do you think, is the true, theoretical ratio of collisions within the box a rational number?


9
Jul 10

100 Prisoners, 100 lines of code

In math and economics, there is a long, proud history of placing imaginary prisoners into nasty, complicated scenarios. We have, of course, the classic Prisoner’s Dilemma, as well as 100 prisoners and a light bulb. Add to that list the focus of this post, 100 prisoners and 100 boxes.

In this game, the warden places 100 numbers in 100 boxes, at random with equal probability that any number will be in any box. Each convict is assigned a number. One by one they enter the room with the boxes, and try to find their corresponding number. They can open up to 50 different boxes. Once they either find their number or fail, they move on to a different room and all of the boxes are returned to exactly how they were before the prisoner entered the room.

The prisoners can communicate with each other before the game begins, but as soon as it starts they have no way to signal to each other. The warden is requiring that all 100 prisoners find their numbers, otherwise she will force them to listen to hundreds of hours of non-stop, loud rock musician interviews. Can they avoid this fate?

The first thing you might notice is that if every prisoner opens 50 boxes at random, they will have a [latex]0.5[/latex] probability of finding their number. The chance that all of them will find their numbers is [latex](\frac{1}{2})^{100}[/latex], which is approximately as rare as finding a friendly alien with small eyes. Can they do better?

Of course they can. Otherwise I wouldn’t be asking the question, right? Before I explain how, and go into a Monte Carlo simulation in R, you might want to think about how they can do it. No Googling!

All set? Did you find a better way? The trick should be clear from the code below, but if not skip on to the explanation.

# How many times should we run this experiment?
iters = 1000
results = rep(0,iters)

for(i in 1:iters) {
	# A random permutation:
	boxes = sample(1:100,100)

	# Labels for our prisoners
	prisoners = 1:100

	# Track how many "winners" we have
	foundIt = 0

	# Main loop over the prisoners
	for(prisoner in prisoners) {

		# Track the prisoners path
		path = c(prisoner)

		tries = 1

		# Look first in the box that matches your own number
		inBox = boxes[prisoner]

		# Each prisoner may open up to 50 boxes
		while(tries <= 50) {
			path = c(path, inBox)

			if(inBox == prisoner) {
				#cat("Prisoner", prisoner, "found her number on try", tries, "\n")
				foundIt = foundIt + 1
				break;
			} else {
				# Follow that number to the next box
				inBox = boxes[inBox]
			}
			tries = tries+1
		}

		# cat("Prisoner", prisoner, "took path", paste(path, collapse=" -> "), "\n")
	}

	# How many prisoners found their numbers?
	cat("A total of", foundIt, "prisoners found their numbers.\n")
	flush.console()
	results[i] = foundIt
}

hist(results, breaks=100, col="blue")

Here is what one of my plots looked like after running the code:

Out of the 1000 times I ran the experiment, on 307 occasions every single prisoner found his number. The theoretical success rate is about 31%. So, if it’s not clear from the code, what was the strategy employed by the prisoners and how does it work?

One way to look at the distribution of numbers in boxes is to see it as a permutation of the numbers from 1 to 100. Each permutation can be partitioned into what are called cycles. A cycle works like this: pick any number in your permutation. Let’s say it’s 23. Then you look at the number in the 23rd place (i.e. the number in the 23rd box, counting from the left). If that number is 16, you look at the number in the 16th place. If that number is 87, go open box number 87 and follow that number. Eventually, the box you open up will have the number that brings you back to where you started, completing the cycle. Different permutations have different cycles.

The key for the prisoner is that by starting with the box in the same place from the left as his number, and by following the numbers in the boxes, the prisoner guarantees that if he is in a cycle of length 50 or less, he will eventually open the box with his number in it, which completes the cycle he began. One way to envision cycles of different lengths is to think about the extreme cases. If a particular permutation shifted every single number over one to the left (and wrapped number 1 onto the end), you would have a single cycle of length 100. Box 1 would contain number 2, box 2 number 3, and so on. On the other hand, if a permutation flipped every pair of consecutive numbers, you would have 50 cycles, each of length 2: box 1 would have number 2, box 2 would have number 1. Of course, if your permutation doesn’t change anything, you have 100 cycles of length 1.

As you can see from the histogram, when using this strategy you can never have between 50 and 100 winning prisoners. Anytime you have a single cycle of length greater than 50, for example 55, then all 55 prisoners who start on that cycle will fail to find their number. If no cycles are longer than 50, everyone wins. Just how rare are different cycles of different lengths? For the math behind that check out this excellent explanation by Peter Taylor of Queen’s University.
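
As a quick check of that 31% figure: for any k greater than 50, the chance that a random permutation of 100 numbers contains a cycle of length exactly k is 1/k, and there can be at most one such long cycle, so the success probability is easy to compute directly in R:

# Probability that no cycle is longer than 50, i.e. that all 100 prisoners win
1 - sum(1/(51:100))   # about 0.312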

Before moving on I wanted to visualize these cycles. Try running the code below:

# Unit circle
plot(0,0,xlim=c(-1,1),ylim=c(-1,1),col="white",ann=FALSE, xaxt="n", yaxt="n")
for(i in 1:100) {
	points(cos(2*i/100*pi), sin(2*i/100*pi),pch=20,col="gray")
}

mySample = sample(1:100,100)
for(i in 1:100) {
	found = FALSE
	nextItem = i

	# Pick a random color for this cycle
	color = sample(c(0:9,"A","B","C","D","E","F"),12,replace=T)
	lineColor = paste("#", paste(color[1:6],collapse=""),sep="")

	while(!found) {
		# Draw the cycle

		segments(cos(nextItem/50*pi), sin(nextItem/50*pi), cos(mySample[nextItem]/50*pi), sin(mySample[nextItem]/50*pi),col=lineColor,lwd=2)
		Sys.sleep(.4)
		if(mySample[nextItem] == i) {
			found = TRUE
		} else {
			nextItem = mySample[nextItem]
		}
	}
}

You can adjust the “Sys.sleep()” parameter to make the animation faster. I recommend running the code to see how the cycles “develop” over time, but here’s a snapshot of what I got: