18
Mar 13

Review of Mathematica 9 and R-link

VIDEO TRANSCRIPT: Hello, this is Matt Asher from StatisticsBlog.com. I’m going to be reviewing Mathematica 9, from Wolfram Research. In particular, I’ll be focusing on using it with R to do Monte Carlo simulations and other statistical work. You can find a full transcript of this video at my blog, including the source code and links to all of the webpages I mention.

Before I begin I’d like to thank Jeff Hara and Andy Ross from Wolfram for their time. Also thanks to the folks at the Mathematica Stack Exchange, who helped with a couple of my questions.

I’m going to get started with a blank notebook, and I’m going to clear out all of the variables that exist. I’ve found that leftover variables can sometimes cause problems.

ClearAll["Global`*"]

After each line I’m hitting Shift+Enter to run the command; if you just hit Enter, Mathematica won’t run anything yet.

So I’ve cleared my variables and I’m going to run

Needs["RLink`"]

which will bring in the link between Mathematica and R.

InstallR[]

I’m going to make sure it’s installed.

REvaluate["R.Version()"]

And then I’m going to run a test command here to make sure everything is up and running. As you can see this is the version of R I’m running and the connection succeeded.

Note that the free evaluation version of Mathematica doesn’t come with support for R, so if you want to test out Mathematica’s interaction with R you either have to buy the full version, or contact Wolfram and see if they’re willing to give you a full evaluation copy that lets you test out R.

So how does the interface between R and Mathematica work?

Basically, you can run operations in R and store the results in variables on the R side. You can also pass data back and forth between R and Mathematica.

Here I’m setting a variable, and this variable gets set in R, not in Mathematica:

RSet["hal", 9000]

So if I were to type just

hal

There is no response back. This is Mathematica’s way of saying that the variable is undefined or that it doesn’t know what to do with your input. So to get back information from R we have to use:

REvaluate["hal"]

We are putting “hal” in quotes so that it gets evaluated in R and not in Mathematica.

For example, we can do things like grab a dataset from R:

iris = REvaluate["iris"]

I’m grabbing the famous “iris” dataset in R and I am pulling it into Mathematica.

or I could do things like evaluate a command in R:

REvaluate["det(matrix(sample(-50:49),nrow=10))"]

and bring back the results. This takes the determinant of a random 10×10 matrix.

We can even create our own functions in R, and the result gets put into a variable in Mathematica.

perfectSample = RFunction["function(n, dist, ...) match.fun(paste('q', dist, sep=''))((1:n) / (n+1), ...)"]

This function creates a “perfect” sample, of whatever length you specify, from a particular distribution, using evenly spaced quantiles. Then we can call that function directly in Mathematica.

perfectSample[100, "pois", 10]

and the results are returned.
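(NOTE: That call is equivalent to asking R for 100 evenly spaced quantiles of a Poisson distribution with mean 10, so the following, run directly in R, should return the same numbers.)

# Plain-R equivalent of perfectSample[100, "pois", 10]:
# evenly spaced quantiles of a Poisson(10)
n <- 100
qpois((1:n) / (n + 1), lambda = 10)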

Of course, if all we wanted to do was work in R, we would just keep using R, instead of learning this new interface between R and Mathematica. So what can we do in Mathematica that goes beyond what we can easily do in R?

One of the biggest advantages to using Mathematica is that you get access to tools for creating interactive plots and simulations that can be changed on the fly.

I’m going to do an example using the Benini Distribution, which, according to Wolfram’s web page, can be used to model the weight of cats.

To do that, I’m going to use the Mathematica command “Manipulate”:

Manipulate[
 Block[{data, dist, kmd},
  data = RandomVariate[dist = BeniniDistribution[\[Alpha], \[Beta], \[Sigma]], n];
  kmd = KernelMixtureDistribution[data, h, MaxMixtureKernels -> All];
  Plot[{PDF[dist, x], PDF[kmd, x]}, {x, 0, xRng}, PlotRange -> Automatic]],
 {{\[Alpha], 1}, 0.01, 5}, {{\[Beta], 1}, 0.01, 5}, {{\[Sigma], 1}, 0.01, 2},
 {{n, 100}, {10, 100, 1000, 10000}}, {{xRng, 5}, 1, 10}, {{h, 0.5}, 0.01, 1}]

And then I get back the results. What I’ve got here is a live Monte Carlo simulation where I’m specifying the different parameters of the distribution, and also how many variates I’m creating. This one is the smoothing, the kernel bandwidth, that I’m adjusting.

And I can adjust the size of it here. Make it bigger. And do all of these adjustments on the fly.
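(NOTE: There’s no interactive equivalent in base R, but if all you want is the underlying comparison of a kernel density estimate against the true density, a rough static sketch looks like the code below. The lognormal stands in only because base R has no Benini distribution, and the parameter values are placeholders.)

# Static comparison of a kernel density estimate to the true density.
# The lognormal is a stand-in for the Benini distribution, which base R lacks.
n    <- 1000
data <- rlnorm(n, meanlog = 0, sdlog = 0.5)
plot(density(data, bw = 0.2), main = "Kernel estimate vs. true density")
curve(dlnorm(x, meanlog = 0, sdlog = 0.5), add = TRUE, lty = 2)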

As you can see, you’ve got some good power here for generating interactive plots and simulations. You can do these natively in Mathematica, or you can do live manipulation of results coming from R. This example comes from the Mathematica guys:

mathematicaRPlotWrapper = RFunction["function(filename, plotfun){
     pdf(filename)
     plotfun()
     dev.off()
     }"];

Clear[getRPlot];
getRPlot[plotFun_RFunction] := 
  With[{tempfile = FileNameJoin[{$TemporaryDirectory, "temp.pdf"}]}, 
   If[FileExistsQ[tempfile], DeleteFile[tempfile]];
   mathematicaRPlotWrapper[tempfile, plotFun];
   If[! FileExistsQ[tempfile], Return[$Failed]];
   Import[tempfile]];

myFun[t_] := 
 Show[#, ImageSize -> Medium, PlotRange -> All] &@
  getRPlot[RFunction["function(){
        x<- seq(1," <> ToString@t <> ",0.1)
        y<- sin(x)
        plot(y~x)
        }"]]

What's going to happen here is that I'm calling an R function, doing all of my calculations in R, and bringing the results back into Mathematica.

I forgot the "Manipulate" part:

Manipulate[myFun[t], {t, 2, 10}]

So here we go. What's happening is that everything is being sent to R for processing and then coming all the way back to Mathematica. As you can see, even though we are making that round trip, the results are coming back at a good pace; it's almost instantaneous in this particular example.

What about speed though, more generally?

I tried creating some random variates like I did in my examination of JavaScript versus R. So I'm going to create 10 million random variates from a Normal distribution

Timing[data = RandomVariate[NormalDistribution[], 10^7];]

and that takes about a quarter of a second, a little bit more, which is about twice as fast as R for generating those.
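(NOTE: The R timing I'm comparing against is just the one-liner below; see also the proc.time() version in the JavaScript post further down the page.)

# Rough R equivalent of the Mathematica timing above
system.time(data <- rnorm(10^7, 0, 1))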

But then let's look at more of a worst-case scenario, a bunch of nested loops.

Timing[
 l = 0;
 For[i = 0, i < 10^2, i = i + 1,
  For[j = 0, j < 10^2, j = j + 1,
   For[k = 0, k < 10^2, k = k + 1,
    l = l + 1
   ]
  ]
 ]
]

Three nested loops, a million iterations in total, and this takes about 1.2 seconds; it will increase by an order of magnitude if I add an order of magnitude in here. That is slow. It's about twice as slow as the same code would take to run in R, a language not known for its speed with loops. Of course, generally speaking this is not what you want to do if you want to run fast code; Wolfram's own documentation advises against using this kind of procedural code.
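(NOTE: If you want to reproduce the R side of that comparison, the equivalent nested loops are below; timings will of course vary by machine.)

# The same million iterations of nested loops, timed in R
system.time({
  l <- 0
  for (i in 1:10^2) {
    for (j in 1:10^2) {
      for (k in 1:10^2) {
        l <- l + 1
      }
    }
  }
})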

But at the same time I've found that there are cases where there really is no other way to do things, especially iterative simulations with a bunch of objects evolving over time, in which case you do need a programming structure like this, so you'll have to keep the speed in mind.

Besides the live graphical manipulation of results, another key benefit to using Mathematica is that you can do direct probability calculations, and sometimes even find closed-form solutions for combinations of random variables.

I've got some code here that determines the probability that one standard normal variable will be greater than another.

Probability[Subscript[x, 1] > Subscript[x, 2],
 {Subscript[x, 1] \[Distributed] NormalDistribution[0, 1],
  Subscript[x, 2] \[Distributed] NormalDistribution[0, 1]}]

And it is, of course, one-half. That's a simple example. We can do things that are more complicated. I'm going to look here for a more general solution when you have two Normal variables with means μ1 and μ2 and we're going to try and find the probability that one of them is greater than the other.

Probability[Subscript[x, 1] > Subscript[x, 2],
 {Subscript[x, 1] \[Distributed] NormalDistribution[Subscript[\[Mu], 1], 1],
  Subscript[x, 2] \[Distributed] NormalDistribution[Subscript[\[Mu], 2], 1]}]

As you can see, it takes more time to run this calculation, but eventually we get the response back, and we do have a closed-form solution.
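(NOTE: As a sanity check on that result, the difference of the two variables is itself Normal, with mean μ1 − μ2 and standard deviation Sqrt[2], so the answer should reduce to (1/2) Erfc[(μ2 − μ1)/2].)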

"Erfc" stands for the complementary error function.

Unfortunately, not all of the problems I tried worked out; sometimes Mathematica just freezes up completely.

Probability[Subscript[x, 3] > Subscript[x, 4],
 {Subscript[x, 3] \[Distributed] PoissonDistribution[5],
  Subscript[x, 4] \[Distributed] PoissonDistribution[10]}]

Here I'm asking Mathematica to find the probability that a Poisson(5) random variable will be greater than a Poisson(10). I found that this just freezes up and will not return a response, though I only waited a certain number of minutes. Actually, one time it locked my computer up entirely; the other time I gave up after a few minutes. I'm going to hit Alt-comma to abort the command and back out of that.

By comparison, you can do the same calculation for two Poissons in R. I'm going to make sure that's run in R:

x = rpois(10^6, 5); y = rpois(10^6, 10); z = x < y; mean(z)

(NOTE: This code actually finds the probability that a Poisson(5) is LESS than a Poisson(10).)

As you can see I've run that a couple of times already, and it takes about 0.9 seconds to run. Of course, this is not a closed-form solution, but for me anyway, these numerical solutions are perfectly good answers to the problem.

So, going back into Mathematica: besides getting closed-form solutions for probabilities, you can combine distributions, do convolutions, and see what kind of formula you get back out.

I found this worked fairly well for simple, well-known distributions:

dist = TransformedDistribution[u + v, 
 {u \[Distributed] NormalDistribution[μ1, σ1], 
  v \[Distributed] NormalDistribution[μ2, σ2]}];
PDF[dist, x]

I do it here for the Normal. I'm adding two Normally distributed variables together, and we very quickly get back the formula for the result (as expected, another Normal, with mean μ1 + μ2 and variance σ1² + σ2²).

But what happens if we try to work with a less well-known distribution? In fact, let's go ahead and see what happens if we want to add together cats and find the final weight distribution.

dist = TransformedDistribution[u + v, 
 {u \[Distributed] BeniniDistribution[\[Alpha]1,\[Beta]1,\[Sigma]1], 
  v \[Distributed] BeniniDistribution[\[Alpha]2, \[Beta]2,\[Sigma]2]}];
PDF[dist, x]

And, apparently, cats don't add so well. I tried this one a couple of times as well, and unfortunately wasn't able to get results back from Mathematica.

So, besides these bugs and issues, there are a couple of other significant downsides to Mathematica. Sharing results over the Internet can be done: you can export notebooks to HTML, but if people want to use the interactive graphs they'll need to have a special browser plugin installed. I asked the guys from Wolfram whether they know what percent of web users already have it installed. They didn't know; I suspect the number is very low, much lower than, say, the number who have Flash installed. Of course, R by itself doesn't provide much support for sharing results over the web, though RStudio makes something called Shiny that can do some publishing to the web. I haven't checked that out very much yet, but I plan to soon.

So, beyond sharing, the biggest impediment to using Mathematica on a daily basis is the interface, the GUI. I'll bring in some of the buttons here, the palettes of buttons. Overall the look is extremely crude. Things are disordered. The floating palettes have buttons of various sizes in various places, and it looks essentially like something you might have hacked together in Visual Basic 15 years ago and never touched since. Clearly, the modern era of interface design has passed Mathematica by. At this point even open source programs that began with horrific interfaces, like the GIMP, have begun to focus on making their interfaces look good. Looks may seem superficial, but if you are going to be working with an interface like this, and paying $1000 for a license of Mathematica, I don't think it's too much to expect the design to be easy to use, to make things easy to find, and to give you cues about where to find them. Color would certainly go a long way to help with that, as would other visual cues.

I'm going to bring back in Mathematica here.

One more thing about the GUI: if you drag one of the palettes around, it doesn't do any ghosting while it moves; the window just pops into its new position.

So, despite these issues, I can see using Mathematica as an occasional tool to find exact results for probabilities and distributions, or to quickly test how changing parameters affects a distribution's shape. For now, though, I'm going to continue experimenting with JavaScript as a fast-running alternative to R, and I'll also be looking some more into Shiny.

Make sure to check out the links related to this video at StatisticsBlog.com, and if you like these videos click on the subscribe button.


05
Mar 13

My favorite randomization device

My recent look at JavaScript as a contender for statistical modeling got me thinking about the different methods used to create random variates. All computer algorithms create Type 1 randomness, which is to say, randomness that is completely deterministic once you either figure out the underlying algorithm or see every number in the algorithm’s period. Jumping outside of software to the hard world around us, it seems possible to create Type 2 or even Type 3 randomness, at least from the perspective of an observer who can’t base their predictions on real-time analysis of the generating mechanism (i.e., they can’t watch it tick).

My favorite example of a real-world solution to randomizing is shown in the video at top. More details about the construction of the device are here.

What’s your favorite (hardware or virtual) randomization device?


28
Feb 13

Statistical computation in JavaScript — am I nuts?

Over the past couple weeks, I’ve been considering alternatives to R. I’d heard Python was much faster, so I translated a piece of R code with several nested loops into Python (it ran an order of magnitude faster). To find out more about Mathematica 9, I had an extended conversation with some representatives from Wolfram Research (Mathematica can run R code, I’ll post a detailed review soon). And I’ve been experimenting with JavaScript and HTML5’s “canvas” feature.

JavaScript may seem like an unlikely competitor for R, and in many ways it is. It has no repository of statistical analysis packages, doesn’t support vectorization, and requires the additional layer of a web browser to run. This last drawback, though, could be its killer feature. Once a piece of code is written in JavaScript, it can be instantly shared with anyone in the world directly on a web page. No additional software to install, no images to upload separately. And unlike Adobe’s (very slowly dying) Flash, the output renders perfectly on your smartphone. R has dozens of packages and hundreds of options for charts, but the interactivity of these is highly limited. JavaScript has fewer charting libraries, but it does have some which produce nice output.

Nice output? What matters is the content; the rest is just window dressing, right? Not so fast. Visually pleasing, interactive information display is more than window dressing, and it’s more in demand than ever. As statisticians have stepped up their game, consumers of data analysis have come to expect more from their graphics. In my experience, users spend more time looking at graphs that are pleasing, and get more out of charts with (useful) interactive elements. Beyond that, there’s a whole world of simulations which only provide insight if they are visual and interactive.

Pretty legs, but can she type?
Alright, so there are some advantages to using JavaScript when it comes to creating and sharing output, but what about speed? The last time I used JavaScript for a computationally intensive project, I was frustrated by its slow speed and browser (usually IE!) lockups. I’d heard, though, that improvements had been made, that a new “V8” engine made quick work of even the nastiest js code. Could it be true?

If there’s one thing I rely on R for, it’s creating random variables. To see if JavaScript could keep up on R’s home court, I ran the following code in R:

start = proc.time()[3]
x = rnorm(10^7, 0, 1)
end = proc.time()[3]
cat(end - start)

Time needed to create 10 million standard Normal variates in R? About half-a-second on my desktop computer. JavaScript has no native function to generate Normals, and while I know very little about how these are created in R, it seemed like cheating to use a simple inverse CDF method (I’ve heard bad things about these, especially when it comes to tails, can anyone confirm or deny?). After some googling, I found this function by Yu-Jie Lin for generating JS Normals via a “polar” method:

function normal_random(mean, variance) {
  // Marsaglia's polar method; defaults give a standard Normal variate
  if (mean == undefined)
    mean = 0.0;
  if (variance == undefined)
    variance = 1.0;
  var V1, V2, S, X;
  do {
    var U1 = Math.random();
    var U2 = Math.random();
    V1 = 2 * U1 - 1;
    V2 = 2 * U2 - 1;
    S = V1 * V1 + V2 * V2;
  } while (S >= 1 || S == 0);  // reject points outside the unit circle

  X = Math.sqrt(-2 * Math.log(S) / S) * V1;
//  Y = Math.sqrt(-2 * Math.log(S) / S) * V2;  // a second, independent variate
  X = mean + Math.sqrt(variance) * X;
//  Y = mean + Math.sqrt(variance) * Y;
  return X;
}

So how long did it take Yu-Jie’s function to run 10 million times and store the results into an array? In Chrome, it took about half-a-second, same as in R (in Firefox it took about 3 times as long). Got that? No speed difference between R and JS running in Chrome. For loops, JS seems blazing fast (compared to R). Take another look at the demo simulation I created. Each iteration of the code requires on the order of N-squared operations, and the entire display area is re-rendered from scratch. Try adding new balls using the “+” button and see if your browser keeps up.

It’s only a flesh wound!
So have I found the Holy Grail of computer languages for statistical computation? That’s much too strong a statement, especially given the crude state of JS libraries for even basic scientific needs like matrix operations. For now, R is safe. In the long term, though, I suspect the pressures to create easily shared, interactive interfaces, combined with improvements in speed, will push more people to JS/HTML5. Bridges like The Omega Project (has anyone used this?) might speed up the outflow, until people pour out of R and into JavaScript like blood from a butchered knight.


22
Feb 13

What’s my daughter listening to? HTML chart gen in R

[Embedded interactive chart: http://statisticsblog.com/popchart.html]

My daughter, who turns 10 in April, has discovered pop music. She’s been listening to Virgin Radio 99.9, one of our local stations. Virgin provides an online playlist that goes back four days, so I scraped the data and brought it into R. The chart shown at top shows all of the songs played from February 17th through the 20th, listed by frequency.

Broadly speaking, the data follows a power law. But only broadly speaking. Instead of a smoothly shaped curve from the single most frequently played song to a tail of single plays, Virgin Toronto has four songs that all share the heaviest level of rotation, then a drop-off of almost 50% to the next level. There was one big surprise in the data, at least for me. Listening to the station, it seems like they are playing the same 10 songs over and over. This impression is true to some extent, as the top 10 songs represented about one-third of all plays. But in just four days there were 57 single plays, and 44 songs played just twice. In all, 173 unique songs were played, with a much longer tail than I had expected.

That said, it would be interesting to compare Virgin’s playlist distribution with the widely eclectic (at least to my ears) Radio Paradise. Anyone want to give it a try? Here’s my code after I scraped the four pages of data by hand and put them into a text file.

To get the links to the YouTube videos, I used Google’s “I’m Feeling Lucky” option paired with a search for the song name. If you get an unexpected result, take it up with Google. In the past I’ve used R’s “brew” library to generate HTML code from a template; this time I just hand-coded the snippets. To make the red bars I found the maximum number of plays for any song, then stretched each bar relative to this maximum.
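The bar-stretching part only takes a few lines of R. Here’s a rough sketch of the idea rather than the exact code I used; the file name, column names, and 600-pixel maximum width are all placeholders.

# Sketch: each song gets a red bar whose width is proportional to its
# play count, relative to the most-played song. Names are placeholders.
plays    <- read.table("playlist.txt", header = TRUE, stringsAsFactors = FALSE)
maxPlays <- max(plays$count)
rows <- paste0('<div>', plays$song,
  ' <span style="display:inline-block; background:red; height:10px; width:',
  round(600 * plays$count / maxPlays), 'px;"></span></div>')
writeLines(rows, "popchart_snippets.html")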


19
Feb 13

Google places itself at the center of cyberspace

Above is a screen capture of the “Google Doodle” for today. It honors the 540th birthday of Nicolaus Copernicus, proponent of the heliocentric model of the universe. Note that Google is placing itself in the center of the universe, a decision I suspect was made very deliberately for its symbolism.

Astronomy was the first scientific discipline to make extensive use of data. Many of the early advances in data analysis and statistics (like the Gaussian Distribution) came about through detailed observations of heavenly bodies and the vast quantities of (imprecise) data this generated. Astronomy may have even given us the first murder over scientific data. With its Doodle, Google is saying that it’s become the center of the data universe, the dominant lens through which we view the world.

A bold claim! Is it true? Look closely at all the ways in which Google has integrated itself into our online and offline lives, and it starts to look less like presumption on their part and more like a simple acknowledgement of present reality.

How does Google guide and track thee? Let me count the ways:

  1. With search, of course. This includes every character you type into the search box or toolbar, since these are sent to Google for auto-complete and search suggestions. If you’ve ever accidentally pasted a password or a whole draft of your book in progress into the search box, Google has a copy of this stored in their vast data center.
  2. Through your email, if you use Gmail, but also if you email other people who use Gmail.
  3. Every Youtube video you watch.
  4. Your location information, if you use Google Maps. Also, if you are like most people, Google knows the house (or at least the neighborhood) you grew up in, since this is the first place you zoomed-in on that wasn’t your current location. Even if you don’t visit the Maps website or app directly, there’s a good chance a Google Map is embedded in the website of your real estate agent or the restaurant you just checked out.
  5. Through tracking for Analytics. This is a little javascript nugget webmasters put on their pages to get information about their visitors. Millions of websites use Google Analytics, including this one.
  6. Through Adsense, those Google-style ads you see on the side of pages which aren’t Google itself. Adsense is by far the most popular “monetizing” solution for webmasters.
  7. If you use voice dictation on an Android phone, your sounds get sent to Google for conversion into words. Your Android phone is also likely to integrate your calendar with Google’s online calendar app, sending data about your daily schedule back and forth.
  8. If you use Chrome, then all of the URLs you visit are sent to Google as you type, for auto-complete. Many people use the search box itself to type in URLs, giving this info to Google.
  9. Google has a dozen other products that most of us use at least occasionally, from News to Blogsearch to Translate to Google Docs to Google+ social networking.

Is there any way to escape the pull of Google’s gravity? There are some things you can do to limit the amount of tracking Google does, like clearing your cookies on a regular basis, or blocking Google Ads and Analytics by using your computer’s “hosts” file, but the harder you work to keep your personal data off Google’s servers, the more you end up pushed to the fringes of cyberspace and, in some ways, of modern life itself: ignoring emails from friends on Gmail, unexposed to the viral video everyone else in your office is talking about, adrift without a good virtual map of the human universe.

Do we welcome our new Sun King?


14
Feb 13

Population simulation leads to Valentine’s Day a[R]t

Working on a quick-and-dirty simulation of people wandering around until they find neighbors, then settling down. After playing with the coloring a bit I arrived at the above image, which I quite like. Code below:

# Code by Matt Asher for statisticsblog.com
# Feel free to modify and redistribute, but please keep this notice 

maxSettlers = 150000

# Size of the area
areaW = 300
areaH = 300

# How many small movements will they make to find a neighbor
maxSteps = 200

# Homesteaders, they don't care about finding a neighbor
numbHomesteaders = 10

areaMatrix = matrix(0, nrow=areaW, ncol=areaH)

# Offsets of the 8 adjacent cells, used for the neighbor check and the walk
adjacents = array(c(1,0,1,1,0,1,-1,1,-1,0,-1,-1,0,-1,1,-1), dim=c(2,8))

# Is an adjacent cell occupied?
hasNeighbor <- function(m,n,theMatrix) {
	toReturn = FALSE
	for(k in 1:8) {
		yCheck = m + adjacents[,k][1]
		xCheck = n + adjacents[,k][2]
		if( !((xCheck > areaW) | (xCheck < 1) | (yCheck > areaH) | (yCheck < 1)) ) {
			if(theMatrix[yCheck,xCheck]>0) {
				toReturn = TRUE
			}
		}
	}
	return(toReturn)
}


# Main loop
for(i in 1:maxSettlers) {
	steps = 1
	xPos = sample(1:areaW, 1)
	yPos = sample(1:areaH, 1)
	
	if(i <= numbHomesteaders) {
		# Seed it with homesteaders
		areaMatrix[xPos,yPos] = 1
	} else {
		if(areaMatrix[xPos,yPos]==0 & hasNeighbor(xPos,yPos,areaMatrix)) {
			areaMatrix[xPos,yPos] = 1
		} else {
			spotFound = FALSE
			outOfBounds = FALSE
			
			while(!spotFound & !outOfBounds & (steps < maxSteps)) {
				steps = steps + 1
				
				# Take one small step in a random direction
				# (reconstructed; the original line here was garbled)
				move = adjacents[, sample(1:8, 1)]
				yPos = yPos + move[1]
				xPos = xPos + move[2]
				
				if( (xPos > areaW) | (xPos < 1) | (yPos > areaH) | (yPos < 1)) {
					outOfBounds = TRUE
				} else if(hasNeighbor(xPos,yPos,areaMatrix) ) {
					areaMatrix[xPos,yPos] = steps
					spotFound = TRUE
				}
			}
		}
		
	}

}
	
image(areaMatrix, col=rev(rgb(seq(0.01,1,0.01),seq(0.01,1,0.01),seq(0.01,1,0.01))))

# I think this version looks nicer!
# areaMatrix[areaMatrix !=0] = 1
# image(areaMatrix, col=rev(rgb(.5,0,seq(0.2,1,0.2))))

04
Feb 13

Landmine detection revisited; the inverse unicorn problem

A couple weeks ago I wrote about an interesting idea to clear landmines using the power of the wind. A reader asked me to comment more on the value of using these wind-powered “Kafons” to do an initial assay of a suspected minefield, an idea I mentioned at the end of my video on the subject. In particular, how good would the devices be at detecting the existence of mines if they were very sparse in an area? In a sense, this is the inverse of the unicorn problem; instead of trying to find every last mine, we’re concerned with finding the very first one, if indeed land mines are there. Put another way: How hard do we have to look before we can safely conclude that unicorns don’t exist?

Download the code for this simulation.

The animated plot shown at the top of this post represents a small sample of data from the simulation I ran. Each blue dot shows the progress of testing a location to see if that field has mines. I’ve cut off the testing at $30,000, which is 600 kafons based on their estimated cost (feel free to go into the code and change the cutoff to whatever you want). The dots at the top, with numbers above them, represent testing that used all 600 kafons without finding any mines. Does this mean no mines exist? Sometimes, but not always. The number above the dot shows the true number of mines in that field during that particular simulation. As you can see, it’s very possible for the field to have several mines, yet still not have any detected, even after trying with hundreds of kafons. In the entire simulation, there were 283 trials with a single mine in the field. On 36 of those occasions, the mine was detected (and, in the simulation, detonated). The other 87% of the time, we spent a (virtual) $30,000 and failed to detect its presence.

I’ve shown the results as an animation so that you can put yourself into the position of someone trying to decide whether to continue testing a field for mines, or move on to another location. Each new test costs additional funds, but when do you stop? My $30,000 cutoff is arbitrary. It represents a best guess on my part as to when it would be better to use other methods to test for landmines. These kinds of optimal stopping points can be extremely difficult to determine, especially when, as in this case, those testing for landmines will have to deal with the problem of sunk costs: imagine you’ve just spent $30,000 testing for mines in a field you suspect is dangerous, but you haven’t found anything. The very next kafon, at a cost of just $50, could be the one to find a mine. Do you continue? In my simulation, with this particular distribution of probabilities, once no mine was found in the first 300 kafons, it was very unlikely one would ever be found (although, as mentioned, even when no mine was detected after 600 kafons the field was still way more likely than not to have mines).

Based on the results of the simulation, using kafons to detect mines is cheap when the probability of finding a mine is very high, but in that case I would imagine there’s already strong evidence that landmines exist. In the case where landmines are more sparse, testing with kafons is expensive and the question of when to stop testing is difficult. Note that in a real world use, we don’t know the underlying probability of a mine in the field; we you could be anywhere along the x-axis of the plot shown at top. All we see, in real time, is a rising cost and no kafon found.

If we know how much (new) area is covered by each additional kafon, and if we assume that coverage and the placement of landmines are randomly distributed (at least from our position of ignorance), then we can come up with some probability estimates for the chances that a field has an undetected landmine after each additional kafon is given a chance to detect mines. The biggest challenge is that, unless I’m missing something, the question of this exact probability is unanswerable unless we assume a prior distribution on the number of landmines in our field.
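To make that concrete, here is a toy version of the calculation under one completely made-up prior: suppose the number of mines in the field is Poisson distributed a priori, and each kafon independently has the same small chance p of detonating any given remaining mine (which ignores the overlap problem entirely). If k kafons all come back empty, each mine was missed with probability (1 − p)^k, and the chance the field is truly clean works out to exp(−λ(1 − p)^k). The values of λ and p below are arbitrary illustrations, not estimates from the simulation.

# Toy posterior probability that the field is clean after k empty kafons,
# assuming a Poisson(lambda) prior on the number of mines and a fixed
# per-kafon, per-mine detection probability p. Both values are arbitrary.
pFieldClean <- function(k, lambda = 3, p = 0.01) exp(-lambda * (1 - p)^k)
pFieldClean(c(100, 300, 600))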

We wanted to let the data speak for itself in terms of whether we have hidden mines or not, but in the end, our final beliefs will depend as much on our previous hunch as on the data itself. Which is, in effect, exactly what we were hoping to avoid by sending out kafons to detect mines.

Shown below is a full plot of all 2800 trials; each dot at the top might represent dozens of failed attempts to find a mine.


23
Jan 13

Fake text generation the wrong way, and a contest

As part of a bigger project, I needed to simulate a text string based on a source document, but at the character level. Just in case people find the code useful, I’ve uploaded it to MCMCtext.r.

In my simulated text, each character is chosen based on the transition probabilities in the source text from one character to another. The result is (nearly complete) gibberish without much interest to anyone, except perhaps those looking for a replacement for the standard Lorem Ipsum dummy text. More interesting fake text could be generated by using two-character (or longer) transition probabilities, or by working at the level of words.
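For the curious, the core of the character-level approach only takes a few lines of R. This is a rough sketch of the idea rather than the exact contents of MCMCtext.r; the file name is a placeholder, and it assumes the last character of the source also appears somewhere else in the text.

# Sketch of character-level fake text: sample each new character according
# to how often it followed the previous character in the source text.
src  <- strsplit(paste(readLines("source.txt"), collapse = " "), "")[[1]]
from <- head(src, -1)
to   <- tail(src, -1)
fake <- sample(from, 1)                # seed with a frequency-weighted character
for (i in 2:500) {
  nxt  <- to[from == fake[i - 1]]      # every character that followed this one
  fake <- c(fake, sample(nxt, 1))      # pick one, weighted by observed frequency
}
cat(paste(fake, collapse = ""), "\n")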

Before moving on, I thought it might be interesting to see if anyone can “reverse engineer” my fake text output to figure out which original text was used as a source to generate it. Got that? The source text comes from Project Gutenberg. Hint: some features of the (fake) text could help you narrow the field of candidates.

First person to post a correct guess in the comments gets a copy of my comic and an unlimited supply of Hotpockets*. Limit one guess per person please.

* Hotpockets offer only valid if you are currently saving the planet from destruction.


21
Jan 13

Zero-information predictions for 2013

The embarrassing failure of experts to predict the future is well known and has been exhaustively cataloged by writers such as Nassim Taleb and Nate Silver. It often seems like the more someone knows about a subject, the worse they are at predicting what will happen in that area. In this spirit, I bring you seven predictions for the coming year, based on an absolutely minimal amount of knowledge. Combined, I’ve spent less than one hour studying these topics, with the exception of bubbles in general. Here they are; we’ll see how well this exercise in ignorance holds up.

Prediction: The EU crisis will not be solved, more good money will be thrown after bad.
Source of prediction: The last three years. Also, bankers (for now) still rule the world.

Prediction: Bubbles will begin to burst. It could be the Higher Education Bubble, the Regulatory Complexity Bubble, the Government Money Printing (aka Fiat Currency) Bubble, or the closely related Global Debt Bubble.
Source of prediction: In Western, developed nations, all of these have gone exponential. That never lasts, because it can’t.

Prediction: Obamacare (officially known as the PPACA; I had to look that up) will begin to look like the Democrats’ Vietnam. Those who supported the legislation will need to make a quick U-turn or double down on their support, riding the increasingly unpopular, cost-overrunning quagmire into the depths of bureaucratic hell.
Source of prediction: I’m not a zombie.

Prediction: Speaking of soon-to-fail-in-its-intended-goals legislation, Dodd-Frank will have many nasty unintended consequences.
Source of prediction: It’s 8,800 pages long and still leaves lots of things to be worked out later by regulators.

Prediction: The meltdown at Fukushima will still be a problem at the end of the year.
Source of prediction: The most radioactive fish of all time (by a factor of 10) was just caught. The lovely, and glowing, Murasoi is shown at top.

Prediction: The Miami Heat will win the NBA. Oklahoma will peter out.
Source of prediction: I stopped watching the NBA a few years ago, but as I recall Miami is a good team, and no one plays pro basketball in Oklahoma.

Prediction: At this year’s Oscars, lots of pretentious, overwrought crap will get awards.
Source of prediction: Recent history, reading the nominee list, watching a trailer for Les Miz.


10
Jan 13

Simulation of landmine clearing with Massoud Hassani’s Mine Kafon

Code used: MineClearingSimulationWithKafons.r

TRANSCRIPT OF VIDEO:
Hello, I’m Matt Asher with StatisticsBlog.com. This video is about my attempt to simulate a landmine clearing device built by Massoud Hassani called the Mine Kafon. I’ve put a link to his webpage at StatisticsBlog.com; I highly recommend checking out his video. Hassani’s device looks like this:

It’s a cheap, easy-to-build mine clearer that travels under the power of the wind. When I first saw the video, I was awestruck by what Hassani had done. It seemed like an awesome achievement: clever, creative, a fantastic idea for making the world better. A device that used the power of nature to clean up after man.

The more I thought about it, though, the more I wondered what might happen if hundreds of Kafons were sent out onto a minefield. So to examine that question, I built a simulation. I’ll run it now.

I’ve slowed it down so you can see what’s happening. Each blue line represents the path a Kafon might take across the minefield. The red circles represent exploded mines, and the gray parts are places where paths have overlapped.

Based on Hassani’s video, I’ve assumed there’s a prevailing wind that sweeps the devices right out into the field, and that the Kafons are released at equal intervals at the edge of the minefield. The movements up and down represent turbulence, uneven ground, or the natural tendency of the Kafons themselves to move with a wobble.

I’ve posted the code to this simulation on my website, as I do for all my blog posts. It’s written in R, a free and open source programming language. You can go into my code and easily change the wind speed and other parameters, then re-run the simulation. For example, I’ve set the number of mines that each Kafon can absorb before it stops working to 4; that’s based on what Hassani estimates, but you can change that up or down.

Paths that stop at a red circle before they traverse the whole minefield are Kafons that have “plated out” after hitting 4 mines.

Just looking at the simulation in this way, it seems to be working very well. Most of the Kafons are finding land mines, and a lot of land mines are getting cleared.

The biggest problem I see with Hassani’s approach has to do with efficiency, especially as you try to get more and more of the mines detected. The more Kafons you send into the field, the more overlap you get, the lower your efficiency becomes, and the harder it gets to detect the remaining mines. I ran the simulation with different numbers of Kafons, always spaced at equal intervals, which gives them the best chance to clear as many mines as possible.

Here’s a graph showing the percentage of unexploded mines still left versus the number of Kafons that were released into the minefield. Each point on the graph represents a new simulation with that many Kafons. As you can see, at first adding more Kafons gives an almost linear decrease in the percentage of mines left, but the closer you get to clearing all the mines, the more elusive that goal becomes. Even after 2000 Kafons have been released, which if moving perfectly straight could have covered the area of our simulation four times over, there are still some mines left unexploded.

If you look on my blog you’ll see a post I did about something I call The Unicorn Problem, related to finding all of the new species in an environment. The problem there, as with this approach, is that the marginal rate of detection goes down as the number of attempts goes up. What’s happening is that the more Kafons you use, the more their territory overlaps.
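(NOTE: You can get a feel for why this happens with a back-of-the-envelope version of the problem: if each Kafon independently had some fixed chance p of clearing any given mine, the fraction of mines still left after k Kafons would be (1 − p)^k, so each additional Kafon clears a smaller and smaller share of what remains. The sketch below uses an arbitrary value of p, not one estimated from the simulation.)

# Idealized diminishing returns: proportion of mines remaining after k
# Kafons, if each Kafon independently clears any given mine with
# probability p. The value of p is arbitrary, purely for illustration.
p <- 0.005
k <- 0:2000
plot(k, (1 - p)^k, type = "l",
     xlab = "Kafons released", ylab = "Proportion of mines remaining")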

Here you can see the amount of territory that’s been traversed more than once by a Kafon. The overall result is that the cost per mine destroyed gets higher and higher as you get closer and closer to eliminating all of them. Here’s a plot of the cost per destroyed mine versus the number of Kafons used.

The end result is that the cost per mine detected keeps increasing. Hassani estimates a cost of 40 euros to build each device, or about $50.

I wish Hassani the best of luck with his project; hopefully these issues can be addressed. Whatever the faults with this approach, this is a very important thing he’s doing. I noticed that in a more recent video he mentioned tracking the Kafon’s motion with GPS. I don’t know whether his initial price estimate included the cost of GPS. At any rate, this would help to keep track of which areas have been covered and which haven’t, but unless he’s using a GPS accurate to within a foot, I wouldn’t want to try and walk in the exact path cleared by the device. In my simulation I’m assuming that every single time the Kafon is in the area of a mine, it explodes it. It’s not clear that would be the case. It would be easy enough to add a probability of failure to the simulation.

There are adjustments you can make to the simulation or its parameters that would result in more of the Kafons being effective, but any design that relies on wind patterns is going to suffer from the same diminishing returns and unicorn problem. Even if the wind is wildly turbulent and increases the probability that all of the plates will get used, there’s still the problem of overlap, and perhaps even worse performance if the Kafons get stuck in one area or are quickly blown out of the minefield. Overall, there are lots of reasons not to want to walk out into a minefield, no matter how many Kafons have been through it.

Even if the Mine Kafon isn’t the best option for clearing an entire region of mines, it might still be an effective way to test for the presence of mines: to sample an area and see how likely it is to contain land mines, and if so, how many, and which regions might have higher concentrations.