A small one in terms of words, but lots of thought has gone into this addition:
Correlation proves compatibility.
Negative correlation implies incompatibility.
As Ned Ryerson would ask, “Am I right or am I right?”
Here’s how it used to work. You have a hypothesis, something you want to test. You go out, collect a mess of data, then start to build a model. The model is your key weapon for understanding the data. Is there a linear relationship? Fit a regression line. Does a particular variable have an impact on the results? Do a t-test and find out. The goal is to make your models clear, interpretable, and above all concise. We all know that the more parameters you add to a model, the closer you can get it to match the data, whatever the data may be, so avoid the temptation to overfit at all costs. Overly complicated models tell you nothing.
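To make that workflow concrete, here’s a minimal sketch in Python (my choice of language for illustration, nothing prescribed above), on made-up, gremlin-flavored data: fit a regression line, then let a t-test on the slope say whether the variable “matters.”

```python
# A minimal sketch of the classical workflow, on simulated data.
# All names and numbers here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothesis: post-midnight feeding drives gremlin evilness.
feeding = rng.uniform(0, 10, size=200)                    # predictor
evilness = 2.0 + 0.8 * feeding + rng.normal(0, 2, 200)    # response plus noise

# Fit a regression line...
fit = stats.linregress(feeding, evilness)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}")

# ...and let a t-test on the slope decide whether feeding "has an impact".
print(f"t = {fit.slope / fit.stderr:.2f}, p-value = {fit.pvalue:.4g}")
```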
Stick to the process above, and you can claim that your results show not just tendencies and correlations, but meaning. The models, properly tested and fit, offer understanding. Through the use of math and inductive logic, we are able to separate the world into signal and noise, “systematic” trends and “random” variation. Once complete, we know what we know (Gremlins are 87% more evil if you feed them after midnight), and we also know what we don’t know (23% of evil behavior in gremlins can’t be explained by violations of the three rules). As an added bonus, we get bounds for how well we know what we know, and how little we know about what we don’t know.
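And here is the bookkeeping that paragraph describes, sketched on the same kind of simulated data (regenerated so the snippet stands on its own): the variation split into an “explained” share and a residual share, plus a confidence interval as the bound on how well we know what we know. The gremlin percentages above are a joke, and this code makes no attempt to reproduce them.

```python
# The signal/noise accounting, sketched on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
feeding = rng.uniform(0, 10, 200)
evilness = 2.0 + 0.8 * feeding + rng.normal(0, 2, 200)
fit = stats.linregress(feeding, evilness)

predicted = fit.intercept + fit.slope * feeding
residuals = evilness - predicted

# "Signal": the share of variation the model explains. "Noise": what's left over.
r_squared = 1 - residuals.var() / evilness.var()
print(f"explained: {r_squared:.0%}, unexplained: {1 - r_squared:.0%}")

# Bounds on how well we know what we know: a 95% confidence interval for the slope.
t_crit = stats.t.ppf(0.975, df=len(feeding) - 2)
print(f"slope = {fit.slope:.2f} ± {t_crit * fit.stderr:.2f}")
```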
Models can be incredibly powerful tools, but perhaps their least understood property is how well they fool us into believing that fitting a line through points is the same thing as understanding an underlying process.
In 2011, I’m going beyond the model. Instead of understanding, I’ll be striving for accuracy of prediction, or to optimize some profit/loss function related to the accuracy of prediction. Instead of trying to part the world into signal and noise — the part that can be understood, and the part that must be dealt with as inevitable “error” — I’m going to design a system that treats signal and noise as one and the same. Instead of using math and algorithms to extract meaning, I’ll be using these tools to decrease the informational entropy of a stream of data. Data will be treated like a dense, tangled and interconnected forest, an entire ecology of information that cannot be split apart, and can only be “understood” by non-deterministic, evolutionary models which grow in complexity and inscrutability as quickly as their real-world counterparts. In my most well-read (and controversial!) post of 2010, I argued that Occam’s razor was the dumbest argument smart people made. In 2011, I’ll try to demonstrate the power of leaving behind the “simple is better” mentality once and for all.
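For a taste of what that might look like in code, here is a deliberately crude sketch (the data, the feature soup, and the mutation scheme are all invented for illustration, and it only captures part of the idea): a non-deterministic evolutionary search over prediction rules, scored purely on squared prediction error standing in for the profit/loss function, with no reward whatsoever for being simple or interpretable.

```python
# A toy flavor of the 2011 approach: evolve predictors, judge them only on
# how well they predict. Everything here is made up for illustration.
import numpy as np

rng = np.random.default_rng(1)

# A noisy, tangled "stream" of data: the response mixes interacting inputs.
X = rng.normal(size=(500, 4))
y = np.sin(X[:, 0] * X[:, 1]) + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.3, 500)
Xtr, ytr, Xte, yte = X[:350], y[:350], X[350:], y[350:]

def expand(X):
    """Throw in raw inputs, squares, and all pairwise products -- no pruning."""
    cols = [X, X ** 2]
    cols += [(X[:, i] * X[:, j])[:, None] for i in range(4) for j in range(i + 1, 4)]
    return np.hstack(cols)

Ftr, Fte = expand(Xtr), expand(Xte)

def loss(w, F, y):
    """The only thing we ask of a candidate: its prediction error."""
    return np.mean((F @ w - y) ** 2)

# A small (mu + lambda) evolutionary search over weight vectors.
pop = rng.normal(0, 0.1, size=(30, Ftr.shape[1]))
for gen in range(200):
    parents = pop[np.argsort([loss(w, Ftr, ytr) for w in pop])[:10]]
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.05, (20, Ftr.shape[1]))
    pop = np.vstack([parents, children])

best = min(pop, key=lambda w: loss(w, Ftr, ytr))
print(f"held-out MSE: {loss(best, Fte, yte):.3f} (irreducible noise variance is 0.09)")
```

Whether the surviving rule means anything is beside the point; the only question asked of it is how well it predicts, and a better prediction is, in effect, less surprise left in the stream.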