I’ve begun reading some of the recent works by what are called “behavioral economists”. A staple of their work seems to be research into how humans fail to be perfectly rational economic actors, the most famous book of this sort being Dan Ariely’s Predictably Irrational. No doubt there is a lot of value in understanding how humans tend to deviate from the behavior we (or, perhaps, academics) might expect. At the same time, I see many of these experiments as deeply flawed, in a way directly related to these economists’ failure to understand probability. In particular, they seem incapable of understanding the (often very rational) role that uncertainty plays in the minds of the participants. You can think of this uncertainty as subjective probability, related to our degrees of belief. As human beings, we all have invisible, imperfect Bayesian calculators in our heads which crunch the data from our world and make implicit judgments about the information we take in. Right now, as you read this, how much credibility does what I’m saying have in your mind? How would your “uncertainty” about my arguments change if I made a clear mistakeee?
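To make that invisible calculator concrete, here’s a toy version of the update in R. The numbers are pure invention, just to show the machinery: your degree of belief that a writer is careful, before and after spotting a typo.

    # Toy Bayes update (all numbers invented for illustration)
    prior_careful  <- 0.9         # initial belief that the writer is careful
    p_typo_careful <- 0.05        # careful writers rarely produce a "mistakeee"
    p_typo_sloppy  <- 0.50        # sloppy writers often do

    posterior_careful <- prior_careful * p_typo_careful /
      (prior_careful * p_typo_careful + (1 - prior_careful) * p_typo_sloppy)
    posterior_careful             # ~0.47: one typo and credibility drops sharply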
To see where the economists fail, consider the following experiment: A stranger approaches you and offers to give you $100 in cash right now, or to pay you $1000 in exactly one year. I can say right away that I would pocket the $100. To an economist, this would mean that I have an (implied) internal rate of interest of 1000% per annum, since $100 right now is equal in my mind to $1000 in a year. From there, the economist could easily ask me a few other questions to show that my internal rate of return isn’t really 1000%; in fact, it’s all over the place. My preferences are inconsistent and therefore, the economist concludes, irrational.
But is my behavior really all that irrational? In taking the $100 now, what I’ve really done is an implicit probability calculation. What is the chance that I will actually get paid that $1000 in a year? $100 in my hand right now is simple. A payment I have to wait a year for is complicated. How will I receive it? Who will pay out? How many mental resources will I spend over the course of the year thinking (or worrying) about this $1000 payment from an unknown person? Complexity always adds uncertainty; the two cannot be disentangled. The economist has failed to understand people’s (rational) uncertainties, and has ignored the psychological cost of living with that uncertainty, especially over long periods of time.
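Here’s a rough sketch of that implicit calculation in R, with invented probabilities: even ignoring time preference entirely, taking the $100 is the better bet whenever your belief that the stranger will actually pay falls below 10%.

    certain_now <- 100
    promised    <- 1000
    certain_now / promised        # break-even probability of payment: 0.1

    p_pays <- 0.08                # my guess at the chance a stranger delivers
    p_pays * promised             # expected value of waiting: $80 < $100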
Here’s another experiment. Imagine you asked your neighbor to look after your dog while you were gone for the weekend. How much compensation might she expect? If she’s particularly sociable, she might be willing to look after your dog for free, or be happy with a $50 payment. But now imagine you offered her $2000 right away. Would she accept that? If not, then she has what economists call a downward-sloping supply curve: giving her more money leads to less of the same service. Downward-sloping supply curves, especially on the personal (micro) level, get economists all hot and bothered. They seem incoherent and rife with opportunities for exploitation.
Again, though, is this really a case of irrational behavior? All things being equal (a deadly assumption that economists often make), there’s no doubt that your neighbor would prefer $2000 to $50 for providing the same service. But of course in this example all things aren’t equal. The amount you offer her is a signal. It tells her something about your assessment of the underlying value of the service you are requesting. $50 tells her that you appreciate the minor inconvenience of caring for your poodle. $2000 tells her that something screwy is going on. Is your dog a terror? Will it chew up her furniture? Pee everywhere? Is there some kind of legal issue she has no idea about? The key here is conditional probability. Specifically, how does the probability that taking care of this dog is equivalent to stepping into a minefield change, given that $50 is being offered, versus given that $2000 is being offered? Human beings in general have incredibly sophisticated minds, capable of spotting hidden uncertainties and performing fuzzy but essentially correct Bayesian updates of our prior beliefs given new information. Unfortunately, such skills seem to be lacking in many modern economists.
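Here’s what that update might look like in R. Every number below is an assumption for illustration, but the shape of the result is the point: the same prior, fed two different offers, lands in very different places.

    p_minefield <- 0.05                     # prior: most dogs are fine

    # Likelihood of each offer, given each kind of dog (assumed)
    p_50_bad   <- 0.10;  p_50_fine   <- 0.90
    p_2000_bad <- 0.60;  p_2000_fine <- 0.01

    bayes <- function(p_offer_bad, p_offer_fine, prior) {
      prior * p_offer_bad / (prior * p_offer_bad + (1 - prior) * p_offer_fine)
    }

    bayes(p_50_bad,   p_50_fine,   p_minefield)   # ~0.006: nothing to worry about
    bayes(p_2000_bad, p_2000_fine, p_minefield)   # ~0.76: something screwy indeed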
Hi, welcome to the world of behavioral economics. Your initial skepticism is pretty common: most people approach a study of the type you describe and ask “if you look at this from another angle, isn’t it rational?” For the examples you give, this is probably true, but that makes them lousy experiments.
A good study covers all the bases and really is foolproof; not all studies are good ones. To give a good example, I think Kahneman and Tversky did a great job with their anchor-and-adjust demonstration:
–Tell the subject that you’re going to ask them what percentage of UN member countries are African.
–Spin a spinner that’s 50% very high values and 50% very low values.
–Ask the subject whether the number the spinner landed on is too high or too low.
–Ask the subject for their guess at the right number.
Subjects who spun the too-high values consistently settled on a final number higher than subjects who spun the too-low values, demonstrating a sort of anchor-and-adjust means of arriving at a value. But there is transparently no difference in information between the different runs.
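To see how far this is from any Bayesian story, here’s a toy simulation in R. The anchor values and the adjustment weight are my inventions, not K&T’s numbers; the point is only that identical beliefs plus an uninformative anchor produce systematically different answers.

    set.seed(1)
    n <- 1000
    true_belief <- rnorm(n, mean = 25, sd = 5)       # what subjects "really" think
    anchor <- sample(c(10, 65), n, replace = TRUE)   # spinner: low or high value
    adjust <- 0.6                                    # fraction of the gap closed
    guess  <- anchor + adjust * (true_belief - anchor)

    tapply(guess, anchor, mean)   # anchored at 10: ~19; anchored at 65: ~41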
To generalize this and pick up on your own term: Bayesian updating is a terrible, terrible tool for modeling how people handle probabilities. With the UN question, subjects combine what we presume are similar priors with absolutely zero information from the spin of a spinner, and arrive at different values. There’s no way to seriously explain that via Bayesian updating. We can also break Bayesian updating with probabilities near one or near zero, or by abusing our lousy facility with conditional probability (Monty Hall!). At this point, I think you’d be hard-pressed to find a study that asks people to do a nontrivial task and _confirms_ Bayesian updating as a model of human thought.
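Monty Hall is worth simulating, since few people believe the answer until they see it. A minimal version in R, using the fact that switching wins exactly when your first pick was wrong:

    set.seed(2)
    n    <- 100000
    car  <- sample(1:3, n, replace = TRUE)   # where the prize is
    pick <- sample(1:3, n, replace = TRUE)   # contestant's first choice

    mean(pick == car)   # stay with the first pick: wins ~1/3
    mean(pick != car)   # switch (after Monty opens a goat door): wins ~2/3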
“We can also break Bayesian updating with probabilities near one or near zero, or by abusing our lousy facility with conditional probability (Monty Hall!).”
This is a really important point. Unfortunately, problems with implicit priors near certainty or non-intuitive conditional probabilities seem to afflict scientists just as much as laymen (this is one reason why obviously wrong prior scientific beliefs or poor models have to await a massive paradigm shift before they are rejected in favor of newer knowledge).
Someone just posted another example of how economists struggle to understand probability at a deep level, especially how it is used by humans: the Ellsberg Paradox. (See this comment: http://www.statisticsblog.com/2011/05/problematic-quote-of-the-day/comment-page-1/#comment-12252). Some of the traps come from confusing probability (as an abstract mathematical ideal) with uncertainty about probabilities (an obviously fuzzy topic). In particular, it seems to me that economists have a hard time working with the key distinction between a single probability (0.5 chance of outcome A) and a distribution over probabilities, which models our beliefs about which probabilities are plausible.
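One way to make that distinction concrete in R is to model a belief about a probability as a Beta distribution (the parameters below are my illustration, nothing canonical). A well-tested fair coin and a completely unknown urn both have “probability 0.5” on average, but they encode very different states of knowledge:

    # Tight belief: p is almost surely near 0.5 (e.g. a well-tested fair coin)
    curve(dbeta(x, 50, 50), from = 0, to = 1,
          xlab = "p (probability of outcome A)", ylab = "density")
    # Flat belief: p could be anything (e.g. Ellsberg's unknown urn)
    curve(dbeta(x, 1, 1), add = TRUE, lty = 2)

    c(mean_known = 50 / (50 + 50), mean_unknown = 1 / (1 + 1))   # both 0.5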
I gather from your writing that you don’t know much about how things work in experimental economics, at least in practice.
On the first experiment: laboratories (I work in one) pay their subjects what they promise, so your point about uncertainty over the payment in a year simply does not hold. The subjects know that what is said and agreed upon is for real from the moment they sign up for the experiment; otherwise the whole thing makes no sense at all.
On the second part: I don’t know where you got that individual supply curve story from; if from an economist, then he’s certainly not a good one. A normal one with a PhD would actually have made the same arguments you did, except for the last two sentences.
I’m not sure I understand the part about the rate of return. Why would the behavioral economist assume the 1000% discount factor is irrational? It just suggests an enormous risk aversion factor, which is perfectly consistent with economic theory given the circumstances (which you explain in the next paragraph).
IMO, the behavioral economist would have no problem with the 1000% factor itself. He would have a problem, however, if that discount factor were inconsistent with a model of rationality. For example, in finance, behavioral economists have suggested that the discount rates people require on a stock immediately prior to inclusion in the S&P 500 index differ from the discount rates immediately following inclusion. That’s irrational because nothing substantial has changed in the short window of time before and after inclusion.
That phenomenon is fair game for behavioral critique. A 1000% discount factor by itself, even one that differs across circumstances, is not, as long as it conforms to a model of rational asset pricing. The same applies to some of the other examples: I don’t think a behavioral economist has any issue with people’s discount rates changing in different circumstances, given different information sets. They only take issue when discount rates conditioned on the same information sets and utility functions differ.
Could you clarify?
@Peter:
I wasn’t saying that the behavioral economist would view any one particular implied discount factor as a problem. What I meant was that the economist might see evidence that the same person shows widely different discount factors depending on the circumstances, and conclude that they were being inconsistent and therefore irrational. Sorry if that wasn’t clear.
For an example of the “we see inconsistency and thus the participants are irrational” brand of economics, look at how bothered researchers got when they noticed that people were “less willing to gamble with profits than with losses” (Ghiglino & Tvede, 2000).
In general, wiser scientists will come in later and explain why inconsistencies aren’t just random brain glitches or irrationalities on the part of participants. Instead, they may reflect good strategy or the epistemological limitations of the participants. Which brings up…
@bob:
The key here is that, while the tenured professional carrying out serious research believes that lab participants can be 100% assured of getting paid, that’s not the knowledge that the participant has. What he sees may look a lot less certain, especially in the context of any experiment that looks like a game.
In a broader way, this kind of disconnect is subtle and maddening enough to have made my list of “Dumb arguments by smart people” (http://www.statisticsblog.com/2010/06/five-dumb-arguments-smart-people-make/).
You can’t assume that someone else is acting irrationally because they refuse to trust something you “know” to be trustworthy. For an example, look at the name-calling directed at those who opt out of vaccinating their kids. What’s implied here is the argument that since we “know” these vaccines are safe, you have no rational reason to doubt.
Thanks Matt. Irrationality is really interesting because, as you’ve said in a few of your comments, it’s hard to know who is conditioning their expectations on what information sets. So any time you test for “rationality” you are really testing two things at once: a theory of what is rational, and whether a person’s behavior is rational
(i.e. if your model of rationality is incorrect or if you are trying to estimate ex ante expectations based on the wrong information set, rational behavior can appear irrational because rational people will systematically (and rationally) deviate from your model! and because ex post outcomes will consistently deviate from your ex ante estimates).
It’s a real brain skewer.
Finance has only recently started grappling with the issues you’ve brought up here and particularly with the underlying problem of never quite knowing what information sets people are using. I for one have started thinking that the methodology might be all wrong. Rather than trying to calculate ex ante expectations and match them to outcomes to see how people think about risk, maybe we should rely more on simulations with heterogeneous decision-makers with unique (stochastic?) information sets.
Can you do basic simulations of networks of investors in R?
A post on that would be awesome! Thanks.
@Peter
This: “(i.e. if your model of rationality is incorrect or if you are trying to estimate ex ante expectations based on the wrong information set, rational behavior can appear irrational because rational people will systematically (and rationally) deviate from your model! and because ex post outcomes will consistently deviate from your ex ante estimates).”
is brilliant, and very well put. It’s the kind of thinking that should be taught to all scientists, in all fields.
I’d love to work on a simulation of networks of investors, though I suspect doing it right would be quite involved. It’s the kind of work I do for hire, if anyone’s interested. :-)
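For what it’s worth, here’s the kind of bare-bones starting point I have in mind, in R. Everything in it is an assumption for illustration (a random who-listens-to-whom network, a naive averaging rule, a single belief per investor), not a serious model:

    set.seed(3)
    n_investors <- 50
    n_steps     <- 100

    # Random directed network: links[i, j] means investor i listens to j
    links <- matrix(runif(n_investors^2) < 0.1, n_investors, n_investors)
    diag(links) <- FALSE

    belief <- rnorm(n_investors)   # each investor's private signal about an asset

    for (t in 1:n_steps) {
      # each investor mixes their own belief with the average of their neighbors'
      neighbor_mean <- sapply(1:n_investors, function(i) {
        nb <- which(links[i, ])
        if (length(nb) == 0) belief[i] else mean(belief[nb])
      })
      belief <- 0.8 * belief + 0.2 * neighbor_mean
    }

    sd(belief)   # dispersion shrinks as information diffuses through the network

From there you could give each investor a genuinely different information set, let the links rewire over time, and so on. That’s where it gets involved.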