Wednesday, September 26, 2007

Emperor of Ice Cream? Policy Planning? Fuhgeddabouddit.

Edge has a great series of sessions by Danny Kahneman, whom Steven Pinker calls the greatest psychologist in the world. Anyway, Danny won a Nobel Prize in 2002 for some out-of-the-box thinking that he explores on this thread:
The word "utility" that was mentioned this morning has a very interesting history – and has had two very different meanings. As it was used by Jeremy Bentham, it was pain and pleasure—the sovereign masters that govern what we do and what we should do – that was one concept of utility. In economics in the twentieth century, and that's closely related to the idea of the rational agent model, the meaning of utility changed completely to become what people want. Utility is inferred from watching what people choose, and it's used to explain what they choose. Some columnist called it "wantability". It's a very different concept.

One of the things I did some 15 years ago was draw a distinction, which obviously needed drawing, between them just to give them names. So "decision utility" is the weight that you assign to something when you're choosing it, and "experience utility", which is what Bentham wanted, is the experience. Once you start doing that, a lot of additional things happen, because it turns out that experience utility can be defined in at least two very different ways. One way is when a dentist asks you, does it hurt? That's one question that's got to do with your experience of right now. But what about when the dentist asks you, Did it hurt? and he's asking about a past session. Or it can be Did you have a good vacation? You have experience utility, which is everything that happens moment by moment by moment, and you have remembered utility, which is how you score the experience once it's over.

And some fifteen years ago or so, I started studying whether people remembered correctly what had happened to them. It turned out that they don't. And I also began to study whether people can predict how much they will enjoy what will happen to them in the future. I used to call that "predictive utility", but Dan Gilbert has given it a much better name; he calls it "affective forecasting": predicting what your emotional reactions will be. It turns out people don't do that very well, either.

Just to give you a sense of how little people know, my first experiment with predictive utility asked whether people knew how their taste for ice cream would change. We ran an experiment at Berkeley when we arrived, and advertised that you would get paid to eat ice cream. We were not short of volunteers. People at the first session were asked to list their favorite ice cream and were asked to come back. In the first experimental session they were given a regular helping of their favorite ice cream, while listening to a piece of music—Canadian rock music—that I had actually chosen. That took about ten to fifteen minutes, and then they were asked to rate their experience.

Afterward, they were also told, because they had undertaken to do so, that they would be coming to the lab every day at the same hour for I think eight working days, and every day they would have the same ice cream, the same music, and rate it. And they were asked to predict their rating tomorrow and their rating on the last day.

It turns out that people can't do this. Most people get tired of the ice cream, but some of them get kind of addicted to the ice cream, and people do not know in advance which category they will belong to. The correlation between the change that actually happened in their tastes and the change that they predicted was absolutely zero.
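That "absolutely zero" correlation is just Pearson's r between predicted and actual change in rating. A toy calculation can make the finding concrete; the subjects, ratings, and the split between people who tire of the ice cream and people who get hooked are all made up for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-subject change in rating (last day minus first day).
# Predictions cluster around "I'll like it a bit less"; actual changes
# split between tiring of it and getting addicted -- Kahneman's point.
predicted_change = [-2, -1, -3, -2, -1, -2, -3, -1]
actual_change = [-3, 2, -1, 3, -2, 1, 2, -3]

r = pearson(predicted_change, actual_change)
print(f"Pearson r = {r:.2f}")
```

With real data of this shape, the predictions carry no information about the actual changes, so r hovers near zero.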

It turns out—this I think is now generally accepted—that people are not good at affective forecasting. We have no problem predicting whether we'll enjoy the soup we're going to have now if it's a familiar soup, but we are not good if it's an unfamiliar experience, or a frequently repeated familiar experience. Another trivial case: we ran an experiment with plain yogurt, which students at Berkeley really didn't like at all. We had them eat yogurt for eight days, and after eight days they kind of liked it. But they really had no idea that that was going to happen. ...

Just a sample of a guy who reminds me of Richard Feynman, a delight on many levels. Here's another that fits my experience at Amoco in Entry Strategy & Economic Planning:
The question I'd like to raise is something that I'm deeply curious about, which is what should organizations do to improve the quality of their decision-making? And I'll tell you what it looks like, from my point of view.

I have never tried very hard, but I am in a way surprised by the ambivalence about it that you encounter in organizations. My sense is that by and large there isn't a huge wish to improve decision-making—there is a lot of talk about doing so, but it is a topic that is considered dangerous by the people in the organization and by the leadership of the organization. I'll give you a couple of examples. I taught a seminar to the top executives of a very large corporation that I cannot name and asked them, would you invest one percent of your annual profits into improving your decision-making? They looked at me as if I was crazy; it was too much.

I'll give you another example. There is an intelligence agency, and the CIA, and a lot of activity, and there are academics involved, and there is a CIA university. I was approached by someone there who said, will you come and help us out, we need help to improve our analysis. I said, I will come, but on one condition, and I know it will not be met. The condition is: if you can get a workshop where you get one of the ten top people in the organization to spend an entire day, I will come. If you can't, I won't. I never heard from them again.

What you can do is have them organize a conference where some really important people will come for three-quarters of an hour and give a talk about how important it is to improve the analysis. But when it comes to, are you willing to invest time in doing this, the seriousness just vanishes. That's been my experience, and I'm puzzled by it.

Since I'm in the right place to raise that question, with the right people to raise the question, I will. What do you think? Where did this come from; can it be fixed; can it be changed; should it be changed? What is your view, after we have talked about these things?

One of my slides concerned why decision analysis didn't catch on. And it's actually a talk I prepared because 30 years ago we all thought that decision analysis was going to conquer the world. It was clearly the best way of doing things—you took Bayesian logic and utility theory, a consistent thing, and—you had beliefs to be separated from values, and you would elicit the values of the organization, the beliefs of the organization, and pull them together. It looked obviously like the way to go, and basically it's a flop. Some organizations are still doing it, but it really isn't what it was intended to be 30 years ago. ...
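The recipe Kahneman describes, separating beliefs from values, eliciting each, then pulling them together, reduces to an expected-utility calculation. A minimal sketch, with hypothetical oil-exploration numbers (the options, outcomes, probabilities, and utilities are all invented for illustration, not drawn from any real analysis):

```python
# Beliefs: elicited probabilities over outcomes, one distribution per option.
beliefs = {
    "drill_site_A": {"big_find": 0.15, "small_find": 0.35, "dry_hole": 0.50},
    "drill_site_B": {"big_find": 0.05, "small_find": 0.70, "dry_hole": 0.25},
}

# Values: elicited utilities of outcomes, kept separate from the beliefs.
values = {"big_find": 100.0, "small_find": 20.0, "dry_hole": -30.0}

def expected_utility(dist):
    """Pull beliefs and values together: sum of P(outcome) * U(outcome)."""
    return sum(p * values[outcome] for outcome, p in dist.items())

for option, dist in beliefs.items():
    print(f"{option}: EU = {expected_utility(dist):.1f}")

best = max(beliefs, key=lambda o: expected_utility(beliefs[o]))
print("recommended:", best)
```

The appeal Kahneman recalls is visible even in a toy like this: the messy judgment calls are confined to eliciting the two tables, and the ranking then follows mechanically.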

Easy to see why if you've worked in a huge organization like the State Dept or Amoco Corp. The top decision-makers are so jealous of their prerogatives that any such exercise would only be taken seriously if they themselves personally took part in the process. And then the outcome, whatever it was, would somehow have their own accountability written into the template or code that emerged from the DA process.

Their lawyers probably advised them not to participate, lest they be liable!
