Lee Merkhofer Consulting Priority Systems
Implementing project portfolio management

Debiasing

The problem of bias is of critical importance, given that judgment pervades human experience and is crucial to decision making: "Should I accept this job?" "Should we develop a new product?" "For whom should I vote?" "Is the defendant guilty?" Decision-making errors can obviously be extremely costly at the personal, professional, and societal levels. Not surprisingly, considerable effort has been invested in finding ways to reduce bias.

Unfortunately, there does not appear to be an easy fix. In 1982, decision scientist Baruch Fischhoff reviewed four straightforward strategies for reducing bias: (1) warning subjects about the potential for bias, (2) describing the likely direction of bias, (3) illustrating biases to the subject, and (4) providing extended training, feedback, coaching, and other interventions. Fischhoff concluded that the first three strategies yielded only limited success, and that even intensive, personalized feedback and training produced only moderate, short-term improvements in decision making [18]. In the 25 years since Fischhoff's study, much additional research has been conducted, but the basic conclusion remains the same: simple methods for addressing bias have limited applicability and produce limited success. On the other hand, as described below, more involved methods, such as replacing intuitive decision making with analysis, can be effective.

Common Methods for Reducing Bias

One continuing line of research investigates whether biases can be reduced by encouraging subjects to put more effort into forming judgments. Asking students to "show their work," for example, has been shown to slightly increase the chances of obtaining a correct answer (the technique is more helpful for pinpointing where misunderstandings occur). In general, however, the limited success of such techniques suggests that most biases are not very sensitive to the amount of effort one applies.

Encouraging people to take an outsider's perspective ("What do you think someone not directly involved would say?") has been shown to somewhat reduce the tendency for overconfidence to bias estimates. The idea is to reduce personal biases by removing oneself mentally from the specific situation. Some studies show that the technique can improve estimates of the time it would take to complete a task and of the odds of success [19, 20].

Note taking may encourage more thoughtful responses

Increasing accountability for decisions has been shown to lead to better choices [21]. Likewise, enhancing accountability for opinions that people express can help in some circumstances. For example, it has been suggested that, when obtaining critical information from someone, it may be useful to take notes (or to appear to take notes). If people believe you may quote them to others, they may be more careful in what they say. Similarly, to support project-selection decisions, it is useful to have project proponents document precisely why they believe their proposed projects should be conducted. Going on record encourages managers to be more careful in their logic, and the fear of being proved wrong helps counter over-optimism.

Training in recognizing biases has been shown to help people address some of them. However, as Fischhoff observed, the effect is generally short-lived and does not produce a dramatic improvement in performance. One problem is that it is often hard to get people to appreciate that bias is something that affects them personally, not just others. Thus, in situations where time permits, it helps to demonstrate biases. For example, if you are obtaining judgments from a group of individuals and are concerned about overconfidence bias, don't just tell them about the 2/50 rule (described above). Instead, run them through an exercise that demonstrates that the rule applies to them, such as the one sketched below.
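
To suggest how such a demonstration might be scored, here is a minimal sketch of a calibration exercise: participants state 2%-to-98% confidence intervals for almanac-style questions, and the script computes how often the true value falls outside the stated interval. The questions, intervals, and true values below are invented for illustration.

```python
def surprise_rate(intervals, truths):
    """Fraction of true values falling outside the stated 2%-98% intervals."""
    surprises = sum(1 for (low, high), actual in zip(intervals, truths)
                    if not low <= actual <= high)
    return surprises / len(truths)

# Hypothetical answers from one participant to five almanac-style questions;
# the paired true values are looked up in advance by the facilitator.
intervals = [(2000, 3000), (50, 120), (1900, 1950), (10, 40), (500, 900)]
truths = [4132, 88, 1912, 67, 1513]

rate = surprise_rate(intervals, truths)
print(f"Surprise rate: {rate:.0%}")  # well-calibrated judges: ~4% surprises;
                                     # ~50% is typical, per the 2/50 rule
```

Seeing their own surprise rate, rather than being told about the rule, is what makes the bias feel personal.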

Not surprisingly, improving cause-effect understanding of the relevant situation and processes has been shown to improve the quality of estimates and decisions. For example, studies show that when people are encouraged to look for common principles underlying seemingly unrelated tasks, they are better able to create solutions for different tasks that rely on the same underlying principles [22].

There is evidence that groups reach better decisions when alternatives are evaluated simultaneously rather than sequentially, with each alternative considered and potentially rejected in turn. The presumed explanation is that people initially react emotionally when considering a single alternative; they think mostly about how it will affect them personally. If alternatives are evaluated simultaneously side-by-side, group members are less susceptible to this reaction [23].

Strategies for Reducing Specific Biases

The usual strategy for reducing a specific bias is to address the mental processing error that is believed to produce that bias. For example, in one study, researchers assumed that hindsight bias, the tendency to exaggerate the extent to which one could have anticipated a particular outcome, results from the difficulty people have in appreciating the limited information available at the time and the restricted inferences that could be made from that information. By providing evidence that argued against the actual outcome, they found that their subjects could be made more resistant to the bias [24]. Similarly, it has been hypothesized that people's tendency to over-claim credit for a group accomplishment stems in part from being more aware of their own efforts than of others'. Researchers showed that when people are asked to estimate not only their own contributions but also those of others, they attribute less credit to themselves [25].

Figure 4 (derived from Wilson and Brekke [26]) illustrates a view of how judgmental biases are created and suggests a general strategy for reducing them. According to the figure, the first step is to create awareness of the flawed mental processes involved. The subject must then be motivated to correct the bias and must understand the direction and magnitude of the errors produced. Finally, the bias must be removed or countered. The technique used to mitigate the bias of concern is often the application of a countering bias; for example, overconfidence can be countered by encouraging subjects to describe (and therefore anchor on) extreme possibilities. Many of the recommendations provided earlier in this section under "Advice" are based on this logic.


Figure 4:   General strategy for debiasing.



Decision Aids

Hundreds of decision aids have been developed and recommended to reduce the distortions in decisions caused by errors and biases. I developed the list of sample aids below in the context of a chapter for a book on aids for environmental decisions [27]. Links to definitions are provided for terms that tend to be used in project portfolio management. As indicated, there are at least five categories of aids: (1) checklists for promoting a quality decision process, (2) thinking aids intended mainly to improve perspective or create insights, (3) models and optimization methods for recommending choices, (4) aids for promoting group consensus, and (5) voting methods. As an example of the first category, Figure 5 is a checklist aid for scoring a decision-making process against the components of a quality process.


Sample Decision Aids


Figure 5:   Checklist diagram for evaluating deficiencies in the decision-making process [28].
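
To suggest how a checklist like Figure 5 might be put to work, here is a minimal sketch that scores a decision process component by component and flags the weakest areas. The component names are generic decision-quality elements assumed for illustration; they are not the exact items in the figure.

```python
# Generic decision-quality components (illustrative assumptions, not the
# precise checklist items shown in Figure 5).
COMPONENTS = [
    "Appropriate framing of the decision",
    "Creative, feasible alternatives",
    "Meaningful, reliable information",
    "Clear values and trade-offs",
    "Logically correct reasoning",
    "Commitment to action",
]

def weakest_links(scores):
    """Return component names sorted worst-first; a decision process is
    only as strong as its weakest component."""
    return sorted(scores, key=scores.get)

# Hypothetical 0-10 ratings assigned by a review team.
ratings = dict(zip(COMPONENTS, [8, 5, 7, 4, 9, 6]))
for component in weakest_links(ratings)[:2]:
    print(f"Needs attention: {component} (score {ratings[component]})")
```

The value of the exercise lies less in the numbers than in forcing the team to examine each component of the process explicitly.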



A common characteristic of decision aids is that they add structure to the decision-making process, forcing decision makers to rely less on intuition and emotion and more on deliberate thinking. Models and analysis, in my opinion, represent the most effective way to address errors and biases in decision making. Essentially, the concept is to replace flawed intuitive reasoning with a formal, analytical process.

Much evidence has accumulated indicating the effectiveness of models and analysis. For example, in situations where past data are available on the inputs and results of a decision-making process, models can be created using regression analysis. Such models are being used to help graduate schools decide which students to admit, clinical psychologists diagnose neuroses and psychoses, and credit card companies decide whether to accept loan applications. Across a wide range of subject areas, researchers have found that such models produce better and more reliable decisions than those made by people, including the experts whose original decisions were used to derive the models [29].
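
The sketch below illustrates the idea under invented assumptions: a linear model is fit by ordinary least squares to hypothetical past admissions decisions, then used to score a new applicant. The feature names, the data, and the choice of a plain linear fit are all illustrative, not a prescribed method.

```python
import numpy as np

# Hypothetical past applicants: GPA, test-score percentile, and rated
# strength of recommendation letters (0-10). All values are invented.
past_applicants = np.array([
    [3.9, 95, 8],
    [3.2, 70, 6],
    [3.6, 88, 9],
    [2.8, 55, 4],
    [3.4, 80, 7],
])
past_decisions = np.array([1, 0, 1, 0, 1])  # 1 = admitted, 0 = rejected

# Fit a linear model to the past decisions by ordinary least squares,
# with a column of ones supplying the intercept.
X = np.column_stack([np.ones(len(past_applicants)), past_applicants])
weights, *_ = np.linalg.lstsq(X, past_decisions, rcond=None)

def score(applicant):
    """Model's predicted admissibility for a new applicant."""
    return float(np.dot(np.concatenate(([1.0], applicant)), weights))

print(score(np.array([3.7, 90.0, 8.0])))  # closer to 1 resembles past admits
```

The model applies the experts' own historical policy consistently, which is precisely what the experts themselves often fail to do.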

Even very simple models have been shown to improve estimates and, therefore, encourage better decisions. Ask people to estimate how tall an eight-story building is, and you will likely get very poor estimates. But if they envision each floor as being about 50% higher than a tall person, say 10 feet, and then multiply by the number of stories, the result of 80 feet is fairly accurate. A model doesn't need to be very precise or even require very accurate inputs. To illustrate this, in training classes I've asked managers to estimate quantities about which they know very little, for example, "How many chicken eggs are produced in the U.S. each year?" The estimates are not very accurate. Then attendees break into teams to create a simple model whose output is the number in question. For example, a team might calculate annual egg production as the number of people in the country, times the average number of eggs consumed per person per week, times the number of weeks in the year. Invariably, the teams produce much more accurate estimates using their models.
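
A minimal version of the egg exercise shows how the decomposition works; the input values are rough guesses of the kind a team might supply, not authoritative figures.

```python
# Rough inputs a team might guess; each is uncertain, but errors tend to
# partially cancel when the pieces are multiplied together.
us_population = 330e6            # people in the U.S. (rough)
eggs_per_person_per_week = 4     # rough guess, includes eggs in baked goods
weeks_per_year = 52

annual_eggs = us_population * eggs_per_person_per_week * weeks_per_year
print(f"Estimated annual U.S. egg production: {annual_eggs:.1e}")
# ~6.9e10 -- the right order of magnitude, far better than a direct guess
```

Each input is a quantity people can judge reasonably well, and the arithmetic, not intuition, carries the estimate to the unfamiliar total.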

Another argument for using models comes from research that shows that experts appear to be less subject to biases when addressing issues that are entirely within their areas of expertise. Models break the problem of evaluating alternatives down into individual pieces such that different experts with specialized knowledge can be selected to focus on each piece. Thus, models dissect a complex problem in a way that makes the required judgments less likely to be biased.
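
A weighted scoring model is one simple way to implement this decomposition: each criterion is judged by the specialist best placed to assess it, and the model combines the pieces. The criteria, weights, and scores below are illustrative assumptions, not a prescribed set.

```python
# Illustrative evaluation criteria with weights summing to 1. Neither the
# criteria nor the numbers are prescriptive.
criteria_weights = {
    "technical feasibility": 0.40,
    "market potential": 0.35,
    "regulatory risk": 0.25,
}

# Each criterion is scored 0-10 by the expert best qualified to judge it.
expert_scores = {
    "technical feasibility": 7,  # from the lead engineer
    "market potential": 5,       # from the marketing analyst
    "regulatory risk": 8,        # from legal counsel
}

overall = sum(w * expert_scores[c] for c, w in criteria_weights.items())
print(f"Weighted project score: {overall:.2f} out of 10")  # 6.55
```

No single person has to judge the project as a whole, so each judgment stays within its contributor's area of expertise.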

A Strategy for Decision Making

Approaches to making good decisions differ greatly in terms of the amount of time and effort required. Intuitive decisions can be fast, automatic, and effortless, while analysis is slower and requires considerably more effort. Figure 6 illustrates that the appropriate approach to decision making depends on the significance of the decision, the challenges involved, and the time available for analysis. It is useful to develop a quiver of decision-making strategies and to select the approach that makes the most sense for the given circumstances.


Figure 6:   The appropriate decision aid and decision-making approach depend on the nature of the decision.



The best protection from bias comes from training, formal techniques for obtaining important judgments, well-founded decision aids, and rigorous decision processes that document the judgments and assumptions upon which choices are based. The subsequent parts of this paper describe such a process specific to decision making for project portfolio management. As Ken Keyes put it, "To be successful in the business world we need to check our bright ideas against the territory. Our enthusiasm must be restrained long enough for us to analyze our ideas critically" [30].

References for Part 1

  1. A. Tversky and D. Kahneman, Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, 1987.
  2. J. S. Hammond, R. L. Keeney, and H. Raiffa, "The Hidden Traps in Decision Making," Harvard Business Review, Sept.-Oct. 1998, pp. 47-55.
  3. J. E. Russo and P. J. H. Schoemaker, Winning Decisions: Getting It Right the First Time, Doubleday, 2001.
  4. W. F. De Bondt and R. H. Thaler, "Financial Decision-Making in Markets and Firms: A Behavioral Perspective," in R. Jarrow, V. Maksimovic, and W. T. Ziemba, eds., Handbooks in Operations Research and Management Science: Finance, Elsevier, North Holland, 1995, p. 386.
  5. J. R. Radzevick and D. A. Moore, "Competing to Be Certain (But Wrong): Market Dynamics and Excessive Confidence in Judgment," Management Science, pp. 93-106, Jan. 2011.
  6. M. Bazerman and D. Chugh, "Decisions Without Blinders," Harvard Business Review, Jan. 2006, pp. 88-97.
  7. D. Gilbert, Stumbling on Happiness, Random House, 2007.
  8. G. S. Day and P. J. H. Schoemaker, "Scanning the Periphery," Harvard Business Review, Nov. 2005, pp. 135-148.
  9. F. Gino and M. H. Bazerman, "Slippery Slopes and Misconduct," Harvard Business School, 2006.
  10. Reported in M. Bazerman and D. Chugh, "Bounded Awareness: What You Fail to See Can Hurt You," HBS Working Paper #05-037, revised Aug. 25, 2005, p. 13.
  11. B. Goldacre, Bad Science: Quacks, Hacks, and Big Pharma Flacks, Faber and Faber, 2011.
  12. D. Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011.
  13. S. Lichtenstein and P. Slovic, "Response-Induced Reversals of Preference in Gambling: An Extended Replication in Las Vegas," Oregon Research Institute Research Bulletin, 12(6), 1972.
  14. A. D. Gershoff and J. J. Koehler, "Safety First? The Role of Emotion in Safety Product Betrayal Aversion," Journal of Consumer Research, June 2011.
  15. The quote appears on www.numeraire.com, "Global Value Investing," which cites a speech given by Warren Buffett in Boston in 1997.
  16. P. C. Nutt, Why Decisions Fail, Berrett-Koehler, 2002.
  17. P. Johansson, L. Hall, S. Sikström, and A. Olsson, "Failure to Detect Mismatches between Intention and Outcome in a Simple Decision Task," Science, Vol. 310, pp. 116-119, Oct. 2005.
  18. Reported in M. H. Bazerman and D. Moore, Judgment in Managerial Decision Making (7th ed.), Wiley, Hoboken, NJ, 2008.
  19. D. Kahneman and D. Lovallo, "Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk and Risk Taking," Management Science, 39, 17-31, 1993.
  20. A. C. Cooper, C. Y. Woo, and W. C. Dunkelberg, "Entrepreneurs' Perceived Chances for Success," Journal of Business Venturing, 3(2), 97-109, 1988.
  21. R. P. Larrick, "Debiasing," in D. J. Koehler and N. Harvey, eds., Blackwell Handbook of Judgment and Decision Making, Blackwell Publishers, Oxford, England, 2004.
  22. S. Moran, Y. Bereby-Meyer, and M. Bazerman, "Stretching the Effectiveness of Analogical Training in Negotiations: Learning Core Principles for Creating Value," Negotiation & Conflict Management Research, 1(2), 99-134, 2008.
  23. M. H. Bazerman, S. B. White, and G. F. Loewenstein, "Perceptions of Fairness in Interpersonal and Individual Choice Situations," Current Directions in Psychological Science, 4, 39-43, 1995.
  24. P. Slovic and B. Fischhoff, "On the Psychology of Experimental Surprises," Journal of Experimental Psychology: Human Perception and Performance, 3, 544-551, 1977.
  25. K. Savitsky, L. Van Boven, N. Epley, and W. Wright, "The Unpacking Effect in Responsibility Allocations for Group Tasks," Journal of Experimental Social Psychology, 41, 447-457, 2005.
  26. T. D. Wilson and N. Brekke, "Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations," Psychological Bulletin, 116, 117-142, July 1994.
  27. M. W. Merkhofer, "Assessment, Refinement and Narrowing of Options," Chapter 8 in V. H. Dale and M. R. English, eds., Tools to Aid Environmental Decision Making, Springer, New York, 1998.
  28. This figure is similar to the approach and corresponding figure in R. Howard, "The Foundations of Decision Analysis Revisited," Chapter 3 in W. Edwards, R. Miles, and D. von Winterfeldt, eds., Advances in Decision Analysis, Cambridge University Press, 2007.
  29. R. J. Jagacinski and J. Flach, Control Theory for Humans: Quantitative Approaches to Modeling Performance, CRC Press, pp. 318-319, 2003.
  30. K. Keyes, Jr., Taming Your Mind, Love Line Books, 1991.