Draft - fully annotated - The Drunkard's Walk
This sentence applies equally to actors and to many other measures of success or failure.
When I try to judge someone by their level of success, I remind myself that if they had the chance to start over, Stephen King might have been just another Richard Bachman, and Naipaul might have been just another writer still struggling.
But the editors of The Sunday Times deliberately presented these submitted novel excerpts as the work of a struggling, not-yet-famous writer, so that publishers and literary agents could not recognize the manuscripts’ true origins. What was the fate of these two very successful novels? Every response was a rejection.
Other patients, who knew nothing about the impostors, would often see through them, saying things like “You’re not crazy, you’re a reporter… you’re doing an undercover investigation of this hospital.”
After the doctors’ examinations, the subjects no longer simulated any abnormal symptoms and reported that those strange sounds had disappeared. Following Rosenhan’s prior instructions, they then waited for hospital staff to discover that they were not actually insane. But no one noticed this. Instead, hospital staff interpreted the impostors’ behavior through the preset lens of mental illness. When they saw a patient writing in a diary, a note was added to their care record reading “patient occupied with writing,” treating writing as a sign of mental illness. And when another patient became angry because of staff mistreatment, that behavior was likewise regarded as part of their illness.
Moviegoers tend to show more liking for a film they’ve heard is good in advance. In this example, small random influences produced a snowball effect, making the song’s future outcomes drastically different.
In the cases of Pearl Harbor (and the 9/11 attacks), when we look back at what happened before the attacks, the events seem to point in an obvious direction. But as with the examples of dye molecules, weather, and chess, that sense of inevitability quickly disappears if we track the situation in advance. First, aside from the intelligence I mentioned, there was a massive volume of useless intelligence. New tips and reports piled up by the week. Sometimes these messages seemed like warnings, sometimes mysterious, but later proved misleading or inconsequential. Even if we focus on reports that turned out to be important in hindsight, before the attack each such report had plausible alternative explanations and did not prove that a sneak attack on Pearl Harbor was underway. For example, the order that divided Pearl Harbor into five zones was similar to orders sent to Japanese intelligence officers in Panama, Vancouver, San Francisco, and Portland, Oregon. As for the loss of radio monitoring, that was not unprecedented; in the past it often meant warships were docked at home ports and communicated via shore cables. More importantly, even if one believed war expansion was imminent, there was much information indicating the attack might occur elsewhere—for example, the Philippine Islands, the Malay Peninsula, or Guam. In the Pearl Harbor case, the number of factors that redirected our attention was certainly not as numerous as the water molecules encountered by dye molecules, but it was sufficient to blur a clearly defined picture of the future.
We should also learn not only to seek reasons that prove we are right, but to spend just as much time looking for evidence that proves we are wrong.
Confirmation bias causes many unfortunate consequences in practice. If a teacher initially believes one student is smarter than another, they will selectively focus on evidence that tends to confirm that belief. If an employer interviews a candidate who fits some of their preconceptions, they typically form a first impression quickly and then use the rest of the interview to look for information supporting it. If a clinical counselor is told in advance that a client is naturally combative, they will tend to conclude the client is indeed combative even if the client is no more combative than average. People also often interpret the behavior of minority groups according to a preconceived “template.” Human brains have evolved to recognize patterns efficiently. But as confirmation bias shows, our attention goes mainly to finding and confirming these patterns rather than to minimizing erroneous conclusions. Still, we need not be pessimistic, because these biases can be overcome. Merely realizing that random events can produce patterns is a start. Learning to question our views and theories is another major step forward. Finally, we should learn not only to look for reasons that prove we are right, but to spend as much time looking for evidence that proves we are wrong.

Our journey through randomness is now nearly complete. We began with simple rules and saw how they manifest in complex systems. So how important is randomness in the most important complex system, our own fate? This is a hard question that encompasses much of what we have considered so far. I do not expect to provide a complete answer, but I hope to make the issue clearer. The title of the next chapter is also the title of the book.
Researchers gathered a group of undergraduates, some of whom supported the death penalty and others opposed it. Then they gave all the students the same scholarly studies on the death penalty’s actual effects. Half the conclusions in these materials supported the view that the death penalty is a deterrent, and the other half contradicted it. The researchers also provided clues indicating weaknesses in these studies. Then the students were asked to independently rate the quality of each study and to state whether and to what extent these studies affected their attitude toward the death penalty. Participants gave higher ratings to studies that supported their initial views, even though both sides’ studies used the same methods.
If we form an opinion that a new neighbor is unfriendly based on weak evidence, then any future behavior that can be interpreted as “unfriendly” will stick in our memory while other behaviors are easily forgotten. If we trust a politician, then when she does well the credit naturally goes to her; when she does poorly, the blame is placed on the environment or her opponents.
We not only tend to seek evidence that confirms our preconceived notions, but we also interpret ambiguous evidence in ways favorable to our beliefs. This is a big problem because data are often ambiguous.
The philosopher Francis Bacon wrote in 1620: “When the human understanding has adopted an opinion, it collects any instances which serve to confirm it, and though the contrary instances may be more numerous and more weighty, it either does not notice them or else rejects them, in order that this opinion will remain unshaken.”
Large companies have complex operations and are to a great extent influenced by unpredictable market factors, so the relationship between senior managers’ intelligence and company performance is far from direct.
Regardless of differences in CEOs’ personal abilities, their effects are always filtered through uncontrollable factors in the larger system, just as differences between musicians’ performances become less apparent when broadcast over a noisy radio.
One group tried to make the flicker occur clockwise while the other group tried to make it counterclockwise, and both groups felt the flicker’s direction matched the direction they desired.
Researchers showed subjects a ring of randomly flickering lights and told them they could, by concentrating, make the bulbs light in clockwise order; the subjects then truly felt they had done so and were astonished.
One of the easiest things for psychology researchers to do is to induce people to mistake luck for ability, or to mistake purposeless action for control. For example, we can have people press a button to control a light’s flicker when in fact the button does nothing. Even though the flicker is completely random, people still believe they are controlling the light.
Even merely giving subjects control over the order of the tests, even if that control is meaningless, can reduce their anxiety.
In Nazi concentration camps, survival depended on the ability to plan and organize: to preserve some space for independent action and to maintain control over certain crucial aspects of one’s life, no matter how oppressive the surrounding environment.
People like to exert control over their environment. Some people will drive after half a bottle of Scotch but behave erratically at a little turbulence on a plane; the reason is that in the first situation they believe they are in control, whereas in the latter the control lies with others. This desire for control is not unfounded, because things under one’s control
People would need to be exposed to carcinogen concentrations typically seen only in chemotherapy patients or certain occupational settings to produce cancer clusters large enough to warrant epidemiological investigation; in fact, the required concentration is far higher than what people encounter near contaminated homes or schools. Yet people still resist the explanation that such clusters form randomly.
If 10,000 people watch the stock market and try to pick the most profitable stocks, there will always be someone among them who succeeds, and their success is due solely to luck.
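This arithmetic is easy to check with a quick simulation. The sketch below is illustrative: it assumes 10,000 forecasters, a ten-year horizon, and a market whose direction is a fair coin flip, so a forecaster is "perfect" only by luck.

```python
import random

random.seed(0)

N_PICKERS = 10_000   # hypothetical number of market forecasters
N_YEARS = 10         # each makes one up-or-down call per year

def perfect_records(n_pickers: int, n_years: int) -> int:
    """Count forecasters who, guessing by coin flip, are right every year."""
    count = 0
    for _ in range(n_pickers):
        if all(random.random() < 0.5 for _ in range(n_years)):
            count += 1
    return count

lucky = perfect_records(N_PICKERS, N_YEARS)
expected = N_PICKERS / 2 ** N_YEARS  # about 9.8 flawless records by luck alone
print(lucky, expected)
```

With these assumptions, roughly ten people compile a flawless decade of predictions with no skill whatsoever.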
Measured by the Dow Jones Industrial Average, the system made correct predictions for 11 consecutive years, from 1979 to 1989; the 1990 prediction failed, but from 1991 to 1998 the method again gave correct answers year after year. Although Koppett hit 18 out of 19 predictions, I can confidently say that this steady performance had nothing to do with skill. Why? Because Koppett was a columnist for The Sporting News, and his method was simply to predict the market’s direction from the result of the Super Bowl (the NFL championship game).
In a random sequence of 10^1,000,007 zeros and ones, there is a better than fifty percent chance of finding at least ten non-overlapping subsequences, each consisting of one million consecutive zeros.
Finding patterns and assigning meaning to them is human nature.
If my Viking-brand stove has some flaws and, by chance, an acquaintance tells me she had the same experience, I’ll warn friends away from the brand; if on several United Airlines flights my fellow passengers seem consistently ruder than those on other airlines I’ve flown recently, I’ll avoid flying United. The sample sizes in such cases are tiny, but our instinct insists on seeing a pattern.
When physicists want to determine the significance of data from a supercollider, they don’t stare at the curves trying to spot peaks emerging from noise; they use mathematical techniques to decide. Significance testing is one such statistical method.
Then we can use a chi-squared test to determine how likely it is that the cereal with the most votes is truly preferred by consumers rather than just having won by chance.
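A minimal sketch of such a test, with invented vote counts (120/95/85 across three cereals) and a Monte Carlo p-value standing in for the chi-squared table:

```python
import random

random.seed(1)

# Hypothetical taste test: 300 shoppers each pick a favorite of 3 cereals.
observed = [120, 95, 85]     # invented vote counts, for illustration
n, k = sum(observed), len(observed)
expected = n / k             # 100 votes each if no cereal is really preferred

def chi2_stat(counts, exp):
    return sum((c - exp) ** 2 / exp for c in counts)

stat = chi2_stat(observed, expected)   # (20**2 + 5**2 + 15**2) / 100 = 6.5

def simulate_p(stat, n, k, trials=5_000):
    """Monte Carlo p-value: how often does pure chance look this lopsided?"""
    hits = 0
    for _ in range(trials):
        counts = [0] * k
        for _ in range(n):
            counts[random.randrange(k)] += 1
        if chi2_stat(counts, n / k) >= stat:
            hits += 1
    return hits / trials

p = simulate_p(stat, n, k)
print(stat, p)   # p lands near 0.04: unlikely, but not overwhelming evidence
```

A small p-value says the vote split would rarely arise if shoppers were indifferent, which is exactly the judgment the text describes.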
If a process lacked such regression behavior, it would inevitably run out of control. For example, suppose tall fathers’ sons are, on average, as tall as their fathers. Since heights vary, some sons will be taller than their fathers. In the next generation, assuming those taller sons’ sons are on average as tall as their fathers, some of them will also exceed their fathers in height. Generation after generation, the tallest people would grow ever taller. But because regression exists, this does not happen. The same argument applies to innate intelligence, artistic talent, or golf skill. Therefore, very tall parents should not expect equally tall children, and very intelligent parents should not expect equally intelligent children.
After measuring these seeds, Galton noticed that offspring of large seeds had a median diameter smaller than their parent seeds, while offspring of small seeds were larger than their parents. Later, using data from a London laboratory, he found the same pattern in human parents and children’s heights. He called this phenomenon regression—when, in correlated measurements, a value far from the mean is followed by another measurement closer to the mean.
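Galton's pattern can be reproduced in a toy model (all numbers below are illustrative, not his data): give father and son a shared genetic component plus independent "luck," then look at the tallest fathers.

```python
import random

random.seed(2)

MEAN = 175.0        # assumed population mean height, cm
GENETIC_SD = 5.0    # shared component (illustrative numbers)
NOISE_SD = 5.0      # individual "luck" component

# Father and son share the genetic term but draw independent luck terms.
pairs = []
for _ in range(100_000):
    g = random.gauss(0, GENETIC_SD)
    father = MEAN + g + random.gauss(0, NOISE_SD)
    son = MEAN + g + random.gauss(0, NOISE_SD)
    pairs.append((father, son))

# Among the tallest 10% of fathers, sons average closer to the mean.
pairs.sort(key=lambda p: p[0], reverse=True)
top = pairs[:10_000]
avg_father = sum(f for f, _ in top) / len(top)
avg_son = sum(s for _, s in top) / len(top)
print(round(avg_father, 1), round(avg_son, 1))
```

The tallest fathers are tall partly by luck; their sons inherit the genes but redraw the luck, so the sons' average falls between their fathers' and the population mean.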
Not all social events follow a normal distribution, and this is especially evident in finance. For example, if movie box office returns followed a normal distribution, most films’ earnings would cluster around an average and about two-thirds would fall within one standard deviation of that mean. But in the film industry, 20% of movies generate 80% of box office revenue.
Quetelet recognized that the “average person” varies across cultures and that this “average person” changes as society changes.
Wolfers examined games with large mismatches in team strength and found a surprising result: in these games, the favorites beat the spread far less often than expected, while wins by less than the spread occurred inexplicably more often. This is another Quetelet anomaly.
For a favored team, bookmakers set a point spread to encourage roughly even betting on both teams. For example, if everyone believes Caltech’s basketball team is stronger than UCLA’s (and for college basketball fans in the 1950s that was true), the bookmaker will only pay bettors who backed Caltech if Caltech beats UCLA by more than 13 points. This spread balances the chances of winning for both sides. Although the spread is set by the bookmaker, it is ultimately determined by the bettors, because the bookmaker adjusts the spread based on betting patterns to balance demand. (Bookmakers take a cut from the bets, so they seek equal betting on both sides to guarantee a profit regardless of the outcome.)
If the baker truly selected a loaf at random to sell him, then the number of loaves somewhat heavier or lighter than the average weight should decrease according to the bell curve of the law of errors described in Chapter 7. But Poincaré found too few light loaves and too many heavy loaves. He concluded that the baker had not stopped making underweight loaves; he had simply been handing the buyer the largest loaf on the table.
When subjects tasted wines they believed to be more expensive, the brain region generally associated with pleasure response was indeed in a more excited state.
In a 2008 study, researchers had subjects rate five wines. They scored the bottle labeled $90 higher than another labeled $10, but the clever researchers had actually poured the same wine into both bottles.
Studies show that even professionally trained tasters can rarely distinguish more than three or four components in a mixture. In that case, discerning the medley that produces a wine’s flavor is truly a challenge.
Sir Roy Meadow estimated the probability as follows. First, he put a baby’s chance of dying of SIDS at 1 in 8,543. The probability that two children both die of SIDS is then that number squared, i.e., about 1 in 73 million. But this calculation assumes the two deaths are independent—that is, that no environmental or genetic factor raises the second child’s risk even after an older sibling has died of SIDS.
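The arithmetic, and its failure point, fit in a few lines. The 1-in-100 conditional risk below is a purely hypothetical number, chosen only to show how sensitive the result is to the independence assumption:

```python
# Meadow's published figure, then the same product without the independence
# assumption. The 1-in-100 conditional risk is hypothetical, not a medical
# estimate; it only illustrates the effect of shared risk factors.
p_first = 1 / 8543

p_both_independent = p_first ** 2                    # ~1 in 73 million
p_second_given_first = 1 / 100                       # hypothetical shared-risk figure
p_both_dependent = p_first * p_second_given_first    # ~1 in 854,300

print(round(1 / p_both_independent), round(1 / p_both_dependent))
```

Squaring only works for independent events; any shared genetic or environmental factor makes the true joint probability vastly larger than 1 in 73 million.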
Assume about 1% of men who got tested for HIV in California in 1989 were infected. That means among 10,000 test results there would be 100 true positives rather than the previously mentioned 1, plus 10 false positives. Then the probability that a person with a positive result is truly infected becomes 10/11. That is why knowing whether the person belongs to a high-risk group is very helpful when interpreting test results.
Suppose the false-negative rate is nearly zero; then the roughly 1 person in 10,000 who is truly infected will test positive. In addition, with the 1-in-1,000 false-positive rate the doctor cited, about 10 of the uninfected people would also test positive despite not carrying HIV.
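Both the low-risk and high-risk cases reduce to one application of Bayes' theorem; a short sketch, assuming perfect sensitivity as the text does:

```python
def posterior(prevalence: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(infected | positive test), by Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Low-risk group: about 1 infection per 10,000 people tested.
low = posterior(1 / 10_000, 1.0, 1 / 1_000)    # ~1/11, roughly 9%

# High-risk group: about 1% prevalence.
high = posterior(1 / 100, 1.0, 1 / 1_000)      # ~10/11, roughly 91%

print(round(low, 3), round(high, 3))
```

The test is identical in both cases; only the prior changes, and it swings the meaning of a positive result from "probably fine" to "probably infected."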
Even if a roulette wheel were perfectly balanced, the numbers 0, 1, 2, 3, etc., would not appear with exactly equal frequency. The more frequent numbers will not politely wait to let the lagging ones catch up. In reality, some numbers will appear more often than average and others less often.
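A simulation makes the distinction concrete (a European 37-pocket wheel is assumed here): relative frequencies converge toward 1/37, yet the absolute gap between the hottest and coldest numbers stays wide.

```python
import random

random.seed(3)

POCKETS = 37          # European wheel: numbers 0 through 36 (assumed here)
SPINS = 370_000       # 10,000 expected hits per number

counts = [0] * POCKETS
for _ in range(SPINS):
    counts[random.randrange(POCKETS)] += 1

# Relative frequencies converge toward 1/37 ...
worst_freq_error = max(abs(c / SPINS - 1 / POCKETS) for c in counts)
# ... while the absolute gap between hottest and coldest numbers stays wide.
spread = max(counts) - min(counts)
print(spread, worst_freq_error)
```

This is the law of large numbers correctly stated: proportions settle down, raw counts never "even out."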
There is another similar but less often mentioned law that examines the lowest-order digits. Many types of data follow Benford’s law, especially financial figures. In fact, this law is practically tailor-made for detecting fraud in large financial datasets.
According to Benford’s law, the frequencies of digits 1 through 9 are not equal. Instead, leading digit 1 occurs about 30% of the time, 2 about 18%, and so on down to 9, which appears as the leading digit roughly 5% of the time.
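The law itself is one line of math, and a toy multiplicative process (a quantity compounding at 7% per step, an arbitrary illustrative rate) drifts toward it:

```python
import math

# Benford frequencies: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x: float) -> int:
    return int(x / 10 ** math.floor(math.log10(x)))

# A quantity compounding at 7% per step (an arbitrary illustrative rate)
# produces Benford-distributed leading digits, not uniform ones.
value = 1.0
hits = {d: 0 for d in range(1, 10)}
for _ in range(10_000):
    value *= 1.07
    hits[leading_digit(value)] += 1

freq_1 = hits[1] / 10_000
print({d: round(p, 3) for d, p in benford.items()}, freq_1)
```

Growth by a fixed percentage spends more "time" with small leading digits (getting from 1 to 2 takes many more 7% steps than getting from 8 to 9), which is why accumulative quantities like debts favor small leading digits.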
Newcomb noticed around 1881 that the pages of logarithm tables used for numbers beginning with 1 looked dirtier and more worn than the pages for numbers beginning with 2 through 9. In particular, the pages for numbers beginning with 9 looked clean and new.
According to a rule called Benford’s law, numbers generated by accumulative processes like national debt are not random; in fact, they tend to favor smaller values.
Virginia’s lottery actually violated the above principle. The state’s lottery picks six numbers from 1 to 44 to determine winners. With a large enough Pascal’s triangle, you can see that choosing 6 numbers from 44 yields 7,059,052 different combinations. The jackpot was $27 million, and including second, third, and fourth prizes the total prize pool came to $27,918,561. A group of clever Australians calculated that if they bought a ticket for every one of the 7,059,052 possible combinations, the tickets’ total value would equal the prize pool, so each ticket would be worth $27,918,561 divided by 7,059,052, or about $3.95. What did Virginia charge per ticket? The usual $1. The Australians quickly found 2,500 small investors in their country and in New Zealand, Europe, and the U.S. willing to contribute $3,000 each. If the plan succeeded, each $3,000 investment would return about $10,800. There were risks. First, because they were not the only ticket buyers, someone else might also hit the jackpot, forcing them to share the prize. Of the 170 draws held, 120 had no winner, 40 had a single winner, and only 10 had three winners. If those figures reflect the true probabilities, the investors had a 120/170 chance of keeping the prize to themselves, a 40/170 chance of getting half, and a 10/170 chance of receiving only a quarter. Using Pascal’s expected-value calculation, the expected prize equals (120/170 × $27.9M) + (40/170 × $13.95M) + (10/170 × $6.975M) = $23.4M, or about $3.31 per ticket. Compared with the $1 ticket price, even after expenses, this promised a substantial return.
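The expected-value arithmetic can be sketched directly (the last term follows the text's $6.975M figure, i.e. a quarter share of the pool):

```python
from math import comb

POOL = 27_918_561        # total prize pool, from the text
TICKETS = comb(44, 6)    # ways to choose 6 numbers out of 44

face_value = POOL / TICKETS   # worth of a ticket if no one else wins: ~$3.95

# Share pattern over the 170 past draws: sole winner, half share, or the
# quarter share implied by the text's $6.975M term.
ev_pool = (120 / 170) * POOL + (40 / 170) * (POOL / 2) + (10 / 170) * (POOL / 4)
ev_per_ticket = ev_pool / TICKETS
print(TICKETS, round(face_value, 2), round(ev_per_ticket, 2))
```

Even after discounting for the chance of sharing the jackpot, each $1 ticket carried more than $3 of expected value.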
Suppose California offered its citizens a game in which each player pays $1 or $2, most get nothing, but someone wins a fortune while another person dies tragically. Would anyone play? Yes. Not only would people play, they would do so enthusiastically. That game is the California lottery. Although the state’s ads do not use my phrasing, the lottery’s consequences are effectively as described. For each jackpot winner there are millions of competitors driving to buy tickets, and some of those people die in traffic accidents on the way. Based on National Highway Traffic Safety Administration statistics and some assumptions about each person’s driving distance, number of tickets purchased, and how many people get into typical traffic accidents, a reasonable estimate of this death rate is about one person per draw.
In summary, if you initially “picked the lucky door” (probability 1/3), then staying with your choice wins; if you initially “picked the unlucky door” (probability 2/3), then because of the host’s action you can win by switching. Thus the key to the final decision is the following guess: which situation were you in at the start?
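The 1/3-versus-2/3 logic is easy to check by simulation; a minimal sketch of the game:

```python
import random

random.seed(4)

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a goat door that is not the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

TRIALS = 100_000
switch_wins = sum(play(True) for _ in range(TRIALS)) / TRIALS
stay_wins = sum(play(False) for _ in range(TRIALS)) / TRIALS
print(switch_wins, stay_wins)   # near 2/3 and 1/3 respectively
```

The win rates land near 2/3 for switching and 1/3 for staying, matching the argument above.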
The great American physicist Richard Feynman once told me that if you only read other people’s derivations over and over you will never understand any physics problem. He said the only way to truly understand a theory is to carry out the derivation yourself (or perhaps ultimately show it to be wrong!).
When samples are collected and handled, they can be accidentally swapped or mislabeled, results can be misrepresented, or findings can be reported incorrectly. These errors are rare, but far from as rare as random matches. For example, the Philadelphia crime lab admitted that in a rape case it had accidentally swapped the defendant’s and the victim’s reference samples, and the DNA testing company Cellmark Diagnostics admitted to a similar error. Unfortunately, DNA match statistics presented in court are so persuasive that, despite 11 alibi witnesses, an Oklahoma court sentenced a man named Timothy Durham to more than 3,100 years in prison. It was later discovered that in the initial analysis the lab had failed to fully separate the rapist’s DNA from Durham’s in the sample, and the combination produced a positive match. The error was discovered on retesting, and Durham was released, but by then he had served nearly four years in prison.
Which are more numerous: English words of six letters with n as the fifth letter, or English six-letter words ending in -ing? Most people choose the latter. Why? Because it is easier to recall words ending in -ing than general six-letter words with n as the fifth letter. But we don’t need the Oxford English Dictionary—or even to know how to count—to prove this guess wrong: the set of six-letter words whose fifth letter is n necessarily includes all six-letter words ending in -ing. Psychologists call this error the availability bias, since when reconstructing the past we assign undue importance to the most vivid and easily recalled items.
The ability to judge meaningful links among phenomena in the environment is so important that in some situations it is worth paying the cost of chasing mirages. Suppose a hungry cave dweller sees a faint blur of green on a distant rock; he can ignore it and miss a tasty lizard, or dash forward and attack only to find a few straw leaves. The cost of the first mistake is clearly larger. Thus, by this theory, evolution may have favored avoiding the costlier mistake at the expense of occasionally making the lesser one.
Linda has a franchise for IHOP (an American restaurant chain). Linda has had gender reassignment surgery and is now called Larry. Linda has had gender reassignment surgery, is now called Larry, and owns an IHOP franchise. Almost no one judges the last statement more probable than the other two.
We can model this experiment mathematically as follows: line the audience up and have them flip a coin in sequence—heads means they watch Star Wars Prequel A, tails Prequel B. Since the coin has equal chance of heads or tails, you might think each film should lead about half the time in this experimental box-office contest. Randomness theory tells us otherwise: the most likely number of lead changes is zero; that is, in a contest for 20,000 viewers one film is likely to lead from start to finish, and this outcome is 88 times more likely than the films trading the lead back and forth.
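The claim that zero is the most likely number of lead changes can be checked with a scaled-down simulation (1,000 viewers and 2,000 repetitions here, rather than 20,000 viewers):

```python
import random
from collections import Counter

random.seed(5)

def lead_changes(n_voters: int) -> int:
    """Count how often the leading film flips over one audience sequence."""
    diff, leader, changes = 0, 0, 0
    for _ in range(n_voters):
        diff += 1 if random.random() < 0.5 else -1
        if (diff > 0 and leader == -1) or (diff < 0 and leader == 1):
            changes += 1
        if diff != 0:
            leader = 1 if diff > 0 else -1
    return changes

# The exact distribution of lead-change counts is highest at zero and falls
# off from there (Feller's sign-change result), so small counts dominate.
counts = Counter(lead_changes(1_000) for _ in range(2_000))
print(counts.most_common(3))
```

In the simulated tally, runs with very few lead changes are far more common than runs where the lead trades hands twenty or more times.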
There is a gulf of randomness and uncertainty between the creation of a great novel (or a piece of jewelry, or a cookie sprinkled with chocolate chips) and the towering stacks of that novel in thousands of bookstores (or matched sets of jewelry or bags of cookies) after publication. It is this gulf that explains why successful people in all fields almost invariably belong to a particular kind of person—those who never give up.
John Kennedy Toole—after many rejections became utterly despairing about his work’s publication and committed suicide. His mother kept his manuscript. Eleven years later A Confederacy of Dunces was published, won the Pulitzer Prize for Fiction, and sold nearly two million copies.
Dr. Seuss’s first children’s book And to Think That I Saw It on Mulberry Street was rejected by 27 publishers. J.K. Rowling’s first Harry Potter manuscript was rejected nine times.
Opposite those who later succeeded stand the writing profession’s well-known other side of the coin: the many authors of enormous potential who never made it, the would-be Grishams who quit after the first twenty rejections, or the would-be Rowlings who gave up after the first five.
Are we, like those instructors, inclined to believe that harsh criticism improves children’s behavior or employee performance?
Regression means that in any sequence of random events, an unusually extreme event is most likely to be followed by a more ordinary one, purely by chance. The flight-instructor example illustrates this: each trainee has a baseline ability to fly a fighter. Improvement depends on many factors and requires long practice. Although flying skill improves gradually with training, the change is too small to show up between two consecutive flights, so an especially good or bad performance is largely a matter of luck. If a pilot makes an exceptionally good landing one day, his performance the next day will probably be closer to his norm—that is, worse. If the instructor praised him after the first landing, the praise will appear to have done no good. Conversely, if he makes an exceptionally bad landing, say running off the runway and into the vat of corn stew in the base cafeteria, he is likewise likely to perform closer to his norm the next day—that is, better. If the instructor habitually screams “you stupid ape!” at a bad landing, the scolding will superficially appear to have worked.
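A toy model with no feedback effect at all reproduces the illusion (the skill and noise values are arbitrary): rank cadets by their first landing, then compare their second.

```python
import random

random.seed(6)

SKILL = 50.0    # every cadet's fixed ability (arbitrary units)
NOISE = 10.0    # luck component of any single landing

N = 100_000
first = [SKILL + random.gauss(0, NOISE) for _ in range(N)]
second = [SKILL + random.gauss(0, NOISE) for _ in range(N)]

# Worst 10% of first landings ("scolded"), best 10% ("praised").
ranked = sorted(range(N), key=lambda i: first[i])
worst, best = ranked[:N // 10], ranked[-(N // 10):]

after_scolding = sum(second[i] for i in worst) / len(worst)
after_praise = sum(second[i] for i in best) / len(best)
print(round(after_scolding, 1), round(after_praise, 1))
```

The "scolded" group improves and the "praised" group declines, even though the model contains no response to feedback whatsoever: both groups simply regress to the same mean.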
The part of the human brain that evaluates uncertain situations is tightly linked with the part that processes emotion—a human trait often considered the main source of irrationality.
Taleb mocked Soros’s former partner Rogers, saying that for someone who cannot distinguish probability from expectation, it seems he made far too much money in his lifetime.
A sloppy way to say it is: you should bet on things with a high chance of success. That is not entirely correct; you should bet on things with a positive expected value.
The first half of entrepreneurship is rapid trial and error; once you find an opportunity with positive expected value, work to scale it up aggressively.
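The distinction between probability and expectation can be shown with two made-up bets (all probabilities and payoffs below are invented for illustration):

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * x for p, x in outcomes)

# Wins 70% of the time, but the rare loss is large: negative expectation.
frequent_winner = expected_value([(0.70, +1.0), (0.30, -10.0)])

# Wins only 10% of the time, but the payoff dwarfs the stake: positive.
rare_winner = expected_value([(0.90, -1.0), (0.10, +20.0)])

print(frequent_winner, rare_winner)
```

The first bet wins most of the time yet loses money on average; the second loses most of the time yet is the one worth taking.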