Probability is the likelihood or chance that something will happen. Probability is an estimate of the relative average frequency with which an event occurs in repeated independent trials. The relative frequency is always between 0% (the event never occurs) and 100% (the event always occurs). Probability gives us a tool to predict how often an event will occur, but does not allow us to predict when exactly an event will occur. Probability can also be used to determine the conditions for obtaining certain results or the long-term financial prospects of a particular game; it may also help determine if a particular game is worth playing. It is often expressed as odds, a fraction or a decimal fraction (also known as a proportion). Probability and odds are slightly different ways of describing a player’s chances of winning a bet.
Note: This page is excerpted from Probability, Random Events, and the Mathematics of Gambling. References can be found on page 24 of the PDF available for download below.
Probability is the number of chances of winning divided by the total number of chances available. Probability is an ordinary fraction (e.g., 1/4) that can also be expressed as a percentage (e.g., 25%) or as a proportion between 0 and 1 (e.g., p = 0.25). If there are four tickets in a draw and a player owns one of them, his or her probability of winning is 1 in 4 or 1/4 or 25% or p = 0.25.
Odds are ratios of a player’s chances of losing to his or her chances of winning, or the average frequency of a loss to the average frequency of a win. If a player owns 1 of 4 tickets, his/her probability is 1 in 4 but his/her odds are 3 to 1. That means that there are 3 chances of losing and only 1 chance of winning. To convert odds to probability, take the player’s chance of winning, use it as the numerator and divide by the total number of chances, both winning and losing. For example, if the odds are 4 to 1, the probability equals 1 / (1 + 4) = 1/5 or 20%. Odds of 1 to 1 (50%) are called “evens,” and a payout of 1 to 1 is called “even money.” Epidemiologists use odds ratios to describe the risk for contracting a disease (e.g., a particular group of people might be 2.5 times more likely to have cancer than the rest of the population).
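To make the conversion concrete, here is a minimal Python sketch (the function names are illustrative, not from the source) that converts between “odds against” and probability:

```python
from fractions import Fraction

def odds_against_to_probability(losses, wins):
    """Convert 'odds against' (chances of losing to chances of winning)
    into a probability of winning: wins / (wins + losses)."""
    return Fraction(wins, wins + losses)

def probability_to_odds_against(p):
    """Convert a probability of winning into 'odds against' as (losses, wins)."""
    frac = Fraction(p).limit_denominator()
    return (frac.denominator - frac.numerator, frac.numerator)

# Odds of 3 to 1 against -> probability 1/4 (25%)
print(odds_against_to_probability(3, 1))            # 1/4
# Odds of 4 to 1 against -> probability 1/5 (20%)
print(float(odds_against_to_probability(4, 1)))     # 0.2
# Probability 1/4 -> odds of 3 to 1 against
print(probability_to_odds_against(Fraction(1, 4)))  # (3, 1)
```

Using exact fractions avoids the rounding errors that decimals would introduce for odds such as 2 to 1 (p = 1/3).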
In gambling, “odds” rarely mean the actual chance of a win. Most of the time, when the word “odds” is used, it refers to a subjective estimate of the odds rather than a precise mathematical computation. Furthermore, the odds posted by a racetrack or bookie will not be the “true odds,” but the payout odds. The true odds are the actual chances of winning, whereas the payout odds are the ratio of payout for each unit bet. A favourite horse might be quoted at odds of 2 to 1, which mathematically would represent a probability of 33.3%, but in this case the actual meaning is that the track estimates that it will pay $2 profit for every $1 bet. A long shot (a horse with a low probability of winning) might be quoted at 18 to 1 (a mathematical probability of 5.3%), but these odds do not reflect the probability that the horse will win, they mean only that the payout for a win will be $18 profit for every $1 bet. When a punter says “those are good odds,” he or she is essentially saying that the payout odds compensate for the true odds against a horse winning. The true odds of a horse are actually unknown, but most often the true odds against a horse winning are longer (a lower chance of a win) than the payout odds (e.g., payout odds = 3 to 1; true odds = 5 to 1). The posted odds of a horse actually overestimate the horse’s chance of winning to ensure that the punter is underpaid for a win.
Equally Likely Outcomes
Central to probability is the idea of equally likely outcomes (Stewart, 1989). Each side of a die or coin is equally likely to come up. Probability, however, does not always seem to be about events that are equally likely. For example, the bar symbol on a slot machine might have a probability of 25%, while a double diamond might have a probability of 2%. This does not actually contradict the idea of equally likely outcomes. Instead, think of the 25% as 25 chances and the 2% as two chances, for a total of 27 chances out of 100. Each of those 27 chances is equally likely. As another example, in rolling two dice there are 36 possible outcomes: (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 1) . . . (6, 6); and each of these combinations is equally likely to happen. A player rolling 2 dice, however, is most likely to get a total of 7 because there are six ways to make a 7 from the two dice: (1, 6), (2, 5), (3, 4), (4, 3), (5, 2) and (6, 1). A player is least likely to get a total of either 2 or 12 because there is only one way to make a 2 (1, 1) and one way to make a 12 (6, 6).
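The two-dice argument above can be checked by brute-force enumeration; this short Python sketch simply counts the 36 equally likely outcomes:

```python
from itertools import product
from collections import Counter

# Enumerate all 36 equally likely outcomes of rolling two dice
outcomes = list(product(range(1, 7), repeat=2))
totals = Counter(a + b for a, b in outcomes)

print(len(outcomes))              # 36 possible outcomes
print(totals[7])                  # 6 ways to make a 7: the most likely total
print(totals[2], totals[12])      # 1 way each to make 2 or 12: the least likely
print(totals[7] / len(outcomes))  # probability of a 7 = 6/36
```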
Independence of Events
A basic assumption in probability theory is that each event is independent of all other events. That is, previous draws have no influence on the next draw. A popular catch phrase is “the dice have no memory.” A die or roulette ball cannot look back and determine that it is due for a 6 or some other number. How could a coin decide to turn up a head after 20 tails? Each event is independent, and therefore the player can never predict what will come up next. If a fair coin were flipped 5 times and came up heads 5 times in a row, the next flip could be either heads or tails. The fact that heads have come up 5 times in a row has no influence on the next flip. It is wise not to treat something that is very, very unlikely as if it were impossible (see Turner, 1998). In fact, if a coin is truly random, it must be possible for heads to come up 1 million times in a row. Such an event is extraordinarily unlikely, p = (1/2)^1,000,000, but possible. Even then, the next flip is just as likely to be heads as it is tails. Nonetheless, many people believe that a coin corrects itself; if heads comes up too often, they think tails is due.
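A quick simulation illustrates this independence. The sketch below (the seed is arbitrary, chosen only for reproducibility) looks at every flip that follows a run of five heads and shows that heads still comes up about half the time:

```python
import random

random.seed(1)  # arbitrary seed, for a reproducible run

flips = [random.choice("HT") for _ in range(1_000_000)]
streak = list("HHHHH")

# Collect the flip that immediately follows every run of five heads
after_streak = [flips[i + 5]
                for i in range(len(flips) - 5)
                if flips[i:i + 5] == streak]

heads_rate = after_streak.count("H") / len(after_streak)
print(round(heads_rate, 2))  # close to 0.5: the streak has no influence
```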
To complicate matters, however, there are cases where random events are not completely independent. With cards, the makeup of the deck is altered as cards are drawn from the deck. As a result, the value of subsequent cards is constrained by what has already been drawn. Nonetheless, each of the cards that remains in the deck is still equally probable. If, for example, there are only six cards left in a deck, four 7's and two 8's, a 7 is twice as likely to be drawn as an 8, but the specific card, the 7 of spades, has the same probability of being drawn as the 8 of diamonds.
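The six-card example can be verified directly; in this Python sketch the card labels are invented for illustration:

```python
from fractions import Fraction

# Remaining deck: four 7s and two 8s (six cards total); labels are illustrative
deck = ["7S", "7H", "7D", "7C", "8D", "8H"]

p_seven = Fraction(sum(c.startswith("7") for c in deck), len(deck))
p_eight = Fraction(sum(c.startswith("8") for c in deck), len(deck))
p_specific = Fraction(1, len(deck))  # any single named card

print(p_seven)     # 2/3 -- a 7 is twice as likely as an 8
print(p_eight)     # 1/3
print(p_specific)  # 1/6 -- the same for the 7 of spades as for the 8 of diamonds
```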
Another key aspect to computing probability is factoring in the number of opportunities for something to occur. The more opportunities there are, the more likely it is that an event will occur. The more tickets a player buys or the more often a player buys them, the greater the player’s chances of winning. At the same time, the more tickets purchased, the greater the average expected loss. One thousand tickets means 1,000 opportunities to win, so that the chance of winning Lotto 6/49 goes from 1 in 14 million to 1 in 14,000. However, because the expected return is nearly always negative, the player will still lose money, on average, no matter how many tickets the player purchases. This is true whether the player buys several tickets for the same draw or one ticket for every draw. Adding more opportunities (e.g., more tickets, bingo cards or slot machines) increases a player’s chance of a win, but does not allow him/her to beat the odds.
One final aspect of probability is the fact that the likelihood of two events occurring in combination is always less than the probability of either event occurring by itself. Friday occurs once every 7 days (1/7) and the 13th day of the month comes once per month (about 1/30 on average). Friday the 13th, however, only occurs roughly once in 210 days (7 x 30), or once or twice per year.
To compute the joint probability of an event, multiply the probability of each of the two events. For example, the chances of rolling a 4 with a single die are 1/6, or 16.7%. The chances of rolling a 4 two times in a row are: 1/6 x 1/6 = 1/36 (2.78%). The chances of rolling a 4 three times in a row are 1/6 x 1/6 x 1/6 = 1/216 (0.46%). It is important to note, however, that the joint probability of two events occurring refers only to events that have not happened yet. If something has already happened, then its chance of occurring is 100% because it has already happened. If the number 4 came up on the last two rolls, the chances of rolling another 4 are 1/6 not 1/216 because the new formula is 1 x 1 x 1/6, not 1/6 x 1/6 x 1/6. Each event is an independent event. In addition, the chances of any number coming up twice in a row are 1/6, not 1/36. This is because there are six possible ways (opportunities) of getting the same number twice in a row: (1/6 x 1/6) x 6 = 6/36 = 1/6.
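The joint-probability arithmetic above can be reproduced with exact fractions:

```python
from fractions import Fraction

p4 = Fraction(1, 6)   # chance of a 4 on one roll of a fair die

print(p4 * p4)        # 1/36: a 4 twice in a row, before any rolls are made
print(p4 ** 3)        # 1/216: a 4 three times in a row
print(1 * 1 * p4)     # 1/6: a third 4 *after* two 4s have already happened
print(6 * p4 * p4)    # 1/6: *any* number twice in a row (six opportunities)
```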
It is the cumulative and multiplicative aspects of probability that lead people to overestimate their chances of winning. People tend to underestimate the chance of getting one or two of the same symbols on a slot machine because they do not take into account the number of opportunities. A number of studies have shown that people can unconsciously learn probability through experience (Reber, 1993). Suppose the chances of getting a diamond on a slot machine are 1 in 32 on each of three reels. The chance of getting at least one diamond is approximately 3 (the number of reels) x 1/32 = 9.4%. That is, the player will see a diamond on the payline roughly one time every 10.6 spins. But the chances of getting three diamonds would be 1/32 x 1/32 x 1/32 = 1/32,768 = 0.003%. Because we occasionally see one (9.4%) or two (0.3%) winning symbols on the payline, we may overestimate the chances of getting three of the big win symbols. This overestimation of the odds is also likely enhanced by seeing the big win symbols spin by on each spin, the occurrence of big win symbols above or below the payline, the distortion of the apparent odds caused by virtual reel mapping, and the larger number of big win symbols on the first two reels (see Turner & Horbay, 2004).
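For comparison, the slot-machine figures can be recomputed exactly. The 1-in-32 reel probability is the example’s assumption, and the additive shortcut (3 x 1/32) slightly overstates the exact binomial chance of at least one diamond:

```python
from math import comb
from fractions import Fraction

p = Fraction(1, 32)   # assumed chance of a diamond on each reel (from the example)
q = 1 - p

# Exact binomial chance of seeing exactly k diamonds across the three reels
for k in range(4):
    chance = comb(3, k) * p**k * q**(3 - k)
    print(k, float(chance))

print(float(3 * p))       # 0.09375: the additive shortcut (~9.4%)
print(float(1 - q**3))    # ~0.0909: the exact chance of at least one diamond
print(p ** 3)             # 1/32768: three diamonds
```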
Law of Averages and the Law of Large Numbers
Part of the explanation for the persistent belief among those who gamble that there are patterns in chance may stem from a misunderstanding of two related “laws” of statistics: the law of averages and the law of large numbers. The first is an informal folk theory of statistics; the second is a statistical law. These laws can be summarized as follows:
Law of Averages: Things average out over time.
Law of Large Numbers: As the sample size increases the average of the actual outcomes will more closely approximate the mathematical probability.
The law of large numbers is a useful way to understand betting outcomes. A coin on average will come up heads 50% of the time. It could nonetheless come up heads 100% of the time or 0% of the time. In a short trial, heads may easily come up on every flip. The larger the number of flips, however, the closer the percentage will be to 50%.
The law of averages is an informal approximation of the law of large numbers. The problem with the law of averages, as it is often understood, is that people assume that if something has not happened it is due to happen. For example, a person who gambles might expect that if heads have come up 10 times in a row, the next flip is more likely to be tails because the flips have to average out to 50%. Many people believe that deviations from chance are corrected by subsequent events and refer to the law of averages in support of their belief. Turner, Wiebe, Falkowski-Ham, Kelly and Skinner (2005) found that 36% of the general population believes that after 5 heads in a row the next flip is more likely to be tails. The law of large numbers, on the other hand, asserts only that the average converges towards the true mean as more observations are added. The average is not somehow corrected to ensure it reflects the expected average. The key difference is in the expectation. After a streak of 10 heads in a row, the law of averages would predict that more tails should come up so that the average is balanced out. The law of large numbers only predicts that after a sufficiently large number of trials, the streak of 10 heads in a row will be statistically irrelevant and the average will be close to the mathematical probability.
Some people accept the idea that the measured average will reflect the probability percentage in the long run, but still expect that if a trial of coin tosses began with a streak of heads, after a million flips extra tails would have to have occurred for the measured average to be close to 50%. One individual argued that there had to be a “bias” in favour of tails to get the average back to 50%. This is still incorrect. According to the law of large numbers, it is not the actual numbers of heads and tails that converge to the probability percentage, but their proportion of the total flips (the average). Suppose we start by getting 10 heads in a row and keep flipping the coin 1 million times. Does the difference of 10 go away? No. In fact, after 1 million flips the number of heads and tails could differ by as much as 1 or 2 thousand. Even a difference of 9,000 more tails than heads would still round off to 50% after one million flips. Consequently, the individual cannot use deviations from the expected average to get an edge.
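A simulation makes the distinction concrete. Starting from an imaginary streak of 10 heads (the seed is arbitrary, for reproducibility), the proportion of heads converges to 50% while the raw difference between heads and tails does not shrink to zero:

```python
import random

random.seed(7)  # arbitrary seed, for a reproducible run

# Begin with a streak of 10 heads, then flip 1,000,000 more times
heads = 10
n = 10
for _ in range(1_000_000):
    heads += random.random() < 0.5
    n += 1

proportion = heads / n
excess = 2 * heads - n           # heads minus tails
print(round(proportion, 3))      # ~0.5: the *proportion* converges
print(excess)                    # often in the hundreds or thousands: the *count* does not
```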
It is important to realize that this “law” is really only a statement that summarizes what has been observed, most of the time, over a large (in theory, infinite) number of events. It says absolutely nothing about what will happen next or what is likely to happen. Suppose a coin was tossed and the first 10 coin tosses resulted in the following sequence of heads and tails: T, H, H, H, H, H, T, H, H, H (20% tails, 80% heads). If the next 40 trials resulted in 19 tails and 21 heads (47.5% tails and 52.5% heads), the cumulative percentage of tails after all 50 trials would have moved from 20% to 42%—even though more heads came up during the subsequent flips. Incidentally, a player who bet $1 on tails on each of the 40 trials, assuming that tails was “due,” would have ended up losing $2. The average converges toward the expected mean, but it does not correct itself.
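The arithmetic of the 50-flip example can be verified directly:

```python
# The 50-flip example: the cumulative percentage of tails moves toward 50%
# even though heads outnumber tails in the later flips.
first_10 = list("THHHHHTHHH")   # the opening sequence from the text
tails_10 = first_10.count("T")
print(tails_10 / 10)            # 0.2: 20% tails after 10 flips

tails_50 = tails_10 + 19        # next 40 flips: 19 tails, 21 heads
print(tails_50 / 50)            # 0.42: 42% tails after 50 flips

# A $1 bet on tails for each of the 40 later flips: win 19, lose 21
print(19 - 21)                  # -2: a net loss of $2
```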
This can be illustrated by comparing Figures 1 and 2. Figure 1 shows the percentage of heads and tails in numerous coin tossing trials, while Figure 2 shows the actual number of heads and tails. In Figure 1, it is clear that the ratio of heads to tails is converging to the average of 50% as the number of tosses increases. Figure 2, however, shows that the actual number of heads and tails is not converging. In fact, as the number of tosses increases, the line depicting the balance of heads vs. tails drifts away from 0. In some cases, the line drifts up (more heads) and in some cases it drifts down (more tails). Many people who gamble understand the idea that the average converges towards the mean (Figure 1), but mistakenly believe that the actual number of heads and tails also converges towards the mean. The thick line in both graphs represents an individual coin that started out with more heads than tails. Notice how even though its average converges towards 50% (Figure 1), the line depicting the balance of heads and tails continues to drift upwards away from the mean (Figure 2).