Simulating Coin Flips

by Justin Skycak (@justinskycak)

Estimating probabilities by simulating a large number of random experiments.

This post is part of the book Introduction to Algorithms and Machine Learning: from Sorting to Strategic Agents. Suggested citation: Skycak, J. (2021). Simulating Coin Flips. In Introduction to Algorithms and Machine Learning: from Sorting to Strategic Agents. https://justinmath.com/simulating-coin-flips/



It’s often possible to estimate probabilities of events by simulating a large number of random experiments and measuring the proportion of trials in which the desired outcome occurred. This technique is known as Monte Carlo simulation, named after a famous casino town.

To simulate random experiments in code, it’s common to use a random number generator and then map the output of the random number generator to an outcome of the experiment. For example, to simulate a coin flip, one could generate a random decimal $r$ between $0$ and $1$ and apply the following function:

$\begin{align*} \textrm{outcome}(r) = \begin{cases} \textrm{heads} & \textrm{ if } r < 0.5 \\ \textrm{tails} & \textrm{ otherwise} \end{cases} \end{align*}$
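As a concrete sketch, this mapping could be written in Python using the built-in random module (the function name flip_coin is just an illustrative choice):

    import random

    def flip_coin():
        # generate a random decimal r between 0 and 1
        r = random.random()

        # map r to an outcome of the experiment
        if r < 0.5:
            return 'heads'
        else:
            return 'tails'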


Exercise

Write a function sim_probability(num_heads, num_flips) that uses Monte Carlo simulation to estimate the probability of getting a given number of heads in a given number of flips of a fair coin.

  1. First, simulate a large number of trials (say, 1000). In each trial, flip a coin num_flips times and count how many heads appear. You may import a random number generator for this.
  2. Then, count the number of trials in which exactly num_heads heads appeared and divide it by the total number of trials (1000).
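A minimal sketch following these two steps might look as follows (written in Python; the num_trials parameter and its default of 1000 are assumptions based on the suggestion above, and the comparison random.random() < 0.5 plays the role of the heads/tails mapping described earlier):

    import random

    def sim_probability(num_heads, num_flips, num_trials=1000):
        # count the trials in which exactly num_heads heads appear
        successes = 0
        for _ in range(num_trials):
            # flip a fair coin num_flips times and count the heads
            heads = sum(1 for _ in range(num_flips) if random.random() < 0.5)
            if heads == num_heads:
                successes += 1
        # estimated probability = proportion of trials with exactly num_heads heads
        return successes / num_trials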

To test your implementation, work out the result by hand for several combinations of num_heads and num_flips, and verify that your function consistently returns results that are close to these true values.
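For instance, in $3$ flips of a fair coin there are $2^3 = 8$ equally likely sequences of heads and tails, and exactly $\binom{3}{2} = 3$ of them contain two heads, so

$\begin{align*} P(2 \textrm{ heads in } 3 \textrm{ flips}) = \frac{3}{8} = 0.375, \end{align*}$

and sim_probability(2, 3) should consistently return values near $0.375.$ In general, the exact probability of getting $k$ heads in $n$ flips of a fair coin is $\binom{n}{k} \left( \tfrac{1}{2} \right)^n.$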

High-Level Sanity Checks

In this exercise, we tested the function by working out the probability by hand. However, Monte Carlo simulation is often used to estimate probabilities that are too complicated to work out by hand. In such cases, it’s common to test the implementation by

  • working out simple cases by hand when possible, and
  • performing high-level sanity checks.

High-level sanity checks still involve identifying and verifying true statements, but these true statements do not need to refer to exact values.

To illustrate, let’s brainstorm some characteristics of the distribution of the number of heads in a given number of coin flips:

  • The probabilities should form a distribution, i.e. they should be non-negative and add up to $1.$
  • This distribution should look somewhat like a bell curve, with the most likely outcome being that about half of the coins land on heads and the other half land on tails.
  • Since landing on heads is just as likely as landing on tails, the distribution should be symmetric.

To verify these characteristics, you can choose some value of num_flips, compute the values sim_probability(0, num_flips), sim_probability(1, num_flips), …, sim_probability(num_flips, num_flips), and then check the following:

  • The values are all non-negative and their sum is approximately $1.$
  • The graph of probability vs num_heads looks like a symmetric bell curve.

Try doing this for several values of num_flips.
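As a rough sketch, such a check could be scripted as follows (num_flips = 10 is an arbitrary choice, and sim_probability is the function from the exercise; since each value is estimated from an independent simulation, the sum and the symmetry will only be approximate):

    num_flips = 10  # arbitrary choice for this sanity check
    probabilities = [sim_probability(k, num_flips) for k in range(num_flips + 1)]

    # check 1: the values are non-negative and sum to approximately 1
    print('all non-negative:', all(p >= 0 for p in probabilities))
    print('sum of probabilities:', round(sum(probabilities), 3))

    # check 2: the printed bars should form a roughly symmetric bell curve,
    # peaking around num_flips / 2 heads
    for k, p in enumerate(probabilities):
        print(f'{k:2d} heads: ' + '*' * int(100 * p))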

