The story of the Monte Carlo method begins in the most unlikely way: with a mathematician in bed playing cards. In 1946, Stanisław Ulam, a Polish mathematician recovering from surgery, found himself playing solitaire to pass the time. Being a mathematician, he wondered: what are the chances of winning a game?
The problem was theoretically solvable: just enumerate every possible combination of cards and count the favorable ones. In practice, however, the number of combinations was so enormous that an exact calculation was completely impractical. Ulam then had an insight as simple as it was powerful: instead of computing the exact probability, why not simulate hundreds of games and count how many times you win?
The idea is disarmingly simple. If we play 1,000 games and win 230 of them, we can estimate the probability of winning at about 23%. The more games we simulate, the closer our estimate gets to the true value. This is, in essence, the Monte Carlo method: using random simulation to solve problems that would be too complex to tackle analytically.
Ulam shared the idea with his colleague John von Neumann, arguably the most brilliant mathematician of the 20th century, who immediately saw its potential. Von Neumann realized that ENIAC — one of the very first electronic computers, which filled an entire room — could run thousands of simulations in reasonable time. Together, they developed the method for a problem far more serious than solitaire: the diffusion of neutrons in atomic weapons, as part of the Manhattan Project at Los Alamos.
The name “Monte Carlo” was chosen as a code name, a reference to the famous Monte Carlo Casino in Monaco. Legend has it that the inspiration came from Ulam’s uncle, a notorious gambler. After all, the heart of the method is chance itself: generating random numbers to explore spaces of possibility too vast to traverse systematically.
From those early nuclear experiments of the 1940s, the Monte Carlo method has spread to every field of science and engineering. Today it is one of the most widely used computational tools in the world, from particle physics to finance, from cinematic rendering to drug discovery. Let’s see how it works.
The Monte Carlo method rests on a statistical principle we’ve encountered before: the law of large numbers. In simple terms, this law tells us that the average of a random sample approaches the population average as the sample grows. Translated into Monte Carlo language: the more simulations we run, the more accurate our result will be.
To run a Monte Carlo simulation, we need random numbers. In practice, computers don’t generate truly random numbers: they use deterministic algorithms that produce sequences of pseudo-random numbers with statistical properties indistinguishable from real randomness. In R, for example, the runif() function generates uniformly distributed numbers between 0 and 1.
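Python's `random` module works the same way as `runif()`: a deterministic generator behind a seed. A quick sketch showing that seeding makes the "random" sequence fully reproducible, which is exactly what makes simulations repeatable:

```python
import random

# Pseudo-random numbers are deterministic: the same seed
# always produces the same sequence.
random.seed(123)
first = [random.uniform(0, 1) for _ in range(3)]

random.seed(123)
second = [random.uniform(0, 1) for _ in range(3)]

print(first == second)  # True: identical sequences
```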
A crucial aspect is the rate of convergence. The Monte Carlo estimation error decreases as 1/√n, where n is the number of simulations. This means that to halve the error, we need to quadruple our simulations; to gain one more decimal digit of precision, we need 100 times more iterations. It’s not particularly efficient, but the beauty of the method lies in the fact that it works regardless of the problem’s complexity: whether the problem has 2 or 2,000 variables, the convergence rate remains the same.
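That 1/√n rate can be checked empirically. A sketch: repeat the same estimate many times at n and at 4n simulations, and compare the spread of the resulting estimates — quadrupling n should roughly halve it:

```python
import random
import statistics

def estimate_mean(n):
    """One Monte Carlo estimate of the mean of Uniform(0, 1)."""
    return sum(random.random() for _ in range(n)) / n

random.seed(0)
reps = 2000  # replicate each estimate many times to measure its spread

# Standard deviation of the estimator with n and with 4n simulations
sd_n  = statistics.stdev(estimate_mean(500) for _ in range(reps))
sd_4n = statistics.stdev(estimate_mean(2000) for _ in range(reps))

print(round(sd_n / sd_4n, 2))  # close to 2: quadrupling n halves the error
```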
In practice, we must always balance desired precision with available computational resources. Increasing the number of simulations comes at a cost in computation time. Fortunately, modern computers make this trade-off much more favorable than in the days of ENIAC.
Let’s see concretely how the Monte Carlo method is applied. The procedure follows four fundamental steps:
1. Define the model. First, we identify the problem’s variables and the probability distributions that govern them. For instance, if we want to simulate an investment’s return, the model will include the expected return (mean) and volatility (standard deviation), typically assuming normally distributed returns.
2. Generate random scenarios. Using a pseudo-random number generator, we produce thousands of possible scenarios. Each scenario represents an “alternative history”: one way things could play out.
3. Compute the result for each scenario. For each scenario, we apply the model and obtain a result. If we’re simulating an investment, the result is the final portfolio value.
4. Aggregate the results. Finally, we analyze the set of results: we compute the mean, the median, the percentiles. This gives us not just an estimate of the expected outcome, but an entire distribution of possibilities. And this is where Monte Carlo truly shines: it tells us not only “how much we’re likely to earn” but also “how much we could lose in the worst case.”
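The four steps above fit in a few lines of Python. This is only a sketch for a single-asset investment; the 7% return, 15% volatility, and initial capital are illustrative assumptions, not figures from the text:

```python
import random
import statistics

random.seed(1)

# 1. Define the model: one asset with normally distributed annual returns
mu, sigma, initial = 0.07, 0.15, 10_000  # assumed mean return, volatility, capital

# 2. Generate random scenarios
n = 10_000
returns = [random.gauss(mu, sigma) for _ in range(n)]

# 3. Compute the result for each scenario: final portfolio value
values = sorted(initial * (1 + r) for r in returns)

# 4. Aggregate: not just the mean, but the whole distribution
print(round(statistics.mean(values)))   # expected final value
print(round(values[int(0.05 * n)]))     # 5th percentile: a worst-case bound
```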
Let’s use a quick example to illustrate convergence. Imagine flipping a coin and trying to estimate the probability of heads. After 10 flips, we might get 7 heads (70%), an estimate far from the true 50%. After 100 flips, we’ll be closer, perhaps 53%. After 10,000 flips, our estimate will be very close to 50%. This is Monte Carlo in action: replacing a theoretical calculation with an experiment repeated thousands of times.
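That thought experiment translates directly into code (a quick sketch):

```python
import random

random.seed(7)

# Estimate P(heads) with increasingly many simulated flips
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: estimate = {heads / n:.4f}")
```

The small samples wander, while the large ones settle near 0.5 — the law of large numbers at work.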
The power of the method lies in its flexibility. While analytical methods require closed-form solutions (which often don’t exist for complex problems), Monte Carlo only requires the ability to simulate the process. If we can write a program that generates one scenario, Monte Carlo gives us the distribution of outcomes.
The most classic and pedagogically effective example of the Monte Carlo method is estimating the number π. The idea is elegant: consider a square of side 2 with a circle of radius 1 inscribed inside it. The area of the square is 4, the area of the circle is π. If we generate random points inside the square, the proportion falling inside the circle will be approximately π/4.
We compute this in R with 100,000 points:
```r
set.seed(123)
n <- 100000
x <- runif(n, -1, 1)
y <- runif(n, -1, 1)
inside <- (x^2 + y^2) <= 1
pi_estimate <- 4 * sum(inside) / n
pi_estimate
# [1] 3.13956
```

The same in Python:
```python
import random

random.seed(123)
n = 100000
inside = sum(1 for _ in range(n)
             if random.uniform(-1, 1)**2 + random.uniform(-1, 1)**2 <= 1)
pi_estimate = 4 * inside / n
print(pi_estimate)
# 3.14268
```

With 100,000 points we already get a reasonable estimate, though not extremely precise: we’re accurate to about two decimal places. As we mentioned, gaining another digit of precision would require roughly 100 times more points. The computer does all the heavy lifting.
Let’s move to an example closer to real-world applications. Suppose we have a portfolio of three stocks with the following characteristics:
| Stock | Expected Return | Standard Deviation | Portfolio Weight |
|---|---|---|---|
| A | 8% | 12% | 40% |
| B | 10% | 15% | 30% |
| C | 12% | 18% | 30% |
We want to estimate the probability that the portfolio return exceeds 10%. We simulate in R with 10,000 scenarios:
```r
set.seed(42)
sim_A <- rnorm(10000, mean = 0.08, sd = 0.12)
sim_B <- rnorm(10000, mean = 0.10, sd = 0.15)
sim_C <- rnorm(10000, mean = 0.12, sd = 0.18)
sim_portfolio <- 0.4 * sim_A + 0.3 * sim_B + 0.3 * sim_C
prob_result <- mean(sim_portfolio >= 0.10)
prob_result
# [1] 0.4504
```

The same in Python:
```python
import random

random.seed(42)
n = 10000
count = 0
for _ in range(n):
    a = random.gauss(0.08, 0.12)
    b = random.gauss(0.10, 0.15)
    c = random.gauss(0.12, 0.18)
    ptf = 0.4 * a + 0.3 * b + 0.3 * c
    if ptf >= 0.10:
        count += 1
print(count / n)
# 0.4479
```

The result tells us there’s roughly a 45% chance of exceeding 10% return. Notice how Monte Carlo gives us not a single number, but an entire distribution: we could easily compute the median return, the worst-case 5th percentile, the probability of loss, and so on.
To make the concept even more tangible, we’ve built an interactive simulator that applies the Monte Carlo method to predict the future value of an investment. The underlying model is the Geometric Brownian Motion (GBM), the same model used in the famous Black-Scholes framework for options pricing.
Intuitively, an asset’s future price is computed as the current price multiplied by a random growth factor. The formula is:
S(t+1) = S(t) × exp((μ − σ²/2) + σ × Z)
where μ is the expected annual return (the “average growth”), σ is the volatility (how much the price fluctuates — our measure of uncertainty), and Z is a random number drawn from a normal distribution. Each simulation generates a different path: some scenarios see the portfolio grow substantially, others see it decline. The histogram shows the distribution of all possible outcomes.
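A minimal sketch of that update rule in Python, with annual steps; the 7% drift and 20% volatility used here are illustrative assumptions:

```python
import math
import random

def gbm_path(s0, mu, sigma, years, rng):
    """Simulate one GBM price path with annual steps:
    S(t+1) = S(t) * exp((mu - sigma**2 / 2) + sigma * Z)."""
    path = [s0]
    for _ in range(years):
        z = rng.gauss(0, 1)
        path.append(path[-1] * math.exp((mu - sigma**2 / 2) + sigma * z))
    return path

rng = random.Random(0)
mu, sigma = 0.07, 0.20  # assumed drift and volatility

# Simulate 10,000 ten-year paths and look at the final values
finals = [gbm_path(100, mu, sigma, 10, rng)[-1] for _ in range(10_000)]

# Under GBM the average final value should be near s0 * exp(mu * t)
print(round(sum(finals) / len(finals), 1))
```

Collecting `finals` into a histogram reproduces exactly the kind of outcome distribution the simulator displays.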
From the nuclear physics of the 1940s, the Monte Carlo method has spread to domains that Ulam and von Neumann could never have imagined. Let’s look at some of the most fascinating applications.
3D rendering and cinema. Every time we watch a Pixar film or a blockbuster with visual effects, we’re admiring Monte Carlo at work. The technique is called path tracing: to compute the color of each pixel, the software simulates millions of light rays bouncing between surfaces in the scene. Each ray follows a random path, and the average of thousands of paths produces the photorealistic image we see on screen.
Finance and risk management. In the financial world, Monte Carlo is ubiquitous. Banks use it to calculate Value at Risk (VaR) — the maximum probable loss of a portfolio over a given time horizon. It’s the same principle as our simulator, applied to portfolios with hundreds of assets and complex correlations. Pricing exotic options that lack closed-form solutions also relies on Monte Carlo simulations.
Drug discovery. In pharmaceutical research, Monte Carlo is used to simulate molecular docking: how a candidate molecule binds to a target protein. By simulating millions of possible spatial configurations, researchers identify the most promising compounds before synthesizing them in the lab, saving years of experimentation.
Climate models. Models predicting climate change are inherently uncertain: they depend on emission scenarios, atmospheric feedback, ocean dynamics. Monte Carlo allows exploration of thousands of parameter combinations and generates the uncertainty bands we see in IPCC reports. Not a single prediction, but a distribution of possible futures.
Artificial intelligence. In machine learning, a technique called Monte Carlo dropout uses simulation to estimate the uncertainty of a neural network’s predictions. And the famous AlphaGo by DeepMind, which in 2016 defeated the world Go champion, used Monte Carlo Tree Search (MCTS) to explore possible moves in a game with more configurations than atoms in the universe.
| Field | Example | What is simulated |
|---|---|---|
| Cinema/3D | Path tracing (Pixar) | Light ray paths |
| Finance | Value at Risk | Market scenarios |
| Pharmaceuticals | Molecular docking | Spatial configurations |
| Climate | IPCC models | Parameter combinations |
| AI | AlphaGo (MCTS) | Possible moves |
Like any statistical tool, the Monte Carlo method has its strengths and limitations. Let’s examine them honestly.
Flexibility. The greatest advantage is versatility: Monte Carlo applies to complex problems of any size and in any field, from finance to engineering, physics to biology. It doesn’t require closed-form solutions, only the ability to simulate the process.
Accuracy. With a sufficient number of simulations, the estimate can be made arbitrarily precise. The more we run the method, the closer the result converges to the true value.
Scalability. Unlike grid-based methods, which suffer from the “curse of dimensionality” (cost explodes with the number of variables), Monte Carlo maintains the same convergence rate regardless of the number of dimensions. This makes it the only practical tool for high-dimensional problems.
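A quick sketch of that dimension independence: integrating f(x) = x₁² + … + x₁₀² over the unit hypercube (true value 10/3). A regular grid with just 10 points per axis would already need 10¹⁰ evaluations, while Monte Carlo gets close with 100,000 random points:

```python
import random

random.seed(0)
dim, n = 10, 100_000

# Monte Carlo integration in 10 dimensions:
# average f over uniformly random points in [0, 1]^10
total = 0.0
for _ in range(n):
    point = [random.random() for _ in range(dim)]
    total += sum(x * x for x in point)

estimate = total / n
print(round(estimate, 3))  # true value is 10/3 ≈ 3.333
```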
However, the method also presents significant limitations:
Slow convergence. The 1/√n rate means that gaining one digit of precision requires 100 times more simulations. For problems demanding very high precision, this can be prohibitive.
Computational cost. For complex problems (many variables, heavy models), each individual simulation may require significant time. Multiplied by thousands or millions of iterations, the cost becomes considerable.
To mitigate these limitations, variance reduction techniques have been developed over the years: antithetic variates, control variates, importance sampling, and stratified sampling all aim to deliver more precise results with fewer simulations.
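As a taste of how variance reduction works, antithetic variates pair each uniform draw U with its mirror 1 − U; the negative correlation within each pair cancels part of the noise. A sketch estimating E[e^U] = e − 1:

```python
import math
import random
import statistics

random.seed(0)
n = 10_000

# Plain Monte Carlo: n independent draws
plain = [math.exp(random.random()) for _ in range(n)]

# Antithetic variates: n/2 draws, each averaged with its mirror 1 - u
anti = []
for _ in range(n // 2):
    u = random.random()
    anti.append((math.exp(u) + math.exp(1 - u)) / 2)

# Both estimators are unbiased, but the antithetic one is far less noisy
print(round(statistics.mean(plain), 4), round(statistics.variance(plain), 4))
print(round(statistics.mean(anti), 4), round(statistics.variance(anti), 4))
```

With half as many function draws per estimate, the antithetic version still achieves a much smaller variance, because e^u and e^(1−u) move in opposite directions.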
The Monte Carlo method represents one of the most powerful tools in computational statistics. In future articles, we’ll explore how some of these techniques — particularly the bootstrap, a close relative of Monte Carlo — apply to concrete problems in statistical inference.
For a deeper dive into the Monte Carlo method and its applications in finance, Monte Carlo Methods in Financial Engineering by Paul Glasserman is the most comprehensive reference: it covers theory and practice with detailed examples in derivative pricing and risk management.