{"id":3512,"date":"2026-03-11T15:49:05","date_gmt":"2026-03-11T14:49:05","guid":{"rendered":"https:\/\/www.gironi.it\/blog\/?p=3512"},"modified":"2026-03-13T09:07:30","modified_gmt":"2026-03-13T08:07:30","slug":"the-monte-carlo-method-explained-simply-with-real-world-applications","status":"publish","type":"post","link":"https:\/\/www.gironi.it\/blog\/en\/the-monte-carlo-method-explained-simply-with-real-world-applications\/","title":{"rendered":"The Monte Carlo Method Explained Simply with Real-World Applications"},"content":{"rendered":"\n<p><!-- ============================================================ --><br><!-- SECTION 1: WHAT IS THE MONTE CARLO METHOD (~500 words) --><br><!-- ============================================================ --><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What is the Monte Carlo method<\/h2>\n\n\n\n<p>The story of the Monte Carlo method begins in the most unlikely way: with a mathematician in bed playing cards. In 1946, <strong>Stanis\u0142aw Ulam<\/strong>, a Polish mathematician recovering from surgery, found himself playing solitaire to pass the time. Being a mathematician, he wondered: what are the chances of winning a game?<\/p>\n\n\n\n<p>The problem was theoretically solvable: just enumerate every possible combination of cards and count the favorable ones. In practice, however, the number of combinations was so enormous that an exact calculation was completely impractical. 
Ulam then had an insight as simple as it was powerful: <strong>instead of computing the exact probability, why not simulate hundreds of games and count how many times you win?<\/strong><\/p>\n\n\n\n<!--more-->\n\n\n\n<p>The idea is disarmingly simple. If we play 1,000 games and win 230 of them, we can estimate the probability of winning at about 23%. The more games we simulate, the closer our estimate gets to the true value. This is, in essence, the <strong>Monte Carlo method<\/strong>: using random simulation to solve problems that would be too complex to tackle analytically.<\/p>\n\n\n\n<p>Ulam shared the idea with his colleague <strong>John von Neumann<\/strong>, arguably the most brilliant mathematician of the 20th century, who immediately saw its potential. Von Neumann realized that <strong>ENIAC<\/strong> \u2014 one of the very first electronic computers, which filled an entire room \u2014 could run thousands of simulations in reasonable time. Together, they developed the method for a problem far more serious than solitaire: the <strong>diffusion of neutrons<\/strong> in atomic weapons, as part of the Manhattan Project at Los Alamos.<\/p>\n\n\n\n<p>The name \u201cMonte Carlo\u201d was chosen as a code name, a reference to the famous <strong>Monte Carlo Casino<\/strong> in Monaco. Legend has it that the inspiration came from Ulam\u2019s uncle, a notorious gambler. After all, the heart of the method is chance itself: generating random numbers to explore spaces of possibility too vast to traverse systematically.<\/p>\n\n\n\n<p>From those early nuclear experiments of the 1940s, the Monte Carlo method has spread to every field of science and engineering. Today it is one of the most widely used computational tools in the world, from particle physics to finance, from cinematic rendering to drug discovery. 
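Ulam's insight already fits in a few lines of code. Real solitaire rules are too long to reproduce here, so this sketch uses a hypothetical toy game of our own invention (draw 5 cards from a shuffled deck, "win" if at least one ace appears) and estimates the win probability exactly as Ulam proposed: simulate many games and count the wins.

```python
import random

def play_round(rng):
    # Hypothetical toy game standing in for solitaire: draw 5 cards
    # from a shuffled 52-card deck and "win" if at least one ace appears.
    deck = list(range(52))          # cards 0..51; let 0-3 be the aces
    rng.shuffle(deck)
    return any(card < 4 for card in deck[:5])

def estimate_win_probability(n_games, seed=1946):
    # Ulam's idea: play many simulated games and count how often we win.
    rng = random.Random(seed)
    wins = sum(play_round(rng) for _ in range(n_games))
    return wins / n_games

print(estimate_win_probability(100_000))
```

For this toy game the exact answer happens to be computable (1 − C(48,5)/C(52,5) ≈ 0.341), so we can check that the simulation lands close to it; for real solitaire, where enumeration is hopeless, the simulation is all we have.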
Let\u2019s see how it works.<\/p>\n\n\n\n<p><!-- ============================================================ --><br><!-- SECTION 2: FUNDAMENTAL CONCEPTS (~300 words) --><br><!-- ============================================================ --><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Fundamental concepts<\/h2>\n\n\n\n<p>The Monte Carlo method rests on a statistical principle we\u2019ve encountered before: the <strong>law of large numbers<\/strong>. In simple terms, this law tells us that the average of a random sample approaches the population average as the sample grows. Translated into Monte Carlo language: <strong>the more simulations we run, the more accurate our result will be<\/strong>.<\/p>\n\n\n\n<p>To run a Monte Carlo simulation, we need <strong>random numbers<\/strong>. In practice, computers don\u2019t generate truly random numbers: they use deterministic algorithms that produce sequences of <strong>pseudo-random numbers<\/strong> with statistical properties indistinguishable from real randomness. In R, for example, the <code>runif()<\/code> function generates uniformly distributed numbers between 0 and 1.<\/p>\n\n\n\n<p>A crucial aspect is the <strong>rate of convergence<\/strong>. The Monte Carlo estimation error decreases as <strong>1\/\u221an<\/strong>, where n is the number of simulations. This means that to halve the error, we need to quadruple our simulations; to gain one more decimal digit of precision, we need 100 times more iterations. It\u2019s not particularly efficient, but the beauty of the method lies in the fact that <strong>it works regardless of the problem\u2019s complexity<\/strong>: whether the problem has 2 or 2,000 variables, the convergence rate remains the same.<\/p>\n\n\n\n<p>In practice, we must always balance <strong>desired precision<\/strong> with <strong>available computational resources<\/strong>. Increasing the number of simulations comes at a cost in computation time. 
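The 1\/\u221an rate is easy to verify empirically. This minimal sketch, assuming only Python's standard library, estimates the mean of Uniform(0, 1) draws (the counterpart of R's runif()) and measures the root-mean-square error of the estimate at increasing sample sizes:

```python
import math
import random

def rms_error(n, trials=200):
    # Root-mean-square error of the Monte Carlo estimate of
    # E[Uniform(0, 1)] = 0.5, averaged over independent runs.
    total = 0.0
    for t in range(trials):
        rng = random.Random(n * 1000 + t)   # a fresh seed per run
        estimate = sum(rng.random() for _ in range(n)) / n
        total += (estimate - 0.5) ** 2
    return math.sqrt(total / trials)

for n in (100, 400, 1600):
    print(f"n = {n:5d}   RMS error = {rms_error(n):.4f}")
```

Each quadrupling of n roughly halves the error: the printed values track the theoretical \u03c3\/\u221an \u2248 0.289\/\u221an curve.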
Fortunately, modern computers make this trade-off much more favorable than in the days of ENIAC.<\/p>\n\n\n\n<p><!-- ============================================================ --><br><!-- SECTION 3: THE METHOD IN ACTION (~400 words) --><br><!-- ============================================================ --><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Monte Carlo method in action<\/h2>\n\n\n\n<p>Let\u2019s see concretely how the Monte Carlo method is applied. The procedure follows four fundamental steps:<\/p>\n\n\n\n<p><strong>1. Define the model.<\/strong> First, we identify the problem\u2019s variables and the probability distributions that govern them. For instance, if we want to simulate an investment\u2019s return, the model will include the expected return (mean) and volatility (standard deviation), typically assuming normally distributed returns.<\/p>\n\n\n\n<p><strong>2. Generate random scenarios.<\/strong> Using a pseudo-random number generator, we produce thousands of possible scenarios. Each scenario represents an \u201calternative history\u201d: one way things could play out.<\/p>\n\n\n\n<p><strong>3. Compute the result for each scenario.<\/strong> For each scenario, we apply the model and obtain a result. If we\u2019re simulating an investment, the result is the final portfolio value.<\/p>\n\n\n\n<p><strong>4. Aggregate the results.<\/strong> Finally, we analyze the set of results: we compute the mean, the median, the percentiles. This gives us not just an estimate of the expected outcome, but an entire <strong>distribution of possibilities<\/strong>. And this is where Monte Carlo truly shines: it tells us not only \u201chow much we\u2019re likely to earn\u201d but also \u201chow much we could lose in the worst case.\u201d<\/p>\n\n\n\n<p>Let\u2019s use a quick example to illustrate convergence. Imagine flipping a coin and trying to estimate the probability of heads. After 10 flips, we might get 7 heads (70%), an estimate far from the true 50%. 
After 100 flips, we\u2019ll be closer, perhaps 53%. After 10,000 flips, our estimate will be very close to 50%. This is Monte Carlo in action: replacing a theoretical calculation with an experiment repeated thousands of times.<\/p>\n\n\n\n<p>The power of the method lies in its <strong>flexibility<\/strong>. While analytical methods require closed-form solutions (which often don\u2019t exist for complex problems), Monte Carlo only requires the ability to simulate the process. If we can write a program that generates one scenario, Monte Carlo gives us the distribution of outcomes.<\/p>\n\n\n\n<p><!-- ============================================================ --><br><!-- SECTION 4: PRACTICAL EXAMPLES (~600 words) --><br><!-- ============================================================ --><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Practical examples: estimating \u03c0 and portfolio returns<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Example 1: estimating the value of \u03c0<\/h3>\n\n\n\n<p>The most classic and pedagogically effective example of the Monte Carlo method is <strong>estimating the number \u03c0<\/strong>. The idea is elegant: consider a square of side 2 with a circle of radius 1 inscribed inside it. The area of the square is 4, the area of the circle is \u03c0. 
If we generate random points inside the square, the proportion falling inside the circle will be approximately \u03c0\/4.<\/p>\n\n\n\n<p>We compute this in R with 100,000 points:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>set.seed(123)\nn &lt;- 100000\nx &lt;- runif(n, -1, 1)\ny &lt;- runif(n, -1, 1)\ninside &lt;- (x^2 + y^2) &lt;= 1\npi_estimate &lt;- 4 * sum(inside) \/ n\npi_estimate\n# &#91;1] 3.13956<\/code><\/pre>\n\n\n\n<p>The same in Python:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import random\nrandom.seed(123)\nn = 100000\ninside = sum(1 for _ in range(n)\n             if random.uniform(-1, 1)**2 + random.uniform(-1, 1)**2 &lt;= 1)\npi_estimate = 4 * inside \/ n\nprint(pi_estimate)\n# 3.14268<\/code><\/pre>\n\n\n\n<p>With 100,000 points we already get a reasonable estimate, though not extremely precise: we\u2019re accurate to about two decimal places. As we mentioned, gaining another digit of precision would require roughly 100 times more points. The computer does all the heavy lifting.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example 2: estimating portfolio returns<\/h3>\n\n\n\n<p>Let\u2019s move to an example closer to real-world applications. Suppose we have a portfolio of three stocks with the following characteristics:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Stock<\/th><th>Expected Return<\/th><th>Standard Deviation<\/th><th>Portfolio Weight<\/th><\/tr><\/thead><tbody><tr><td>A<\/td><td>8%<\/td><td>12%<\/td><td>40%<\/td><\/tr><tr><td>B<\/td><td>10%<\/td><td>15%<\/td><td>30%<\/td><\/tr><tr><td>C<\/td><td>12%<\/td><td>18%<\/td><td>30%<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>We want to estimate the probability that the portfolio return exceeds 10%. 
We simulate in R with 10,000 scenarios:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>set.seed(42)\nsim_A &lt;- rnorm(10000, mean = 0.08, sd = 0.12)\nsim_B &lt;- rnorm(10000, mean = 0.10, sd = 0.15)\nsim_C &lt;- rnorm(10000, mean = 0.12, sd = 0.18)\nsim_portfolio &lt;- 0.4 * sim_A + 0.3 * sim_B + 0.3 * sim_C\nprob_result &lt;- mean(sim_portfolio &gt;= 0.10)\nprob_result\n# &#91;1] 0.4504<\/code><\/pre>\n\n\n\n<p>The same in Python:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import random\nrandom.seed(42)\nn = 10000\ncount = 0\nfor _ in range(n):\n    a = random.gauss(0.08, 0.12)\n    b = random.gauss(0.10, 0.15)\n    c = random.gauss(0.12, 0.18)\n    ptf = 0.4 * a + 0.3 * b + 0.3 * c\n    if ptf &gt;= 0.10:\n        count += 1\nprint(count \/ n)\n# 0.4479<\/code><\/pre>\n\n\n\n<p>The result tells us there\u2019s roughly a 45% chance of exceeding 10% return. Notice how Monte Carlo gives us not a single number, but an entire distribution: we could easily compute the median return, the worst-case 5th percentile, the probability of loss, and so on.<\/p>\n\n\n\n<p><!-- ============================================================ --><br><!-- SECTION 5: INTERACTIVE SIMULATOR (~200 words) --><br><!-- ============================================================ --><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Monte Carlo Simulator<\/h2>\n\n\n\n<p>To make the concept even more tangible, we\u2019ve built an <strong>interactive simulator<\/strong> that applies the Monte Carlo method to predict the future value of an investment. The underlying model is the <strong>Geometric Brownian Motion<\/strong> (GBM), the same model used in the famous Black-Scholes framework for options pricing.<\/p>\n\n\n\n<p>Intuitively, an asset\u2019s future price is computed as the current price multiplied by a random growth factor. 
The formula is:<\/p>\n\n\n\n<p class=\"has-text-align-center\"><strong>S(t+1) = S(t) \u00d7 exp((\u03bc \u2212 \u03c3\u00b2\/2) + \u03c3 \u00d7 Z)<\/strong><\/p>\n\n\n\n<p>where <strong>\u03bc<\/strong> is the expected annual return (the \u201caverage growth\u201d), <strong>\u03c3<\/strong> is the volatility (how much the price fluctuates \u2014 our measure of uncertainty), and <strong>Z<\/strong> is a random number drawn from a normal distribution. Each simulation generates a different path: some scenarios see the portfolio grow substantially, others see it decline. The histogram shows the distribution of all possible outcomes.<\/p>\n\n\n\n<iframe src=\"https:\/\/www.gironi.it\/utility\/montecarlo-simulator-en\/\" width=\"100%\" height=\"600\" style=\"border:none;border-radius:12px;\" loading=\"lazy\" title=\"Monte Carlo Simulator\"><\/iframe>\n\n\n\n<p><!-- ============================================================ --><br><!-- SECTION 6: MODERN APPLICATIONS (~400 words) --><br><!-- ============================================================ --><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Modern applications of the Monte Carlo method<\/h2>\n\n\n\n<p>From the nuclear physics of the 1940s, the Monte Carlo method has spread to domains that Ulam and von Neumann could never have imagined. Let\u2019s look at some of the most fascinating applications.<\/p>\n\n\n\n<p><strong>3D rendering and cinema.<\/strong> Every time we watch a Pixar film or a blockbuster with visual effects, we\u2019re admiring Monte Carlo at work. The technique is called <strong>path tracing<\/strong>: to compute the color of each pixel, the software simulates millions of light rays bouncing between surfaces in the scene. Each ray follows a random path, and the average of thousands of paths produces the photorealistic image we see on screen.<\/p>\n\n\n\n<p><strong>Finance and risk management.<\/strong> In the financial world, Monte Carlo is ubiquitous. 
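Before looking at specific use cases, note that the GBM update from the simulator section translates directly into a miniature risk simulation. This is a sketch with purely illustrative parameters (the initial value, return, and volatility are invented); the only change from the formula above is the dt and \u221adt factors that split the year into monthly sub-steps:

```python
import math
import random

def simulate_terminal_values(s0, mu, sigma, n_steps=12, n_paths=20_000, seed=7):
    # Repeatedly apply the GBM update S(t+1) = S(t) * exp((mu - sigma^2/2)*dt
    # + sigma*sqrt(dt)*Z), here with monthly sub-steps (dt = 1/12) over one year.
    dt = 1.0 / n_steps
    drift = (mu - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    rng = random.Random(seed)
    values = []
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
        values.append(s)
    return sorted(values)

# Purely illustrative parameters: 100 invested, 7% expected return, 20% volatility.
values = simulate_terminal_values(100.0, 0.07, 0.20)
print("median outcome:", round(values[len(values) // 2], 1))
print("5th percentile:", round(values[len(values) // 20], 1))
```

Sorting the simulated outcomes makes any percentile available for free; that 5th percentile is already a crude cousin of the risk measures computed on far richer portfolio models.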
Banks use it to calculate <strong>Value at Risk<\/strong> (VaR) \u2014 the loss that, at a given confidence level, a portfolio is not expected to exceed over a given time horizon. It\u2019s the same principle as our simulator, applied to portfolios with hundreds of assets and complex correlations. Pricing exotic options that lack closed-form solutions also relies on Monte Carlo simulations.<\/p>\n\n\n\n<p><strong>Drug discovery.<\/strong> In pharmaceutical research, Monte Carlo is used to simulate <strong>molecular docking<\/strong>: how a candidate molecule binds to a target protein. By simulating millions of possible spatial configurations, researchers identify the most promising compounds before synthesizing them in the lab, saving years of experimentation.<\/p>\n\n\n\n<p><strong>Climate models.<\/strong> Models predicting climate change are inherently uncertain: they depend on emission scenarios, atmospheric feedback, ocean dynamics. Monte Carlo allows exploration of thousands of parameter combinations and generates the <strong>uncertainty bands<\/strong> we see in IPCC reports. Not a single prediction, but a distribution of possible futures.<\/p>\n\n\n\n<p><strong>Artificial intelligence.<\/strong> In machine learning, a technique called <strong>Monte Carlo dropout<\/strong> uses simulation to estimate the uncertainty of a neural network\u2019s predictions. 
And the famous <strong>AlphaGo<\/strong> by DeepMind, which in 2016 defeated the world Go champion, used <strong>Monte Carlo Tree Search<\/strong> (MCTS) to explore possible moves in a game with more configurations than atoms in the universe.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Field<\/th><th>Example<\/th><th>What is simulated<\/th><\/tr><\/thead><tbody><tr><td>Cinema\/3D<\/td><td>Path tracing (Pixar)<\/td><td>Light ray paths<\/td><\/tr><tr><td>Finance<\/td><td>Value at Risk<\/td><td>Market scenarios<\/td><\/tr><tr><td>Pharmaceuticals<\/td><td>Molecular docking<\/td><td>Spatial configurations<\/td><\/tr><tr><td>Climate<\/td><td>IPCC models<\/td><td>Parameter combinations<\/td><\/tr><tr><td>AI<\/td><td>AlphaGo (MCTS)<\/td><td>Possible moves<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><!-- ============================================================ --><br><!-- SECTION 7: ADVANTAGES AND LIMITATIONS (~300 words) --><br><!-- ============================================================ --><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Advantages and limitations of the Monte Carlo method<\/h2>\n\n\n\n<p>Like any statistical tool, the Monte Carlo method has its strengths and limitations. Let\u2019s examine them honestly.<\/p>\n\n\n\n<p><strong>Flexibility.<\/strong> The greatest advantage is versatility: Monte Carlo applies to complex problems of any size and in any field, from finance to engineering, physics to biology. It doesn\u2019t require closed-form solutions, only the ability to simulate the process.<\/p>\n\n\n\n<p><strong>Accuracy.<\/strong> With a sufficient number of simulations, the estimate can be made arbitrarily precise. 
The more simulations we run, the closer the result gets to the true value.<\/p>\n\n\n\n<p><strong>Scalability.<\/strong> Unlike grid-based methods, which suffer from the \u201ccurse of dimensionality\u201d (cost explodes with the number of variables), Monte Carlo maintains the same convergence rate regardless of the number of dimensions. This often makes it the only practical tool for high-dimensional problems.<\/p>\n\n\n\n<p>However, the method also presents <strong>significant limitations<\/strong>:<\/p>\n\n\n\n<p><strong>Slow convergence.<\/strong> The 1\/\u221an rate means that gaining one digit of precision requires 100 times more simulations. For problems demanding very high precision, this can be prohibitive.<\/p>\n\n\n\n<p><strong>Computational cost.<\/strong> For complex problems (many variables, heavy models), each individual simulation may require significant time. Multiplied by thousands or millions of iterations, the cost becomes considerable.<\/p>\n\n\n\n<p>To mitigate these limitations, <strong>variance reduction techniques<\/strong> have been developed over the years, enabling more precise results with fewer simulations:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Importance sampling<\/strong>: sampling from an alternative distribution that \u201cconcentrates\u201d simulations in the most informative regions.<\/li>\n\n\n\n<li><strong>Control variates<\/strong>: using a correlated variable with known expected value to reduce the estimate\u2019s variance.<\/li>\n\n\n\n<li><strong>Stratified sampling<\/strong>: dividing the space into homogeneous subgroups and sampling from each.<\/li>\n\n\n\n<li><strong>Antithetic variates<\/strong>: exploiting pairs of negatively correlated random numbers to reduce variance.<\/li>\n<\/ul>\n\n\n\n<p><!-- ============================================================ --><br><!-- CLOSING --><br><!-- ============================================================ --><\/p>\n\n\n\n<p>The Monte Carlo method represents 
one of the most powerful tools in computational statistics. In future articles, we\u2019ll explore how some of these techniques \u2014 particularly the <strong>bootstrap<\/strong>, a close relative of Monte Carlo \u2014 apply to concrete problems in statistical inference.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><!-- ============================================================ --><br><!-- FURTHER READING --><br><!-- ============================================================ --><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Further reading<\/h3>\n\n\n\n<p>For a deeper dive into the Monte Carlo method and its applications in finance, <a href=\"https:\/\/www.amazon.com\/dp\/1441915753?tag=consulenzeinf-21\" target=\"_blank\" rel=\"nofollow noopener sponsored\"><em>Monte Carlo Methods in Financial Engineering<\/em><\/a> by Paul Glasserman is the most comprehensive reference: it covers theory and practice with detailed examples in derivative pricing and risk management.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What is the Monte Carlo method The story of the Monte Carlo method begins in the most unlikely way: with a mathematician in bed playing cards. In 1946, Stanis\u0142aw Ulam, a Polish mathematician recovering from surgery, found himself playing solitaire to pass the time. 
Being a mathematician, he wondered: what are the chances of winning &hellip; <a href=\"https:\/\/www.gironi.it\/blog\/en\/the-monte-carlo-method-explained-simply-with-real-world-applications\/\" class=\"more-link\">Read more<span class=\"screen-reader-text\"> &#8220;The Monte Carlo Method Explained Simply with Real-World Applications&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":"","footnotes":""},"categories":[161],"tags":[],"class_list":["post-3512","post","type-post","status-publish","format-standard","hentry","category-statistics"],"lang":"en","translations":{"en":3512,"it":509},"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false,"post-thumbnail":false},"uagb_author_info":{"display_name":"autore-articoli","author_link":"https:\/\/www.gironi.it\/blog\/author\/autore-articoli\/"},"uagb_comment_info":0,"uagb_excerpt":"What is the Monte Carlo method The story of the Monte Carlo method begins in the most unlikely way: with a mathematician in bed playing cards. In 1946, Stanis\u0142aw Ulam, a Polish mathematician recovering from surgery, found himself playing solitaire to pass the time. 
Being a mathematician, he wondered: what are the chances of winning&hellip;","_links":{"self":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts\/3512","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/comments?post=3512"}],"version-history":[{"count":5,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts\/3512\/revisions"}],"predecessor-version":[{"id":3526,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/posts\/3512\/revisions\/3526"}],"wp:attachment":[{"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/media?parent=3512"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/categories?post=3512"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.gironi.it\/blog\/wp-json\/wp\/v2\/tags?post=3512"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}