In previous articles, we’ve examined statistical inference from a precise and coherent perspective: formulate a hypothesis, collect data, calculate a p-value, construct a confidence interval. We’ve conducted hypothesis tests, compared variants with A/B testing, and seen, via the Central Limit Theorem, why all of this works even when the data isn’t normal.
This approach—called frequentist—has a clear logic: the parameter we want to estimate is a fixed value (even if unknown), and we “chase” it with data. But there’s another way to think about uncertainty, one that allows us to update our beliefs as new data arrives. It’s called the Bayesian approach, and in this article we’ll build its foundations.
Let’s start with a concrete example. Imagine we’ve just launched an advertising campaign and we don’t know the true click rate. We have an initial opinion based on experience (“click rates usually fall between 0% and 20%”), and then data starts coming in. The Bayesian approach lets us combine our initial opinion with the observed data to get an updated estimate—and repeat this process every time new information arrives.
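The updating loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the article’s method: it assumes we encode the initial opinion as a Beta prior (here, the illustrative choice Beta(2, 18), whose mean is 10%, roughly matching “click rates usually fall between 0% and 20%”) and that clicks follow a binomial model, so the update is just adding counts (Beta-Binomial conjugacy). The batch sizes and click counts are made up for the example.

```python
# A minimal sketch of Bayesian updating for a click rate, using a Beta
# prior and binomial data (Beta-Binomial conjugacy). The prior parameters
# and observed counts below are illustrative assumptions.

def update_beta(alpha, beta, clicks, impressions):
    """Return the posterior Beta parameters after observing new data."""
    return alpha + clicks, beta + (impressions - clicks)

# Initial opinion encoded (as an assumption) as Beta(2, 18): mean 2/20 = 10%.
alpha, beta = 2.0, 18.0

# First batch of data arrives: 30 clicks out of 200 impressions.
alpha, beta = update_beta(alpha, beta, clicks=30, impressions=200)
print(f"posterior mean after batch 1: {alpha / (alpha + beta):.3f}")  # 0.145

# A second batch arrives: we repeat the same update from the new posterior.
alpha, beta = update_beta(alpha, beta, clicks=12, impressions=100)
print(f"posterior mean after batch 2: {alpha / (alpha + beta):.3f}")  # 0.138
```

Notice the key point of the example: yesterday’s posterior becomes today’s prior, so each new batch of data refines the estimate without reprocessing everything from scratch.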
