The Chi-Square Test: Goodness of Fit and Test of Independence

In previous posts, we have seen different types of tests that we can use to analyze our data and test hypotheses.

The chi-square test was proposed by Karl Pearson in 1900. It is widely used to assess how well the observed distribution of a categorical variable matches an expected distribution (in this case, we talk about the “Goodness of Fit Test”) or to determine whether two categorical variables are independent of each other (and then we talk about the “Test of Independence”).
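
As a small illustration of the goodness-of-fit case, here is a pure-Python sketch. The die-roll counts are invented for the example, and the significance threshold is the standard chi-square critical value for 5 degrees of freedom at α = 0.05; neither comes from the post itself.

```python
# Hypothetical example: testing whether a die is fair.
# Observed counts are made up for illustration.

def chi_square_statistic(observed, expected):
    """Pearson's statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [18, 22, 16, 25, 24, 15]       # 120 rolls, one count per face
expected = [sum(observed) / 6] * 6        # fair die: 20 rolls per face

chi2 = chi_square_statistic(observed, expected)

# Critical value of the chi-square distribution with df = 6 - 1 = 5
# at the 0.05 significance level (from a standard table).
CRITICAL_5_DF = 11.070

print(f"chi2 = {chi2:.2f}")               # chi2 = 4.50
print("reject H0" if chi2 > CRITICAL_5_DF else "fail to reject H0")
```

Here the statistic (4.50) stays below the critical value, so the observed counts are compatible with a fair die. In practice one would use a library routine (for example, SciPy's chi-square functions) to get an exact p-value rather than comparing against a table.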

Such is its importance and widespread use that Scientific American listed it among the 20 most important scientific discoveries of the 20th century.

Continue reading “The Chi-Square Test: Goodness of Fit and Test of Independence”

The Normal Distribution

The concept of the normal distribution is one of the key elements in statistical research. Very often, the data we collect shows typical characteristics, so typical that the resulting distribution is simply called… “normal”. In this post, we will look at the characteristics of this distribution and touch on some other concepts of notable importance.

Continue reading “The Normal Distribution”

Descriptive Statistics: Measures of Position and Central Tendency

Measures of position, also known as position indices or measures of central tendency, are values that summarize the position of a statistical distribution, providing a single figure that captures its most important aspects. In this brief discussion, we will explore some of the most common and practical indices, such as the various types of means, the median, quartiles, and percentiles.
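
A minimal sketch of these measures using Python's standard-library `statistics` module; the data set below is invented purely for illustration.

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]           # made-up sample

mean = statistics.mean(data)              # arithmetic mean -> 5.0
median = statistics.median(data)          # middle value    -> 4.5
harmonic = statistics.harmonic_mean(data) # one of the "other types" of mean

# Quartiles: the three cut points that split the sorted data into four
# equal parts (method="inclusive" interpolates between observed values).
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")

# Percentiles work the same way with n=100 (99 cut points).
percentiles = statistics.quantiles(data, n=100, method="inclusive")

print(mean, median, harmonic)
print(q1, q2, q3)
```

Note that the second quartile coincides with the median by construction, a useful sanity check when computing these indices by hand.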

Continue reading “Descriptive Statistics: Measures of Position and Central Tendency”