# An Introduction to Type I Errors

## What is a Type I Error?

A Type I error, also referred to as a false positive, occurs when a researcher rejects a null hypothesis that is actually true.

But before we go into more depth on Type I errors, let’s review null and alternative hypotheses.

### Null Hypotheses vs. Alternative Hypotheses

To perform hypothesis testing, researchers first formulate a **null hypothesis** and an **alternative hypothesis**.

A null hypothesis simply states what is already assumed about the population at hand, while an alternative hypothesis claims a deviation from the status quo or the norm.

Researchers then test the null hypothesis to determine whether they will reject it or fail to reject it. Don't let this language confuse you: statisticians say "fail to reject" rather than "accept" because a test can never prove the null hypothesis true; it can only find, or fail to find, evidence against it.

This testing of the null hypothesis consists of taking a sample from the population, and calculating a statistic that attempts to estimate the parameter in question.

Then, using this statistic, researchers find the probability of obtaining a result at least as extreme as the one observed, assuming that the null hypothesis is true.

This probability is referred to as a **p-value**.
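To make the p-value concrete, here is a minimal sketch of a two-sided z-test. All of the numbers (a hypothesized mean of 100, a known standard deviation of 15, a sample of 36 observations) are illustrative assumptions, not data from any real study:

```python
import math

# Hypothetical setup (all values assumed for illustration):
mu_0 = 100.0        # population mean claimed by the null hypothesis
sigma = 15.0        # known population standard deviation
n = 36              # sample size
sample_mean = 104.0 # observed sample mean

# Standardize the sample mean into a z statistic.
z = (sample_mean - mu_0) / (sigma / math.sqrt(n))

def normal_cdf(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

# Two-sided p-value: the probability, assuming the null is true,
# of a statistic at least as extreme as the one we observed.
p_value = 2.0 * (1.0 - normal_cdf(abs(z)))
print(z, p_value)  # z = 1.6; p-value is roughly 0.11
```

With a p-value near 0.11, this hypothetical sample would not be extreme enough to reject the null at the common 5 percent significance level.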

## When Does Type I Error Occur?

In hypothesis testing there are two possible outcomes — researchers either reject the null hypothesis, or fail to reject the null hypothesis.

If the null hypothesis is rejected, researchers are siding against it; if they fail to reject it, they are concluding that the evidence against it is not strong enough.

If the p-value is below a previously determined threshold, also known as the significance level, then researchers must reject the null hypothesis.

If the p-value is greater than or equal to the significance level, then researchers fail to reject the null hypothesis.
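The decision rule in the two paragraphs above can be sketched as a small function. The 0.05 significance level used as the default is an assumed convention for illustration, not a requirement:

```python
def decide(p_value, alpha=0.05):
    """Apply the hypothesis-testing decision rule.

    alpha is the significance level, chosen before the test is run.
    """
    if p_value < alpha:
        return "reject the null hypothesis"
    # p_value >= alpha: the evidence is not strong enough to reject.
    return "fail to reject the null hypothesis"

print(decide(0.03))  # below alpha: reject
print(decide(0.20))  # at or above alpha: fail to reject
```

Note that the threshold comparison is all the rule does; the hard work is choosing alpha before seeing the data and computing the p-value honestly.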

It’s important to note that when researchers reject a null hypothesis, it suggests that the alternative hypothesis could in fact be true. It does not, however, prove that the alternative hypothesis is true.

For the sake of example, let's say that we are working with a significance level of 5 percent. If the probability of drawing a sample at least as extreme as ours, assuming the null hypothesis is true, is less than 5 percent, then rejecting the null is a reasonable decision.

But we’re all human, and we all make mistakes. That’s where Type I errors can have a seriously negative impact on our research.

Because hypothesis testing is based on probability, it's possible to get results that are contrary to reality.

A Type I error is an example of this. Consider a situation where the null hypothesis is true in reality. Suppose we conduct hypothesis testing and the numbers indicate that we should reject the null hypothesis.

**If the null is true but we reject it, we have committed a Type I error, and our conclusion is a false positive even though we followed the procedure correctly.**
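You can watch Type I errors happen with a short simulation. This sketch (all population numbers are illustrative assumptions) repeatedly samples from a population where the null hypothesis really is true, runs the same z-test each time, and counts how often the test wrongly rejects. That false-positive rate should hover near the significance level:

```python
import math
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Hypothetical population where the null hypothesis is TRUE:
mu_0, sigma, n = 100.0, 15.0, 36  # the mean really is 100
alpha = 0.05                      # significance level
trials = 2000                     # number of simulated studies

def normal_cdf(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

false_positives = 0
for _ in range(trials):
    sample = [random.gauss(mu_0, sigma) for _ in range(n)]
    z = (sum(sample) / n - mu_0) / (sigma / math.sqrt(n))
    p_value = 2.0 * (1.0 - normal_cdf(abs(z)))
    if p_value < alpha:
        # The null is true, yet we rejected it: a Type I error.
        false_positives += 1

print(false_positives / trials)  # close to alpha = 0.05
```

This is the key takeaway about significance levels: setting alpha to 5 percent means accepting that, when the null is true, roughly 5 percent of your tests will reject it anyway.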

Now that you have some foundational knowledge of what Type I errors are, and when they can occur, you can do your best to avoid committing this research faux pas in the future.

Always be sure to check and double-check your calculations, and put thorough consideration into how you set your significance level: that level is precisely the Type I error rate you are willing to tolerate.

Do you have a Type I horror story? If so, we want to hear it! Drop us a line in the comments below.