Type I and Type II Errors

Type I Error (False Positive)

You reject a true null hypothesis — you conclude something is happening when it actually isn’t.

Example:
A medical test says a patient has a disease, but they actually don’t.

Type II Error (False Negative)

You fail to reject a false null hypothesis — you miss a real effect.

Example:
A medical test says a patient does not have a disease, but they actually do.

🧪 Medical Example (Classic)

Let the null hypothesis be:
H₀: The patient does NOT have the disease.

| Outcome | What it means | Error type |
| --- | --- | --- |
| Test says “disease present” but the patient is healthy | False alarm | Type I |
| Test says “no disease” but the patient is sick | Missed detection | Type II |
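The two error types can be counted directly in a small simulation. This is a sketch under made-up assumptions: healthy patients produce a biomarker score around 0, sick patients around 2 (both with unit spread), and the test flags "disease present" above a chosen threshold. The distributions and the threshold are illustrative, not from any real assay.

```python
import random

random.seed(0)

N = 10_000        # people in each group (hypothetical cohort)
THRESHOLD = 1.0   # test reports "disease present" above this score

# Assumed biomarker model: healthy ~ Normal(0, 1), sick ~ Normal(2, 1)
healthy_scores = [random.gauss(0, 1) for _ in range(N)]
sick_scores = [random.gauss(2, 1) for _ in range(N)]

# Type I error: a healthy person is flagged as sick (false alarm)
type_1_rate = sum(s > THRESHOLD for s in healthy_scores) / N
# Type II error: a sick person is flagged as healthy (missed detection)
type_2_rate = sum(s <= THRESHOLD for s in sick_scores) / N

print(f"Type I rate  ~ {type_1_rate:.3f}")
print(f"Type II rate ~ {type_2_rate:.3f}")
```

Moving `THRESHOLD` up or down shifts errors from one column of the table to the other, which previews the trade-off discussed below.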

This framing mirrors the classic example in the Wikipedia article on type I and type II errors.

🔢 Probabilities: α and β

α (alpha) = probability of a Type I error.
This is the significance level you choose (often 0.05).

β (beta) = probability of a Type II error.
Statistical power — the probability of detecting a real effect — is 1 − β.
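The meaning of α can be checked by simulation: if the null hypothesis is actually true and you test at α = 0.05, about 5% of experiments will still reject it. The sketch below assumes a simple two-sided z-test with known unit variance; the sample size and trial count are arbitrary choices.

```python
import math
import random

random.seed(1)

ALPHA = 0.05
N_TRIALS = 5_000   # repeated experiments; H0 is true in every one
N_OBS = 30         # observations per experiment

rejections = 0
for _ in range(N_TRIALS):
    # H0 is true by construction: the data really have mean 0
    sample = [random.gauss(0, 1) for _ in range(N_OBS)]
    z = (sum(sample) / N_OBS) * math.sqrt(N_OBS)  # z-statistic, sigma known = 1
    if abs(z) > 1.96:                             # two-sided rejection at alpha = 0.05
        rejections += 1

type_1_rate = rejections / N_TRIALS
print(f"Observed Type I rate ~ {type_1_rate:.3f}")  # hovers near ALPHA
```

Every rejection here is a false positive, because the null hypothesis was true in every trial; the observed rate converges to α as the number of trials grows.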

🧠 Intuition

Think of a courtroom:

Type I error: Convicting an innocent person
Type II error: Letting a guilty person go free

You can tighten the rules to avoid one type of error, but that usually increases the other.

⚖️ Trade‑off

Reducing Type I errors (making α smaller) makes the test more conservative, so real effects become harder to detect → Type II errors increase.
Increasing power (reducing β) by loosening the rejection threshold makes real effects easier to detect → the risk of Type I errors increases.

This tension is why study design and sample size matter.
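This trade-off can be made concrete with a textbook power calculation for a two-sided z-test. The sketch below assumes a true standardized effect of 0.5 and a sample of 20 observations — both hypothetical numbers chosen only to illustrate the tension; it also drops the negligible opposite-tail term from the power formula.

```python
import math
from statistics import NormalDist

def type_2_rate(alpha: float, effect: float, n: int) -> float:
    """Approximate beta for a two-sided z-test given a true standardized effect."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # rejection cutoff for this alpha
    shift = effect * math.sqrt(n)                  # how far the true mean sits from H0
    # Power ~= P(Z > z_crit - shift); the opposite tail is negligible here
    power = 1 - NormalDist().cdf(z_crit - shift)
    return 1 - power

# Tightening alpha (fewer false positives) raises beta (more misses)
for alpha in (0.01, 0.05, 0.20):
    print(f"alpha = {alpha:4.2f} -> beta ~ {type_2_rate(alpha, 0.5, 20):.3f}")
```

Re-running `type_2_rate` with a larger `n` shows why sample size matters: more data lowers β without touching α, easing the trade-off instead of just shifting it.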
