Hypothesis Testing Using the p‑Value

The p‑value is the probability of getting a result as extreme as (or more extreme than) your sample result if the null hypothesis were true.

In other words:

The p‑value tells you how surprising your data would be assuming the null hypothesis is correct.

It does not tell you the probability that the null is true.
It does not tell you the probability that your result is due to chance.
It tells you how compatible your data are with the null.

The Decision Rule

Once you compute the p‑value, you compare it to your chosen significance level α:

  • If p ≤ α → reject H_0
  • If p > α → fail to reject H_0

That’s it.
The entire decision hinges on this comparison.
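
In code, the rule is a single comparison. Here is a minimal sketch in Python; the function name `decide` and the default α of 0.05 are illustrative choices, not part of any library:

```python
def decide(p_value, alpha=0.05):
    """Apply the p-value decision rule at significance level alpha."""
    if p_value <= alpha:
        return "Reject H0"
    return "Fail to reject H0"

print(decide(0.03))  # Reject H0
print(decide(0.18))  # Fail to reject H0
```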

Why this works

If the p‑value is small, it means:

  • “If the null were true, this result would be very unlikely.”
  • So we have evidence against the null.

If the p‑value is large:

  • “This result is not surprising under the null.”
  • So we don’t have enough evidence to reject it.

Fresh, Original Examples

🧪 Example 1: Testing a Mean

A company claims the average battery life of its headphones is 20 hours.

You test 30 headphones and compute a p‑value of 0.03.

If α = 0.05:

  • p = 0.03 ≤ 0.05
    Reject H_0
  • Evidence suggests the true mean battery life is different from 20 hours.
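
Here is a rough sketch of how such a test could be run with SciPy's one‑sample t‑test. The battery‑life measurements are simulated stand‑ins (only the claimed mean of 20 hours comes from the example), so the printed p‑value will not be exactly 0.03:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical measurements standing in for the 30 tested headphones
battery_hours = rng.normal(loc=19.2, scale=1.8, size=30)

# H0: the true mean battery life is 20 hours
t_stat, p_value = stats.ttest_1samp(battery_hours, popmean=20)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("Reject H0" if p_value <= alpha else "Fail to reject H0")
```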

🧠 Example 2: Testing a Proportion

A website claims that 60% of users prefer the dark mode theme.

You survey 200 users and get a p‑value of 0.18.

If α = 0.05:

  • p = 0.18 > 0.05
    Fail to reject H_0
  • Your sample does not provide strong evidence that the true proportion differs from 60%.
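
A sketch of the corresponding one‑sample proportion z‑test, written by hand with SciPy's normal distribution. The observed count (111 of 200) is invented for illustration; only the claimed 60% comes from the example:

```python
import math
from scipy.stats import norm

p0, n, successes = 0.60, 200, 111      # 111 is a hypothetical survey count
p_hat = successes / n

se = math.sqrt(p0 * (1 - p0) / n)      # standard error under H0
z = (p_hat - p0) / se
p_value = 2 * norm.sf(abs(z))          # two-sided p-value

alpha = 0.05
print(f"z = {z:.3f}, p = {p_value:.3f}")
print("Reject H0" if p_value <= alpha else "Fail to reject H0")
```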

🏃 Example 3: Comparing Two Groups

A sports scientist tests whether a new warm‑up routine improves sprint speed.

The independent‑samples t‑test gives p = 0.008.

If α = 0.01:

  • p = 0.008 ≤ 0.01
    Reject H_0
  • Strong evidence the warm‑up routine affects sprint speed.
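
A minimal sketch with SciPy's independent‑samples t‑test. The sprint times (in seconds) for the two hypothetical groups are simulated; only the test itself and the α of 0.01 come from the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control    = rng.normal(loc=12.4, scale=0.4, size=25)  # usual warm-up
new_warmup = rng.normal(loc=12.0, scale=0.4, size=25)  # new routine

t_stat, p_value = stats.ttest_ind(new_warmup, control)

alpha = 0.01
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value <= alpha else "Fail to reject H0")
```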

How to interpret p‑values (the right way)

Correct interpretations

  • “If the null is true, this result would be unlikely.”
  • “The data are inconsistent with the null hypothesis.”
  • “There is evidence against the null.”

Incorrect interpretations (but very common!)

  • “The null is probably false.”
  • “There is a 3% chance the result is due to chance.”
  • “There is a 97% chance the alternative is true.”

These are tempting but wrong — the p‑value is about the data, not the truth of the hypotheses.

A simple analogy

Imagine the null hypothesis is a claim that a coin is fair.

You flip it 20 times and get 18 heads.

The p‑value answers:

“If the coin were fair, how likely is it to get 18 or more heads?”

If that probability is tiny, you doubt the coin is fair.
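
That probability is easy to check directly. The snippet below uses SciPy's binomial distribution; `binom.sf(17, n=20, p=0.5)` gives P(18 or more heads) for a fair coin flipped 20 times, which works out to roughly 0.0002:

```python
from scipy.stats import binom

# P(18 or more heads in 20 flips of a fair coin) = P(X > 17)
p_value = binom.sf(17, n=20, p=0.5)
print(f"{p_value:.6f}")  # ≈ 0.000201
```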

That’s hypothesis testing in a nutshell.
