Bonferroni correction – What It Is and Why It Matters
The Bonferroni correction is a statistical method used to reduce the risk of Type I errors (false positives) when you run multiple hypothesis tests. Every time you test a hypothesis, there’s a chance you’ll incorrectly…
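The idea can be sketched in a few lines of Python (the function name and p-values are illustrative, not from the post): to keep the family-wise error rate at α across m tests, compare each p-value to the stricter level α / m.

```python
# Bonferroni correction: to keep the family-wise error rate at alpha
# across m hypothesis tests, test each one at the level alpha / m.
def bonferroni_reject(p_values, alpha=0.05):
    """Return a list of booleans: True where the null is rejected."""
    m = len(p_values)
    threshold = alpha / m          # corrected per-test significance level
    return [p < threshold for p in p_values]

# Three tests at family-wise alpha = 0.05 -> each compared to 0.05 / 3
print(bonferroni_reject([0.01, 0.02, 0.20], alpha=0.05))   # -> [True, False, False]
```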
Type I and type II errors
Type I Error (False Positive): you reject a true null hypothesis — you conclude something is happening when it actually isn’t. Example: A medical test says a patient has a disease, but they actually don’t. Type…
independent samples in hypothesis testing
🧩 What “Independent Samples” Means Two samples are independent when the individuals in one group have no relationship to the individuals in the other group. This is the setup for the independent‑samples t‑test, also called…
One sample t-test
A one‑sample t‑test checks whether the mean of a single sample is significantly different from a known or hypothesized population mean. It answers the question: “Is my sample mean different enough from the population mean…
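A minimal sketch of the statistic itself, using only the standard library (the sample numbers and hypothesized mean are my own, chosen for illustration): t = (x̄ − μ₀) / (s / √n).

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: population mean == mu0."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)        # sample SD, n - 1 in the denominator
    return (xbar - mu0) / (s / math.sqrt(n))

# Does this small sample look different from a hypothesized mean of 10?
t = one_sample_t([12.1, 11.4, 10.8, 12.9, 11.2], mu0=10)
print(round(t, 2))   # -> 4.53
```

In practice you would compare t against a t-distribution with n − 1 degrees of freedom (e.g. via `scipy.stats.ttest_1samp`) rather than computing the p-value by hand.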
Student’s t-test & Student’s t-distribution
A t‑test is a hypothesis test used when you want to compare means but you don’t know the population standard deviation and your sample size is small. It’s used for: One‑sample t‑test → compare one…
The choice of significance level in hypothesis testing
The significance level, usually written as α, is the threshold for how much evidence you require before rejecting the null hypothesis. It is the probability of making a Type I error: P(reject H₀ | H₀ true) = α. So choosing α is…
hypothesis testing using p-value
The p‑value is the probability of getting a result as extreme as (or more extreme than) your sample result if the null hypothesis were true. In other words: The p‑value tells you how surprising your…
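For a z test this probability can be computed directly from the standard normal CDF; a stdlib-only sketch (the helper name is mine):

```python
import math

def two_sided_p_from_z(z):
    """p-value for a two-sided z test.
    Standard normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)           # probability mass in both tails

# A z-score of 1.96 sits right at the classic 5% two-sided boundary.
print(round(two_sided_p_from_z(1.96), 3))   # -> 0.05
```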
hypothesis testing with critical values
Instead of using a p‑value, you compare your test statistic (like a z‑score or t‑score) to a critical value that marks the boundary of the rejection region. If your test statistic falls beyond the critical…
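The decision rule is a one-liner; a sketch assuming a two-sided z test at α = 0.05, whose critical value is 1.96 (function name illustrative):

```python
def reject_with_critical_value(test_stat, critical=1.96):
    """Two-sided decision rule: reject H0 when the statistic falls
    beyond the critical value in either tail."""
    return abs(test_stat) > critical

print(reject_with_critical_value(2.3))   # -> True  (in the rejection region)
print(reject_with_critical_value(1.2))   # -> False (fail to reject H0)
```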
Fisher vs Neyman battle
🧪 Ronald Fisher: The p‑value Rebel Philosophy: Evidence, not decisions Fisher believed statistics should help scientists measure evidence against a null hypothesis. Key ideas Fisher’s vibe: The scientist as a detective, gathering clues and weighing…
setting up the hypothesis
At the heart of every hypothesis test are two competing statements about a population. They must be: mutually exclusive (can’t both be true), exhaustive (cover all possibilities), and about population parameters, not sample statistics. Let’s break down how to…
What’s hypothesis testing
Hypothesis testing is a structured way to use sample data to make decisions or draw conclusions about a population. It answers questions like: It’s the backbone of inferential statistics. 🎯 The Core Idea You start…
central limit theorem & confidence interval
⭐ Central Limit Theorem (CLT) The Central Limit Theorem says something surprisingly powerful: if you take many random samples and compute their means, the distribution of those sample means will be approximately normal, even if the original…
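You can watch the CLT at work with a short simulation (population, sample sizes, and seed are all my choices, not from the post): sample means of a decidedly non-normal distribution still cluster tightly around the population mean.

```python
import random
import statistics

random.seed(0)  # reproducible demo

# Population: uniform on [0, 1] (mean 0.5) - not normal at all.
def sample_mean(n):
    return statistics.mean(random.random() for _ in range(n))

# Draw many sample means; by the CLT their distribution is roughly normal,
# centered on the population mean, with spread shrinking like 1/sqrt(n).
means = [sample_mean(30) for _ in range(2000)]
print(round(statistics.mean(means), 2))   # close to the population mean 0.5
```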
Normalization & z-score
⭐ Z‑Score A z‑score tells you how many standard deviations an observation is from the mean. What it does Example: given a population mean μ and standard deviation σ, the z‑score of a value x is z = (x − μ) / σ. Interpretation: the value is 1.5…
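The formula translates directly to code; a sketch with illustrative numbers of my own choosing:

```python
def z_score(x, mu, sigma):
    """How many standard deviations x lies from the mean mu."""
    return (x - mu) / sigma

# A value of 115 with population mean 100 and standard deviation 10:
print(z_score(115, mu=100, sigma=10))   # -> 1.5, i.e. 1.5 SDs above the mean
```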
Histogram versus density
⭐ Histogram vs. Density Plot Both visualize distributions, but they answer slightly different questions and behave differently. 📊 Histogram A histogram groups data into bins and shows counts (or proportions) in each bin. Key features…
geometric distribution is memoryless
A random variable X that follows a geometric distribution satisfies P(X > m + n | X > m) = P(X > n). This means: The probability you still have to wait n more trials does NOT depend on how long you’ve already been waiting. Your past failures don’t change…
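The memoryless identity can be checked numerically: for a geometric variable with success probability p, the survival function is P(X > n) = (1 − p)ⁿ, so the conditional tail probability factors out. A sketch with arbitrary p, m, n of my own:

```python
import math

# Geometric survival function: P(X > n) = (1 - p) ** n
def tail(n, p):
    return (1 - p) ** n

p = 0.3
m, n = 4, 2
# P(X > m + n | X > m) = P(X > m + n) / P(X > m)
conditional = tail(m + n, p) / tail(m, p)
print(math.isclose(conditional, tail(n, p)))   # -> True: memorylessness holds
```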
geometric distribution
The geometric distribution models the number of trials needed until the first success occurs in a sequence of independent Bernoulli trials (like repeated coin flips). Think of it as the math of “How long until…
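A small sketch of the PMF and its mean (the coin-flip numbers are illustrative): the first success lands on trial k with probability (1 − p)^(k−1) · p, and the expected wait is 1 / p.

```python
# Geometric pmf: P(X = k) = (1 - p) ** (k - 1) * p  (first success on trial k)
def geom_pmf(k, p):
    return (1 - p) ** (k - 1) * p

p = 0.5  # e.g. flipping a fair coin until the first head
# The expected number of trials is 1 / p; a truncated sum confirms it:
mean = sum(k * geom_pmf(k, p) for k in range(1, 200))
print(round(mean, 6))   # -> 2.0  (matches 1 / p)
```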
Binomial distribution
⭐ Binomial Distribution The binomial distribution models the number of successes in a fixed number of independent trials, where each trial has the same probability of success. Think of it as the math of “How…
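The PMF is a one-liner with the standard library (the coin-flip example is my own): P(X = k) = C(n, k) · pᵏ · (1 − p)ⁿ⁻ᵏ.

```python
import math

# Binomial pmf: P(X = k) = C(n, k) * p**k * (1 - p)**(n - k)
def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of exactly 3 heads in 5 fair coin flips:
print(binom_pmf(3, n=5, p=0.5))   # -> 0.3125  (i.e. 10 / 32)
```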
expectation is linear
The expected value (mean) of random variables adds even if the variables are dependent. This is the magic part: Expectation is always linear — no independence required. Formally, for any random variables X and Y: E[X + Y] = E[X] + E[Y]. And…
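A quick simulation makes the "no independence required" point vivid (the dice setup is my own illustration): take Y = 7 − X, which is completely determined by X, and the expectations still add.

```python
import random
import statistics

random.seed(1)

# X uniform on {1..6}; Y = 7 - X is completely dependent on X,
# yet E[X + Y] = E[X] + E[Y] still holds (here X + Y is exactly 7).
xs = [random.randint(1, 6) for _ in range(10_000)]
ys = [7 - x for x in xs]
sums = [x + y for x, y in zip(xs, ys)]
print(statistics.mean(sums))   # -> 7, the sum of the two means 3.5 + 3.5
```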
probability density
A probability density function describes the distribution of a continuous random variable. If X is continuous, its PDF is a function f such that P(a ≤ X ≤ b) = ∫[a,b] f(x) dx. The key idea for continuous variables: The PDF is not a probability. Probability…
probability mass function
A probability mass function is a function that gives the probability of each individual value of a discrete random variable. If X is a discrete random variable, then its PMF is p(x) = P(X = x). It tells you: A PMF…
Types of random variables
Most random variables fall into two big categories: Everything else is a refinement of these two. 🎯 1. Discrete Random Variables A discrete random variable takes countable values — usually integers. Key features Examples Common…










