Central Limit Theorem Calculator
Try different values to see how sample size affects the distribution of sample means
The Central Limit Theorem (CLT)
The Central Limit Theorem is one of the most important concepts in statistics. It states that if you take sufficiently large random samples from a population with finite variance, the distribution of the sample means will be approximately normal, regardless of the shape of the original population distribution.
More formally, if X₁, X₂, ..., Xₙ are independent, identically distributed random variables with mean μ and finite variance σ², then the sampling distribution of the sample mean X̄ approaches a normal distribution with mean μ and variance σ²/n as the sample size n increases.
Key Implications of the CLT
- Normal approximation: As sample size increases, the distribution of sample means approaches a normal distribution.
- Same mean: The mean of the sampling distribution equals the population mean.
- Reduced variability: The standard deviation of the sampling distribution (standard error) equals the population standard deviation divided by the square root of the sample size: σ/√n.
- Independence from the original distribution: The approximate normality of the sample means holds regardless of the shape of the original population distribution (see the sketch after this list).
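The "reduced variability" and "independence from the original distribution" points can be checked numerically. The sketch below is a minimal illustration assuming NumPy is available; the exponential population, sample sizes, and seed are illustrative choices, not values used by this calculator.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative skewed population: exponential with rate lambda = 2,
# so mu = 1/lambda = 0.5 and sigma = 1/lambda = 0.5.
lam = 2.0
mu, sigma = 1 / lam, 1 / lam
num_samples = 20_000  # number of sample means drawn per sample size

for n in (5, 30, 100):
    # Draw num_samples samples of size n and compute each sample's mean.
    samples = rng.exponential(scale=1 / lam, size=(num_samples, n))
    sample_means = samples.mean(axis=1)

    # The spread of the sample means should track the standard error sigma / sqrt(n).
    print(f"n={n:3d}  SD of sample means={sample_means.std(ddof=1):.4f}  "
          f"theoretical SE={sigma / np.sqrt(n):.4f}")
```

As n grows, the empirical standard deviation of the sample means shrinks in step with σ/√n, even though the underlying population is strongly skewed.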
When Does the CLT Apply?
The CLT applies when:
- Samples are random
- Samples are independent
- Sample size is sufficiently large (a common rule of thumb is n ≥ 30, though the required size depends on the skewness of the original distribution)
- The population has a finite variance
Key Formulas
Mean of sampling distribution: μ_X̄ = μ
Standard error (standard deviation of sampling distribution): σ_X̄ = σ/√n
Standardized sample mean (z-score): z = (X̄ - μ) / (σ/√n)
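As a minimal illustration of these formulas, the helper functions below (hypothetical names, standard library only) compute the standard error and the z-score; the example numbers are made up.

```python
import math

def standard_error(sigma: float, n: int) -> float:
    """Standard deviation of the sampling distribution of the mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

def z_score(sample_mean: float, mu: float, sigma: float, n: int) -> float:
    """Standardized sample mean: z = (x_bar - mu) / (sigma / sqrt(n))."""
    return (sample_mean - mu) / standard_error(sigma, n)

# Illustrative numbers: population mu = 100, sigma = 15, sample of n = 36 with mean 104.
print(standard_error(15, 36))     # 2.5
print(z_score(104, 100, 15, 36))  # 1.6
```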
Common Population Distributions
| Distribution | Parameters | Mean | Variance |
|---|---|---|---|
| Uniform | Min (a), Max (b) | (a + b) / 2 | (b - a)² / 12 |
| Normal | Mean (μ), SD (σ) | μ | σ² |
| Exponential | Rate (λ) | 1/λ | 1/λ² |
| Chi-Squared | Degrees of freedom (df) | df | 2 × df |
| Binomial | Trials (n), Probability (p) | n × p | n × p × (1 - p) |
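To see the table's mean and variance formulas in action, the sketch below draws a large number of values from each distribution with NumPy and compares the empirical moments to the table; the parameter values are arbitrary examples. Note that NumPy's exponential sampler is parameterized by the scale 1/λ rather than the rate λ.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
size = 200_000  # large draw so the empirical moments sit close to the table values

# Arbitrary example parameters for each table row.
a, b = 0.0, 10.0        # Uniform: min, max
mu, sd = 5.0, 2.0       # Normal: mean, SD
lam = 0.5               # Exponential: rate
df = 4                  # Chi-Squared: degrees of freedom
n, p = 20, 0.3          # Binomial: trials, probability

rows = [
    ("Uniform",     rng.uniform(a, b, size),        (a + b) / 2, (b - a) ** 2 / 12),
    ("Normal",      rng.normal(mu, sd, size),       mu,          sd ** 2),
    ("Exponential", rng.exponential(1 / lam, size), 1 / lam,     1 / lam ** 2),
    ("Chi-Squared", rng.chisquare(df, size),        df,          2 * df),
    ("Binomial",    rng.binomial(n, p, size),       n * p,       n * p * (1 - p)),
]

for name, draws, mean, var in rows:
    print(f"{name:12s} mean={draws.mean():8.3f} (theory {mean:8.3f})  "
          f"var={draws.var(ddof=1):8.3f} (theory {var:8.3f})")
```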
Understanding the Central Limit Theorem
What's Happening in the Simulation?
In the simulator tab, we demonstrate the Central Limit Theorem by the following steps (sketched in code after this list):
- Generating random values from the selected population distribution.
- Taking samples of size n and calculating the mean of each sample.
- Plotting the distribution of these sample means.
- Overlaying the theoretical normal distribution that the sample means should approach.
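A minimal sketch of that procedure, assuming NumPy and using made-up settings in place of the simulator's controls (the simulator itself may use different defaults and distributions):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical settings standing in for the simulator controls.
sample_size = 30        # n: observations per sample
num_samples = 5_000     # how many sample means to collect
lam = 1.0               # exponential population with rate 1, so mu = sigma = 1
mu, sigma = 1 / lam, 1 / lam

# Steps 1-2: generate random values from the population, grouped into samples of size n.
samples = rng.exponential(scale=1 / lam, size=(num_samples, sample_size))

# Step 3: compute the mean of each sample.
sample_means = samples.mean(axis=1)

# Step 4: compare against the theoretical normal N(mu, sigma^2 / n) the means should approach.
print("mean of sample means:", sample_means.mean(), "vs population mean:", mu)
print("SD of sample means:  ", sample_means.std(ddof=1),
      "vs standard error:   ", sigma / np.sqrt(sample_size))

# A histogram of sample_means (e.g. with matplotlib) overlaid with the density of
# N(mu, sigma^2 / n) reproduces the comparison described above.
```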
How to Interpret the Results
As you run the simulation and increase the number of samples, you should notice that:
- The histogram of sample means starts to look like a bell-shaped curve (normal distribution).
- The mean of the sample means gets closer to the population mean.
- The standard deviation of the sample means gets closer to the theoretical standard error (σ/√n).
Practical Applications
The Central Limit Theorem is fundamental to many statistical methods and has numerous practical applications:
- Statistical inference: Allows us to make predictions about populations based on samples.
- Hypothesis testing: Forms the basis for many statistical tests.
- Confidence intervals: Enables us to estimate population parameters with a specified level of confidence (a short code sketch follows this list).
- Quality control: Used to monitor manufacturing processes and detect deviations.
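For the confidence-interval application, here is a minimal sketch (hypothetical helper, standard library only) of a normal-approximation interval that leans directly on the CLT and the standard error; the sample values are made up.

```python
import math

def mean_confidence_interval(sample_mean: float, sigma: float, n: int,
                             z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for the population mean, using the CLT's normal
    approximation (z = 1.96 covers the central 95% of a standard normal)."""
    se = sigma / math.sqrt(n)
    return sample_mean - z * se, sample_mean + z * se

# Illustrative sample: n = 50, sample mean 20.4, known population SD 3.0.
print(mean_confidence_interval(20.4, 3.0, 50))  # roughly (19.57, 21.23)
```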
Tips for Using the Simulator
- Try different distributions: Notice how even extremely skewed or unusual distributions produce approximately normal sample means once the sample size is large enough.
- Change the sample size: Observe how the distribution of sample means becomes more normal as you increase the sample size.
- Increase the number of samples: More samples will make the pattern clearer.