
Sample Size Calculator

Calculate the required sample size for surveys, A/B tests, and research studies. Determine how many respondents you need for statistically significant results.

Required Sample Size

385 respondents

  • Confidence Level: 95%
  • Margin of Error: +/- 5%
  • Actual Margin of Error: +/- 4.99%
  • Response Distribution: 50%
  • Population: Infinite

Sample Size by Confidence Level

| Confidence | Z-Score | Sample Size |
| --- | --- | --- |
| 90% | 1.645 | 271 |
| 95% | 1.960 | 385 |
| 99% | 2.576 | 664 |

Sample Size by Margin of Error

| Margin of Error | Sample Size |
| --- | --- |
| +/- 1% | 9,604 |
| +/- 2% | 2,401 |
| +/- 3% | 1,068 |
| +/- 5% | 385 |
| +/- 10% | 97 |
**Account for Response Rate**: If you expect a 25% response rate, you need to contact approximately 1,540 people; for a 10% response rate, contact 3,850 people.

Survey Sample Size Formula

n = (Z^2 * p * (1-p)) / E^2

Where Z = Z-score for confidence level, p = expected proportion, E = margin of error. For finite populations, apply correction: n_adj = n / (1 + (n-1)/N)

About This Calculator

The Sample Size Calculator determines how many participants, respondents, or observations you need for statistically valid research results. Whether you're conducting surveys, A/B tests, clinical trials, or market research, the correct sample size ensures your findings are accurate, reliable, and actionable.

Choosing the wrong sample size can undermine your entire research effort. Too small a sample leads to unreliable results and missed insights. Too large a sample wastes resources without improving accuracy. This calculator uses proven statistical formulas to find the optimal balance, considering your desired confidence level, margin of error, and population characteristics.

Our free sample size calculator supports two primary use cases: survey research (determining respondents needed for a given margin of error) and A/B testing (calculating participants needed to detect a meaningful difference between variants). Simply enter your parameters and get instant, mathematically precise sample size recommendations.

Note: This calculator provides statistically sound estimates based on standard formulas. For complex study designs (stratified sampling, cluster sampling, multi-arm trials), consult a statistician or biostatistician for customized power analysis.

How to Use the Sample Size Calculator

  1. **Select your calculation mode**: Choose "Survey/Research" for general survey sample sizes or "A/B Testing" for experiment design. Each mode requires different inputs and uses different formulas.
  2. **For surveys - Enter population size**: Input your total population size (the group you want to draw conclusions about). For very large populations (over 100,000) or unknown populations, select "Infinite/Unknown" as the impact on sample size is minimal.
  3. **Set your confidence level**: Choose 90%, 95%, or 99%. Higher confidence means more certainty that your results represent the true population value. 95% is the standard for most research.
  4. **Define your margin of error**: Enter the acceptable margin of error as a percentage (typically 3-5%). A 5% margin means results could be off by up to 5 percentage points in either direction.
  5. **Specify expected response distribution**: If you expect roughly 50/50 responses (maximum variability), use 50%. If you expect more skewed results (e.g., 80% yes), enter 80% for a potentially smaller sample size.
  6. **For A/B tests - Enter baseline conversion rate**: Input your current conversion rate (control group expected performance).
  7. **Set minimum detectable effect**: Enter the smallest improvement you want to reliably detect. Smaller effects require larger sample sizes.
  8. **Choose statistical power**: Select 80% or 90% power. Higher power means better ability to detect true effects, but requires more participants.
  9. **Review your results**: The calculator shows the required sample size, actual margin of error, and a comparison table showing how sample size changes with different parameters.

Formula

n = (Z^2 * p * (1-p)) / E^2

Where n is the required sample size, Z is the Z-score for your confidence level (1.96 for 95%), p is the expected proportion or response distribution (0.5 for maximum variability), and E is the margin of error as a decimal (0.05 for 5%). For finite populations, multiply by N/(N+n-1) where N is the population size.
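As a sanity check, the formula is easy to evaluate directly. Below is a minimal Python sketch (the function name is ours, not part of the calculator):

```python
import math

def sample_size(z: float, p: float, e: float) -> int:
    """Infinite-population sample size for a proportion, rounded up.

    z: Z-score for the confidence level (e.g., 1.96 for 95%)
    p: expected proportion (0.5 gives the most conservative result)
    e: margin of error as a decimal (0.05 for 5%)
    """
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

print(sample_size(1.96, 0.5, 0.05))  # → 385
```

Plugging in Z = 1.645 or Z = 2.576 reproduces the 271 and 664 figures quoted for 90% and 99% confidence.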

Understanding Sample Size: Why It Matters

Sample size is the foundation of statistical validity. It determines whether your research conclusions can be trusted and applied to real-world decisions.

What Is Sample Size?

Sample size (often denoted as "n") is the number of observations, participants, or data points in your study. It's a subset of the larger population you want to understand.

Key concept: You can rarely study an entire population, so you study a sample and use statistics to make inferences about the whole population.

Why Sample Size Matters

1. Statistical Significance

Larger samples increase your ability to detect real effects and differences. A study with too few participants may miss important findings entirely.

2. Margin of Error

Sample size directly affects your margin of error. Doubling your sample size doesn't halve your margin of error (it reduces it by about 30%), but inadequate samples lead to unacceptably wide confidence intervals.

3. Generalizability

Properly sized samples allow you to confidently apply findings to your target population. Results from underpowered studies may not replicate.

4. Resource Optimization

Over-sampling wastes time and money. Under-sampling wastes your entire research investment by producing unreliable results.

Sample Size in Different Research Contexts

| Research Type | Typical Sample Size Range | Key Considerations |
| --- | --- | --- |
| National surveys | 1,000 - 2,500 | Geographic representation |
| Market research | 300 - 1,000 | Segment analysis needs |
| A/B tests | 1,000 - 100,000+ per variant | Effect size sensitivity |
| Clinical trials | 50 - 10,000+ | Safety and efficacy endpoints |
| Academic research | 30 - 500 | Resource constraints |
| Quality control | 50 - 500 | Process variability |

Related: Use our standard deviation calculator to measure variability in your data, which affects sample size requirements.

The Sample Size Formula Explained

Understanding the mathematics behind sample size calculation helps you make informed decisions about your research design.

Standard Sample Size Formula for Proportions

For surveys measuring proportions (percentages), the formula is:

n = (Z^2 * p * (1-p)) / E^2

Where:

  • n = required sample size
  • Z = Z-score corresponding to confidence level
  • p = expected proportion (response distribution)
  • E = margin of error (as decimal)

Z-Scores for Common Confidence Levels

| Confidence Level | Z-Score | Interpretation |
| --- | --- | --- |
| 90% | 1.645 | 90% confident the true value is within the margin of error |
| 95% | 1.96 | Industry standard for most research |
| 99% | 2.576 | Used when high certainty is critical |

Finite Population Correction

When your population is known and relatively small, apply the finite population correction (FPC):

n_adjusted = n / (1 + (n-1)/N)

Where:

  • N = total population size
  • n = sample size from initial formula
  • n_adjusted = corrected sample size

This correction reduces required sample size when sampling a significant proportion of the population.

Step-by-Step Example

Scenario: Survey 10,000 employees with 95% confidence, 5% margin of error, expecting 50% response distribution.

  1. Calculate initial sample size:

    • Z = 1.96 (for 95% confidence)
    • p = 0.50 (50% expected)
    • E = 0.05 (5% margin)
    • n = (1.96^2 * 0.50 * 0.50) / 0.05^2 = 384.16
  2. Apply finite population correction:

    • N = 10,000
    • n_adjusted = 384 / (1 + (384-1)/10,000) = 370

Result: You need approximately 370 respondents.
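The two steps above can be combined into one small function. A Python sketch (illustrative names; the correction is applied to the unrounded initial size before rounding up):

```python
import math

def finite_sample_size(z: float, p: float, e: float, population: int) -> int:
    """Sample size for a proportion with finite population correction (FPC)."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)      # step 1: infinite-population size
    n_adj = n0 / (1 + (n0 - 1) / population)    # step 2: apply the FPC
    return math.ceil(n_adj)

print(finite_sample_size(1.96, 0.5, 0.05, 10_000))  # → 370
```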

Learn more about confidence intervals with our confidence interval calculator.

Confidence Levels Explained

The confidence level is one of the most important (and often misunderstood) concepts in sample size determination.

What Is a Confidence Level?

The confidence level describes how reliably your sampling procedure captures the true population value. A 95% confidence level means that if you repeated your study 100 times, approximately 95 of the resulting confidence intervals would contain the true population value.

Choosing the Right Confidence Level

90% Confidence Level (Z = 1.645)

Best for:

  • Preliminary research and pilot studies
  • Internal business decisions with low stakes
  • Situations where speed matters more than precision
  • Exploratory market research

Sample size impact: Smallest sample required

95% Confidence Level (Z = 1.96)

Best for:

  • Standard academic and professional research
  • Survey research and polling
  • Most business decisions
  • Published research and reports

Sample size impact: Moderate sample required (most common choice)

99% Confidence Level (Z = 2.576)

Best for:

  • High-stakes decisions
  • Clinical and medical research
  • Quality control in manufacturing
  • Legal or regulatory compliance

Sample size impact: Largest sample required

Confidence Level Comparison Table

| Confidence Level | Margin of Error Multiplier | Sample Size Multiplier |
| --- | --- | --- |
| 90% | 1.00x (baseline) | 1.00x (baseline) |
| 95% | 1.19x | 1.42x |
| 99% | 1.57x | 2.45x |

Common Misconceptions

Misconception: "95% confidence means 95% of respondents are accurately represented." Reality: It means the methodology produces accurate intervals 95% of the time across repeated samples.

Misconception: "Higher confidence is always better." Reality: Higher confidence requires larger samples. The tradeoff between precision and resources must be balanced.

Misconception: "Confidence level affects accuracy of individual responses." Reality: Confidence level addresses sampling error, not measurement error or response bias.

For probability concepts underlying confidence levels, see our probability calculator.

Margin of Error: What It Really Means

Margin of error (MOE) quantifies the uncertainty in your survey results. Understanding it is essential for interpreting and communicating your findings.

Definition

Margin of error is the range within which the true population value is likely to fall. If a survey shows 60% approval with a +/-3% margin of error, the true approval rate is likely between 57% and 63%.

How Margin of Error Relates to Sample Size

The relationship is non-linear. To cut your margin of error in half, you need to quadruple your sample size:

| Margin of Error | Relative Sample Size |
| --- | --- |
| 10% | 1x (baseline) |
| 5% | 4x |
| 3% | 11x |
| 2% | 25x |
| 1% | 100x |

Choosing Your Margin of Error

10% Margin of Error

  • Rough estimates only
  • Very early-stage research
  • Resource-constrained pilots

5% Margin of Error (Common)

  • Standard market research
  • Employee surveys
  • Customer satisfaction studies
  • General population polling

3% Margin of Error

  • Published academic research
  • High-stakes business decisions
  • Political polling
  • Regulatory submissions

1-2% Margin of Error

  • National census and major surveys
  • Critical policy decisions
  • Large-scale A/B tests
  • Precision marketing research

Margin of Error Formula

MOE = Z * sqrt(p * (1-p) / n)

Where:

  • Z = Z-score for confidence level
  • p = observed proportion
  • n = sample size
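Given a completed survey, the same relationship yields the margin of error you actually achieved. A Python sketch (the function name is illustrative):

```python
import math

def margin_of_error(z: float, p: float, n: int) -> float:
    """Margin of error for an observed sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# 1,000 respondents, p = 0.5, 95% confidence
print(round(margin_of_error(1.96, 0.5, 1000) * 100, 2))  # → 3.1 (percentage points)
```

This matches the roughly 3% figure commonly quoted for 1,000-person polls.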

Real-World Example

A political poll of 1,000 voters shows Candidate A at 52% with a 3% margin of error (95% confidence).

Interpretation: We're 95% confident Candidate A's true support is between 49% and 55%. Since this range includes 50%, the race is statistically a "toss-up" despite the apparent lead.

Convert percentages to decimals easily with our percentage calculator.

A/B Testing Sample Size Calculations

A/B testing requires different sample size calculations than survey research. The goal is detecting meaningful differences between variants.

The A/B Testing Formula

Sample size per variant for comparing two proportions:

n = 2 * ((Z_alpha + Z_beta)^2 * p * (1-p)) / MDE^2

Where:

  • n = sample size per variant (multiply by number of variants for total)
  • Z_alpha = Z-score for significance level (typically 1.96 for 5% significance)
  • Z_beta = Z-score for statistical power (0.84 for 80%, 1.28 for 90%)
  • p = baseline conversion rate
  • MDE = minimum detectable effect (absolute difference)
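Using the rounded Z-scores given above (1.96 for significance, 0.84 for 80% power), the per-variant requirement can be sketched in Python (function and parameter names are ours):

```python
import math

def ab_sample_size(baseline: float, mde: float,
                   z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Per-variant sample size to detect an absolute lift of `mde`
    over `baseline`, at 5% significance and 80% power by default."""
    n = 2 * ((z_alpha + z_beta) ** 2) * baseline * (1 - baseline) / (mde ** 2)
    return math.ceil(n)

# 5% baseline, detect an absolute 1-point lift (5% → 6%)
print(ab_sample_size(0.05, 0.01))  # roughly 7,450 per variant
```

Note how halving the MDE roughly quadruples the requirement, mirroring the margin-of-error relationship for surveys.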

Key A/B Testing Concepts

Baseline Conversion Rate

Your current conversion rate (control). Lower baseline rates require larger samples to detect the same relative improvement.

| Baseline Rate | Sample Needed for 10% Relative Lift |
| --- | --- |
| 1% | ~150,000 per variant |
| 5% | ~30,000 per variant |
| 10% | ~15,000 per variant |
| 25% | ~6,000 per variant |

Minimum Detectable Effect (MDE)

The smallest improvement you want to reliably detect.

Absolute MDE: Direct percentage point change (e.g., 5% to 6% = 1 percentage point MDE)

Relative MDE: Percentage change from baseline (e.g., 5% to 6% = 20% relative lift)

Statistical Power

The probability of detecting a true effect when it exists.

| Power Level | Miss Rate | Use Case |
| --- | --- | --- |
| 80% | 20% | Standard testing |
| 90% | 10% | Important decisions |
| 95% | 5% | Critical tests |

A/B Test Duration Calculator

Estimate how long your test needs to run:

Duration (days) = (Sample size per variant * 2) / Daily traffic

Example: 10,000 visitors needed per variant, 2,000 daily visitors:

Duration = (10,000 * 2) / 2,000 = 10 days minimum

Common A/B Testing Mistakes

  1. Stopping tests early when results look significant
  2. Using too small MDE (requiring impractically large samples)
  3. Ignoring day-of-week effects in short tests
  4. Testing too many variants simultaneously
  5. Not accounting for multiple comparisons

For calculating averages and means in your test results, use our average calculator.

Population Size Effects

Many researchers overestimate how much population size affects sample size requirements. The relationship is often counterintuitive.

The Surprising Truth

For large populations (over 20,000), population size has minimal impact on required sample size. The difference between surveying a city of 100,000 and a country of 100 million is surprisingly small.

Population Size Impact Table

Settings: 95% confidence, 5% margin of error, 50% response distribution

| Population Size | Required Sample | % of Population |
| --- | --- | --- |
| 100 | 80 | 80% |
| 500 | 217 | 43% |
| 1,000 | 278 | 28% |
| 5,000 | 357 | 7.1% |
| 10,000 | 370 | 3.7% |
| 50,000 | 381 | 0.76% |
| 100,000 | 383 | 0.38% |
| 1,000,000 | 384 | 0.038% |
| Infinite | 385 | ~0% |

Why This Happens

The finite population correction formula shows why:

n_adjusted = n / (1 + (n-1)/N)

As N approaches infinity, the correction factor approaches 1, meaning no adjustment needed. The correction only matters when your sample is a significant fraction of the population.

When Population Size Matters

Population size significantly affects sample size when:

  1. Small populations (under 1,000)
  2. Sampling more than 5% of population
  3. Specialized or niche populations
  4. Internal company surveys with defined employee counts

Practical Implications

Large Populations (N > 20,000)

  • Use the "infinite population" formula
  • Population size can usually be ignored
  • Focus on margin of error and confidence level

Small Populations (N < 1,000)

  • Always apply finite population correction
  • May need to sample a large percentage
  • Consider census (surveying everyone) if feasible

Medium Populations (1,000 - 20,000)

  • Calculate both with and without FPC
  • Use corrected value if difference is meaningful
  • Document your population size assumption

Survey Best Practices for Valid Results

Proper sample size is necessary but not sufficient for valid survey results. These best practices ensure your data is both statistically sound and practically meaningful.

Before Your Survey

1. Define Clear Objectives

  • What decisions will this data inform?
  • What precision level do those decisions require?
  • Who is the target population?

2. Account for Response Rate

Inflate your outreach based on expected response rate:

Contacts needed = Required sample / Expected response rate

| Survey Type | Typical Response Rate | Multiplier Needed |
| --- | --- | --- |
| Internal employee | 40-70% | 1.5-2.5x |
| Customer email | 10-25% | 4-10x |
| Phone survey | 5-15% | 7-20x |
| Online panel | 20-40% | 2.5-5x |
| Mail survey | 5-10% | 10-20x |
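The outreach inflation reduces to one division, rounded up. A Python sketch (function name is ours):

```python
import math

def contacts_needed(required_sample: int, response_rate: float) -> int:
    """People to contact so the expected completes reach the target sample."""
    return math.ceil(required_sample / response_rate)

# 385 required responses at a 25% expected response rate
print(contacts_needed(385, 0.25))  # → 1540
```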

3. Consider Subgroup Analysis

If you need to analyze subgroups separately, each subgroup needs adequate sample size:

Example: 400 total respondents, 4 demographic segments = 100 per segment (may be insufficient for reliable segment-level analysis)

During Data Collection

1. Monitor Response Distribution

Track demographics and characteristics as responses come in. Significant skew may indicate sampling bias.

2. Watch for Response Patterns

  • Straight-lining (same answer for all questions)
  • Speeding (unrealistically fast completion)
  • Missing data patterns

3. Maintain Consistency

Keep survey instrument, distribution method, and timing consistent throughout collection period.

After Your Survey

1. Report Your Methodology

Always document and report:

  • Sample size achieved
  • Response rate
  • Margin of error and confidence level
  • Sampling method
  • Data collection dates

2. Acknowledge Limitations

Be transparent about:

  • Non-response bias potential
  • Coverage gaps in your sampling frame
  • Any weighting applied

3. Use Appropriate Statistical Tests

Match your analysis to your sample design:

  • Simple random samples: Standard tests
  • Stratified samples: Weighted analysis
  • Cluster samples: Design-effect adjusted tests

Sample Size for Different Study Types

Different research contexts have unique sample size considerations beyond the basic formula.

Market Research

Typical requirements:

  • 95% confidence, 5% margin of error: n = 385
  • With subgroup analysis: n = 100-300 per subgroup

Special considerations:

  • Brand awareness studies may need larger samples
  • Conjoint analysis typically needs 300-1,000
  • Price sensitivity research needs 200-400 minimum

Clinical Trials

Factors affecting sample size:

  • Expected treatment effect size
  • Variability in primary endpoint
  • Dropout rate (inflate by 10-20%)
  • Regulatory requirements

Typical ranges:

  • Phase I: 20-80 participants
  • Phase II: 100-300 participants
  • Phase III: 300-3,000+ participants

Quality Control

Acceptance sampling: Based on AQL (Acceptable Quality Level) and lot size:

  • Small lots: May require 100% inspection
  • Large lots: Statistical sampling tables (MIL-STD-1916)

Process capability studies:

  • Minimum: 30 samples
  • Recommended: 50-100 samples

Academic Research

Psychology/Social sciences:

  • Minimum for detecting medium effects: ~64 per group
  • Recommended for reliability: 100+ per group

Economics/Finance:

  • Large datasets preferred (thousands of observations)
  • Panel data increases effective sample size

Employee Surveys

Census vs. sample decision:

| Company Size | Recommendation |
| --- | --- |
| < 100 | Census (survey all) |
| 100-500 | Census or large sample |
| 500-5,000 | Sample with stratification |
| 5,000+ | Well-designed sample |

For each study type, remember to verify your calculations using our confidence interval calculator to ensure your results will have acceptable precision.

Pro Tips

  • 💡 **Start with your margin of error needs**: Most business decisions work fine with 5% margin of error. Only aim for 3% or lower if the extra precision significantly improves decision quality - remember, halving the margin requires quadrupling the sample.
  • 💡 **Use 50% response distribution when uncertain**: This gives the most conservative (largest) sample size estimate. You can reduce sample size if you have strong evidence for a skewed distribution from previous studies.
  • 💡 **Account for non-response from the start**: Plan to contact 2-10x your required sample size depending on your survey method. A 20% response rate means you need 5x the contacts to achieve your target sample.
  • 💡 **For A/B tests, focus on meaningful effect sizes**: Detecting tiny improvements requires enormous samples. Ask whether a 1% improvement is actionable before designing a test that requires 100,000 visitors.
  • 💡 **Consider subgroup analysis needs upfront**: If you need reliable results for 5 customer segments, each segment needs adequate sample size - not just the total. This may multiply your overall sample requirement.
  • 💡 **Population size rarely matters for large populations**: For populations over 20,000, you can usually ignore population size in calculations. The difference in required sample between 50,000 and 50 million is negligible.
  • 💡 **Validate critical calculations**: For important research, verify your sample size using multiple methods or tools. Cross-check with specialized software like G*Power for complex study designs.
  • 💡 **Budget for quality over quantity**: A smaller, well-designed sample with high response rates often yields better data than a larger sample with systematic biases and low engagement.

Frequently Asked Questions

**How many respondents do I need for my survey?**

The sample size you need depends on four key factors: population size, confidence level, margin of error, and expected response distribution.

Quick reference for 95% confidence and 5% margin of error:

  • Population 500: Need 217 responses
  • Population 1,000: Need 278 responses
  • Population 10,000: Need 370 responses
  • Population 100,000+: Need 384 responses

To reduce your margin of error to 3% (higher precision), multiply these numbers by approximately 2.8.

To increase confidence to 99%, multiply by approximately 1.7.

Use our calculator above for precise calculations based on your specific parameters. Remember to account for response rate when planning outreach - you may need to contact 2-10x your required sample size depending on your survey distribution method.

Written by Nina Bao, Content Writer
Updated January 16, 2026
