Survey Design Basics: Sample Sizes, Confidence Intervals, and Margin of Error
Learn how many people you need to survey for reliable results, what confidence intervals actually mean, and how to interpret the margin of error in polls and research.
The question that trips everyone up
A few years ago, one of my tutoring students came in with a university assignment: design a survey to find out how students felt about the campus cafeteria. She had written brilliant questions, tested the wording, even piloted it with flatmates. Then her supervisor asked, “How many responses do you need?” She stared at me and said, “I was hoping you would tell me fifty is enough.”
It is a fair hope. Fifty feels like a decent number. But whether fifty is enough depends on how precise you need your results to be, how varied the opinions are, and how large the population is. Get the sample size wrong and you end up with results that look scientific but could easily be noise. Get it right and even a surprisingly small number of responses can tell you something genuinely reliable.
This guide unpacks the three ideas you need to design a trustworthy survey: sample size, confidence intervals, and margin of error. They are deeply connected, and once you see how they fit together, polling results and research papers will make a lot more sense.
What sample size really means
Sample size is simply the number of people (or items, or observations) you include in your study. The core tension is straightforward: larger samples give you more reliable results, but they cost more time and money. The goal is to find the sweet spot — enough responses to be confident in your conclusions without surveying the entire population.
Three factors drive the calculation:
- Population size. If you are surveying a school of 500 students, you need fewer responses than if you are surveying a city of 500,000. For very large populations the required sample size plateaus — which is why national polls can get away with interviewing around 1,000 people.
- Desired margin of error. This is the plus-or-minus figure you see reported alongside poll results. A smaller margin demands a larger sample.
- Confidence level. Typically 95 percent, meaning that if you repeated the survey 100 times, roughly 95 of the resulting intervals would contain the true value. Higher confidence requires more respondents.
The surprising result is that you rarely need to survey as many people as you think. For a population of 10,000 with a 5 percent margin of error and 95 percent confidence, the required sample is around 370 — well under 4 percent of the group. Try different scenarios yourself with the Sample Size Calculator:
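The arithmetic behind that figure is Cochran's formula plus a finite population correction. Here is a minimal sketch in Python; the function name and defaults are illustrative, not from any particular library:

```python
import math

def required_sample_size(margin, z=1.96, p=0.5, population=None):
    """Cochran's formula, optionally corrected for a finite population."""
    # Worst-case variability is p = 0.5, which maximises p * (1 - p)
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # Finite population correction: smaller populations need fewer responses
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Population of 10,000, 5% margin of error, 95% confidence (z = 1.96)
print(required_sample_size(0.05, population=10_000))   # about 370
print(required_sample_size(0.05))                      # very large population: 385
```

Notice how little the finite-population correction changes the answer for a population of 10,000, which is the plateau effect described above.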
Confidence intervals: what “95 percent confident” actually means
This is where most people trip up — understandably, because the language is a little counterintuitive. A 95 percent confidence interval does not mean there is a 95 percent probability that the true value falls inside it. What it means is that the method you used will produce intervals containing the true value 95 times out of 100, in the long run.
Think of it like archery. Your bow (the sampling method) has a certain accuracy. If you shoot 100 arrows, about 95 of them will hit the target. Any individual arrow might miss, but you can trust the process overall. The interval itself — say, 48 percent to 54 percent support a proposal — is one arrow. You cannot know for certain that this particular arrow hit, but you do know the bow is good.
In practice, a confidence interval gives you a range of plausible values for the thing you are trying to measure. Wider intervals are less precise but more likely to contain the truth. Narrower intervals are more useful but require larger samples or less variability in your data.
Two things widen a confidence interval: a smaller sample size and greater variability in responses. Two things narrow it: collecting more data and having a population that largely agrees. If 90 percent of people prefer option A, you need far fewer responses to pin that down than if the split is 50-50.
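Both effects are easy to see numerically. Below is a sketch of the standard normal-approximation interval for a proportion; the function name is my own:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error shrinks as n grows
    return p_hat - z * se, p_hat + z * se

# More data narrows the interval: same 50% estimate, ten times the sample
print(proportion_ci(50, 100))     # roughly (0.40, 0.60)
print(proportion_ci(500, 1000))   # roughly (0.47, 0.53)

# Less variability narrows it too: a 90-10 split at the same n = 100
print(proportion_ci(90, 100))     # roughly (0.84, 0.96)
```

The normal approximation is the textbook default for reasonably large samples; for very small samples or extreme proportions, alternatives such as the Wilson interval behave better.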
Explore how sample size and variability affect the interval with the Confidence Interval Calculator:
Margin of error: the plus-or-minus number
When a news report says a political party is polling at 42 percent with a margin of error of 3 points, it means the true level of support is likely between 39 and 45 percent. The margin of error is half the width of the confidence interval, and it is the single most useful number for judging how seriously to take a poll.
A few things worth knowing about margin of error:
- It only accounts for sampling error — the natural randomness from surveying a subset rather than everyone. It does not cover badly worded questions, non-response bias, or people who lie to pollsters. A poll can have a tight margin of error and still be wildly wrong if the methodology is flawed.
- It depends on sample size, not population size (once the population is large enough). This is why a well-designed survey of 1,500 adults can represent an entire country within about 2.5 percentage points.
- It assumes a simple random sample. In reality, most surveys use more complicated sampling designs, but the reported margin of error is usually calculated as if the sample were random.
The next time you see two candidates separated by 2 points in a poll with a margin of error of 3 points, you will know the race is genuinely too close to call. The margin applies to each candidate's share separately, so the uncertainty on the gap between them is larger still, and either one could be ahead.
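These properties fall straight out of the formula: the margin of error for a proportion is z times the standard error, and the population size never appears. A quick sketch, with illustrative numbers matching the figures above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)   # population size does not appear

# A well-designed survey of 1,500 adults, worst case p = 0.5
print(round(100 * margin_of_error(1500), 1))   # 2.5 points
# A roughly 1,000-person national poll lands near 3 points
print(round(100 * margin_of_error(1068), 1))   # 3.0 points
```

Doubling the precision requires quadrupling the sample, because n sits under a square root; that is why the last point of precision is always the most expensive.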
Calculate the margin of error for your own surveys with the Margin of Error Calculator:
How the three ideas connect
Sample size, confidence intervals, and margin of error are three views of the same underlying trade-off. Pick any two and the third is determined:
- Choose your confidence level and margin of error and the sample size calculator tells you how many responses you need.
- Collect a fixed number of responses and the margin of error calculator tells you how precise your estimates are.
- Report a result with its confidence interval and readers can see both the estimate and the uncertainty wrapped into one statement.
Understanding this triangle is the difference between designing a survey that produces actionable insights and one that produces numbers nobody should trust. It is also the key to reading research critically. Whenever a study makes a claim, check the sample size and the reported margin. If neither is mentioned, treat the findings with caution.
Practical tips for better surveys
Knowing the maths is only part of the picture. A few practical pointers can save you from common pitfalls:
- Account for the response rate, not just the sample size. If you need 400 responses and expect a 25 percent response rate, you need to send your survey to at least 1,600 people. Plan for non-response from the start.
- Watch out for self-selection bias. People who feel strongly about a topic are more likely to respond. If only the most passionate cafeteria critics fill in your survey, the results will skew negative no matter how large your sample.
- Report the margin of error alongside every percentage. It keeps you honest and helps your audience interpret the results properly.
- Use 95 percent confidence unless you have a good reason not to. It is the standard in most fields. Dropping to 90 percent shrinks your required sample but weakens the guarantee. Going to 99 percent is sometimes necessary in medical or safety research but requires significantly more data.
- Remember that margin of error is widest at a 50-50 split. If preliminary data suggests opinion is heavily lopsided, you may be able to get away with a smaller sample.
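Two of these tips can be checked in a few lines. The first sketch works out how many invitations a 25 percent response rate implies; the second compares the margin of error at a 50-50 split against a 90-10 one. The function names are my own:

```python
import math

def invitations_needed(target_responses, response_rate):
    """How many people to contact, allowing for non-response."""
    return math.ceil(target_responses / response_rate)

def margin_of_error(n, p, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(invitations_needed(400, 0.25))               # 1600
# At n = 400, the margin of error is widest at p = 0.5
print(round(100 * margin_of_error(400, 0.5), 1))   # 4.9 points
print(round(100 * margin_of_error(400, 0.9), 1))   # 2.9 points
```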
From intuition to confidence
Statistics can feel intimidating when it is presented as a wall of formulas. But at its heart, survey design is about one very human question: how sure do I need to be? The sample size tells you how much effort the answer requires. The confidence interval tells you the range of reasonable answers. The margin of error tells you how much wiggle room to expect.
My student, by the way, ended up surveying 380 of her university’s 4,000 students — a number she calculated herself once the logic clicked. Her supervisor was impressed. More importantly, she could explain exactly why 380 was enough, and fifty was not. That kind of reasoning is worth more than any formula.
Calculators used in this article
- Sample Size Calculator (Math / Statistics). Estimate the required sample size for a survey from confidence level, margin of error, expected proportion, and optional population size.
- Confidence Interval Calculator (Math / Statistics). Calculate the confidence interval for a mean or proportion from sample statistics and confidence level, with margin of error and bounds.
- Margin of Error Calculator (Math / Statistics). Calculate the margin of error for a proportion from sample size and confidence level, with confidence interval bounds.