How big should my sample size be?

That excruciating inadequacy. The feeling you don’t quite have enough to satisfy. Of course, the good thing about sample is you can always increase the size with a little extra budget.

But is it really necessary?

One common question I am asked by clients is “how big should my sample be?”. Naturally, the answer is wonderfully vague… it depends.

*Depends on what exactly?*

**1] Expectations of your audience.**

Not wishing to shrug off the sexual innuendo quite yet, the size you need will depend on the expectations of your audience. Journalists, for example, expect n=1,000 or n=2,000. This is broadly how they judge your survey to be credible. It has little to do with statistical validity and more to do with what they are used to, as well as a perception of what their own audiences will expect.

High-ranking, C-suite execs may also believe a minimum sample of n=1,000 is required. In both cases, you’re very unlikely to convince them otherwise so perhaps just go with it and suck up the additional expense?

**2] The validity question.**

The truth is, you don’t need a particularly large sample to ensure statistically valid results. What’s much more important than size is *quality* (of course!). Take the example of a piece of research which fundamentally requires a nationally representative UK sample. Far better to have n=200 spread evenly across age bands, gender and location than it would be to have n=2,000 old men living in Bristol. Quality counts far more here than quantity.
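To make the quality-over-quantity point concrete, here is a minimal sketch (the age-band targets and sample shares are illustrative numbers I've made up, not official figures): it compares each sample's demographic shares against nationally representative targets and reports the largest gap, a crude measure of skew.

```python
# Illustrative only: a crude "skew" score comparing a sample's demographic
# shares with target (nationally representative) shares.

def max_quota_gap(sample_shares, target_shares):
    """Largest absolute deviation between sample and target shares."""
    return max(abs(sample_shares[k] - target_shares[k]) for k in target_shares)

# Hypothetical targets and samples (not real ONS figures).
targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

balanced_200 = {"18-34": 0.31, "35-54": 0.34, "55+": 0.35}  # n=200, quota-controlled
skewed_2000 = {"18-34": 0.05, "35-54": 0.15, "55+": 0.80}   # n=2,000, heavily older

print(max_quota_gap(balanced_200, targets))  # small gap despite the small n
print(max_quota_gap(skewed_2000, targets))   # large gap despite the big n
```

The well-spread n=200 sample sits within a point of its targets; the n=2,000 sample of old men in Bristol misses by 45 points, and no amount of extra volume fixes that.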

Using some data to illustrate, the chart below shows two key variables which are linked to sample size: i) cost and ii) confidence intervals (think, statistical validity).

Assuming each completed survey costs £1.50 and there are no economies of scale, an n=200 sample will cost £300 and an n=2,000 sample will cost £3,000.

But if you look at the red line, you’ll notice there isn’t a corresponding linear trend with validity. Increasing sample size does increase validity, but there are diminishing returns. Whilst it’s true that a small n=100 sample can give wide margins of error, when you get to about n=600 the additional cost of buying sample does not buy a corresponding increase in confidence in the data.
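You can reproduce the shape of that red line with the standard margin-of-error formula for a proportion (95% confidence, worst case p=0.5), alongside the £1.50-per-complete cost assumption from above. A quick sketch:

```python
import math

COST_PER_COMPLETE = 1.50  # GBP, the cost assumption used in the post


def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)


for n in (100, 200, 600, 1000, 2000):
    moe = margin_of_error(n)
    print(f"n={n:>5}  cost=£{n * COST_PER_COMPLETE:>8,.2f}  ±{moe:.1%}")
```

Cost climbs linearly while precision improves with the square root of n: going from n=100 to n=600 shrinks the margin from roughly ±9.8% to ±4.0%, but doubling from n=1,000 to n=2,000 costs an extra £1,500 and buys you less than a single point.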

Additional confidence in survey data may be more critical in some cases – e.g. in public health, where lives literally depend on the decisions being made – but for most surveys, and most of us, the cost of topping up our sample by 000s is pretty much a waste of money.

**3] Sub-groups and segments.**

Where you are looking to identify segments, or carry out further analysis of sub-groups, then a larger overall sample may be required. That said, there is rarely a case where you would need to increase it into the 000s. As a rule of thumb, I prefer that the sub-demos, sub-groups or segments I work with have a minimum of n=100, though I have seen analysis done pretty convincingly on smaller samples than this. So a survey which identifies roughly 8 equally sized segments would be fine with an overall sample of n=800 (i.e. not in the 000s).
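The rule of thumb above can be turned into a back-of-the-envelope calculation: size the overall sample so that the *smallest* expected segment still hits the n=100 minimum. A sketch (the function name and the 10% example share are my own illustrations):

```python
import math

MIN_PER_SEGMENT = 100  # the rule-of-thumb minimum from the post


def overall_sample_needed(smallest_segment_share, min_per_segment=MIN_PER_SEGMENT):
    """Overall n required so the smallest expected segment still reaches the minimum."""
    return math.ceil(min_per_segment / smallest_segment_share)


print(overall_sample_needed(1 / 8))  # 8 equal segments -> n=800
print(overall_sample_needed(0.10))   # smallest segment ~10% of sample -> n=1,000
```

Note that unequal segments push the requirement up: if your smallest segment is only 10% of the population, you need n=1,000 overall to see n=100 of them, even though eight equal segments would only need n=800.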

So the simple message here is: to save money on research, reduce sample size.

I’ve oversimplified in this post for ease of understanding. Your specific needs may require more of a conversation, taking in the size of your research universe, for example. If you have any further questions on sample, research etc. then please don’t hesitate to get in touch: gideon@profundo.co.uk.

For a handy calculator to estimate the sample size you need, try Hotjar’s free sample size calculator.

**Why not let me keep in touch with you with a regular dose of free insight and curated articles of interest?**