The 1936 US Presidential election was between FDR and Alf Landon. In an attempt to predict the result, the Literary Digest mailed 10,000,000 surveys to Americans selected from lists of automobile owners, telephone subscribers and country club members. 2,300,000 questionnaires were completed and returned. The poll predicted 57% for Landon and 43% for FDR.

10 million surveys, with 2.3 million returned, is a massive survey, far bigger than today's opinion polls, and the Literary Digest evidently believed that the sheer number of responses would make the results accurate. However, sending the survey to car owners, country club members and those with a telephone *in 1936* was a massive mistake. In the 1930s the people selected for the survey were not a representative cross-section of the wider US population, and were much more likely to vote for the Republican candidate, Landon.

George Gallup ran a poll with a sample of just 50,000 people and predicted a win for FDR. The actual result was 61% for FDR and 37% for Landon. Gallup ensured that his sample was representative, and went on to become a guru of opinion polling and to found one of the major polling companies. The Literary Digest, however, shortly went out of business.

This demonstrates that proper sampling is necessary to ensure that those surveyed provide a representative view of the whole population. As Gallup himself said:

> When a housewife wants to test the quality of the soup she is making, she tastes only a teaspoonful or two. She knows that if the soup is thoroughly stirred, one teaspoonful is enough to tell her whether she has the right mixture of ingredients.

The problem described above is called selection bias, i.e. the people selected to take part in the survey do not adequately represent the wider population you are trying to survey. This can be a problem with online surveys. If you want to survey young people then online surveys can be effective because almost all young people are online. However, if you also need to survey older people, particularly those aged over 60 or 70 then an online survey would only gain the views of technologically literate older people, and not the wider population.

**How to design a sample plan**

Before starting a survey you need to design a sample plan: a table showing how many interviews will be achieved in each of the geographic or demographic groups used for the survey. It is important that those interviewed for a particular survey are representative of the target population, so Census data is usually used when designing a sample plan.

It is also important to ensure that the right number of interviews is achieved; this depends partly on how the survey data will be used and the types of analysis that are required. Depending on the subject under study you may wish to create a sample plan based on age, gender, location, economic status, education level, or any number of other demographic factors.

Confidence levels and confidence intervals are used to work out the best sample size. Not only is it important to decide upon the total number of surveys required, but also to consider whether any sub-group analysis is required. If you wish to analyse the data by any sub-group (such as area or age group) then you also need to consider the confidence intervals for the sub-groups. As a general rule I recommend at least 100 interviews per sub-group that will be used for analysis; with fewer interviews it becomes increasingly difficult to detect significant differences between the sub-groups.

When developing a sample framework, the first step is to work out how many interviews would be conducted if the sample were strictly proportional to the population of the zones. If the proportional samples for all the zones are roughly equivalent and there are no zones with very small samples, the proportional sample is used for the survey. Figure 1 shows an example where there are no very small zones and the percentage range of the different zones is not too large (14% to 26%); here I would use a target that is directly proportional to the population figures.

*Figure 1*

However, where some zones are a lot smaller than others, a proportional sample would leave very few interviews completed in those zones. In these cases the number of interviews in the smaller zones needs to be boosted to ensure the reliability of the survey, because you cannot make meaningful inter-zone comparisons based on a small number of interviews.

Figure 2 shows an example where there is a much greater percentage range between the zones (this time from 4% to 40%) and zones 4 and 5 have small proportional targets of 67 and 42 interviews respectively. Here I would suggest that these two zones are boosted to 100 surveys per zone, whilst the remaining 800 surveys are distributed proportionally across zones 1, 2 and 3.

*Figure 2*

This does obviously reduce the number of interviews conducted in the larger zones, but in my view it is better to boost the smaller zones so that robust data are collected in all of the survey zones.
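The boost-and-redistribute approach described above can be sketched in code. The zone populations below are illustrative, chosen to mirror the proportions in Figure 2, and the 100-interview minimum follows the rule of thumb given earlier; the function name is my own.

```python
def allocate(populations, total, minimum=100):
    """Proportional allocation of interviews across zones, boosting
    any zone whose proportional share falls below `minimum`."""
    whole = sum(populations.values())
    proportional = {z: round(total * p / whole) for z, p in populations.items()}
    # Zones too small for reliable inter-zone comparison get the minimum...
    small = {z for z, n in proportional.items() if n < minimum}
    # ...and the remaining interviews are shared proportionally among the rest.
    remaining = total - minimum * len(small)
    large_pop = sum(p for z, p in populations.items() if z not in small)
    return {z: minimum if z in small else round(remaining * p / large_pop)
            for z, p in populations.items()}

# Illustrative populations giving roughly the Figure 2 split (40% down to 4%).
zones = {"Zone 1": 40000, "Zone 2": 28000, "Zone 3": 21100,
         "Zone 4": 6700, "Zone 5": 4200}
plan = allocate(zones, total=1000)
```

With these figures, zones 4 and 5 would receive only 67 and 42 interviews proportionally, so both are boosted to 100 and the remaining 800 interviews are split across zones 1 to 3 (rounding may leave the overall total an interview or two off 1,000).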

**Confidence Intervals and Levels**

Confidence interval – this is the margin of error. If 40% of the sample picks one answer and the confidence interval is 5 (percentage points), then if the question were asked of the entire target population, the ‘true’ answer would lie between 35% and 45%.

Confidence level – this is a percentage that represents how confident you can be that the ‘true’ percentage of the target population would pick an answer that lies within the confidence interval. In common with the majority of researchers, RMG:Clarity uses the 95% level (i.e. you can be 95% certain that the ‘true’ answer lies within the confidence interval). The 99% confidence level provides a higher level of accuracy, but requires a considerably larger sample (roughly 1.7 times the size for the same interval).

Taken together, you can say that you are 95% confident that the ‘true’ percentage of the target population who would provide a particular answer is between 35% and 45%.
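For a simple random sample, the confidence interval above comes from the standard normal-approximation formula: margin = z × √(p(1−p)/n), where z is 1.96 at the 95% confidence level (2.576 at 99%). A minimal sketch, with a function name of my own:

```python
import math

def confidence_interval(n, pct=50, z=1.96):
    """Half-width of the confidence interval, in percentage points.
    z = 1.96 gives the 95% level; pct = 50 is the worst case."""
    p = pct / 100
    return 100 * z * math.sqrt(p * (1 - p) / n)
```

At the worst-case 50%, around 384 interviews give the familiar ±5-point interval, and swapping in z = 2.576 shows why the 99% level needs a much larger sample for the same interval.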

Obviously, choosing a narrower confidence interval will give greater accuracy to the survey results; however, as the chart below shows, at a certain point the benefit of conducting more surveys begins to tail off:

*Figure 3*

Twenty-five interviews give a confidence interval of 19.6 and 100 interviews give 9.8, whilst the interval falls to 4.4 at 500 interviews and 3.1 at 1,000 interviews. However, a further 1,000 interviews lowers the confidence interval by only 0.9, to 2.2. A confidence interval of 1 (not shown on the graph) would require a massive 9,604 interviews.
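Rearranging the same normal-approximation formula gives the sample size needed for a target interval: n = (z/margin)² × p(1−p). A sketch under the worst-case 50% assumption (function name my own):

```python
import math

def sample_size(interval, pct=50, z=1.96):
    """Interviews needed for a given confidence interval (in percentage
    points) at the 95% level; pct = 50 is the worst case."""
    p = pct / 100
    n = (z / (interval / 100)) ** 2 * p * (1 - p)
    # Subtract a tiny epsilon so floating-point noise cannot bump a value
    # sitting exactly on the boundary, then round up to whole interviews.
    return math.ceil(n - 1e-9)

# sample_size(3.1) -> 1000 interviews; sample_size(1) -> 9604, as above.
```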

To further explore the relationship between the number of interviews and confidence intervals and levels, here is a handy online sample size calculator. You will see that there is space on the form to enter the population – this refers to the total population of the target area. It does not have to be entered: the population size only affects the sample size needed (or the confidence interval) when it is below about 5,000, and can otherwise largely be ignored; the sample size matters far more than the population size.
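Calculators like this typically apply the standard finite population correction when the population is small relative to the sample, which is why the population field matters so little for large populations. A sketch of that adjustment (function name my own):

```python
import math

def adjust_for_population(n, population):
    """Finite population correction: shrink an 'infinite population'
    sample size n when the target population is small."""
    return math.ceil(n / (1 + (n - 1) / population))
```

For example, a required sample of 384 is unchanged for a population of a million, but drops to about 357 for a population of 5,000 – hence population size can usually be ignored.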

The ‘Find Confidence Interval’ form linked to above also includes ‘percentage’ which is set at a default of 50. As well as sample size, the accuracy of survey results also depends on the percentage of the sample that picks a particular answer. If, for example, 99% of the sample said “Yes” and 1% said “No,” the chances of error are remote, irrespective of sample size. However, if the percentages are 51% and 49% the chances of error are much greater. It is easier to be sure of extreme answers than of middle-of-the-road ones. For this reason, when determining the sample size needed for a given level of accuracy you must use the worst case percentage (50%).

If you wish to determine the confidence interval for a specific answer, you can use the specific percentage picking that answer and get a smaller interval. Figure 4, below, shows the confidence intervals for various percentages, based on a survey with 1,000 interviews. For example, if 20% of the respondents select a particular response, the confidence interval is 2.5, meaning that you can be 95% sure that the ‘true’ answer is between 18.5% and 22.5%.

*Figure 4*

You can also see that the confidence intervals form an elongated ‘U’ shape, meaning that, for example, the confidence interval for 20% and 80% is the same (2.5). The closer to the extremes (0% and 100%), the more sure you can be of the answer.
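The ‘U’ shape can be reproduced with the same interval formula, this time plugging in the observed percentage rather than the worst case (n = 1,000, 95% level; function name my own):

```python
import math

def interval_at(pct, n=1000, z=1.96):
    """Confidence interval (percentage points) for an observed percentage."""
    p = pct / 100
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Symmetric around 50%: 20% and 80% both give roughly +/-2.5 points,
# and the interval shrinks towards the extremes.
for pct in (5, 20, 50, 80, 95):
    print(f"{pct:>2}% -> +/-{interval_at(pct):.1f}")
```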