Opinion polls dominate every election. They can influence whether or not people bother to vote. They inform the national debate, but they can sometimes get their predictions wrong!
What are opinion polls?
Simply, they are samples of opinion gathered from members of the public on questions such as who they would vote for, which issues they think are important, and what they think about those issues.
The crucial point here is that they have to be samples — small slices of the overall population. And that is where the problems start. If you sample a population, how do you know your sample is representative of the whole?
Most of the political polls you will see are actually conducted by market research organisations that make most of their living finding out what sorts of products or services people are likely to buy. They often conduct political opinion polls as a way of promoting themselves — they certainly get lots of free publicity, especially during elections and referendums.
How do most polls work?
Most national opinion polls published during this election will be a fairly small sample — usually between one and two thousand people. As there are nearly 46m people eligible to vote in parliamentary elections in 650 constituencies in England, Scotland, Wales and Northern Ireland, a sample of a couple of thousand is tiny.
In deciding how to conduct a poll, the pollsters have some big choices to make. Firstly, how many people to ask? Secondly, how to choose them – do they pick randomly, or target specific groups by age, location, or something else? Then, how do they contact them? Face-to-face, at home or in the street, by phone or online? And how do they deal with non-responses that might skew their results?
And, of course, they have to frame their questions very carefully. There is lots of evidence to suggest that how a question is posed can have significant effects on how people respond. Not all polls ask about the same issue in the same way.
Having collected their samples of opinion, the pollsters then have to analyse their results. And it is here that most controversy arises, because to make their results representative they introduce what is called “weighting”. That is, they adjust their raw results to make them better reflect what the whole population is likely to think. So they adjust for a range of factors, including demographics (for example, age, sex, ethnicity) and affiliations.
While these factors are drawn from available data such as population statistics, deciding which factors to use and how to apply them when weighting samples remains a tricky problem.
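The weighting step described above can be sketched in a few lines. This is an illustrative toy example with invented numbers, not any pollster’s actual method: suppose a raw sample over-represents younger voters, so each group’s answers are scaled until the sample’s age mix matches the population’s.

```python
# A minimal sketch of demographic weighting (all numbers invented).
# The raw sample is 60% under-50 and 40% over-50, but suppose the
# voting-age population is split 50/50.
population_share = {"under_50": 0.5, "over_50": 0.5}
sample_share = {"under_50": 0.6, "over_50": 0.4}

# Each group gets a weight so the weighted sample matches the population.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical raw support for Party A within each group:
raw_support = {"under_50": 0.45, "over_50": 0.35}

unweighted = sum(sample_share[g] * raw_support[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * raw_support[g] for g in sample_share)

print(f"Unweighted estimate: {unweighted:.1%}")  # 41.0%
print(f"Weighted estimate:   {weighted:.1%}")    # 40.0%
```

Even in this toy case, the headline number moves by a full point depending on the weighting — which is why the choice of weighting factors matters so much.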
So why don’t they just do bigger polls?
Well, some do, but they are very expensive, so in any election there will probably be only a handful of much larger polls — usually in the tens of thousands. YouGov has conducted one massive poll of 100,000 voters.
But however big the poll, some of the fundamental problems remain. YouGov’s poll is still only about 0.25% of the electorate, or about 150 people per constituency.
So even a big poll still has to be weighted to make sure it is as representative as possible. And weighting is still a tricky problem.
Haven’t the polls got it badly wrong recently?
In 2015, it was widely accepted that the polls got the election result wrong because their weightings produced an over-estimate of the Labour vote and an under-estimate of how well the Tories would do. So they mostly forecast a hung Parliament when, in reality, David Cameron’s Tories won a majority.
In the 2017 election, it is also widely accepted they then over-compensated by adjusting their weightings too far the other way: forecasting a Tory win, when actually Theresa May lost her majority.
Is it the polls that are the problem, or the way we talk about them?
Part of the problem is the way polls are reported, and how we all tend to talk about them. Most media coverage gives a single number — as in “the Conservatives are on 40%”. This makes the polling sound far more accurate and easy to understand and discuss than it really is.
The pollsters probably said, “The Tories are on 40%, plus or minus three points” — but the nuance gets lost in translation.
In close elections, the margin of error, as it is called, can make all the difference between winning and losing.
In the 2016 EU referendum, the actual results — 52 to 48 — were easily within the margin of error for most polls.
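The “plus or minus three points” figure is not plucked from the air. For a simple random sample, there is a textbook formula for the 95% margin of error — though note this is the idealised version; real pollsters’ weighted samples behave somewhat differently, so treat this as a rough guide only.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    p: the poll's estimated share (e.g. 0.5 for 50%)
    n: sample size
    z: 1.96 is the standard multiplier for 95% confidence
    """
    return z * math.sqrt(p * (1 - p) / n)

# A party polling at 50% in a typical sample of 1,000:
moe = margin_of_error(0.5, 1000)
print(f"±{moe:.1%}")  # ±3.1%
```

With a sample of 1,000, the margin of error is about three points — exactly the gap between a 52–48 result and a dead heat.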
What is this MRP thing people are talking about?
MRP (multilevel regression and post-stratification) combines much bigger sample polls, like the YouGov one already mentioned, with sophisticated statistical analysis that uses demographic data and previous opinion polling to tune the results down to a constituency-by-constituency level.
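The “post-stratification” half of MRP can be sketched very roughly. This toy example (invented numbers, and no actual multilevel regression — that is the part MRP uses to estimate support within each demographic group) just shows the final step: projecting group-level support onto each constituency’s own demographic mix.

```python
# Toy post-stratification sketch (all numbers invented).
# In real MRP, these group-level support rates would come from a
# multilevel regression fitted to a very large poll.
support_by_group = {"young_urban": 0.55, "older_rural": 0.30}

# Each constituency has its own demographic mix:
constituencies = {
    "Seat A": {"young_urban": 0.7, "older_rural": 0.3},
    "Seat B": {"young_urban": 0.2, "older_rural": 0.8},
}

# Project national group-level support onto each seat's mix.
estimates = {
    seat: sum(mix[g] * support_by_group[g] for g in mix)
    for seat, mix in constituencies.items()
}

for seat, estimate in estimates.items():
    print(f"{seat}: {estimate:.1%}")
```

The same national-level data yields very different seat-level estimates, which is how MRP produces a constituency-by-constituency picture from one big poll.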
A previous YouGov MRP poll in 2017 was credited with most closely predicting the actual outcome. But another MRP poll didn’t. So, is the excitement about MRP misplaced?
Finally, it is worth remembering that people answering pollsters’ questions may not always be telling the truth. In 1992, the polls famously got it wrong because of people who were called “reluctant Tories” — voters who didn’t want to admit they were voting Conservative.
And it’s not always on purpose. Record numbers of people are saying they genuinely don’t know who they will vote for, or are switching at the last minute.
This briefing is produced by The Day in association with ENGAGE Public Policy.
- Has election campaigning by the main parties changed your views positively about any of them?
- Conduct a poll in your classroom: Which party would you vote for today? Compare your results with the latest YouGov poll (see Expert Links).
- Skew
- To bias (slant) towards one particular group.
- Affiliations
- Connections to political parties or regions.
- Hung Parliament
- When no single political party wins a majority in the House of Commons. It is also known as a situation of no overall control. When there is no majority, the Prime Minister in power before the general election stays in power and is given the first chance to form a government. They may negotiate with another party or parties to build a coalition, try to govern with a minority of MPs, or resign.
- Nuance
- A subtle difference.
- Margin of error
- This tells you how far, in percentage points, a poll’s result is likely to differ from the true value for the whole population.