A publication of the Centre for Advancing Journalism, University of Melbourne

Opinion polls often a matter of opinion, say pundits

Opinion polls make headlines, claim political scalps and become the form guide for the election horse race. But how reliable are they?

Words by Rose Iser

Six polling companies – Galaxy, Nielsen, Newspoll, Morgan, Essential Research and ReachTEL – dominate media headlines.

But their results are often reported with little interrogation of their methodology, and while most pollsters back their polls as accurate barometers of public opinion, political analysts argue their relative usefulness can come down to polling methods and sample sizes.

The ABC’s polling expert, Antony Green, says that all pollsters are aware of the limitations of polling and try to reduce statistical errors arising from their methods. And he believes that, often, the people who complain about a specific poll are those who don’t like the result, and who will question the poll’s methodology in a bid to cast doubt on unfavourable numbers.

But he sounds a note of caution, too: “You have to choose your polls. And it depends on what you use them for.”

Mr Green nominates Newspoll, which has asked the same questions for 30 years, as providing the most consistently useful results, but believes some others are less reliable.

Adrian Beaumont, a PhD candidate in statistics at Melbourne University, agrees that Newspoll’s final pre-election poll predictions have been consistently accurate.

Mr Beaumont notes that polling companies can lose credibility when their final election polls are wide of the mark, such as Nielsen’s 2007 election eve poll in which Labor scored 57 per cent of the national vote. The next day, Labor won the election with only 52.7 per cent support, well below Nielsen’s numbers.

Mr Beaumont believes it is not enough for journalists simply to report the numbers without providing details about a poll’s sample size, the dates on which it was conducted and the questions asked.


In addition, the order of questions can be crucial: “All pollsters should ask the question about voting intention first up,” he maintains. “If you ask other questions first, you can pollute the answers.”

All analysts point to the method of selecting people and contacting them as key determinants of reliable polling. Representative polling is based on the views of a random sample. But if the sample does not represent all groups in the community, the results will be skewed.

And yet, in the reality of modern polling, whether a voter’s views are captured by the pollsters will almost certainly depend on whether they have a landline, whether theirs is a silent number and whether they are at home in the evenings.

Polls using only landline phone numbers exclude mobile phone users, who are more likely to be young people and professionals. And nearly all polls rely on methods that exclude responses from non-English-speaking voters.

Dr Kevin Bonham, a polling analyst at the University of Tasmania, insists that no single poll is perfect. “All polls have error margins and flaws.”

It can be difficult getting the picture right, he adds, particularly when results are close. Then, the margin of error of a poll, determined by the size of its sample, can be critical. 

  Sample size       Margin of error
  400 people        +/- 5%
  1,100 people      +/- 3%
  2,500 people      +/- 2%
  10,000 people     +/- 1%

In short, the bigger the number of people invited to give their opinions, the more likely the findings are to represent the electorate’s thinking. But a sample of just 400 people could be out by as much as five percentage points, rendering its findings highly speculative.
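The figures in the table follow from the standard formula for the 95 per cent margin of error on a polled proportion, roughly 1.96 × √(p(1−p)/n), which is widest when support sits at 50 per cent. A minimal Python sketch (the outputs match the table above):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Reproduce the sample sizes quoted in the article.
for n in (400, 1100, 2500, 10000):
    print(f"{n:>6} people: +/- {margin_of_error(n) * 100:.1f}%")
```

Note the diminishing returns: quadrupling the sample from 2,500 to 10,000 only halves the margin of error, which is why few published polls go beyond a few thousand respondents.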

So, which polling companies in Australia use polling methods that are most representative and useful?

Galaxy is one of the only companies that calls mobile phones, but its published polls often draw on samples of only 800 or so, with results weighted to be more representative, a process that widens the effective margin of error. Nielsen doesn’t call mobile phones, but its sample sizes tend to exceed 1000.

ReachTEL conducts what is known as ‘robopolling’: automated phone calls in which a machine asks the questions. The method is widely considered to add about one percentage point to the conservative vote, partly because the automated format leaves little room for voter ambivalence, and experience suggests progressive voters are less likely to tolerate automated calls.

On the other hand, Morgan’s polling methods, including face-to-face interviews, says statistician Adrian Beaumont, can favour progressive results by up to two percentage points, while newly-implemented methodology has not been tested at a federal election.

Given the range of polling methods and associated flaws, Dr Bonham suggests that the most valuable information can be drawn from aggregated poll results such as those calculated by the Crikey blogger PollBludger, aka William Bowe, a PhD candidate in politics at the University of Western Australia.

On his BludgerTrack 2013 site, Bowe explains he calculates “an aggregate of all published national opinion polls. Each poll is adjusted to account for observed biases and weighted according to sample size and past reliability”.
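Bowe’s description, in which each poll is first adjusted for its observed house bias and then weighted by sample size and past reliability, amounts to a weighted average. Below is a minimal sketch of that idea; the pollster names come from the article, but the results, house-effect adjustments and reliability weights are invented purely for illustration:

```python
# Hypothetical inputs: (pollster, two-party-preferred %, sample size,
# house-effect adjustment in points, reliability weight 0-1).
# All figures are illustrative, not BludgerTrack's actual data.
polls = [
    ("Newspoll", 52.0, 1150,  0.0, 1.0),
    ("Nielsen",  54.0, 1400, -1.0, 0.9),
    ("ReachTEL", 51.0, 2100, +0.5, 0.8),
]

def aggregate(polls):
    """Weighted average of bias-adjusted poll results."""
    num = den = 0.0
    for name, result, n, bias_adj, reliability in polls:
        adjusted = result + bias_adj   # correct for observed house bias
        weight = n * reliability       # weight by sample size and track record
        num += adjusted * weight
        den += weight
    return num / den

print(f"Aggregate: {aggregate(polls):.1f}%")
```

The effect is that a single outlying poll, like the Nielsen 2007 example above, moves the aggregate far less than it would move a headline built on that one poll alone.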

While Mr Green is more circumspect about the usefulness of PollBludger, Dr Bonham believes Bowe “does a very good job”. 

An additional limitation of all polls is that they are national in scope whereas elections in Australia are decided by results in individual electorates, where votes can be swayed by parochial issues. Applying national results uniformly can lead to inaccuracies in predictions.

Mr Green argues, therefore, that state polling, although expensive, is a more useful barometer of voter intentions. “If you want to do accurate polling you should be doing state polling,” he argues.

However, only some opinion polls break down results state-by-state, and often the sample sizes for each state are well below 1000 people, thereby carrying a large margin of error. 

Electorate-specific polls are rarer still, and while polling in marginal seats becomes more common closer to an election, small sample sizes typically yield wildly varying results.

By way of example, two recent polls in the seat of Melbourne, conducted one month apart with small sample sizes and different methods, produced vastly different outcomes: the primary vote recorded for the incumbent, the Greens’ Adam Bandt, varied by 15 percentage points. Yet the substantial differences in how the polls were conducted went largely unreported.

Galaxy, mid-July
  • 400 people
  • Weighted by age, gender and postcode
  • Phone poll drawn from the White Pages
  • Margin of error +/- 5%

15 August
  • 860 people
  • Weighted by age and gender
  • Automated telephone calls
  • Margin of error +/- 5%

Candidates polled: Adam Bandt (Greens), Cath Bowtell, Sean Armistead (Lib)

The ABC’s Green says that the manner in which polls are reported can undermine their accuracy and usefulness. “I often tell people to look at the original tables,” he says.

Mr Beaumont argues that journalists should report all opinion polls and take into account contradictory results. But the way in which news outlets commission polls, essentially for exclusivity, works against this.

Polls make headlines because newspapers have to justify the expense in commissioning them, Green explains. “They tend to build them up because they are one of the few things, these days, that they have exclusively.”

This can lead to media exaggerating the importance of poll results, according to Beaumont. Given that all polls have flaws and margins of error, he concedes that accurate polling still requires “a bit of luck”.

The American writer E B White displayed a healthy suspicion of opinion polling. After the polls had predicted erroneously that incumbent Harry Truman would be beaten by Republican challenger Thomas Dewey in the 1948 US presidential election, White wrote: “People are unpredictable by nature, and although you can take a nation’s pulse, you can’t be sure that the nation hasn’t just run up a flight of stairs.”

POLL POSITIONS: How the pollsters line up

Galaxy (polls for News Corp tabloids)
  • Random telephone surveying, including mobile phones
  • Small samples
  • Results weighted
  • Questions occasionally use provocative language

Newspoll (polls for The Australian)
  • Random telephone surveying, landlines only
  • Excludes mobile phone users

Nielsen (polls for The Age and The Sydney Morning Herald)
  • Random telephone surveying, landlines only
  • Excludes mobile phone users

ReachTEL
  • Robopolling: automated phone calls to randomly selected landline phones
  • Excludes mobile phone users
  • Widely considered skewed towards conservative voters

Roy Morgan (formerly polled for the Bulletin magazine)
  • Multi-mode, including face-to-face, SMS and phone
  • Untested at a federal election
  • Face-to-face data considered skewed towards progressive voters

Essential Research
  • Online poll
  • Sample pool drawn from a panel
  • Stable panel leads to static results
  • Adjustments required for small sample pool
