You hear about political polls every time you turn on the news, but where do all those numbers come from?


How Political Polling Works

The news cycle is jam-packed with polls. Flip on the nightly broadcasts or browse online news articles and you'll be bombarded by the latest "statistics" about the percentage of Americans who believe in God; the breakdown of dog-lovers versus cat-lovers; and how many people between the ages of 65 and 85 own a Nintendo Wii. When it's an election year, poll numbers are headlines in themselves: "Obama Leads Romney Among Female Voters;" "Republicans More Likely to Vote Than Democrats;" and even the occasional "Dewey Defeats Truman!"

Sure, polls make great headlines, but how accurate are they? How much faith should we put in polls, particularly political polls that attempt to predict the outcome of an election? Who conducts these polls, and how do they decide whom to ask? Is it possible to achieve a representative sample of voters from randomly pulling numbers out of the phone book?

Political polling is a type of public opinion polling. When done right, public opinion polling is an accurate social science with strict rules about sample size, random selection of participants and margins of error. However, even the best public opinion poll is only a snapshot of public opinion at a particular moment in time, not an eternal truth [source: Zukin]. If you poll public opinion on nuclear energy right after a nuclear disaster, support will be much lower than it was the day before the disaster. The same is true for political polls. Voter opinion shifts dramatically from week to week, even day to day, as candidates battle it out on the campaign trail.

Political polling wasn't always so scientific. In the late 19th and early 20th centuries, journalists would conduct informal straw polls of average citizens to gauge public opinion on politicians and upcoming elections [source: Crossen]. A newspaperman traveling on a train might ask the same question to everyone sitting in his car, tally the results and publish them as fact in the next day's paper.

In the 1930s, the popular magazine "Literary Digest" conducted public opinion polls of its large subscribership by mail and phone, believing that a large sample size would automatically generate infallible results. The magazine failed to notice that its readership was wealthier and more likely to be Republican than the average U.S. voter, leading the magazine to erroneously predict a landslide victory for Alf Landon over Franklin D. Roosevelt in 1936 [source: Crossen].

Today, the top political polling organizations employ mathematical methods and computer analysis to collect responses from the best representative sample of the American voting public. But there's still plenty of "art" in the science of political polling [source: Zukin]. Even random responses must be adjusted and sifted to identify subtle trends in voter opinion that can help predict the eventual winner on Election Day.

Let's start our examination of political polling with the most important indicator of accuracy: a representative sample.

Cell Phones and Polls

According to 2008 data, 19.1 percent of Americans had only a landline telephone, 63.2 percent had both a landline and a cell phone, and 14.5 percent had only a cell phone [source: Lambert]. Since cell phone numbers are almost always unlisted, pollsters like ABC News randomly dial numbers in known cell phone-only exchanges and weight those responses separately against the landline sample. Rasmussen Reports, another national polling organization, uses online polls to reach people who have abandoned traditional landline phones altogether [source: Rasmussen Reports].

Getting a Representative Sample

The mission of political polling is to gauge the political opinion of the entire nation by asking only a small sample of likely voters. For this to work, pollsters have to ensure that the sample group accurately represents the larger population. If 50 percent of voters are female, then 50 percent of the sample group needs to be female. The same applies to characteristics like age, race and geographic location.

To get the most accurate representative sample, political pollsters take a page from probability theory [source: Zukin]. The goal of probability theory is to make mathematical sense out of seemingly random data. By creating a mathematical model for the data, researchers can accurately predict the probability of future outcomes. Political pollsters are trying to come up with models that accurately predict the outcome of elections. To do that, they need to start with a perfectly random sample and then adjust the sample so that it closely matches the characteristics of the entire population.

The most popular method for achieving a random sample is through random digit dialing (RDD). Pollsters start with a continually updated database of all listed telephone numbers in the country -- both landline and cell phones. If they only called the numbers in the database, then they'd exclude all unlisted numbers, which wouldn't be a truly random sample. Using computers, pollsters analyze the database of listed numbers to identify all active blocks of numbers, which are area codes and exchanges (the second three digits) actively in use [source: Langer]. The computers are then programmed to randomly dial every possible number combination in each active block.
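The dialing step described above can be sketched in a few lines of Python. The exchanges below are invented for illustration; real pollsters derive active blocks from a commercial database of listed numbers:

```python
import random

# Hypothetical "active blocks": area code + exchange prefixes known to be
# in use. In practice these come from analyzing a database of listed numbers.
ACTIVE_BLOCKS = ["202-555", "212-867", "312-493"]

def random_digit_dial(n_numbers, blocks=ACTIVE_BLOCKS, seed=None):
    """Generate random phone numbers within active blocks, so listed and
    unlisted lines in those exchanges are equally likely to be reached."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n_numbers):
        block = rng.choice(blocks)    # pick an active area code + exchange
        line = rng.randrange(10000)   # random last four digits: 0000-9999
        numbers.append(f"{block}-{line:04d}")
    return numbers

sample = random_digit_dial(5, seed=42)
```

Because the last four digits are generated rather than looked up, an unlisted number in an active exchange is just as likely to be dialed as a listed one, which is the whole point of the technique.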

For a truly random sample, pollsters not only need to dial random numbers, but choose random respondents within the household. Statistics show that women and older people are far more likely to answer the phone than other Americans [source: Zukin]. To randomize the sample, political pollsters often ask to speak to the member of the house with the most recent birthday.

Once a political polling organization has collected responses from a sufficiently random sample, it must adjust or weight that sample to match the most recent census data about the sex, age, race, and geographical breakdown of the American public. We'll talk more about weighting in a later section, but first, let's settle some of the mystery behind margins of error.

Margins of Error

What does it really mean when the news anchor says: "The latest polls show Johnson with 51 percent of the vote and Smith with 49 percent, with a 3 percent margin of error"? If there is a 3 percent margin of error, and Johnson leads Smith by only two percentage points, then isn't the poll useless? Isn't it equally possible that Smith is winning by one point?

The margin of error is one of the least understood aspects of political polling. The confusion begins with the name itself. The official name of the margin of error is the margin of sampling error (MOSE). The margin of sampling error is a statistically derived figure based on the size of the sample group [source: American Association for Public Opinion Research]. It has nothing to do with the accuracy of the poll itself. The true margin of error of a political poll is impossible to measure, because there are so many different things that could alter the accuracy of a poll: biased questions, poor analysis, simple math mistakes.

Instead, the MOSE is a straightforward calculation based solely on the size of the sample group (assuming that the total population is 10,000 or greater) [source: AAPOR]. As a rule, the larger the sample group, the smaller the margin of error. For example, a sample size of 100 respondents has a MOSE of +/- 10 percentage points, which is pretty huge. A sample of 1,000 respondents, however, has a MOSE of +/- 3 percentage points. To achieve a MOSE of +/- 1 percentage point, you need a sample of nearly 10,000 respondents [source: AAPOR]. Most political polls aim for about 1,000 respondents, a sample size that balances accuracy against the cost of additional calls.
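The relationship between sample size and MOSE can be sketched with the standard formula for a simple random sample, taken at the worst case of a 50/50 split and 95 percent confidence. The function name is illustrative, not any polling firm's actual code:

```python
import math

def mose(n, z=1.96):
    """Maximum margin of sampling error (in percentage points) for a
    proportion near 50 percent, at 95 percent confidence by default.
    Formula: z * sqrt(p * (1 - p) / n) with p = 0.5."""
    return 100 * z * math.sqrt(0.25 / n)

for n in (100, 1000, 10000):
    print(f"n = {n:>6}: +/- {mose(n):.1f} points")
```

Because the margin shrinks with the square root of the sample size, quadrupling the number of respondents only halves the MOSE, which is why pollsters stop well short of giant samples.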

Let's get back to our tight political race between Johnson and Smith. Does a two-point lead mean anything in a poll with a 3 percent margin of sampling error? Not really. In fact, it's worse than you think. The margin of error applies to each candidate independently [source: Zukin]. When the poll says that Johnson has 51 percent of the vote, it really means that he has anywhere between 48 and 54 percent of the vote. Likewise, Smith's 49 percent really means that he has between 46 and 52 percent of the vote. Because those two ranges overlap, the poll cannot rule out Smith winning 52 to 48.
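The Johnson-Smith arithmetic can be checked in a few lines. This is an illustrative sketch of the overlap test, not a procedure any polling firm publishes:

```python
def interval(share, margin):
    """Range of plausible vote shares for one candidate, in points."""
    return (share - margin, share + margin)

johnson = interval(51, 3)   # Johnson's plausible range
smith = interval(49, 3)     # Smith's plausible range

# The race is a statistical tie if the two ranges overlap.
statistical_tie = johnson[0] <= smith[1] and smith[0] <= johnson[1]
```

Running the check confirms the ranges 48-54 and 46-52 overlap, so the two-point lead tells us nothing about who is actually ahead.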

Next we'll look at one of the most important factors that determine the accuracy of a political poll: the wording of the questions and answers.

Push Polls

Push polls are negative political advertising disguised as legitimate political polls [source: AAPOR]. Instead of asking a respondent a series of questions about several political candidates and issues, the questions focus exclusively on negative impressions of the target candidate. Examples could be: "Would you be less likely to vote for Candidate A if you knew he had an affair with his secretary when his wife was in the hospital?" or "If you could impeach the president for gross incompetence and lack of patriotism, would you do it?" Push polls are illegal in some states and are treated as breaches of election law. The American Association for Public Opinion Research denounces push polls for eroding public faith in political polling.

Poll Questions and Answers

Questions and answers are the reason we have political polls. "Which candidate will you vote for in the election?" "Do you approve of the president's performance?" "How likely are you to vote in the midterm Congressional elections?" But the order of those questions, and the answers that respondents can choose from, can greatly affect the accuracy of the poll.

Ordering of questions is known to play a significant role in influencing responses to political polls. Let's use the example of the "horse-race" question, in which respondents are asked whom they would vote for in a head-to-head race: Candidate A or Candidate B. To ensure the most accurate result, political pollsters ask this horse-race question first. Why? Because the wording of preceding questions could influence the respondent's answer [source: Zukin].

Polls, as we mentioned, are a snapshot of the respondent's opinion in the moment the question is asked. Although many voters have a firm and long-formed opinion on politics and political candidates, other voters' views are constantly evolving -- sometimes from moment to moment. A respondent to a political poll might begin the poll with a slight lean toward Candidate A. But after a series of questions about Candidate A's views on the economy, foreign policy and social issues, the respondent might realize that he actually agrees more with Candidate B.

In pre-election polls, in particular, it is crucial to ask the horse-race question first, because voters enter the voting booth "cold," without first responding to a list of "warm-up" questions [source: Zukin].

Most political polls are conducted over the phone, and whether the pollster is a live interviewer or an automated system, there are usually set answers from which to choose. Political pollsters have discovered that the wording of these answers can offer improved insight into political opinions.

For example, the polling firm Rasmussen Reports tracks the approval rating of the president. Instead of simply asking if respondents approve or disapprove of his performance, Rasmussen asks them to choose from the following options: strongly approve, somewhat approve, somewhat disapprove or strongly disapprove. The firm has found that the "somewhat" options are important for capturing "minority" opinions [source: Rasmussen]. For example, if a registered Democrat isn't thrilled with President Obama's performance, she still might choose "approve" over "disapprove" if those were the only options. But the "somewhat disapprove" option allows her to be more honest without undercutting her support of the president. Similarly, a registered Republican could "somewhat approve" of a Democratic president without feeling he has betrayed his party.

On the next page, we'll talk about the trickiest part of creating an accurate political poll: weighting the results.

Exit Polls

Ever wonder how news organizations can predict the winner of an election before the polls have even closed? That's because political polling organizations send out an army of field workers to ask voters whom they voted for and why. The results of these exit polls are collected in real time and shared with the National Election Pool, a consortium of the major television news agencies [source: Edison Research].

Weighting Poll Results

As we discussed earlier, randomness is important to achieving a representative sample of the population. By using random dialing software, political polling organizations try to reach a perfectly randomized sample of respondents. But there are limits to the effectiveness of random dialing. For example, women and older Americans tend to answer the phone more often, which throws off the sex and age ratios of the sample. Instead of relying exclusively on random number dialing, political pollsters take the extra step of adjusting or weighting results to match the demographic profile of likely voters.

Notice that we said "likely voters," not the entire voting age population of the United States. That's an important distinction. If pollsters wanted to weight their results to match the entire voting age population, then they would adjust the results to match the latest census data. First they would distribute results geographically, keeping more responses from more populous states and cities. Then they would adjust results to match the demographic distributions of sex, age and race in America.
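The adjustment step described above can be sketched as simple post-stratification: each respondent is weighted by the ratio of the group's population share to its share of the raw sample. The 60/40 sample below is invented for illustration:

```python
# Hypothetical sex breakdown: census says 50/50, but the phone sample
# skews female because women answer the phone more often.
population_share = {"female": 0.50, "male": 0.50}
raw_sample = ["female"] * 60 + ["male"] * 40   # 60/40 raw phone sample

def poststratify(sample, population_share):
    """Weight each group so the weighted sample matches the
    population's demographic shares."""
    counts = {g: sample.count(g) for g in population_share}
    n = len(sample)
    # weight = (population share) / (sample share)
    return {g: population_share[g] / (counts[g] / n) for g in population_share}

weights = poststratify(raw_sample, population_share)
```

Here each overrepresented female respondent counts for a little less than one person and each underrepresented male respondent for a little more, so the weighted totals come out 50/50. Real pollsters weight on several dimensions at once (sex, age, race, geography), but the principle is the same.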

But if you want to achieve the most accurate political polling results, you need to filter the sample even further to weed out all of the respondents who are unlikely to vote. This is where the "art" of political polling comes into play. The best pollsters are the organizations that can develop poll questions and analysis models that separate the wheat from the chaff, isolating the responses of the most likely voters [source: Zukin]. After all, in politics, your opinion only counts if you actually vote.

Each polling organization has its own system for identifying and weighting likely voters. ABC News uses a series of questions to gauge likeliness to vote:

  • How old are you?
  • Have you registered to vote?
  • Do you intend to vote?
  • Are you paying close attention to the race?
  • Did you vote in past elections?
  • Do you know the location of your polling station? [source: Langer]

Not all political polls are used to predict the outcome of elections. Sometimes pollsters want to gauge public opinion on different political issues, often in an attempt to compare the opinions of different demographic groups: old vs. young, Democrat vs. Republican, black vs. white. In that case, pollsters don't aim for a perfectly representative sample. Instead, they engage in oversampling to build a sample that includes an equal number of respondents from each demographic. For example, if you want to poll the attitudes of white and black voters on a political issue, you would need to oversample black households, because a randomized sample would only include 10 to 15 percent black respondents.
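The case for oversampling falls out of simple arithmetic: with purely random dialing, the expected number of calls needed to reach a target group grows as the group's population share shrinks. The 13 percent figure below is an illustrative assumption, not census data:

```python
import math

def calls_needed(target_respondents, group_share):
    """Expected number of purely random calls needed to collect a target
    number of respondents from a group with the given population share."""
    return math.ceil(target_respondents / group_share)

# To get 500 respondents from a group that is roughly 13 percent of the
# population, random dialing alone would take thousands of calls:
random_calls = calls_needed(500, 0.13)
```

Rather than place nearly 4,000 random calls to fill one cell of the comparison, pollsters dial extra numbers in areas where the target group is concentrated, then note the oversample when reporting results.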

For lots more information on American politics and elections, see the related links on the next page.

Author's Note

I have never been contacted by a public opinion poll, and frankly, I'm a little hurt. Doesn't my educated and (obviously) fascinating opinion count for anything? In researching this article, I came across the following estimate of the number of polls conducted each year and the number of people contacted. If there are roughly 2,500 national polls conducted each year and each poll contacts 1,000 participants, then only 2,500,000 of the nation's 200 million adults get to participate each year. That gives me a one in 80 chance of getting a call from Gallup. Until then, I'm sticking to my same response every time I see the latest poll numbers claiming to represent the opinion of all Americans: "Well, nobody asked me."

Sources
  • American Association for Public Opinion Research. "AAPOR Statement on Push Polls." June 2007 (April 10, 2012) http://www.aapor.org/AAPOR_Statements_on_Push_Polls1.htm
  • American Association for Public Opinion Research. "Margin of Sampling Error." (April 12, 2012) http://www.aapor.org/Margin_of_Sampling_Error1.htm
  • Crossen, Cynthia. The Wall Street Journal. "Fiasco in 1936 Survey Brought 'Science' to Election Polling." October 2, 2006 (April 10, 2012) http://online.wsj.com/public/article/SB115974322285279370-_rk13XDUHmIcnA8DYs5VUscZG94_20071001.html?mod=rss_free
  • Edison Research. "Exit Polling." (April 10, 2012) http://www.edisonresearch.com/us_exit_polling.php
  • Lambert, David; Langer, Gary; and McMenemy, Mike. Synovate. "Cell-Phone Sampling: An Alternative Approach." May 14, 2010 (April 11, 2012) http://abcnews.go.com/images/PollingUnit/Cell-OnlySampling-Lambert-Langer-McMenemy-2010.pdf
  • Langer, Gary. ABC News. "ABC News' Polling Methodology and Standards." November 15, 2011 (April 10, 2012) http://abcnews.go.com/US/PollVault/abc-news-polling-methodology-standards/story?id=145373#.T4XxV5pYvL9
  • Rasmussen, Scott. Rasmussen Reports. "Comparing Approval Ratings from Different Polling Firms." March 17, 2009 (April 12, 2012) http://www.rasmussenreports.com/public_content/political_commentary/commentary_by_scott_rasmussen/comparing_approval_ratings_from_different_polling_firms
  • Rasmussen Reports. "Methodology." (April 11, 2012) http://www.rasmussenreports.com/public_content/about_us/methodology
  • Zukin, Cliff. American Association for Public Opinion Research. "Sources of Variation in Published Election Polling: A Primer." October 2004 (April 10, 2012) http://www.aapor.org/AM/Template.cfm?Section=Poll_andamp_Survey_FAQ&Template=/CM/ContentDisplay.cfm&ContentID=3226