Predicting the 2016 Presidential Election: What Went Wrong with the Polls?
Patti Solis Doyle, CNN political contributor
Tom Miller, SPS MS in Predictive Analytics director
Larry Stuelpnagel, Medill journalism professor
Marianne Seiler, SPS predictive analytics and information design faculty member
Soo La Kim (moderator), SPS Associate Dean of Academic Programs
KIM: Hello, everyone. We're going to get started now, and if you can't hear us, let us know. But welcome. My name is Soo La Kim, I'm the assistant dean of graduate programs here at the School of Professional Studies, and we're thrilled to have you here for our first-ever online event for One Book One Northwestern. I want to thank a few people. I want to thank Peter Kaye, the assistant dean of undergraduate programs at SPS; this event was his brainchild, and it's happening because of his hard work. We want to thank the marketing and IP teams, and a shout-out to Lynn Shik, one of our graduate student PAs, who will be monitoring the chat for your questions. There are a lot of people behind the scenes at an event like this, as you can imagine. And I want to thank our distinguished panelists, who are calling in from different locations. They'll introduce themselves in a moment, but first I want to go over our agenda and the structure of the webinar. We'll spend about the first 30 minutes in discussion with the panelists, for which we have a couple of prepared questions, but we want to spend the second half answering your questions, and we invite you to send them in via the chat feature at any time during the session. Lynn will monitor those questions, and we'll get to as many as we can. We're scheduled to go until 6:00pm, but we're happy to extend the session a little so we can get to more questions. When you do send in a question, if you can indicate whether it's for a specific panelist, that would help us moderate the session. So with that, I'm going to turn things over to Marianne Seiler, who will be one of the panelists and the moderator for tonight's session.
SEILER: Thanks, Soo La. Good afternoon. As Soo La mentioned, my name is Marianne Seiler, and I will be the moderator for today's discussion. I've been affiliated with the Northwestern School of Professional Studies since 2012, and I currently teach in both the Master's in predictive analytics program and the information design and strategy program. My background is in helping organizations apply data and analytics in the customer-facing functions of marketing, sales, and customer service. I've worked in both industry and consulting, for firms such as Dow Jones, Viacom, JC Penney, and Accenture. At Accenture I led the firm's North American customer analytics practice for financial services clients, particularly those in banking and insurance. Throughout my career I've focused on helping organizations understand their customers: what they think, what they say, and what they do, which are often three very different things. I've obtained insights for my clients from primary research, using techniques ranging from focus groups to quantitative research studies to a good deal of ethnographic research, and of course from data modeling as well. My research and writing continue that interest in understanding customers and applying that understanding to more effective customer experiences, increased loyalty and lifetime value, and a stronger relationship between a company and its customers. I hold a PhD in management from the Peter Drucker School at Claremont Graduate University, an MBA from the University of Texas at Austin, and a BFA in broadcast journalism from Southern Methodist University. I'm very much looking forward to this evening's event. Also on the panel is Patti Solis Doyle. Patti, may I turn this over to you to introduce yourself?
SOLIS-DOYLE: Sure, thank you so much. I am Patti Solis Doyle, I am a political consultant, a Democratic strategist and a commentator on CNN. And also a very proud graduate of the School of Professional Studies at Northwestern University.
For over 25 years I worked for Hillary Clinton in both her (audio distortions) I was the campaign manager for her 2008 Presidential race, and I worked for Barack Obama in 2008 as chief of staff for the Vice Presidential operation. I've done some local races in Chicago, and I have worked with some of the best political pollsters. I can tell you that in campaigns we rely heavily on research and polling, so I think that's why I'm here tonight: to talk about polling.
SEILER: Great, thank you Patti. Our next panelist is Tom Miller, who is also affiliated with the School of Professional Studies.
MILLER: Thanks. I am also with the School of Professional Studies, and I'm the faculty director of the predictive analytics program, which is a Master's program. Predictive analytics, sometimes known as data science, is an emerging discipline that combines information technology and modeling to address business and organizational problems, and we've been offering courses in that area since 2011. I joined the program prior to its first term in the fall of 2011. I've also written books in the area of data science. I'm a perpetual student: four graduate degrees, in statistics and psychometrics from the University of Minnesota and in business and economics from the University of Oregon. As faculty director I help to shape the curriculum, suggest new courses, encourage faculty to teach those courses, and try to keep things up to date. This is a fast-moving area, heavily influenced by technology, so we have to continually revise our curriculum and make sure that we're covering the needs of business today. Regarding politics, I am not a political consultant. I do consulting in business, sports included, but I don't work specifically in politics. I vote, but I'm not a heavy partisan of any kind; I'm more of an observer of the political scene. I think my role here tonight is to talk about methodology and how we can do a better job with it, in political research as well as other areas. So I'm happy to join the panel.
SEILER: Thank you, Tom. Our last speaker is Larry Stuelpnagel.
STUELPNAGEL: Good evening, it's a pleasure to be here. I have been at Northwestern University for the last 21 years. I have a dual appointment: half of my appointment is in Weinberg College's Department of Political Science and in the School of Professional Studies, and the other half is in the Medill School of Journalism. Prior to coming to Northwestern, I spent 25 years as a television reporter, the last 14 of them in New Jersey, covering New Jersey politics for WNET in New York and for New Jersey public television. I covered Presidential races as they came through that state; New Jersey may be small, but it carries some clout in the Electoral College, so I covered my share of Presidential campaigns. I also covered the political conventions in 1988 and 1992, and since the year 2000 I've been teaching a class, updated every four years, called "The Press and the Presidential Election," and it's due for a big update this year. I'm happy to join the panel and looking forward to conversing with our audience.
SEILER: Thank you, Larry. I think many people in our nation were really surprised at the outcome of the election. I certainly was. I turned in early figuring I knew what was going to happen, and when I got up in the morning I was a little surprised, to say the least. I'm wondering whether the rest of you were surprised, and if so, or if not, why? Larry, how about you? Were you surprised at the outcome?
STUELPNAGEL: Not toward the end, I wasn't. As we headed into the home stretch, I was telling my students and my colleagues that I thought Trump had at least a 50% chance of winning, based on a number of factors. He seemed to have enthusiasm going for him in visible ways, at least in what we were seeing on television, and I didn't see that much for Hillary Clinton. He made the right call to pay attention to Wisconsin and Michigan and Pennsylvania, and he was drawing some huge crowds that, along with the tightening of the polls, told me he had momentum. I can go back to 1992, when Ross Perot was running as a third-party candidate against George H.W. Bush and Bill Clinton. Covering some of his events, I picked up on the ground that there was an enthusiasm there that tilted that particular election. I remember predicting he was going to get 20% of the vote, and everybody looked askance at me, and it turned out he got 19% of the vote. So as the race tightened toward the end, even as it swung back in Hillary Clinton's favor at the national level, the Senate and congressional races in those three key states were tightening up, and so while I was surprised, I wasn't that surprised.
SEILER: Great. Tom, how about yourself, were you surprised?
MILLER: I was expecting Clinton to win, but I wasn't especially surprised, for the reason that I had seen a lot of polls over the course of more than a year of campaigning. And they moved around a lot, which indicated variability. Of course, there's always variability in polling, and in any kind of measurement, but there was a great deal of variability in this particular election, so it didn't surprise me that much. What we're dealing with now is a kind of post-mortem: we're trying to figure out what happened. You look at the results, and it was really three states that made the difference. In Wisconsin, 23,000 votes, in Michigan, 12,000 votes, and in Pennsylvania, 68,000 votes spelled the difference between Trump and Clinton, and had those votes gone the other way, we would not have been surprised at all, right? The result would have been what many of the pollsters were saying. When you understand measurement and variability, I don't think you're surprised by things like this; your knowledge of and faith in statistics is reinforced, because true understanding is an appreciation of variability, and you certainly saw it in this election.
SEILER: Patti, I had a chance to hear you at the event in October, so I suspect this came as a surprise to you, but if not, why not?
SOLIS-DOYLE: It was a shock to me, in all honesty. Looking back on it in retrospect, going through the post-mortems, it begins to make more sense, and I can go through some of the reasons why, but that night, I was shocked. I was in a green room with Corey Lewandowski, ready to go on air once the race was called. At the beginning of the night he was pretty down; by the end of the night, very much not. The reason I was shocked was that Donald Trump had done so many things that any other traditional candidate would not have survived, much less emerged victorious from; he had said so many disqualifying things. I just assumed she was going to win. Also, the polling up until election night was in her favor. The races started to… (audio distortions) after the Comey letter came out, but the campaign's internal polling seemed to indicate that she had come back from that significant event ten days before the election. In retrospect, once people started dissecting what happened, several things stand out. She was a flawed candidate, a flawed messenger, at a time when the country was really looking for change. The Clinton campaign tried to put together the whole Obama coalition, which was very successful for President Obama, twice. But Hillary Clinton is not Barack Obama; she's Hillary Clinton. As for getting the same kinds of voters to come out, African-Americans and young people, for instance, while they came out, they did not come out in the same numbers, and that caused a significant problem.
And then the third thing, which is apropos of this conversation: national polls you take with a grain of salt, because they're not as detailed as your own internal polls, but what the Clinton campaign failed to do in the last two weeks of the election, because they were so confident in their blue wall, was keep up their daily tracking polls, in Wisconsin and elsewhere, and so they really were caught off guard when the numbers in those states began to fall. And as you might recall, toward the end of the campaign, there was a significant presence… not only by (audio distortions).
KIM: This is for you, Larry. As a journalist and political scientist, you’ve closely studied Presidential elections since 1980. From your perspective, how has the role of polling evolved in Presidential elections? And how does the 2016 election stand out, if it does, from these previous elections?
STUELPNAGEL: Well, we can go all the way back to the 1948 Truman/Dewey race, where the polls famously predicted that Dewey was going to win, and got it wrong. But when polling first started, when people first started being polled, and into the 1970s, people were actually happy to pick up the phone, do phone interviews, and talk about their political views. Now, people are more skeptical about it, and so it's become a whole lot harder to get an accurate poll than it was back in the 1970s. For one thing, I think only about 40% of Americans now use a landline phone, so you have cell phones coming into the equation, and unlike landline phones, by federal law a cell phone can't be robo-dialed. It's getting harder for pollsters, and the candidates they represent, to get an accurate survey. So that has been, I think, one of the biggest shifts in recent years in trying to get an accurate number from the polls.
SOLIS-DOYLE: I would just like to add to that, I think one of the things that made the polling even more difficult to be accurate this time around was the use of social media. So many people don’t even communicate with their cell phones anymore, they communicate via Snapchat or Twitter or texting, and so there’s a broad base of demographics that pollsters couldn’t get, they just couldn’t get to, and I’m talking specifically about Millennials. They just didn’t have time for pollsters.
STUELPNAGEL: That's a very good point, very good point. You're arguing that Millennials were a big part of the puzzle that was not well represented.
SEILER: There's been a lot written about the necessity of getting more accurate at predicting who will actually vote, first, and then figuring out their voting predisposition…
STUELPNAGEL: And I think what Patti was just saying ties into this. Poll results don't come out of nowhere; they come out of the messages people are receiving and where they're receiving them. I'll throw this out there for Patti and for everybody: in terms of the messaging, was Trump more successful with a simpler message? Just "Make America great again," which touched a nerve that may or may not have shown up in the polls, but it sure did when people turned out.
SOLIS-DOYLE: I think that is an excellent point. One of Hillary's problems during the 2016 race was that she had too many messages. She had a laundry list of policy ideas, which I personally felt was great, I like my President to have a lot of ideas, but it all got convoluted, and there was never anything as clear as "Make America great again." So I think you hit the nail right on the head: Donald Trump's message really hit a nerve at a time when many Americans were very angry with the status quo and with what was happening in Washington, and Hillary never hit that nerve. In fact, she sort of epitomized what was going on in Washington; she was Washington, and so they almost didn't even want to hear what she had to say. As I said in my opening remarks, she was just the wrong messenger for the time. And I'm not certain whether the Trump campaign polled on it or whether Donald Trump just had an innate sense and feel for what was going on in the country, which he gauged by his rallies rather than by polls. That, again, on the laundry list of things done in this campaign that had not been done in any other, or at least not by traditional methods, was one of them.
SEILER: Tom, Larry has mentioned a couple of issues: the use of landlines versus mobile phones creating bias in the samples, and nonresponse, meaning large groups of potential voters were not actually included. Can you speak a little bit about the other biases that you feel potentially came out in the 2016 election predictions?
MILLER: I can talk about the general problem as I see it, the issues we have to face. On the one hand we have sampling issues, and Larry was referring back to, goodness, it was 68 years ago now, the Truman/Dewey election. There was an even earlier case, cited in a lot of statistics classes, going back to 1936, when the Literary Digest poll predicted that Alf Landon was going to beat FDR. Those are sampling issues, which are the first thing to consider. Is the sample covering the population, is it representing the population, covering all the groups? Then there's also the sample size issue. Now, sample size wasn't a problem with the Literary Digest poll; they had 2.3 million respondents and still got it wrong. You have to do both: you have to have sufficient coverage and representativeness, and you also have to have sufficient sample size in order to reduce the sampling error involved. Another component of the three is measurement: how we go about asking questions and what media we use in asking them. Results from online surveys are different from telephone, which are different from mail, which are different from face to face, and as we shift from one medium to another, we have to make adjustments in our interpretation of the results. And then the final aspect, alongside sampling and measurement, is modeling. A great deal of work is done on the modeling side, on the one hand to adjust polling results so that they do a better job of representing the population, and on the other hand to build predictive models. When you see the commentary on TV or online, many times predictions are made based on one variable at a time. Let's see how age affects things, how sex affects things, how education, race, religion affect things, one variable at a time. Very simple models.
And if you look at each one of them alone, maybe you come out with the conclusion that Democrats should win every election, and that's obviously not the case. So we have to think in a more sophisticated way about how we go about making predictions: developing better measures, using better sampling, and also using multivariate models in the predictive modeling process. All of these are techniques that are well understood and utilized extensively in business, and they could be used, I think, more effectively in political research.
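[Editor's note: Miller's distinction between sampling error and coverage can be made concrete with a quick calculation. The following Python sketch uses the standard margin-of-error formula; the sample sizes are illustrative, not taken from any poll discussed here.]

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Sampling error shrinks as the sample grows...
print(round(100 * margin_of_error(0.5, 1_000), 1))       # 3.1 points
print(round(100 * margin_of_error(0.5, 2_300_000), 2))   # 0.06 points

# ...but no sample size fixes poor coverage: if the frame systematically
# over-represents one candidate's supporters by 5 points, the estimate
# stays about 5 points off no matter how many people respond, which is
# how the 2.3-million-respondent Literary Digest poll still missed in 1936.
```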
SEILER: I think that's an interesting point, Tom. I had raised before, in my conversation with you, the whole question of whether demographics actually have any predictive value anymore. Many businesses have moved away from using them because they have tended to misrepresent, or disguise, the issues that are really driving behavior. Maybe that was appropriate 25 years ago, but making decisions based on demographics is proving, in business, to not be very predictive anymore.
MILLER: Right. And you know, when we think about what we do in business research, many times we're trying to assess the value of things: what people are willing to pay, on a continuum, or how strongly people feel about things. Then you look at the political polls, and it's categorical, often a binary response. Are you going to vote for Trump or Clinton? A simple binary response, rather than getting a better feeling for the strength of feeling about candidates. I do surveys at the beginning of classes and ask students about their opinions on different topics so I can assign students to teams, and I ask students to distribute points: here are eight topics, distribute 100 points across them based upon your strength of interest and desire to research these topics. I think a similar sort of thing could be done in political campaigns. Rather than just a 1-0 binary response, or "here are four candidates, which one are you going to vote for," suppose you were going to bet $100 on the candidates: distribute the $100 across them. That would give you a better feeling for opinion. That gets at the measurement part and what kind of measures we use. All of these things, I think, should be explored and researched more completely. If we do that, we'll have better predictions in the future.
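[Editor's note: the constant-sum idea Miller describes, distributing 100 points instead of naming one choice, can be sketched as follows. The ballots are invented for illustration and do not come from any real survey.]

```python
# Three hypothetical respondents, each distributing 100 points.
ballots = [
    {"Trump": 60, "Clinton": 40},   # leans Trump, but not firmly
    {"Trump": 10, "Clinton": 90},   # strong Clinton supporter
    {"Trump": 55, "Clinton": 45},   # nearly undecided
]

def binary_shares(ballots):
    """Collapse each ballot to its top choice, as a forced-choice poll would."""
    wins = {}
    for b in ballots:
        top = max(b, key=b.get)
        wins[top] = wins.get(top, 0) + 1
    return {c: w / len(ballots) for c, w in wins.items()}

def point_shares(ballots):
    """Average the allocated points, preserving strength of preference."""
    totals = {}
    for b in ballots:
        for c, pts in b.items():
            totals[c] = totals.get(c, 0) + pts
    return {c: t / (100 * len(ballots)) for c, t in totals.items()}

# Forced choice calls it 2-1 for Trump; the point allocation shows the
# same respondents collectively lean toward Clinton (175 of 300 points).
```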
SEILER: Tom, a question came in from a student who asked: can you describe the process of survey weighting, and how much of a problem do you think weighting actually was in the incorrect predictions that occurred so frequently?
MILLER: Well, what you're trying to do is match up the sample with what you expect in the population, so that the sample is representative of the population in terms of the various criteria you're looking at, usually demographics. You may not have as many responses from, say, Latinos as you expect in the population, so you take the responses that you have and weight them, increasing their value in the sum, in order to make them representative. That's based upon your knowledge of how the sample breaks down demographically and how the population does, so there's a weighting model that's used to get that final predicted percentage of the vote for each of the candidates. That model is based upon assumptions you're making, including assumptions about the likelihood of voting. So it's not just counting the poll results; it's also utilizing the poll results in a mathematical model to come up with a prediction. All of those things need to be questioned as a result of what we saw in this recent election.
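[Editor's note: the weighting procedure Miller outlines amounts to scaling each group's responses by its population share over its sample share. A minimal sketch follows; the group shares and support levels are made-up numbers for illustration only.]

```python
# Population shares (say, from census data) versus the shares that
# actually responded to the poll. All figures are invented.
population = {"Latino": 0.18, "non-Latino": 0.82}
sample     = {"Latino": 0.10, "non-Latino": 0.90}

# Weight for each group: population share / sample share.
weights = {g: population[g] / sample[g] for g in population}
# Latino responses count 1.8x; non-Latino responses count about 0.91x.

# Observed candidate support within each group of respondents:
support = {"Latino": 0.70, "non-Latino": 0.45}

unweighted = sum(sample[g] * support[g] for g in sample)               # 0.475
weighted = sum(sample[g] * weights[g] * support[g] for g in sample)    # 0.495

# The weighted figure equals the population-share average, which is the
# point of the adjustment; turnout-likelihood models add assumptions on top.
```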
SEILER: Thank you. Patti, I'm wondering if you have any thoughts on the number of undecided voters. I was one of them. And since a poll is a snapshot, I'm also wondering whether, in this election, someone might have thought they were decided and then switched, and whether there was a lot of back and forth that was not captured in many of the polls.
SOLIS-DOYLE: Well, a poll, particularly in a political campaign environment, is always a snapshot of the two- to three-day period over which that poll is taken. And this particular Presidential campaign was so volatile; it was a rollercoaster ride for everyone, for both campaigns and also for voters across the country. There was always something happening: Donald Trump said something provocative, or there was the Access Hollywood tape, or there was the James Comey letter, or there was the FBI investigation. Something very provocative and pivotal was happening almost on a weekly basis, and so I think voters had whiplash from everything that was going on. Many voters who felt they were decided became undecided when the Access Hollywood tape came out, or people who thought they had settled on Hillary Clinton decided, you know what? Donald Trump is the lesser of the two evils, after the James Comey letter to Congress saying that he was going to reopen the email investigation. So a poll is very much a snapshot in time, and in this specific election, so many pivotal events happened late-breaking that it was almost impossible to grasp, through a poll, the actual impact they had on voters, which is another reason I think many of the polls got it wrong. Ten days is really not enough time to let sentiments or opinions settle in before you can really poll them, and so, and I can only speak for the Clinton campaign, they were somewhat flying blind for the last ten days.
SEILER: Do you think that contributed? We had a question come in, pardon me, asking how the Clinton campaign could have missed the shortfall in Wisconsin, Michigan, and Pennsylvania. Was this part of the dynamic that created that?
SOLIS-DOYLE: I think that absolutely had something to do with it. That late-breaking news was very late-breaking. And then, in all honesty, there was arrogance. They felt strongly that their blue wall was a solid blue wall. As I mentioned earlier, in the last two weeks they were so confident in Michigan and Wisconsin that they stopped their daily tracking polls, which really tell you on a daily basis how you're doing. Campaigns are about strategic decisions driven by resources, by money: if you have X millions of dollars, are you going to spend it on a tracking poll in a place like Michigan that you're pretty certain is going to go solid blue, or are you going to spend it in a battleground state? I think the Clinton campaign was a little overconfident in their blue wall, and when you compound that with the unsettling effect of the Comey letter, this is what happens. You get a very close race, but a loss nonetheless.
SEILER: Thank you. Larry, we have a question that's come in. The statement said that a lot of conservatives claim the mainstream media push polls that oversample likely Democratic voters, and Trump has tweeted that his low approval rating is unfair due to oversampling. I've also read a number of articles about a kind of confirmation bias in the media: each outlet confirming its own polls by looking at polls from other third parties, without really understanding the science that went into collecting that data. What is your perspective, as an active member of the media, on both of those?
STUELPNAGEL: Well, if you go back to the 2012 election, there was criticism that the polls were oversampling the Romney vote. And pre-debate polls, historically, have been weighted toward the candidate who's ahead just prior to a Presidential debate. But conservatives claim there's an intentional bias against them, and I don't think that's the case. To go to the weighting issue, though, an idea I want to throw out there, particularly after this last election, is that it's hard for a poll to pick up on the enthusiasm that voters have for their candidates. Polls are not that good at registering emotion and enthusiasm and commitment. The Trump voters in those three states were certainly showing a lot more enthusiasm, and the people showing that enthusiasm were, historically, less-educated voters, who historically don't trust the media, don't trust polls, and don't trust the establishment. They were more motivated to go out and vote, and that was something the polls weren't picking up on.
SOLIS-DOYLE: That's a very good point. I'd like to add to that. The media talked about that quote-unquote secret Trump vote. Because Donald Trump was such a lightning rod of a candidate, many people, or at least some people, enough people to actually make a demographic, decided that they didn't want to tell their neighbors that they were supporting Donald Trump, or decided that they didn't want to tell a pollster on the telephone that they were supporting Donald Trump, because they didn't want the backlash. But indeed, they went out there and voted for Donald Trump. So I think pollsters missed that sliver of the electorate as well.
SEILER: Yeah, the issue of social desirability and your willingness to be up front about your choice. I'm also wondering, and Tom, if you have a perspective on this too, as well as Larry and Patti: I feel that my phone rang off the hook for what seemed like at least three months, every night, with people asking me to participate in some type of poll or another. I'm wondering if we saturated the market so much that people will tell you anything to get you off the phone, or they won't participate at all. It raises the question of whether opinion polling is really going to be a viable method going forward, because technology has allowed anybody and their brother to conduct a survey. I feel over-surveyed. I don't know what your perspective is on how other people are feeling and its impact on opinion polling.
MILLER: I don't think polling is going to go away. (Chuckles) I think if anything, we're going to see more of it, and we're going to see a lot more experimentation with alternative methods of polling. You know, the things that Mari was mentioning earlier: the move from landlines to cell phones, the existence of caller ID. Do you really pick up if you don't recognize the number? In that kind of technology world, it's awfully hard to replicate what used to be random-digit dialing on landlines. And I think, too, we're living in a world today of pull rather than push, right? We select what media we're going to attend to. We have lots of options. It's no longer a world of broadcast TV with three or four stations and well-edited broadcasts. It's a Wild West of anything goes, and you decide what information sources you're going to attend to, and somehow we have to do a better job of understanding the world that we live in, given this new technology and new media. The other point I wanted to make is this: the word "oversampling" is used in a pejorative way by some media, and it's not really a bad thing. Oversampling is usually intentional, because some groups that you want to understand are less represented than other groups. So you try to get more of them into your sample, so that your estimate for that particular group has less error. It is actually a technique that, done right, improves the accuracy of polls and predictions.
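[Editor's note: Miller's defense of oversampling can be illustrated numerically. In this Python sketch, the poll size and group shares are assumed figures: recruiting extra respondents from a small group shrinks that group's sampling error, and down-weighting those responses keeps the overall estimate representative.]

```python
import math

def moe(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A group that is 10% of the population, inside a 1,000-person poll:
proportional_n = 100   # what proportional sampling would yield
oversampled_n = 300    # deliberately recruit three times as many

print(round(100 * moe(0.5, proportional_n), 1))  # 9.8 points for that group alone
print(round(100 * moe(0.5, oversampled_n), 1))   # 5.7 points: roughly sqrt(3) smaller

# To keep the topline representative, each oversampled response is
# down-weighted by (population share) / (sample share):
weight = 0.10 / (oversampled_n / 1000)   # = 1/3
```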
SEILER: Tom, we had a question come in directed to you. It was whether, at this point in time, people subconsciously connect different polling organizations to some form of the media, and whether that is creating some level of cynicism and dislike for the press, as well as leading to high levels of nonresponse or inaccurate responses, telling somebody whatever they want to hear to get them off the phone or out of your face.
MILLER: Well, of course. Many media outlets conduct their own polls, and it's the world of pull again. People are going to believe the polls of the media they support, more likely than not. Of course, there's one counterexample in my own area, Los Angeles, with the Los Angeles Times tracking poll. Ordinarily you would think of it as liberal-leaning media, but that poll predicted Trump many more times than other polls did. Which raises another interesting issue: how do you go about forming your sample? Do you go back to the same individuals, a panel essentially, and track them over time, or do you go to a completely new group each time when you're polling week by week? There are some good arguments for using a panel if you're trying to keep track of things over time and see how things are changing. So after the election, of course, that tracking poll got a lot more positive press.
SEILER: Absolutely. Well, there’s been a lot written about Allan Lichtman and his 13 keys model, which does not necessarily rely on public opinion polling, and how it has been perhaps in some ways more accurate in a number of elections, not just this one but those preceding it. Any thoughts, Tom, on using alternative variables to try to predict behavior rather than asking the individual?
MILLER: Well, this gets at attitudes and behavior. We’re doing a poor job, I think, in polls generally of measuring both. We’re just asking, which candidate are you going to vote for? We’re not really getting at motivation, at intensity of feeling or enthusiasm. The idea of having $100 that you spend across the candidates, rather than one vote, would give you a better feel for enthusiasm, for the degree to which you like the candidates. If you ask, given Trump or Clinton, what percentage of the time do you agree with Trump and what percentage of the time do you agree with Clinton, that might give you a better idea of how people are ultimately going to vote than just asking which candidate they’re going to vote for.
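The $100 idea Miller describes is what survey researchers call a constant-sum question. A toy sketch of how it separates enthusiasm from a binary vote choice follows; the respondent allocations are invented for illustration.

```python
# Toy sketch of a constant-sum ("spend $100") question versus a
# binary vote question. Allocations below are invented.

responses = [
    {"Clinton": 55, "Trump": 45},   # lukewarm Clinton supporter
    {"Clinton": 10, "Trump": 90},   # enthusiastic Trump supporter
    {"Clinton": 60, "Trump": 40},   # another lukewarm Clinton supporter
]

def binary_share(responses, candidate):
    """Share of respondents whose top allocation goes to the candidate,
    mimicking a plain 'who will you vote for?' question."""
    wins = sum(max(r, key=r.get) == candidate for r in responses)
    return wins / len(responses)

def intensity_share(responses, candidate):
    """The candidate's share of all dollars allocated, capturing
    enthusiasm rather than just first choice."""
    total = sum(sum(r.values()) for r in responses)
    return sum(r[candidate] for r in responses) / total

# Binary: Clinton "leads" 2 of 3 respondents.
# Intensity: Trump holds most of the dollars (175 of 300).
```

The two measures disagree here, which is exactly the point: the binary question hides how weakly two of the three respondents hold their preference.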
SEILER: Yeah. Restructuring the question. Go ahead Larry.
STUELPNAGEL: I just want to throw one other thing into the mix, and Patti, I’d like to get your reaction to this as someone who’s on CNN. I enjoy your work, I enjoy your colleagues’ work. But CNN, MSNBC, and Fox spend a lot of time in the studio talking about things, reacting to polls, and if polls are a reaction to things that people see and feel in their lives, I’m seeing precious little from the press giving people a mirror of what is actually going on in their lives, and that could also affect the outcome of the polls. I think the press has failed in recent years at just getting out there and seeing what people’s lives are actually like, and that affects the polls and voter turnout.
SOLIS-DOYLE: I couldn’t agree with you more. I can only speak for CNN, and I happen to think that CNN does a better job than most. They send their reporters out to cover the candidates, so they’ll travel with the candidate and speak to people at a candidate’s rally, and of course the people at a candidate’s rally are already predisposed. But they don’t hit the road like they used to. They used to travel the country and talk to real people, to real voters. They don’t do it now. I’m not on the business end of CNN, but I would imagine it’s because of resources and money, and this particular election was very good for business for the media, because it was such a provocative, untraditional race. They really spent their time and energy covering Trump’s rallies rather than going and talking to real people, because that was good for ratings. Volumes will be written about whether that kind of lopsided coverage helped one candidate over another, but I won’t speak to that. The press has become a business, and in my mind, the fact that it is a business to make money has diminished what it was originally supposed to be, a free press, keeping an eye on our elected officials and telling the news in an unbiased, objective way.
STUELPNAGEL: I totally agree with you.
SEILER: Patti, it sounds like what you’re also saying is that the failure to get out into the field meant there was no intuitive check, no gut reaction saying, gee, maybe these polls aren’t as right as we think they are, because I’m hearing something else. There was no point of judgment because they weren’t out there.
SOLIS-DOYLE: Well, the polls conducted by the media are not as good or as specific as a candidate’s polls. If the Clinton campaign were doing a poll, and I’ll speak as someone who worked for the Hillary campaign, they would ask much more specific questions about the character traits of Hillary Clinton and the character traits of Donald Trump. What do you like about Hillary, what don’t you like about Hillary, what do you like about Donald Trump, what don’t you like about Donald Trump? What do you think of Hillary’s jobs plan, what do you think of Donald Trump’s wall? It’s less binary than the media polls. And they test the text. So as a political person, as someone who’s worked in campaigns, I take the polls conducted by the media much less seriously than candidate polls, third-party polls, or super PAC polls, because those get much more into the weeds. But to give the public polls a little credit: the national polls were kind of on target, and Hillary Clinton did end up winning the popular vote by two million plus votes. The national polls got a better read. I think one of the reasons is that people who lived in the battleground states, as you mentioned, Marianne, were so inundated every day. Their phones rang five, six times a day with calls from pollsters, and their mailboxes were stuffed with 20 pieces of mail if they lived in Pennsylvania. So pollsters weren’t able to reach the people who lived in Pennsylvania, because they were being hung up on, while they could reach somebody in California or Alaska, who get no love whatsoever. So the battleground state polls were basically off-balance.
SEILER: So Tom, a question has come in that asks about the impact that the flawed polling will potentially have on the predictive analytics industry going forward. Like to comment on what you think the backlash may or may not be?
MILLER: Well, if the backlash encourages skepticism, I’m all for it. There used to be a t-shirt that the American Statistical Association would sell to its members: “statistics means never having to say you’re certain.”
Appreciation for variability and error in measurement and sampling, that’s good. And I’d say that on the other hand there’s going to be all the more need for good training in this area, for people who know how to work with data and know how to be appropriately skeptical about data, because that’s the way of the world. The…
SEILER: We all need to be better, more literate consumers of the data.
MILLER: Absolutely. As the book that started this process, The Signal and the Noise, reminds us: don’t forget the noise.
SEILER: Perfect. There was one other question, directed to anyone on the panel, about averaging polls together to get a better indication than any one poll. It seems to me that people don’t so much average the polls as look across a series of polls to get a better sense of what’s actually happening. Is that accurate, Larry? I’m sure the media has to rely on more than just a single poll, or did they really fall back on the research they conducted themselves?
STUELPNAGEL: Well, for the cost reasons that Patti mentioned, a lot of media companies have been consolidating and working with each other to save money on these things, because she’s right, they’re not as good as the campaign polls. But I saw a lot of averaging going on the air, people taking data that RealClearPolitics was putting together and averaging it out, and I was doing that too. I was looking across the battleground states toward the end, and when I saw Feingold was in trouble in Wisconsin, I said oops, I bet Clinton’s in trouble there too. Personally, as a media person, looking at averages just gives me one more thing to think analytically about in drawing my conclusions, and it was doing that, toward the end of the campaign, that shaped my own personal analysis. I was telling my students that I thought he had a 50% chance of winning.
SEILER: Any comments, from a data science perspective, on the caveats of looking across polls and trying to use them to represent a larger sample? You’ve mentioned a lot of them: how the questions are constructed, how the samples are decided on, how representative they are. Any other caveats if we’re looking across a series of polls?
MILLER: Well, it’s a known technique, meta-analysis. It’s used extensively in many areas. It’s a trusted method: rather than relying upon a single scientific result, you look at the results across many studies. It’s done a lot in medicine, where one study shows that a drug is effective and another shows it isn’t, and you have lots and lots of conflicting data coming in that you want to make sense of. You can call it averaging, but it’s often a weighted averaging approach of some kind. The issue with polling, of course, is that you’ve got things happening in time as well as location, so you’ve got the geography to be concerned with and also the time element. Finding the right things to average can be a challenge. What is your time interval? Do you average the last three days of results, the last week, the last two weeks? Those are decisions you have to make, and hopefully you make them intelligently based upon the data coming in and your past experience. These are all challenging questions, good questions that need to be asked, and we have to come up with appropriate and well-reasoned answers.
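The time-window question Miller raises can be sketched as a weighted average that discounts older polls and favors larger samples. The half-life constant, the square-root weighting, and the polls themselves are all invented assumptions for illustration, not anything the panel endorsed.

```python
# Hedged sketch of poll aggregation: weight each poll by recency
# (exponential decay) and by sample size. All numbers are invented.
import math

polls = [
    # (days_before_election, sample_size, candidate_share)
    (1, 1000, 0.47),
    (3,  800, 0.45),
    (10, 1200, 0.44),
]

HALF_LIFE_DAYS = 5.0  # assumption: a poll's weight halves every 5 days

def poll_weight(days_old, n):
    """Recency decay times sqrt(n); sqrt(n) tracks the inverse of
    a poll's sampling error, so bigger polls count for more."""
    recency = 0.5 ** (days_old / HALF_LIFE_DAYS)
    return recency * math.sqrt(n)

num = sum(share * poll_weight(d, n) for d, n, share in polls)
den = sum(poll_weight(d, n) for d, n, share in polls)
average = num / den
```

With these invented inputs, the aggregate lands between the newest and oldest polls but closer to the recent, larger ones, which is the behavior a time-windowed average is meant to have.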
SEILER: A follow-on question to that asks, with the rise of social media, might we do better to perform sentiment analysis on Twitter, Facebook, Google, etc., rather than, or perhaps in addition to, opinion polling?
MILLER: Absolutely. That’s what companies are doing. A lot of companies have moved away from, or use less of, the custom market research polling they did before, and they’re relying more on social media and text analytics, including sentiment analysis, to gauge people’s opinions about brands. We’ve had students in our classes look at, for example, the effect of the Note 7 on Samsung’s brand. We have a student in our program who just did a large text analysis of the political campaigns, looking at Clinton versus Trump and the topics that were dealt with in both social media and the news media. Those tools are out there, and they can be used systematically in the future to gauge opinion, to get a sense for what’s going to happen, and to guide the campaigns. A campaign that isn’t using text analytics is making a big mistake these days.
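As a rough illustration of the sentiment analysis Miller mentions, here is a toy lexicon-based scorer. Real pipelines use trained models and much larger lexicons; the word lists and sample posts below are invented for illustration.

```python
# Toy lexicon-based sentiment scorer for short social-media text.
# The lexicons here are tiny, invented stand-ins for real ones.

POSITIVE = {"great", "love", "win", "strong", "best"}
NEGATIVE = {"bad", "hate", "lose", "weak", "worst", "rigged"}

def sentiment(text):
    """Return (positive - negative) word count, normalized to [-1, 1];
    0.0 when no lexicon words appear."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "Great rally tonight, love the energy!",
    "Worst debate performance, just bad.",
]
scores = [sentiment(p) for p in posts]
```

Scoring thousands of posts this way, and tracking the average over time, is the basic shape of gauging opinion from social media rather than from a phone survey.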
SEILER: So Patti, is the use of text analytics fairly common now among candidates and political parties?
SOLIS-DOYLE: I’m sorry, can you repeat that?
SEILER: Yes, I was just asking whether the use of text analytics on social media is something candidates are starting to actively use to help them understand their constituency, the issues that are important to voters, and where they stand overall in an election.
SOLIS-DOYLE: Yes, well, judging by this last race, they’ve started doing that. It’s such a new phenomenon that, in all honesty, we don’t know whether or not it works, but it certainly can be layered on top of traditional opinion polling to give you more information. I remember when I worked on the 2008 Presidential campaign, I think the Obama campaign maybe sent out one tweet. You go eight years later and it’s just a new world, an ever-changing world, with new platforms almost every day, many of which I can’t even pretend to know about. But in terms of political campaigns, I will tell you, because opinion polling is so erratic, what they’re relying on more and more is focus groups, because those really do give you a lot of nuance and more time to dig in deep. It’s impossible to do just focus groups on a national campaign, because of time and money, but they give you a much better, more nuanced picture of what voters think about your candidate, about your opponents, about policy, about personal characteristics, and I think they are a much better indicator on that enthusiasm question we’ve been talking so much about tonight.
SEILER: The question came in, will you be getting your own show in the near future?
SOLIS-DOYLE: (Laughs) I don’t think so. But I’ll be on your local CNN, I mean, not local, national CNN, giving my opinion every now and again.
SEILER: Great. I think Larry is trying to get back in, and I know Patti has an appointment, so let me just ask if there are any final questions. Here’s a good one: do we think that a year from now the 2016 polling and voting data will be a nonstarter from a conversation perspective? Given who our new President is, we’ll probably have a lot of other things to talk about in the meantime, but Tom and Patti, any thoughts on that?
SOLIS-DOYLE: Our future President loves polls. I don’t think we’re going to stop talking about polls anytime soon. He just loves talking about his polls. Whether they’re good, whether they’re rigged if they’re not good, you know, he loves to talk about polls. So I think polls are with us at least for the next four years without question.
MILLER: I think we’ll be talking about this election for some time, and it’s going to be an interesting and fun conversation. We’re going to learn a lot across time. I think the official results aren’t all in yet; we’re still counting votes in some areas and figuring out what actually happened. This will be a topic for many doctoral dissertations in political science, I’m sure.
SEILER: I would imagine. I think there was one other question left: any thoughts about the debate over the popular vote versus the Electoral College, and whether that will in fact restructure how organizations think about opinion polling and Presidential elections going forward? I read an article that said that opinion polls are getting further and further away from the reality of what the Electoral College looks like.
MILLER: Well, the Electoral College is a major issue in our world. Maine and Nebraska have taken the lead in offering what’s called congressional district voting: two Electoral College votes go to the statewide winner, and one goes to the winner in each congressional district. There’s often talk about a Constitutional amendment to change the Electoral College, but we don’t really need that. All we have to do is have individual states follow the lead of Maine and Nebraska and divide their votes accordingly. We won’t get a true rendering of who wins the popular vote, but we’ll get a lot closer, and we won’t have this situation where someone can win the popular vote by almost three million and lose the election. So I think states should seriously consider that. Whether they will is another matter, but it’d be nice if our President in fact represented the feelings of the people.
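The Maine/Nebraska allocation Miller describes can be sketched directly: two electoral votes to the statewide winner, one to the winner of each congressional district. The district results below are invented for illustration.

```python
# Sketch of the Maine/Nebraska congressional district method:
# 2 electoral votes to the statewide winner, 1 per district winner.

def allocate_electors(statewide_winner, district_winners):
    """Return each party's electoral-vote total under the
    congressional district method."""
    totals = {}
    totals[statewide_winner] = totals.get(statewide_winner, 0) + 2
    for winner in district_winners:
        totals[winner] = totals.get(winner, 0) + 1
    return totals

# Nebraska-like example: 3 districts, one of which splits off.
result = allocate_electors("R", ["R", "R", "D"])
```

Under winner-take-all the same state would award all 5 votes to one party; here the split district carries its single vote separately, which is the "closer to the popular vote" effect Miller is pointing at.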
SOLIS-DOYLE: I agree with that. I don’t know what the answer is. I mean, right now, the way the Electoral College system works, you have anywhere between eight and twelve battleground states, and those are the states that get all of the attention. They get the candidates, they get the surrogates, they get the ads, they get the mail, they get all of the attention, and the rest of the country doesn’t, and their votes don’t count. But if you make it a popular vote, then those battleground states just turn into the largest states, right? California and New York and Texas will get all the attention. So I’m not sure what the answer is. It could be what Maine and Nebraska are doing, but there are significant problems with those systems too.
SEILER: Larry, are you able to rejoin us?
STUELPNAGEL: I’m sorry, I just didn’t pick up on the conversation. I know this is going to sound overly simple, but I’m a big fan of, if it’s the popular vote that is going to elect my mayor or my governor or my Senator, I think that’s a perfectly reasonable way to elect my President. That’s it in a nutshell.
SEILER: Thank you.
MILLER: The chance of that happening is very slim, I would think.
STUELPNAGEL: They are, they are.
SEILER: You don’t want to make a prediction on the chance of that happening?
STUELPNAGEL: At the moment, slim and none.
SOLIS-DOYLE: Slim to none, yes, definitely at this moment, right.
SEILER: Well, I know Patti mentioned that she needed to depart, she has another commitment, and we are about 20 minutes after the hour, so I think I will try to wrap this up, and if there are unanswered questions we’ll try to sort them out and get answers back to people.
STUELPNAGEL: Can I just make one last point here? Maybe this came up while I was gone, but a lot of people are talking about this, and it’s been great to have this audience tonight. The American Association for Public Opinion Research has an ad hoc committee in place to study the election, and they’re supposed to have their report back in May, so maybe we can have a reunion then.
SEILER: There you go. I know they produced one in 2008 so I’m assuming they’re probably working away on this one as well.
STUELPNAGEL: Thank you.
KAYE: If I can intervene here, this is Peter Kaye, assistant dean at the School of Professional Studies. I want to thank each of our panelists for this lively and important discussion. I also want to apologize for the technical difficulties we had along the way. Simply put, we were having feedback issues, and we learned our lesson about how not to set certain things up; we’ll take care of that in the future. I hope that everyone who attended was at least able to listen to most of the event, and speaking as someone who listened to the entire event, it was wonderfully informative and a good example of what happens in the One Book One Northwestern initiative, where knowledgeable people talk about and reflect upon important issues. So thanks to everyone, and with that, I bid you goodnight.