Sunday, September 9, 2012

Warren Kinsella - You Get What You Pay For

And given that I’m unfortunately forced to have Sun TV on my cable box, I feel more like he and his lot should be paying me. But this is beside the point – I just wanted to mention that it royally pisses me off.

But so does Warren Kinsella today, with his newest article “Political polls – you get what you pay for,” where he disparages all things polling and projection-related, in a way that is so blatantly bad that I’ve written this very long post about it.

It’s time to deconstruct Warren Kinsella’s worst article to date (in my opinion).

Part 1: Quebec Pollsters Got it Wrong Because… Just Because

If you’re someone blind enough to take polls at direct face value, then yes, the pollsters did get it wrong in many ways – they failed to predict the closeness of the race between the PLQ and the PQ, they failed to get their regionals right (the PLQ had better support in the Montreal RMR than expected), and overall they overestimated the chances of the PQ by coming out with assured statements such as “PQ looking at majority.”

Let’s get that last point out of the way right now: Forum made a mistake in calling such a sure result, and there’s no doubt about that. I would suspect that Lorne Bozinoff knows better, but apparently this time he didn’t bother to think about it; they went with the story, and they paid for it. But had their numbers been right, the PQ would’ve had an easy majority, make no mistake about that, so the call wasn’t completely unjustified.

But what is unjustified is Kinsella’s treatment of Forum, as well as of Leger Marketing and CROP, which came with some major doses of misinformation. It’s not that he lied; he just didn’t tell the whole story.

For instance, Kinsella went after Leger and CROP, two of the most lauded pollsters in the province, for having placed the Charest Liberals’ numbers too low and “outside of the margin of error.” Granted, as far as I can tell, they were – Leger and CROP usually had margins of error around 2–3%, given their sample sizes were between 1,500 and 2,000. Leger’s last poll had the Liberals at 27%, and CROP had them at 26%.

Of course, what Kinsella doesn’t say is that both the CAQ and the PQ numbers were well within the margin of error for these pollsters. Kinsella focuses solely on the Liberals, which is a mistake; you have to take in the entire poll to get the right context. In order to discredit a poll, you should have something real to gripe about, meaning all of its numbers should be off in some way. That was not the case here, and there’s likely a reason for it – similar to the British “Shy Tory” effect, there was probably a “Shy PLQ” effect in play. That, or the advance voters skewed heavily towards the Liberals. Either way, it is somewhat out of the control of the pollsters, who can only collect data from the people they survey.

And that’s where we come to the margin of error. Please, please, please keep in mind that a margin of error is simply half the width of a confidence interval. If a pollster reports a result of 40% with a margin of error of, say, 5%, it means the pollster believes that if the poll were conducted a hundred times, 95 out of 100 surveys would show similar numbers, between 35% and 45%.

It does not mean the pollster is predicting that the final result will fall within 5% of their number. It simply expresses the confidence they have in their poll’s data – that the data collected was correct. It is merely inferred that the poll should reflect the final numbers, but pollsters don’t know the final numbers, only what their data tells them.
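To put rough numbers on this, here’s a minimal sketch of the standard worst-case formula for a proportion’s 95% margin of error – this is textbook arithmetic, not anything from the pollsters’ own methodology notes:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) at the sample sizes Leger and CROP were using:
moe_1500 = margin_of_error(0.5, 1500)
moe_2000 = margin_of_error(0.5, 2000)
print(f"n=1500: +/-{moe_1500:.1%}")  # roughly +/-2.5%
print(f"n=2000: +/-{moe_2000:.1%}")  # roughly +/-2.2%
```

Which lines up with the 2–3% margins those firms were reporting.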

One can make the case that this inference is incorrect, and one may not be wrong – we’ve obviously seen that the data can be faulty, for whatever reason. Referring back to the “Shy PLQ” effect, pollsters can only rely on the data they’re given by those surveyed. If those surveyed don’t give the pollster their true answer, then the data is faulty. The margin of error exists to account for random sampling variation, but it can’t compensate for everything. That is the nature of statistics in general – you can only compensate for variation in the data so much, and if the data is just wrong, it’s just wrong. And in this case, it’s probably not the fault of the pollsters – it’s the fault of those surveyed, who for whatever reason failed to give their true response.
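The “Shy PLQ” idea can be sketched with a quick simulation. The 31% is roughly the PLQ’s actual election-night share, but the 85% honesty rate is purely a hypothetical number for illustration:

```python
import random

random.seed(42)
TRUE_SUPPORT = 0.31  # roughly the PLQ's actual share on election night
HONESTY = 0.85       # hypothetical: fraction of PLQ voters who admit it to a pollster

def run_poll(n=1500):
    """One simulated poll: each respondent supports the PLQ with
    probability TRUE_SUPPORT, but a 'shy' supporter won't say so."""
    admitted = sum(
        1 for _ in range(n)
        if random.random() < TRUE_SUPPORT and random.random() < HONESTY
    )
    return admitted / n

polls = [run_poll() for _ in range(100)]
avg = sum(polls) / len(polls)
print(f"average measured PLQ support: {avg:.1%}")  # consistently ~26%, not 31%
```

The simulated polls cluster well below the true 31%, because the error is systematic – no margin of error can wash out a bias like that.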

Kinsella should, by all means, not be ignorant of this fact. Yet he writes as if he were. Take that inference as you will.

But let’s take note that if you’re sane and intelligent, like Eric Grenier at ThreeHundredEight, you include ranges to ensure there is enough variation to account for the usual random-sample issues. Eric was very close to getting it right, and Kinsella, who simply says that Eric “predicted the PQ could win up to 75 seats, but they only won 54, whasupwittat eh?” (paraphrased), is being very disingenuous. As I said before, pollsters don’t predict the final result, they infer it – and Eric, with the proper caution required, inferred correctly. Kinsella is taking it too far.

Part 2: Pollsters Got it Wrong in the Past, Therefore Never Use that Pollster Again

This line near the end of Kinsella’s article particularly irks me, because it is just so stupid:
It’s the new way in political polling — something doesn’t have to be true, anymore, just truthy. It just needs to be plausible. So there was the Toronto Star, the morning after the stunning Quebec result, predicting Liberals would lose a byelection in Kitchener-Waterloo. The polling agency they used? Forum Research. The same firm that got it so wrong in Quebec and Alberta!
Seriously? Never mind the fact that Forum correctly inferred the NDP victory two days before this article was released – oops, Warren – it is just a silly statement overall.

Pollsters will not get it right every time, nor will they get it wrong every time. The issue is one of the reliability of their data, which depends on how it was collected – how the question was asked, who they contacted, how they spread their regionals out, what demographic weightings they applied to reflect an accurate sample of the population, and so on. Those are the variables under the pollster’s control – the person who is the data point may not be giving accurate information themselves, whether intentionally or because, by the next afternoon, they’ve changed their minds completely.

When a pollster does get it wrong, you shouldn’t necessarily be surprised. Polls are useful for watching trends and getting information on the fly, but the poll that truly matters is the one on E-day. That poll is the most accurate one, after all, given that it is a 100%, to-scale sample of the electorate.

But pollsters can’t contact everyone in the province, and never will, so they rely on much smaller, though representative, samples. Right there you already have a knock against your confidence interval, because the smaller the sample, the less accurate you’ll be. That is just a fact of math.
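As a back-of-the-envelope illustration of that fact of math – the same worst-case formula found in any stats textbook, nothing specific to these pollsters:

```python
import math

# 95% worst-case margin of error (p = 0.5) at various sample sizes
for n in (500, 1000, 2000, 4000, 8000):
    moe = 1.96 * math.sqrt(0.25 / n)
    print(f"n={n:>5}: +/-{moe:.1%}")
```

Note the diminishing returns: halving the margin of error requires quadrupling the sample, which is why real polls settle for samples in the low thousands rather than chasing E-day’s 100% sample.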

So take polls with a grain of salt, because they carry a built-in chance of being wrong every single time. But more often than not, they aren’t – polls, when conducted correctly, will more than likely produce a result similar to the final one. Simply looking at the polls in Canada, we’ve seen that pollsters got Ontario, Manitoba, the 2011 federal election, BC 2009, Quebec 2008, Quebec 2007, Ontario 2007, and a whole crapload of other recent elections roughly right. That’s an impressive track record, and frankly Quebec 2012 should be up there as well – they predicted a PQ government, more or less, with a surging CAQ to boot. Guess what we got, folks.

The biggest failure was, of course, Alberta 2012, and I doubt anyone will figure out what happened there with great confidence. But Kinsella needs to shut up about it, because it’s getting annoying – he should be pushing for those pollsters to figure out what went wrong, not berating them. But a Sun News reporter will do what a Sun News reporter does, which is not give any friggin’ context to anything.

But this doesn’t mean abandoning polls at the first sign of their being off a few percentage points. They’re not meant to be taken as a showcase of the final numbers, because pollsters aren’t psychics. They’re data collectors. Kinsella should know this, yet acts like he doesn’t.

Part 3: Polls as Crack

Kinsella is not necessarily wrong about this point, which to be fair was kind of hard to type after writing everything above. Modern media, which has an insatiable appetite for information to overlay on graphics that fly out at you on the screen, relies a lot on a poll to illustrate its point.
Notice my choice of words here: a poll. Not polls. Like Kinsella, the media treats a poll as if it were some sort of psychic that will give the correct analysis every time, instead of collecting and collating the data, taking the margin of error into account, reminding themselves that a poll is a snapshot in time, not a snapshot of the future, assessing it all, and then saying, “here’s something interesting to look at!”

No, the media just gets the press release from a pollster and throws it up. That isn’t how polls are meant to be used. The general public doesn’t understand statistics; they simply see a number and assume it is the final result. “What the hell is a confidence interval? Get away from me, you freak!,” so said my last date.

Kinsella takes advantage of this fact even as he points it out. He writes about how bad pollsters are without giving the actual context, while also pointing out how stupid the media is to rely so heavily on polls given the lack of context they’re given… which is generally correct. It’s an odd article when you read it that way.


But it makes it very clear that Kinsella is just ragging on polls because… well, I suppose he doesn’t have much else to do right now. Ever since Alberta, he’s had it in for polls in a bad way – the thing is, he looks sillier and sillier the more he talks about it.

Again, as I pointed out earlier, Kinsella name-dropped the KitWat riding poll done by Forum as an example of how the media relies on pollsters that get it wrong… yet the poll ended up being right. Not exact, of course, but the Liberals certainly didn’t win KitWat, while they did win Vaughan, which Forum also showed them winning. That’s two polls from what Kinsella calls a horrible polling company, both of which got it right.

But does he mention that? Of course not. It is two days later, and maybe he prepared this article before the by-elections… but it still got printed anyway. His point was clearly contradicted by reality, and it still went out. And no correction yet. Heh.

I don’t know about you, but I think that speaks volumes more about Kinsella than it does pollsters.
