Sunday, July 22, 2007

My "Conversation" with Survey USA

I was at a friend's house this weekend and the phone rang. After looking at the caller ID, my friend decided not to answer. However, I noticed that the caller was a familiar name: it was Survey USA. Since a lot of my research focuses on public opinion, the call from Survey USA was intriguing enough for me to ask my friend if I could answer it. Survey USA has gained notoriety in recent years for their methodology--they use automated calls that ask respondents to push buttons on their phones to answer questions asked by a recorded voice (see a more extensive discussion of their methodology here; see a Slate article on how their method stacks up here). Overall, the method (Interactive Voice Response) has been shown to produce fairly accurate estimates of support for candidates during election campaigns.

Two main concerns with the method are the selection of the appropriate interviewee within a household and the fact that the automated format precludes any interaction or clarification during the survey. Well, on the first point, the fact that I answered my friend's phone illustrates the kind of in-house selection problem Survey USA faces. I am not even a member of the household they randomly selected to participate in the survey, and there was no way for the computer to know that (I have no land line, so I wouldn't even be in their sampling frame). Survey USA does nothing to make sure a particular person (male/female, head of household, etc.) is on the phone at the beginning of the survey. During my interview, they asked for some basic information (gender, race, age), which they will use to weight the responses. The concern some have, however, is how well that weighting actually corrects for these in-house selection effects. Of course, all surveys have to use weighting to overcome selection problems, so this may simply be a matter of degree.
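To make the weighting idea concrete, here is a minimal sketch of simple demographic cell weighting (post-stratification). This is only an illustration of the general principle, not Survey USA's actual procedure, and the categories, population shares, and responses below are all made up.

# A rough sketch of demographic cell weighting (post-stratification).
# Illustrative only: not Survey USA's actual procedure; the categories,
# population shares, and responses are hypothetical.
from collections import Counter

# Hypothetical respondents: (gender, candidate they feel positive about).
respondents = [
    ("female", "candidate_a"), ("female", "candidate_b"),
    ("female", "candidate_a"), ("male", "candidate_b"),
]

# Assumed population shares (e.g., taken from Census figures).
population_share = {"female": 0.52, "male": 0.48}

# Shares actually observed in the sample.
n = len(respondents)
sample_share = {g: c / n for g, c in Counter(g for g, _ in respondents).items()}

# Each respondent is weighted by population share / sample share for their
# group, so over-represented groups count less and under-represented more.
weights = [population_share[g] / sample_share[g] for g, _ in respondents]

# Weighted estimate of support for candidate_a.
support_a = sum(w for (g, a), w in zip(respondents, weights) if a == "candidate_a")
print(round(support_a / sum(weights), 3))

Real polls weight on several variables at once (often with iterative raking rather than simple cells), but the intuition is the same: the weights try to make the people who actually answered look demographically like the population, which is exactly the step critics worry may not fully repair the in-house selection problem.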

In a recent Public Opinion Quarterly article, Mark Blumenthal (of pollster.com) notes:

"Yes, IVR studies appear to push the envelope with respect to in-house selection and demographic weighting, but they are extending similar compromises already made by conventional surveys. What evidence do we have that respondents obtained through an inbound IVR study are more biased than those obtained on the phone with live interviewers? The vote validation studies we have available show that the IVR studies are consistent with, and possibly superior to, surveys done with live interviewers."

I think Blumenthal's point is on the mark, and for what these surveys aim to do (quickly capturing public opinion in a cost-efficient way), the IVR technology works great.

But after taking the survey, I'd say one problem that may have been somewhat overlooked so far is that IVR surveys may be more prone to satisficing. Satisficing (see Jon Krosnick's work on this) occurs when a respondent is essentially just answering questions without thinking much about them, probably because he or she just wants to finish the survey. Because IVR respondents are pushing buttons rather than verbalizing answers, I expect the method would be even more prone to satisficing. After all, how many times have you been guiding yourself through an automated phone menu and accidentally hit the wrong button because you weren't really listening that carefully? Would this be less likely to happen if you were talking to a human? In the survey I took, I was asked a series of questions about each candidate for the Democratic presidential nomination (whether I felt positive, negative, or neutral toward the candidate). As we went through the list, I got into a pretty good groove with pressing the buttons until one candidate came along for whom I almost pressed the wrong button. I would expect that this happens a lot with IVR, since it is much easier to cut off a computer by pressing a button than it is to cut off a human being who is asking you a question. Also, verbalizing a response likely requires more cognitive work than simply pressing a button on a phone (I'm not sure precisely what the cognitive science says on this; I'm just hypothesizing). It is also worth noting that there was no apparent way to go back to a question if you accidentally hit the wrong button.

Thus, as a way of capturing more involved attitudes, I'm not sure the method is nearly as useful yet. The questions that can be asked are too limited, and the method is probably too prone to satisficing, particularly if the survey were extended to capture more detailed attitudes.

Nevertheless, the nice thing about the Survey USA poll is that it was quick, and I come from a generation that (for better or worse) is very comfortable interacting with computer voices. I think IVR offers a quick, economical way to take a snapshot of public attitudes on a limited range of questions. For example, Survey USA will be calling me (well, my friend) back at 9pm tonight to get a quick reaction to the Democratic debate. Because they do not need to employ human interviewers to do this, the feat will not be that difficult (finding enough people who actually watch the debate is another matter, however). While the detail of the interview will be lacking, it is still great for political scientists to have an idea of who viewers thought won or lost a debate before the media's spin takes hold. Different survey methodologies are well suited to different tasks, and the more options we have to choose from, the better, as long as we are well aware of the limitations inherent in each method.

UPDATE (7/24/07): Mysterypollster.com has posted about the initial results from this survey here. It appears that impressions of Biden increased markedly after the debate, but that hasn't changed the fact that Clinton remains the person that respondents believe would make the best president.