A new national poll by Rasmussen Reports took a look at where people stood on last week's mini-debate between Clinton and Obama. If you recall, the question asked at the CNN/YouTube debate was whether Obama or Clinton would meet with leaders of rogue nations during his or her first year in office without setting any preconditions. Media commentators (and others, myself included) thought that Clinton had landed a blow by highlighting her experience in her answer to the question last week. After Obama said that he would meet with these world leaders, Clinton voiced the following reply:
CLINTON: Well, I will not promise to meet with the leaders of these countries during my first year. I will promise a very vigorous diplomatic effort because I think it is not that you promise a meeting at that high a level before you know what the intentions are.
I don't want to be used for propaganda purposes. I don't want to make a situation even worse. But I certainly agree that we need to get back to diplomacy, which has been turned into a bad word by this administration.
And I will pursue very vigorous diplomacy.
And I will use a lot of high-level presidential envoys to test the waters, to feel the way. But certainly, we're not going to just have our president meet with Fidel Castro and Hugo Chavez and, you know, the president of North Korea, Iran and Syria until we know better what the way forward would be.
Later in the week, Clinton called Obama's promise to meet with these leaders without preconditions "irresponsible and frankly naive."
The immediate perception was that Clinton had successfully used the question to highlight the difference between her experience and Obama's. Yet this latest survey suggests that more Americans side with Obama on this one. When asked whether the next president should meet with leaders of nations like Syria, North Korea, and Iran "without setting any preconditions," 42% agreed with the statement, 34% disagreed, and 24% were not sure. Support for Obama's position is even stronger among Democrats (55% of whom agreed). Give Obama credit: he did not really back down from what he said at the debate. In fact, he criticized Clinton's position as not being very distinct from the current policy favored by the Bush White House. And if this latest survey is correct, his willingness to stick to what he said at the debate may have been the right choice (politically, at least; no judgment here on whether it would make for good foreign policy).
Tuesday, July 31, 2007
Sunday, July 29, 2007
More on American Ideology...
I want to elaborate a bit on my post on ideology from last week. Jim Stimson (UNC), one of the best public opinion scholars out there, is studying this question of how Americans often mislabel their own ideology.
One of Stimson's major contributions to the public opinion field has been the development of his "mood" index. Essentially, Stimson's measure of mood uses survey data that asks citizens their positions on a range of issues. Stimson combines these responses to create a measure of the percentage of Americans who want policy to move in a more liberal direction. In a recent presentation, Stimson demonstrated that the American electorate appears far more liberal according to his "mood" index than when citizens are asked to describe their own ideology. The image below, taken from the presentation, shows how these two measures track over the past half century.
There are two notable patterns in this figure. First, note the large gap between the percentage of citizens who consider themselves liberal (blue line) and the percentage who answer a variety of policy questions in a liberal way (red line). This gap persists throughout the period examined.
A second point is that while there was always a gap between these two measures, they generally moved in the same direction during this period. During the 1970s, both measures declined; during the 1980s, both measures were on the rise. However, this pattern has reversed a bit since the mid-1990s. Note that while "mood" is moving in a substantially more liberal direction in recent years, the percentage of Americans who call themselves liberal is continuing to decline (though at a modest rate).
The fact that the two measures do not appear to even move in the same direction in recent years really illustrates the disconnect between how citizens identify their own ideology when asked whether they are liberal, moderate or conservative and the way they actually behave when asked about specific policy issues. How pollsters and political scientists deal with this problem is another question.
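The basic mechanics of a mood-style index can be sketched in a few lines of code. This is only an illustration of the general idea (averaging, across policy questions, the share of respondents giving the liberal answer), not Stimson's actual algorithm, and the questions and responses below are hypothetical:

```python
# Illustrative sketch only -- NOT Stimson's actual method.
# Each survey question maps to a list of respondent answers; the index
# averages the percent-liberal share across all policy questions.

def mood_index(surveys):
    """Average, across policy questions, the percentage of respondents
    giving the liberal response (ignoring 'don't know' answers)."""
    shares = []
    for responses in surveys.values():
        liberal = sum(1 for r in responses if r == "liberal")
        substantive = sum(1 for r in responses if r in ("liberal", "conservative"))
        if substantive:
            shares.append(100.0 * liberal / substantive)
    return sum(shares) / len(shares)

# Hypothetical responses to three policy questions:
surveys = {
    "health_care": ["liberal", "liberal", "conservative", "dont_know"],
    "education":   ["liberal", "conservative", "conservative", "liberal"],
    "environment": ["liberal", "liberal", "liberal", "conservative"],
}
print(round(mood_index(surveys), 1))  # percent-liberal mood score
```

Because "don't know" answers are dropped from the denominator, each question contributes the share of substantive responses that lean liberal, which is then averaged into a single score.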
Thursday, July 26, 2007
Is it Time for a New Terminology for American Ideology?
One of the questions at Monday's CNN/YouTube debate that stood out for me was directed at Hillary Clinton. The questioner asked Clinton whether she would consider herself a liberal. Clinton's response was this:
"...Unfortunately, in the last 30, 40 years, [liberal] has been turned up on its head and it's been made to seem as though it is a word that describes big government, totally contrary to what its meaning was in the 19th and early 20th century. I prefer the word 'progressive,' which has a real American meaning, going back to the progressive era at the beginning of the 20th century.
I consider myself a modern progressive, someone who believes strongly in individual rights and freedoms, who believes that we are better as a society when we're working together and when we find ways to help those who may not have all the advantages in life get the tools they need to lead a more productive life for themselves and their family."
The underlying point here is that no Democratic presidential candidate can call themselves a "liberal" and still expect to win the general election; Republicans have been far too successful in attaching a negative connotation to the term. Many citizens who probably are "liberals" also shy away from the label because of the stigma it carries. In the latest CBS/NY Times poll, only 15% of respondents considered themselves "liberal" and 31% called themselves "conservative." At the same time, 48% labeled themselves "moderate."
CCPS is currently putting together a survey on health care issues. We are allowed only a limited number of questions, so one of the first cuts I suggested was the ideology question. While you will get some debate from scholars such as Morris Fiorina, I tend to agree that the electorate is fairly polarized along party lines at present. This means that party and ideology are highly correlated, much more than they were when there was a substantial number of conservative southern Democrats (most of them have now relocated to the Republican Party). If made to choose, I'd rather use the party identification question, because I think it gives a better sense of ideology: "liberals" who do not like the "liberal" label will still admit to being Democrats. That is why, in the same survey where only 15% of the public admits to being "liberal," over 30% identify with the Democratic Party. On the other side, slightly more citizens claim the "conservative" label than admit to identifying with the Republican Party. This shows just how much of a problem the ideology question poses to pollsters and political scientists. "Conservative" and "liberal" are not just polar opposites on the ideological scale; they are unequal in their social acceptability. Consider two people on an ideological scale: person A is about the same distance to the left of center as person B is to the right of center. Because of the loaded nature of the term "liberal," person A is less likely to use the "liberal" label to define him/herself than person B is to choose the "conservative" label. Thus, even if you have an equal number of people to the left as you have to the right, that will not be reflected in the ideological self-placement question presently used by pollsters.
Some studies use answers to a series of questions about policy issues to place respondents on an ideological scale in addition to asking them to place themselves. These studies find that many more people actually answer policy questions in a liberal way than admit to being a liberal. For a really interesting breakdown of present-day American ideology using responses to policy questions, see the comprehensive study conducted by the Pew Research Center for the People and the Press here.
Unfortunately, survey questions cost money and in most cases it is not feasible to ask 10 policy questions just to come up with a respondent's ideology. Thus, because "liberal" has become such a loaded term, it may be time for a new vernacular when it comes to describing American political ideology. I'm not sure what the new terminology should be, but in many other countries, terms such as "left," "center," or "right" are used instead of "liberal," "moderate," or "conservative." While it would certainly take some getting used to for politicians, pundits, journalists, and the public, the new terminology may provide a more meaningful way for people to think about their own ideology and a better way for pollsters to measure the concept.
Monday, July 23, 2007
EPAAI Pictures, Part I
Jennifer Singleterry, Haley Adams, and Claudia Thurber enjoy the sunshine in front of our hotel the first night of EPAAI.
Max Glass, Murphy Hebert and Stephanie Johnson chat with Professor Thurber in front of the hotel.
Drs. Sheridan (left) and Thurber (right) pose with Christine Gould of CropLife International. Christine was a student of Dr. Sheridan's and also took the Public Affairs and Advocacy Institute in the U.S. before graduating from AU and moving to Brussels to work for CropLife.
Jennifer Singleterry, Angela Cavallucci, and Brian O'Laughlin enjoy one of Belgium's finer delicacies - waffles.
The group relaxes at a cafe before having lunch with an MEP.
EPAAI: Summer 2007
CCPS ran the second installment of its European Public Affairs and Advocacy Institute in Brussels, Belgium on June 24th-30th. Fourteen students participated in the institute - all graduate-level students in political science, public policy, or public administration. The course was taught by Dr. Thurber and Dr. Jerry Sheridan, who is the head of AU Abroad's program in Brussels and an economic scholar focused on the EU. I was fortunate enough to go along as the teaching assistant for the course. I think all would consider the week a resounding success.
In the next few days, I will be posting a few entries relating our time and work in Brussels. First, a general summary of our schedule, speakers, and activities follows.
Most of us arrived over the weekend (three sans luggage, but that's another story), and our first outing was an optional day trip to Bruges. It is a wonderfully preserved 13th-century town, home to beautiful architecture, canals, and some impressive art collections. Dr. Sheridan gave the group a tour of the town in the morning and left us the afternoon to explore. Back in Brussels, we had our opening EPAAI dinner at Le Grande Cafe right off the Grande Place, sampling a traditional Belgian dish called chicken waterzooi. We had a good time getting to know one another in a relaxed atmosphere - which was a good thing, because we hit the ground running the next day, hearing five speakers in a row.
Monday
John Vassallo, Senior Counsel and European Affairs Director, GE Europe
Kristian Schmidt, Deputy Chef du Cabinet, EU Commissioner Kallas
Dr. Jamie Shea, Political Director of the Cabinet of the Secretary
Marcel Claes, CEO, Amcham Belgium
Jose Lalloum, Chairman, European Public Affairs Consultancies Association
Tuesday
Eva Grut, Senior Director, Pfizer Public Affairs Europe
George Parker, Brussels Bureau Chief, the Financial Times
Stefan Krawczyk, Deputy Regional Director for Europe, International Federation of the Phonographic Industry
Michelle O'Neill, Public Affairs Officer, Honeywell
Wednesday
Wilfred Aspinall, founder, European Link
Christine Gould, Plant Biotechnology, CropLife International
Daniel Mulaney, Office of the US Trade Representative, US Mission to the EU
Ambassador C. Boyden Grey, U.S. Ambassador to the EU
Edward Thomas & Anita Kelly, Amcham EU
Thursday
David Bushong, APCO Worldwide
Bill Newton Dunn, Member of the European Parliament
Jorn Fleck, Transatlantic Policy Network
Stephane Ducable, Microsoft Europe
Friday
Jeremy Rand, Bureaucrat, General Secretariat of the (EU) Council
Catherine Van Reeth, Corporate Public Affairs Manager, InBev Corporation
Tour of Cantillon Brewery (a small, organic brewery in Brussels)
After our brewery tour, we went our separate ways; some to return to the U.S. and back to work/school, and some to travel further in Europe. More to come later...
Sunday, July 22, 2007
My "Conversation" with Survey USA
I was at a friend's house this weekend and the phone rang. After looking at the caller ID, my friend decided not to answer. However, I noticed that the caller was a familiar name: it was Survey USA. Since a lot of my research focuses on public opinion, the call from Survey USA was intriguing enough for me to ask my friend if I could answer it. Survey USA has gained notoriety in recent years for their methodology--they use automated calls that ask respondents to push buttons on their phones to answer questions asked by a recorded voice (see a more extensive discussion of their methodology here; see a Slate article on how their method stacks up here). Overall, the method (Interactive Voice Response) has been shown to produce fairly accurate estimates of support for candidates during election campaigns.
Two main concerns with the method deal with the selection of the appropriate interviewee within a household and the fact that the automated method precludes any interaction or clarification during the survey. Well, on the first point, the fact that I answered my friend's phone points to the type of in-house selection problems faced by Survey USA. I am not the person or household that they randomly selected to participate in the survey, and there was no way for the computer to know that (I have no land line, so I wouldn't even be in their sampling frame). Survey USA does nothing to make sure a particular person (male/female, head of household, etc.) is on the phone at the beginning of the survey. During my interview, they asked some basic information (gender, race, age), which they will use to weight the responses. The concern that some have, however, is how well that weighting actually accounts for these in-house selection effects. Of course, all surveys have to use weighting to overcome selection problems, so this may simply be a matter of degree.
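For readers curious what demographic weighting looks like in practice, here is a minimal sketch of the idea: each respondent is weighted by the ratio of their group's known population share to that group's share of the sample, so over-represented groups are weighted down and under-represented groups weighted up. This illustrates the general technique, not Survey USA's actual procedure, and the population shares are hypothetical:

```python
# Minimal post-stratification weighting sketch (hypothetical shares;
# NOT Survey USA's actual weighting procedure).
from collections import Counter

def poststratify(sample_groups, population_shares):
    """Weight each respondent by (population share) / (sample share)
    of their demographic group."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    return [population_shares[g] / (counts[g] / n) for g in sample_groups]

# A sample in which women are over-represented (70%) relative to a
# population assumed to be 50/50:
sample = ["F", "F", "F", "F", "F", "F", "F", "M", "M", "M"]
weights = poststratify(sample, {"F": 0.5, "M": 0.5})

# Women are weighted down, men up; the weights average to 1.
print(weights[0], weights[-1])
```

The worry raised above is precisely that when the wrong person in the household answers (as I did), this kind of adjustment corrects the sample's demographic margins but cannot undo whatever selection effect put that person on the phone.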
In a recent Public Opinion Quarterly article, Mark Blumenthal (of pollster.com) notes:
"Yes, IVR studies appear to push the envelope with respect to in-house selection and demographic weighting, but they are extending similar compromises already made by conventional surveys. What evidence do we have that respondents obtained through an inbound IVR study are more biased than those obtained on the phone with live interviewers? The vote validation studies we have available show that the IVR studies are consistent with, and possibly superior to, surveys done with live interviewers."
I think Blumenthal's point is on the mark, and for what they aim to do (quickly capturing public opinion in a cost efficient way), the IVR technology works great.
But after taking the survey, I'd say that one problem that may have been overlooked a bit to this point is that IVR surveys may be more prone to satisficing. Satisficing (see Jon Krosnick's work on this) occurs when a respondent is essentially just answering questions without thinking much about them (probably because he/she just wants to finish the survey). Because IVR means that respondents are pushing buttons rather than verbalizing answers, I expect that the method would be even more prone to satisficing. After all, how many times have you been guiding yourself through an automated menu on the phone and accidentally hit the wrong button because you weren't really listening that carefully? Would this be less likely to happen if you were talking to a human? In the survey I took, I was asked a series of questions about each candidate for the Democratic presidential nomination (whether I felt positive, negative, or neutral toward the candidate). As we were going through the list, I was getting into a pretty good groove with pressing the buttons until one candidate came along for whom I almost pressed the wrong button. I would expect that this happens a lot with IVR, since it is a lot easier to cut off a computer by pressing a button than it is to cut off a human being who is asking you a question. Also, the process of verbalizing a response likely requires more cognitive work than simply pressing a button on a phone (I'm not sure precisely what the cognitive science says on this, but I'm just hypothesizing that this is the case). It is worth noting that there was no apparent way to go back to a question if you accidentally hit the wrong button.
Thus, as a way of capturing more involved attitudes, I'm not sure the method is nearly as useful yet. The questions that can be asked are too limited and the method is probably too prone to satisficing, particularly if the survey was extended to capture more detailed attitudes.
Nevertheless, the nice thing about the Survey USA poll is that it was quick, and I come from a generation that (for better or worse) is very comfortable interacting with computer voices. I think that IVR offers a quick, economical way to take a snapshot of public attitudes on a limited range of questions. For example, Survey USA will be calling me (well, my friend) back at 9pm tonight to get a quick reaction to the Democratic debate. Because they do not need to employ human interviewers to do this, the feat will not be that difficult (finding enough people who actually watched the debate is another matter, however). While the detail of the interview will be lacking, it is still great for political scientists to have an idea of who viewers thought won or lost a debate before the media's spin takes hold. Thus, different survey methodologies are well suited to performing different tasks, and the more options we have to choose from the better, as long as we are well aware of the limitations inherent in each method.
UPDATE (7/24/07): Mysterypollster.com has posted about the initial results from this survey here. It appears that impressions of Biden increased markedly after the debate, but that hasn't changed the fact that Clinton remains the person that respondents believe would make the best president.
Tuesday, July 17, 2007
CCPS Research Update: How Women Govern
This begins what we hope will be a regular feature on the blog--updating readers on research being conducted by scholars associated with CCPS.
Maryann Barakso, a Research Fellow with CCPS, has a very interesting article out in the most recent edition of Politics & Gender. Barakso looks at the rules that women's interest groups use to govern themselves. Research shows that women tend to desire more consensus-oriented approaches to decision making, so Barakso expected to find that groups formed by women would be more likely to adopt democratic rules that encourage participation from their members. However, she finds no such thing. Rather, women's organizations appear little different from other groups with regard to how they are structured. According to Barakso's analysis, "Contrary to expectations, many women’s organizations are quite undemocratic and this is particularly true of most organizations founded since 1960."
So, who are the most and least democratic organizations according to Barakso? Here are the top 3 and bottom 3 on Barakso's "Internal Democracy Index":
Most Democratic Organizations:
Coalition of Labor Union Women
National Organization for Women
International Women's Insolvency & Restructuring Confederation
Least Democratic Organizations:
National Black Women's Health Project
Women in Film and Video
Women in Government
Monday, July 16, 2007
Hillary Clinton's High Unfavorable Ratings
I received an email today that, in part, discussed Hillary Clinton's relatively high unfavorable ratings. This is one of the points that Clinton '08 doubters often focus on, arguing that a candidate with unfavorable ratings so high will have a hard time winning a national election. (Charles Franklin has posted some nice historical data on her unfavorable ratings on his blog and at Mysterypollster.com.)
One thing I was curious about is whether a different Democratic nominee would be able to win the votes of those who view Clinton unfavorably. So, I took a look at the most recent data I could get my hands on quickly. The national survey was conducted by Pew in early December (2006). One nice thing about the data is that they broke down favorability ratings on a 4 point scale (very/mostly favorable, very/mostly unfavorable). Here is the breakdown for Clinton in that survey:
Very Favorable: 21.3%
Mostly Favorable: 33.9%
Mostly Unfavorable: 19.8%
Very Unfavorable: 25.1%
Now, let's look at how those ratings break down along party lines:
Republicans:
Very Favorable: 5%
Mostly Favorable: 14.3%
Mostly Unfavorable: 31.3%
Very Unfavorable: 49.5%
Independents:
Very Favorable: 14.0%
Mostly Favorable: 40.3%
Mostly Unfavorable: 22.1%
Very Unfavorable: 23.7%
Democrats:
Very Favorable: 40.4%
Mostly Favorable: 45.0%
Mostly Unfavorable: 8.3%
Very Unfavorable: 6.3%
Unfortunately, the survey didn't include any other Democratic candidates I could compare Clinton's numbers with, but I'm not submitting this post for publication in a journal, so what the heck! We aren't too interested in Republicans here, because in the last few elections they have been very loyal in voting for their presidential candidates. Thus, no Democrat is likely to capture much of that vote. Clinton's unfavorables among Democrats are about 15%, which may be a little higher than one would expect. But a little further investigation into that 15% reveals that they are mostly Democrats who identify themselves as conservatives or moderates and reside in the South or the Midwest. This is not a loyal Democratic constituency. Nevertheless, just over half of that 15% reported voting for John Kerry in 2004, so the Clinton camp may be concerned about at least part of that group.
The most important group is self-identified independents, since they are likely to be the swing vote in the election. About 46% of that group gives Clinton an unfavorable rating. However, public opinion research consistently shows that many people like to think of themselves as independents even if they loyally support one party or the other. Fortunately, the Pew survey added a question asking citizens which party they leaned toward. Here is how Hillary's favorables/unfavorables break down among independents who did and did not lean toward one party or the other:
Independents Leaning Republican:
Very Favorable: 8.3%
Mostly Favorable: 16.6%
Mostly Unfavorable: 32.5%
Very Unfavorable: 42.7%
Independents Leaning Democratic:
Very Favorable: 21.0%
Mostly Favorable: 53.7%
Mostly Unfavorable: 15.5%
Very Unfavorable: 9.9%
Independents Not Leaning Toward Either Party:
Very Favorable: 15.7%
Mostly Favorable: 37.1%
Mostly Unfavorable: 20.0%
Very Unfavorable: 27.1%
This gives us some interesting insight into Clinton's numbers. Note that roughly as many leaning Republicans view Clinton favorably as leaning Democrats view her unfavorably, so the two groups mostly cancel each other out. Clinton supporters might be somewhat concerned that leaning Republicans are more likely to hold very unfavorable feelings than leaning Democrats are to fall into the very favorable category. Clinton would surely also like to have fewer unfavorables among non-leaning independents, but I hesitate to make too much of that group since it is a mishmash of voters who are not nearly as likely to vote as the other groups.
Based on this crude analysis, I'd say that Clinton does not appear to have high unfavorable ratings among any group that would otherwise want to vote Democratic. In other words, most people who give Clinton an unfavorable rating are probably not going to vote for any Democratic nominee and most of those who view her favorably would probably vote for nearly any Democrat anyway. All of this could change, of course. Pundits often use Clinton's high favorables and unfavorables to note that Clinton is a polarizing figure, but I think that what this analysis indicates is that there is little remarkable about these ratings. Views toward Clinton are polarized, but so is the American political climate more generally. I'd bet that any Democratic nominee will end up with favorables/unfavorables like Clinton's a year from now; Clinton's long history in the public eye just helped her get there first.
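For readers who want to replicate the arithmetic behind this crude analysis, the collapsed favorable/unfavorable totals follow directly from the four-point breakdowns quoted above. The sketch below uses the Pew percentages as reported in this post; the dictionary layout and group labels are mine, and the calculation deliberately ignores the relative size of each group:

```python
# Pew (Dec. 2006) four-point favorability breakdowns for Clinton, in percent,
# as quoted in the post. Collapsing the scale gives the overall fav/unfav totals.
ratings = {
    "Republicans":  {"very_fav": 5.0,  "mostly_fav": 14.3, "mostly_unfav": 31.3, "very_unfav": 49.5},
    "Independents": {"very_fav": 14.0, "mostly_fav": 40.3, "mostly_unfav": 22.1, "very_unfav": 23.7},
    "Democrats":    {"very_fav": 40.4, "mostly_fav": 45.0, "mostly_unfav": 8.3,  "very_unfav": 6.3},
}

for group, r in ratings.items():
    fav = r["very_fav"] + r["mostly_fav"]
    unfav = r["mostly_unfav"] + r["very_unfav"]
    print(f"{group}: {fav:.1f}% favorable, {unfav:.1f}% unfavorable")
```

Running this recovers the figures cited in the text: independents come out at about 46% unfavorable (22.1 + 23.7 = 45.8) and Democrats at about 15% (8.3 + 6.3 = 14.6).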
Saturday, July 14, 2007
Speaking of Framing...
Immigration is a good example of an issue that is ripe for framing, because there are so many different aspects that either side can stress. There are also a lot of nuances to the issue that can be used by political elites. Frank Luntz has conducted some of his famous focus groups on the issue and has been telling Republicans how to talk about it. One of the words Luntz has been instructing Republicans to use is "amnesty," and I'm sure you've heard House Republicans pushing this frame. Well, a Pew Research Center for the People and the Press survey recently used a little experiment to help us get a sense of how well the "amnesty" term works for Republicans. You can find the report here: http://people-press.org/reports/display.php3?ReportID=335. Essentially, the people at Pew randomly asked half of the survey respondents a question about whether illegal immigrants should be provided a way to gain citizenship while the other half was asked if they should be provided with amnesty (see the full questions here: http://people-press.org/reports/questionnaires/335.pdf).
Interestingly, the findings from this poll reveal patterns similar to those my co-author and I find in the paper we wrote for the recent framing conference hosted by CCPS (you can view that paper here: http://nw08.american.edu/~schaffne/schaffner_atkinson.pdf). Essentially, the Republican frame works, but it really only works for certain groups--mainly Republicans. The Pew report demonstrates that 64% of "conservative Republicans" favored a way to citizenship for illegal immigrants, but only 44% favored it when the "amnesty" frame was invoked. That represents a major shift in opinion on the issue. On the other hand, the "amnesty" frame only decreased support for a path to citizenship by 8 percentage points among "moderate and liberal Republicans" and "Independents" and 4 points among Democrats. This is similar to what we found when looking at the use of the "death tax" frame. The frame worked, but mostly just on Republicans. This is because Republican citizens tend to be more open to arguments and frames being advanced by Republican elites, while Democratic citizens tend to filter out those messages. The messenger is sometimes just as important as the message.
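The size of a framing effect in this kind of split-sample experiment is just the difference in support between the two question wordings. A minimal sketch using the figures quoted above (the only group for which both raw percentages are reported here is "conservative Republicans"; the variable names are mine):

```python
# Support for a path to citizenship under each question wording, in percent,
# from the Pew split-sample experiment as quoted in the post.
support = {
    "conservative Republicans": {"citizenship_wording": 64, "amnesty_wording": 44},
}

# The framing effect is the drop in support when "amnesty" replaces
# the neutral "way to gain citizenship" wording.
for group, s in support.items():
    effect = s["citizenship_wording"] - s["amnesty_wording"]
    print(f"{group}: {effect}-point framing effect")
```

The 20-point effect for conservative Republicans dwarfs the 8- and 4-point shifts Pew reports for the other groups, which is the core of the argument here: the frame moves mainly the messenger's own partisans.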
We may view this as a good thing in one sense since it means that Republicans won't be easily led astray by Democratic frames, nor will Democrats be fooled by Republican frames. But it also leads to a question debated at the recent conference. That is, which attitudes are the "true" attitudes or opinions? Are Republicans 64% in favor or 44% in favor of giving illegal immigrants an opportunity to become citizens? Or is there even such a thing as "true attitudes?"
Thursday, July 12, 2007
Issue Framing Conference
Last month, I hosted a conference on issue framing that was sponsored by CCPS. We were honored to have an all-star cast of framing scholars attend and present their work. Information about the conference and copies of the papers presented are available here:
http://spa.american.edu/ccps/events.php?ID=577
We took video of the presentations and we are still working on converting that to an electronic format and posting it on the site. Stay tuned for details on that front.
There was a lot of interesting discussion at the conference. One thing really stood out to me--we had some of the most prolific researchers studying issue framing at the conference, and they spent much of the time debating how we should even define the concept of framing. If we are able to post the video online, I'll definitely point everyone to that debate. Really great exchange.
Overall, all the papers were really well developed and we are hoping to produce an edited volume from these works. I encourage you to explore the papers before they have to be chopped down to book chapter length. There is a lot of great stuff there.
Welcome to the CCPSBlog
We at the Center for Congressional and Presidential Studies have begun this blog as a forum for informing the public about programs and research being conducted at the center, but also as a way for CCPS faculty and staff to post about all things congressional, presidential, and political. Enjoy!
Subscribe to:
Posts (Atom)