Friday, December 5, 2008
Thoughts on the so-called "War of Ideas"
"No matter how powerful our military is, we will not be powerful if we lose the war of ideas."
--Senator Joe Biden, soon after 9/11; cited in Ira Teinowitz, "Congress will support the war of ideas" (Advertising Age, 6/17/2002)
"We're in a war of ideas."
--Donald Rumsfeld, cited in Bill Gertz, "Rumsfeld pushes 'new sense of urgency'; 'War of ideas' needed to defeat the terrorists" (The Washington Times, October 24, 2003)
War has long been a term used by governments to mobilize their populations. In the twentieth century, for example, "war of ideas" was in circulation when referring to America's conflicts with Germany and Russia. Lately, this verbal construct, used off and on by a number of pundits and politicians since 9/11, has enjoyed something of a resurgence -- thanks to statements made by the current Under Secretary of State for Public Diplomacy and Public Affairs, James Glassman.
Glassman's emphasis on the "war of ideas," for which he advocates the use of Internet social networking to discredit "violent extremism," has received, on the whole, a positive reception in the United States, with some exceptions; but should a "war of ideas," on or off cyberspace, be part of how we Americans determine our country's role in the world during the new millennium?
1. To understand the deceptive nature of the term "war of ideas," it helps to go back to Plato -- as Alfred North Whitehead famously said, Western philosophy is but a "series of footnotes to Plato." In Plato's Gorgias we gather from Socrates that persuasion is not dialogue, and indeed that rhetoric and philosophy are in a state of tension if not opposition. Following this train of thought, it becomes evident that Glassman's "war," by definition a win-or-lose conflict rather than an intellectual exchange, has nothing to do with ideas as such. It has to do -- and wouldn't terrorists find much in common with the Under Secretary's bellicose, unsubtle, hit-'em-hard approach? -- with changing behavior to advance one's interests: in other words propaganda, a weapon of war which, at its most rudimentary, appeals to atavistic emotions, not the inquisitive intellect.
2. To suggest -- as Glassman's "war of ideas" does -- that violent extremists are capable of ideas is to give them intellectual credit that they seldom deserve. To be sure, some of the terrorists' statements are taken seriously in some societies, but that is because they inflame the spirits rather than enlighten the mind, for often regrettable but understandable reasons. And words, which in discourse are a vehicle for thought, are at most of secondary importance to terrorists. What they most believe in, as a means of getting their way, is the propaganda of the deed (the more violent the better), a phrase traced back to a 19th century Italian revolutionary. How ironic, then, that Mr. Glassman's predecessor, Ms. Karen Hughes, referred to "diplomacy of deeds" as a central part of her agenda, and that Dr. Condoleezza Rice, while teaching at Stanford, stated that "I tell my students that policy-making is 90 percent blocking and tackling and 10 percent intellectual."
3. Meanwhile, what, exactly, are our American "ideas" in the "war of ideas"? -- and, indeed, what are the ideas of the US-politically acceptable Middle East moderate "locals" who can fight the evil "them" for "us," to follow the contentions of Robert Satloff, executive director of the Washington Institute for Near East Policy? To seek to define America through certain principles ("life, liberty, and the pursuit of happiness") is all well and good, but to reduce the United States to a fixed set of ideas it "fights for" is to simplify its complexity and changeability. In fact, what perhaps most characterizes the U.S. is that it contains a multitude of differing and evolving ideas, rather than permanent ideas everyone agrees upon. The notion of an American "war of ideas" is, therefore, an attack on ideas in the United States, as it implicitly limits their infinite variety. As Mr. Glassman himself wrote in 1997: "Of course, not every new idea ... will be a good one. But the trial-and-error process of learning is essential to the progress and plenitude of American life. Whether in science, technology, business, or popular culture, we cannot know in advance which experiments will succeed. For a political class dedicated to technocratic planning, that is a scary idea."
4. In the long term, a crude propaganda campaign thinly disguised under the term "war of ideas" may in fact discredit the U.S. far more than its "enemies" by confirming what violent extremists claim -- that America is not truthful about what it does -- which many in the world are predisposed to believe, given the Bush administration's hypocritical record in Iraq and elsewhere. As a noted scholar, known for not mincing words, informed me by e-mail, Glassman's war of ideas is simply "dumb." "Because," he explains, "the most subversive thing we can do is be ourselves, guilelessly and unapologetically. Wars of ideas are not our style." (He also notes, on a less idealistic level, that "as any poker player knows, you don't win the game by announcing that you are out to win the game." I would only add to this observation that seasoned propagandists, approve of them or not, know that the best propaganda is the least propagandistic: subtlety, not bombs or a loudly proclaimed "war on ideas," works best, especially in the long term. Just ask the BBC.)
5. One of the most important articles to appear, a few weeks after 9/11, was by Douglas McGray in The Christian Science Monitor (September 26, 2001), under the headline "Don't Oversell an 'Idea War'" and with the following wisdom: "Richard Nixon ... declared 'war' on drugs ... Even earlier, President Johnson's administration declared 'war' on poverty ... These wars are 'ideas wars,' in which leaders appropriate the language of war to rally political support and signal big budget commitment. ... Meanwhile, the real fight against terrorism, an ongoing combination of thankless police and intelligence work -- more like fighting crime on a global scale than waging war -- could get overshadowed."
6. Full disclosure: I was a public diplomacy Foreign Service officer during the Cold War and its aftermath (1981-2003), serving mostly in Eastern and Central Europe (Prague, Krakow, Tallinn, Kiev, Belgrade, Moscow). The Agency that provided me with a paycheck, the USIA (United States Information Agency), claimed to be engaged in a "war of ideas" with the Soviet Union (according to Washington headquarters, depending on the political season). Nevertheless, I felt that my role "in the field" was not to confront, directly (stupidly?), Soviet-thug-thoughtlessness -- which had nothing to do with Marx, after all a serious philosopher -- but rather to meet with persons, from all sides of the political fence and sectors of society, who were concerned with ideas, including ideas about the human condition and America's relationship with their country. My guide for these cherished meetings was far more Plato's dialogues than any official statements about the "war of ideas." And no one in DC headquarters ever bothered to fire me, perhaps because ideas were never considered that important in Washington to begin with.
John Brown compiles The Public Diplomacy Press and Blog Review
Friday, November 14, 2008
Want to join the Foreign Service? Advice from a "public diplomacy" diplomat
Dear Ms. [...]:
... I'll try to answer your question re preparing to join the Foreign Service [FS] as best I can:
1. Don't daydream about joining the FS by cramming to "pass the FS exam." Before trying to "become a diplomat," please consider this: live overseas, learn a foreign language (or several), and read history. All this takes time, not instant "exam-preparation-gratification."
2. Graduate-level courses in "public diplomacy" can do you no harm (except if you have to pay for them), but remember that there are no fool (full?)-proof guides to PD. It's essentially learned by experience in the field rather than by "stimulated (simulated?) situations" in the classroom.
3. Be aware that public diplomacy, as implemented by the State Department, is an official US Government function, for better or for worse. So, if you wish to be an FSO [Foreign Service Officer], be psychologically ready for the endless embassy "staff meetings" and "getting along with colleagues" side of FS life, which (if my biases are not completely off target) is really what is most "tested" on the FS oral exam (or at least it was when I took it, too many decades ago).
4. Keep up with foreign affairs as best you can by reading major dailies and magazines, if only to "fake" -- please find a better word -- your way through the FS written exam, again as I experienced it. (After all, knowledge is a lifelong search, with no one ever automatically knowing the "right answer.")
5. Leave your options -- and sense of humor -- open. There’s life (far more financially profitable but perhaps far less rewarding) before, during, and beyond the FS.
6. Most important: remember the Foreign Service is a service -- a service to your country as you -- and (unfortunately) as circumstances allow -- can best carry it out.
best, john
Thursday, November 13, 2008
US Public Diplomacy / Obama and America's "image" in the world: an important article
http://www.prwatch.org/blog/1781
Judith Siers-Poisson's blog
"Black man, black woman, black baby / White man, white woman, white baby / White man, black woman, black baby / Black man, white woman, black baby."
Public Enemy, Fear of a Black Planet
There is no doubt that the election of Barack Obama as President of the United States is historic. But does framing him as America's "first black president" show that we have not come nearly as far as we'd like to think?
The mainstream U.S. news -- and the majority of the American public, whether for or against him -- consider Barack Obama to be the first African American President. While he is certainly a member of the black community (and much more literally African-American due to his father being a Kenyan immigrant), he is also equally part of the white community. His mother was white. The grandmother who helped raise him (and whom he tragically lost to cancer on the eve of his election) was also white. But historically, and apparently to this day, to be black to any degree is to be exclusively black. Is our celebration of Barack Obama as the first black president proof that we haven't moved very far past the "one-drop rule"?
A Drop of Black, and You Never Go Back
The one-drop rule is the perception that any amount of non-white ancestral heritage makes a person non-white. But there is more than one interpretation of the concept. For some, the distinction is based on physical traits. If you appear to have black features, then you are black, whether it is more or less than 50% of your ancestry. Slightly differently, some believe that if there is even the most dilute black blood in a person's make-up, there will be a tell-tale sign of some kind that will prove the mixed heritage -- a birth mark, the shape of the crescent in the nail bed, or others.
But what we are seeing with the advent of Barack Obama as a national figure fits within yet a third interpretation. Philosophy professor and author Naomi Zack defined it in her 1998 book, Thinking About Race: "One-drop rule: American social and legal custom of classifying anyone with one black ancestor, regardless of how far back, as black." I asked Zack for her comments about Barack Obama. She replied: "Why is someone with an African father and a white mother, who if race were real would be mixed race, considered 'Black?' Why is it not also absurd to refer to that person as 'a multi-racial African American'?"
In 1994, legal scholar Julie C. Lythcott-Haims wrote in the Harvard Civil Rights-Civil Liberties Law Review that the one-drop rule "still exists today; Americans who are part-Black are socially considered Black, and only Black by most Americans. ... The one-drop rule is so ingrained in the American psyche that Blacks and Whites do not think twice about it."
In 1997, we saw Tiger Woods as a multiracial person being reduced to one facet of his identity. On Oprah Winfrey's show, he was asked if it bothered him to be referred to simply as African-American. He responded, "It does. Growing up, I came up with this name: I'm a 'Cablinasian'" (meaning Caucasian-Black-Indian-Asian). "I'm just who I am," Woods told Winfrey, "whoever you see in front of you." Sportswriter Ralph Wiley wrote about Woods' background and the one-drop rule, without naming it: "Tiger's Asian heritage defines him as thoroughly as any other aspect of his makeup, although we tend to throw everyone brown and American with nice lips into the black blender."
It isn't just white culture that follows the one-drop rule, as Tiger Woods experienced in 1997. A May 1997 article in Time magazine looked at the reaction to Woods' statement on Oprah. "Kerboom! a mini-racial fire storm erupted. Woods' remarks infuriated many African Americans who hailed his record-setting triumph at the Masters as a symbol of racial progress but see him as a traitor. To them Woods appeared to be running away from being an African American ... In their rush to judgment, the fearful apparently never stopped to consider that Woods was not turning his back on any part of his identity but instead was embracing every aspect of it."
Fast-forward to November 2006: in a Zogby International poll that year, 55% of whites considered Obama biracial after being told that his mother was white and his Kenyan father was black. Even more Hispanics -- 61% -- saw Obama as biracial. But interestingly, 66% of the blacks polled classified Obama as black.
The October 23, 2006, cover story in Time magazine shows that we still have a hard time letting people of mixed racial backgrounds "embrace every aspect" of being "just who I am." In the story, titled "Why Barack Obama Could Be the Next President," reporter Joe Klein compared Obama to Colin Powell, and employed the one-drop assumption: "Powell and Obama have another thing in common: they are black people who -- like Tiger Woods, Oprah Winfrey and Michael Jordan -- seem to have an iconic power over the American imagination because they transcend racial stereotypes." Although Obama and Woods are both multiracial, Klein referred to them solely as black and even as "iconic" African-Americans.
What Race Is, and Isn't
Historically, race has been treated as a natural category for classifying human beings. The assumption that people can be grouped into distinct races has political overtones and motives. Activities such as slavery, domination, and oppression have been justified in large part by claims that those who dominate are inherently different from (and superior to) those they dominate. Modern science, however, has shown that this system for classifying people has little if any basis in biology or genetics.
According to the current position on race of the American Anthropological Association, drafted in 1998, "The concept of race is a social and cultural construction... Race simply cannot be tested or proven scientifically ... It is clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. The concept of 'race' has no validity ... in the human species." According to the U.S. Census Bureau, race is "self-identification by people according to the race or races with which they most closely identify. These categories are sociopolitical constructs and should not be interpreted as being scientific or anthropological in nature. Furthermore, the race categories include both racial and national-origin groups." When speaking of human genetic variations, scientists today study "populations" rather than "races," a more precise term that avoids the misleading assumption that superficial characteristics such as skin color group automatically with other characteristics such as intelligence or character. In everyday life, however, "race" is still the most commonly used term and the most widely accepted concept.
Barack Obama's life experience makes him a particularly interesting case study in the problems inherent in trying to classify people by race. Obama is the son of a Kenyan man who came to study in the U.S. He was born and raised by his white maternal family in multiracial, multiethnic Hawai'i, and spent a portion of his young life living in Indonesia. He is "black" in the sense that he has an African father, but his experience growing up is quite different from that of a "typical" African American. Of course, the idea that there is a "typical" African American experience is itself rather suspect. Generations have passed since the first Africans arrived on American shores, and many African Americans have a variety of non-African ancestors with Native American, Caucasian or other roots. Ironically, therefore, Obama's mixed ancestry may be the most "typical" characteristic he shares with other African Americans.
We're Not There Yet
Even when Obama's mixed racial background is mentioned, the one-drop assumptions and default terms come into play. In a November 8, 2008 article titled "'Mutts Like Me' -- Obama Shows Ease Discussing Race," writer Alan Fram focuses on a comment that the president-elect made about what type of puppy his girls would bring to the White House with them. "Obviously, a lot of shelter dogs are mutts like me," Obama said. Fram seems to be getting to the heart of the matter, saying "The message seemed clear -- here is a president who will be quite at ease discussing race, a complex issue as unresolved as it is uncomfortable for many to talk about openly. And at a time when whites in the country are not many years from becoming the minority." However, old habits die hard. Fram also says, "By now, almost everyone knows that Obama's mother was white and father was black, putting him on track to become the nation's first African-American president."
Should embracing the multiracial background of people like Barack Obama or Tiger Woods take away from the pride and sense of accomplishment that different communities take in their achievements? Is it really less of a victory for blacks if Obama's mixed race is acknowledged and celebrated? In a November 10, 2008, article for Salon.com titled "Our Biracial President," James Hannaham wrote, "Obama's biracial. ... This is not to say that he hasn't received some of the same treatment as black Americans, or that he is not welcome among them, or that people should denigrate his need to make his background understandable to people who think that 'biracial' means a type of airplane. It suggests something far less divisive. It means that black and white people (not to mention other ethnicities chained to the binary idiocy of American race relations) can share his victory equally."
In 1967, there were still sixteen U.S. states that had laws on the books banning interracial marriage. That isn't a typo -- 1967. It was in that year that the U.S. Supreme Court unanimously struck down laws banning interracial marriages with these words: "The freedom to marry, or not marry, a person of another race resides within the individual and cannot be infringed on by the State." Barack Obama's parents met, married, and gave birth to him in Hawai'i in the early 1960s. It is a matter of chance that they were not in one of the states where interracial marriage and sex were illegal. In addition, the 2000 U.S. census was the first one in which respondents could choose to identify themselves as belonging to more than a single race. Given that recent history, perhaps we could all celebrate how far we have come by electing a biracial President.
Wednesday, November 12, 2008
Islam and the West: The Myth of the Green Peril: Not to Be Missed for US Public Diplomacy
Islam and the West: The Myth of the Green Peril
November 5, 2008 by Leon Hadar, Antiwar.com
The 9/11 attacks and the ensuing "war on terror" have provided an opportunity for the U.S. foreign policy establishment, suffering from Enemy Deprivation Syndrome since the Cold War's end, to settle on a potential new bogeyman. It is radical Islam, or the "Green Peril" – a term I used in an article 15 years ago in Foreign Affairs, spring 1993. I challenged Samuel Huntington's clash-of-civilizations paradigm, which predicted that the West and Islam would engage in a long and bloody struggle over control of the Middle East, including its oil resources. The neoconservative ideologues who hijacked President George W. Bush's foreign policy apparatus have embraced Huntington's notion of a confrontation between Islam and the West. They see it as a way to justify American military power, to establish U.S. hegemony in the Middle East while imposing American values, the so-called "freedom agenda," to deal with the rise of Islamofascism, a Khomeini-like creature, armed with a radical ideology, equipped with nuclear weapons, and intent on launching a violent jihad against Western civilization.
According to this neoconservative dogma, which Bush has attempted to apply in Mesopotamia, a free and democratic Iraq would become a model for political and economic reform in the Arab world and the broader Middle East, and a series of mostly peaceful democratic revolutions would be unleashed from the Islamic frontiers of China, through Iran, Syria, Lebanon, and Palestine, to the Balkan borders. Hence, following the fall of Saddam Hussein, the Bush supporters, recalling the dramatic changes in Eastern Europe after the Soviet Union's collapse, expected the democratic dominoes to fall in Syria and Iran, while arguing that Lebanon's "Cedar Revolution" and the planned election in Palestine reflected the shape of things to come.
At the same time, even the more liberal and internationalist foreign policy pundits like New York Times columnist Tom Friedman, critical of some aspects of the neoconservative agenda, insisted that the U.S. needed to launch a massive campaign to help modernize/democratize/liberalize/secularize the Arab Middle East and by extension the entire Muslim world, preferably through public diplomacy and education, and as a last resort, military force.
Indeed, against the backdrop of U.S. involvement in two major wars in the Middle East and the increasingly assertive position of Iran and its regional allies, a consensus is evolving among Washington's chattering class about the obligation to launch a Wilsonian campaign to bring the Middle East into the modern age, while extinguishing radical Islam. Washington's failure to do that would not only endanger Israel and other Mideast allies. With stratospheric energy prices igniting anxiety in Washington over access to Persian Gulf oil resources, the civilization-clash theory has acquired an economic veneer. Imagine if Osama bin Laden controlled the Middle East's energy assets, a.k.a. "Arab Oil," and used them as a "weapon" against the West!
New foreign policy paradigms, like new religions and political ideologies, are produced by intellectual entrepreneurs hoping to win status and influence over those seeking power. At the same time, politicians use these worldviews to mobilize public support as they lead the nation/people/class against an outside threat that allegedly challenges core interests and values. From this perspective, the new Islamic bogeyman promoted by entrepreneurial neocons has clearly served the interests of Washington's Iron Triangle of bureaucrats, lawmakers, and interest groups, as well as foreign players who have pressed for growing U.S. military engagement in the Middle East.
For the Iron Triangle, the Islamic threat – very much like Communism during the Cold War – helps create expanding budget pressure for defense, covert operations, and the current favorite interest group, while allowing foreign players like the Israelis, the Indians, or the many 'Stans to highlight their own roles as Washington's regional surrogates. At the same time, neocon intellectuals and their adjunct brigades of "terrorism experts" have increased their access to governmental decision-making and the media and reaped other political and financial rewards.
The problem is that foreign policy paradigms are intellectual constructs that reflect the imaginations of their producers and the interests of their promoters, not necessarily reality. As a result, when policies formed on the basis of such conceptual frameworks are implemented, reality tends to bite. Hence, during the Cold War, the notion of a global and monolithic Soviet-led Communism made it inevitable that the U.S. would confuse the national interests that drove the policies of Vietnam, China, and Cuba with the global interests of the Soviet Union, leading to disastrous U.S. policy outcomes. Similarly, after the Soviet Union had vanished into thin air, Americans discovered that the collapse of Communism failed to unleash political and economic freedom in the former Soviet Empire. Hungary, Poland, and Czechoslovakia have acquired membership in the Western club, a reflection of their European political cultures, while many of the more backward 'Stans have embraced authoritarian political orders and statist economic systems. Russia seems to have chosen its own unique Third Way of state capitalism.
During the 1990s there was talk in Washington about the challenge the West was supposedly facing from a new East Asian model, represented by Japan and other emerging economies in the region. The champions of this model included Lee Kuan Yew, Singapore's leader, and Huntington, who embraced the idea of a "Sinic" civilization. They argued that unique East Asian Confucian values such as family, corporate, and national loyalty, the precedence of society's stability and prosperity over personal interests and freedoms, and a strong work ethic and thriftiness are why East Asians support authoritarian governments and the collective well-being rather than democracy, and why state-managed capitalist economies are more successful than Anglo-American ones. But the Asian financial crisis of the 1990s and the region's diverging political and economic systems (Singapore vs. Taiwan) have undermined the notion of a monolithic and successful Asian model, although China's dramatic economic rise may have revived it.
Similarly, the time has come to challenge the grand idea that the Muslim world, or the Middle East, or the Arab world – terms that seem interchangeable in the American media – has a unique and monolithic political and economic culture that makes it resistant to the West's modernizing effects. Note that here again, a multitude of labels, including democracy, capitalism, secularism, and feminism, are used in association with modernity and Westernization.
The proponents of this idea suggest that only an American-led effort to "export" democracy to that region of the world would bring about the necessary cultural, political, and economic reforms, making Middle Easterners/Arabs/Muslims "more like us." "Us" includes a not very monolithic West, with America's Deep South, where racist legislation predominated until the 1960s; Switzerland, where women were finally given the right to vote in 1971; the Anglo-Saxon model of capitalism; Germany's social capitalism; libertine Las Vegas and prudish Salt Lake City; and "law-abiding" Northern Europe and "corruption-infested" Southern Europe. And so it goes.
Hence, careful study of the cultural, political, and economic entity called the West reveals diverse and evolving attitudes about what it means to be a Westerner in the 21st century. This depends very much on values and interests, political principles, religious faiths, racial background, economic and social status, gender, education, sexual orientation, and even the political and the economic systems citizens embrace under certain environmental conditions and historical settings.
The fact that there isn't a one-dimensional Westerner makes it easier to understand why the one-dimensional Muslim or Arab doesn't exist either – except, that is, in the rival twin minds of radical Muslims who promote the ideology of al-Qaeda and the Christian Right Westerners who advance neoconservative dogma.
Notwithstanding Washington's propaganda regarding the global threat of Islamofascism, there are no common ideological foundations that unite the various strains of Islamic-influenced groups. The hugely divergent groups include the secular Arab nationalist movements of Ba'athism and Nasserism, combining socialist and fascist ideologies imported from Europe; Saudi Arabia's dominant and strict religious doctrine of Wahhabism; the revolutionary and millennialist dogma that guides the ruling Shi'ites in Iran and their Middle Eastern satellites; the Kemalist secular, republican, and statist tradition of Turkey, challenged now by modernist and pro-free-market and democratic Islamist parties that want Turkey to join the European Union; the tolerant and multicultural societies and capitalist economies of Indonesia and Malaysia; the radical Islamists of South and Central Asia; Westernized, multiethnic, and multireligious Lebanon; and, finally, Moammar Gaddafi's strict and somewhat bizarre form of the Islamic revolutionary system in Libya.
From this perspective, the Muslim world or the Middle East or the Arab Middle East is a mosaic of nation-states, ethnic groups, religious sects, and tribal groups, and a mishmash of political ideologies, economic systems, and cultural orientations. Some of these players have gradually joined the modern age and play an active role in the global economy: Malaysia, Indonesia, Turkey, and the UAE. Others have clearly remained on the margins of the recent economic and technological revolutions: Sudan, Mauritania, the Gaza Strip, and Yemen. Most Islam-dominated states find themselves somewhere in between: Egypt, Jordan, Saudi Arabia, and Libya.
There is no doubt that some parts of the Middle East are "notable for [their] disturbingly low profile in matters of economics and globalization," as Zachary Karabell, a Middle East expert and investment banker, put it. After all, the region, with its 350 million people, located at the intersection of Europe, Asia, and Africa and renowned for its historical legacy as the Cradle of Civilization as well as its huge energy resources, would be expected to be on par with other leading emerging economies. Its GDP is more than $900 billion a year. Its economic growth rate is about 5 percent per year.
The recent rise in energy prices has benefited some parts of the region, in particular the booming oil states in the Persian Gulf. In contrast to the oil explosion of the 1970s, these states are now investing their profits in the region, encouraging stock market growth, a surge in real estate developments, and the building of modern economic infrastructure that is helping to turn the UAE and other Persian Gulf states into centers of global commerce and finance like Singapore.
At the same time, there are signs that Arab economies that have been ruled for decades by military dictators – Egypt, Tunisia, Jordan, and now perhaps even Iraq, Syria, Palestine, and Libya – are taking important steps to reform their economies and open them to foreign investment and trade. Through the efforts of France and the rest of the EU, the creation of a European-Mediterranean economic club could accelerate this process and encourage the return of expatriates, including many professionals and businessmen, from the West.
In a way, Western powers have been responsible for the fact that military dictatorships retarding economic reform have controlled Mideast nation-states for so long. The geostrategic competition among outside powers, especially during the Cold War, encouraged the U.S. and its allies to exploit regional conflicts like the Arab-Israeli one and to provide military and economic support to local strongmen who were supposed to serve the outsiders' interests. But the time has come for Western powers, particularly the EU, to focus efforts on an end to the Arab-Israeli conflict and to create incentives for the region to open up to the global economy. This includes liberalizing their economies, reducing tariff barriers, and encouraging direct foreign investment.
While free trade is not a panacea, it could be a necessary building block for a more peaceful and prosperous Middle East. It could encourage the rise of a professional middle class with values more in tune with modern ideas and technologies. That effort could also help reduce poverty and economic inequality, and all of these could foster what Erik Gartzke, a Canadian political economist, describes as "capitalist peace."
Indeed, when globalization seems to be bypassing the Middle East, it's important to remember that the region was once a center of global commerce, and that its merchants and traders – Syrians, Lebanese, Jews, Armenians, Greeks, and others – helped spread the culture of business across the Mediterranean and throughout the world. That old Spirit of the Levant could be revived under these conditions of capitalist peace and help transform the Middle East from the global economy's backwater into one of its most powerful engines.
http://www.antiwar.com/hadar/?articleid=13718
November 5, 2008 by Leon Hadar, Antiwar.com
The 9/11 attacks and the ensuing "war on terror" have provided an opportunity for the U.S. foreign policy establishment, suffering from Enemy Deprivation Syndrome since the Cold War's end, to settle on a potential new bogeyman. It is radical Islam, or the "Green Peril" – a term I used in an article 15 years ago in Foreign Affairs, spring 1993. I challenged Samuel Huntington's clash-of-civilizations paradigm, which predicted that the West and Islam would engage in a long and bloody struggle over control of the Middle East, including its oil resources. The neoconservative ideologues who hijacked President George W. Bush's foreign policy apparatus have embraced Huntington's notion of a confrontation between Islam and the West. They see it as a way to justify American military power, to establish U.S. hegemony in the Middle East while imposing American values, the so-called "freedom agenda," to deal with the rise of Islamofascism, a Khomeini-like creature, armed with a radical ideology, equipped with nuclear weapons, and intent on launching a violent jihad against Western civilization.
According to this neoconservative dogma, which Bush has attempted to apply in Mesopotamia, a free and democratic Iraq would become a model for political and economic reform in the Arab world and the broader Middle East, and a series of mostly peaceful democratic revolutions would be unleashed from the Islamic frontiers of China, through Iran, Syria, Lebanon, and Palestine, to the Balkan borders. Hence, following the fall of Saddam Hussein, the Bush supporters recalling the dramatic changes in Eastern Europe after the Soviet Union's collapse expected the democratic dominoes to fall in Syria and Iran, while arguing that Lebanon's "Cedar Revolution" and the planned election in Palestine reflected the shape of things to come.At the same time, even the more liberal and internationalist foreign policy pundits like New York Times columnist Tom Friedman, critical of some aspects of the neoconservative agenda, insisted that the U.S. needed to launch a massive campaign to help modernize/democratize/liberalize/secularize the Arab Middle East and by extension the entire Muslim world, preferably through public diplomacy and education, and as a last resort, military force.
Indeed, against the backdrop of U.S. involvement in two major wars in the Middle East and the increasingly assertive position of Iran and its regional allies, a consensus is evolving among Washington's chattering class about the obligation to launch a Wilsonian campaign to bring the Middle East into the modern age, while extinguishing radical Islam. Washington's failure to do that would not only endanger Israel and other Mideast allies. With stratospheric energy prices igniting anxiety in Washington over access to Persian Gulf oil resources, the civilization-clash theory has acquired a economic veneer. Imagine if Osama bin Laden controlled the Middle East's energy assets, a.k.a. "Arab Oil," and used them as a "weapon" against the West!
New foreign policy paradigms, like new religions and political ideologies, are produced by intellectual entrepreneurs hoping to win status and influence over those seeking power. At the same time, politicians use these worldviews to mobilize public support as they lead the nation/people/class against an outside threat that allegedly challenges core interests and values. From this perspective, the new Islamic bogeyman promoted by entrepreneurial neocons has clearly served the interests of Washington's Iron Triangle of bureaucrats, lawmakers, and interest groups, as well as foreign players who have pressed for growing U.S. military engagement in the Middle East.For the Iron Triangle, the Islamic threat – very much like Communism during the Cold War – helps create expanding budget pressure for defense, covert operations, and the current favorite interest group, while allowing foreign players like the Israelis, the Indians, or the many 'Stans to highlight their own roles as Washington's regional surrogates. At the same, neocon intellectuals and their adjunct brigades of "terrorism experts" have increased their access to governmental decision-making and the media and reaped other political and financial rewards.
The problem is that foreign policy paradigms are intellectual constructs that reflect the imaginations of their producers and the interests of their promoters, not necessarily reality. As a result, when policies formed on the basis of such conceptual frameworks are implemented, reality tends to bite. Hence, during the Cold War, the notion of a global and monolithic Soviet-led Communism made it inevitable that the U.S. would confuse the national interests that drove the policies of Vietnam, China, and Cuba with the global interests of the Soviet Union, leading to disastrous U.S. policy outcomes. Similarly, after the Soviet Union had vanished into thin air, Americans discovered that the collapse of Communism failed to unleash political and economic freedom in the former Soviet Empire. Hungary, Poland, and Czechoslovakia have acquired membership in the Western club, a reflection of their European political cultures, while many of the more backward 'Stans have embraced authoritarian political orders and statist economic systems. Russia seems to have chosen its own unique Third Way of state capitalism.During the 1990s there was talk in Washington about the challenge the West was supposedly facing from a new East Asian model, represented by Japan and other emerging economies in the region. The champions of this model included Lee Kuan Yew, Singapore's leader, and Huntington, who embraced the idea of a "Sinic" civilization. They argued that unique East Asian Confucian values such as family, corporate, and national loyalty, the precedence of society's stability and prosperity over personal interests and freedoms, and a strong work ethic and thriftiness are why East Asians support authoritarian governments and the collective well-being rather than democracy, and why state-managed capitalist economies are more successful than Anglo-American ones. But the Asian financial crisis of the 1990s and the region's diverging political and economic systems (Singapore vs. Taiwan) have undermined the notion of a monolithic and successful Asian model, although China's dramatic economic rise may have revived it.
Similarly, the time has come to challenge the grand idea that the Muslim world, or the Middle East, or the Arab world – terms that seem interchangeable in the American media – has a unique and monolithic political and economic culture that makes it resistant to the West's modernizing effects. Note that here again, a multitude of labels, including democracy, capitalism, secularism, and feminism, are used in association with modernity and Westernization.The proponents of this idea suggest that only an American-led effort to "export" democracy to that region of the world would bring about the necessary cultural, political, and economic reforms, making Middle Easterners/Arabs/Muslims "more like us." "Us" includes a not very monolithic West, with America's Deep South, where racist legislation predominated until the 1960s; Switzerland, where women were finally given the right to vote in 1971; the Anglo-Saxon model of capitalism; Germany's social capitalism; libertine Las Vegas and prudish Salt Lake City; and "law-abiding" Northern Europe and "corruption-infested" Southern Europe. And so it goes.
Hence, careful study of the cultural, political, and economic entity called the West reveals diverse and evolving attitudes about what it means to be a Westerner in the 21st century. This depends very much on values and interests, political principles, religious faiths, racial background, economic and social status, gender, education, sexual orientation, and even the political and the economic systems citizens embrace under certain environmental conditions and historical settings.
The fact that there isn't a one-dimensional Westerner makes it easier to understand why the one-dimensional Muslim or Arab doesn't exist either – except, that is, in the rival twin minds of radical Muslims who promote the ideology of al-Qaeda and the Christian Right Westerners who advance neoconservative dogma.
Notwithstanding Washington's propaganda regarding the global threat of Islamofascism, there are no common ideological foundations that unite the various strains of Islamic-influenced groups. The hugely divergent groups include the secular Arab nationalist movements of Ba'athism and Nasserism, combining socialist and fascist ideologies imported from Europe; Saudi Arabia's dominant and strict religious doctrine of Wahhabism; the revolutionary and millennialist dogma that guides the ruling Shi'ites in Iran and their Middle Eastern satellites; the Kemalist secular, republican, and statist tradition of Turkey, challenged now by modernist and pro-free-market and democratic Islamist parties that want Turkey to join the European Union; the tolerant and multicultural societies and capitalist economies of Indonesia and Malaysia; the radical Islamists of South and Central Asia; Westernized, multiethnic, and multireligious Lebanon; and, finally, Moammar Gaddafi's strict and somewhat bizarre form of the Islamic revolutionary system in Libya.From this perspective, the Muslim world or the Middle East or the Arab Middle East is a mosaic of nation-states, ethnic groups, religious sects, and tribal groups, and a mishmash of political ideologies, economic systems, and cultural orientations. Some of these players have gradually joined the modern age and play an active role in the global economy: Malaysia, Indonesia, Turkey, and the UAE. Others have clearly remained on the margins of the recent economic and technological revolutions: Sudan, Mauritania, the Gaza Strip, and Yemen. Most Islam-dominated states find themselves somewhere in between: Egypt, Jordan, Saudi Arabia, and Libya.
There is no doubt that some parts of the Middle East are "notable for [their] disturbingly low profile in matters of economics and globalization," as Zachary Karabell, a Middle East expert and investment banker, put it. After all, the region, with its 350 million people, located at the intersection of Europe, Asia, and Africa and renowned for its historical legacy as the Cradle of Civilization as well as its huge energy resources, would be expected to be on par with other leading emerging economies. Its GDP is more than $900 billion a year. Its economic growth rate is about 5 percent per year.
The recent rise in energy prices has benefited some parts of the region, in particular the booming oil states in the Persian Gulf. In contrast to the oil explosion of the 1970s, these states are now investing their profits in the region, encouraging stock market growth, a surge in real estate developments, and the building of modern economic infrastructure that is helping to turn the UAE and other Persian Gulf states into centers of global commerce and finance like Singapore.
At the same time, there are signs that Arab economies that have been ruled for decades by military dictators – Egypt, Tunisia, Jordan, and now perhaps even Iraq, Syria, Palestine, and Libya – are taking important steps to reform their economies and open them to foreign investment and trade. Through the efforts of France and the rest of the EU, the creation of a European-Mediterranean economic club could accelerate this process and encourage the return of expatriates, including many professionals and businessmen, from the West.
In a way, Western powers have been responsible for the fact that military dictatorships retarding economic reform have controlled Mideast nation-states for so long. The geostrategic competition among outside powers, especially during the Cold War, encouraged the U.S. and its allies to exploit regional conflicts like the Arab-Israeli one and to provide military and economic support to local strongmen who were supposed to serve the outsiders' interests. But the time has come for Western powers, particularly the EU, to focus efforts on an end to the Arab-Israeli conflict and to create incentives for the region to open up to the global economy. That means the region's states liberalizing their economies, reducing tariff barriers, and encouraging foreign direct investment.
While free trade is not a panacea, it could be a necessary building block for a more peaceful and prosperous Middle East. It could encourage the rise of a professional middle class with values more in tune with modern ideas and technologies. That effort could also help reduce poverty and economic inequality, and all of these could foster what Erik Gartzke, a Canadian political economist, describes as "capitalist peace."
Indeed, when globalization seems to be bypassing the Middle East, it's important to remember that the region was once a center of global commerce, and that its merchants and traders – Syrians, Lebanese, Jews, Armenians, Greeks, and others – helped spread the culture of business across the Mediterranean and throughout the world. That old Spirit of the Levant could be revived under these conditions of capitalist peace and help transform the Middle East from the global economy's backwater into one of its most powerful engines.
http://www.antiwar.com/hadar/?articleid=13718
Tuesday, November 11, 2008
Message from Harry C Blaney III
Dear John,
I think we have an extraordinary opportunity to not only "rejuvenate" public diplomacy but also help reshape it for the 21st century....key will be first that we have Obama who will just in himself help that process with the world, and that is a great opportunity we should not waste and we need to follow that with a strong proposal for not only a new and unified (and funded) PD "agency," but also for a set of policies and programs that show a new face of America that is relevant to our current challenges and the landscape beyond our borders. But to make that possible all of those who care about this issue will need to work in a large measure of unity and express it in a strong and effective voice (this should naturally be what the profession is all about!).
First, we need to get to the transition team on State and national security; second, we need to have some public debate on this with op-eds and appearances on talk shows; and third, we need to work on the Hill. And it needs to start now. ...
Cheers, Harry
HARRY C. BLANEY, III
Senior Fellow, Center for International Policy
Thursday, November 6, 2008
Rejuvenate Public Diplomacy! Bring Culture Back to the White House
Note: An updated version of the below appeared in Common Dreams
The many reports that have appeared on the failures of American public diplomacy during the Bush years have stressed its limitations in the area of information and educational programs. What some call the third pillar of public diplomacy -- cultural programs -- has, however, been little mentioned.
This is not surprising. As I pointed out, not very originally, in a long essay, "Arts Diplomacy: The Neglected Aspect of Cultural Diplomacy," and in a recent book review on the arts and democracy, Americans are uneasy not only with federal government support for the arts, but with the very notion of "culture" (high culture with a "capital K") itself. Our Puritan roots -- and they are still alive and well -- underscore that overcoming the all-encompassing fear of predestined eternal damnation can be achieved (but not with certitude, which makes us work even harder) through "busy-ness" (business), not the "dangerously" hedonistic pursuit of pleasure (See, of course, Max Weber).
When we Americans do allow ourselves time for lassitude, we do so, as a rule, in a very planned, business-like manner (or totally "drop out" through drugs). Las Vegas, "sin city," is the best example of this pleasureless, high-strung "fun-fun-fun," which has little to do with the dolce far niente, a key -- frivolous "art for art's sake" types would say -- to savoring life in an aesthetic (meaningful?) way.
We Americans are known worldwide for our power to "entertain" (and Hollywood-style entertainment, it could be argued, is essentially about biological "relaxation" -- comparable to a satisfying bowel movement or "pigging-out" on junk food). Mindless blockbuster movies and vulgar pop "music" are among our most profitable exports.
Based on my experience in the Foreign Service (and, needless to say, personal biases), however, I have found that many foreigners, no matter what social class or education, don't understand why our official diplomatic missions show so little interest in presenting "serious" American culture to them (and of course "serious" depends on whom you're talking with).
Non-Americans are aware that the U.S. does have splendid orchestras, theaters, and museums. I don't want to suggest, mind you, that America is without culture; I simply want to say that "culture" does not play the central role in American life that it plays in other countries in continental Europe, Asia, and parts of the Middle East. An Italian government official said at a White House conference that her country's Ministry of Culture was as important in Italy as the Petroleum Ministry is in Saudi Arabia. What she said about Italy and the Saudis could, perhaps, apply to the U.S. as well.
Foreigners are struck by how little the world's most powerful nation does -- in an "official" way -- to display its art to interested persons. Interestingly but not surprisingly, when the USG does -- all too rarely -- fund cultural activities overseas, it likes to call them "workshops." That, of course, spares the State Department from being accused of frivolity by Congresspersons claiming to represent the hard-working taxpayer; artists are working, so everything's OK, no money is being wasted. Another favorite Foggy Bottom "cultural" program, by the way, is "arts management" -- and yes, that's very important business. Again, let's get 'em artists working -- i.e., producing as if in a corporation -- right?
During the past eight years, many abroad have considered America hostage to a crude & rude "cowboy president." Bush, despite his Yale and Harvard "education," has been seen as uncivilized (a word all too often used by critics of America, which is far too busy reinventing itself to be "civilized"), not only because of his barbaric, scorched-earth "shock and awe" policies (for which Americans will pay a price for many years) but also, I would suggest, because of the little respect he showed toward the fine arts (in Russia, there was a rumor that Bush, in a St. Petersburg palace, stuck chewing gum beneath the table at which he was sitting).
The favorite form of relaxation for this preppy cheerleader reformed alcoholic is physical exercise (of course, nothing wrong with that), an activity also much favored by his football-crazy Secretary of State (it was reported that a preferred topic of their discussions is sports -- as Americans were dying in Iraq? some may ask).
Among the many not-so-subliminal "W" messages, during the past eight miserable years, to the homeland, was the following: "I, your mission-accomplished commander in chief -- while engaging in my 'free time' in communications with the Almighty -- work too hard during the day to listen to music or read a book" (I personally wonder if he's ever really read the Bible, one of the great literary masterpieces). Say a "prayer" and in bed by 10 pm. No nonsense.
Under Bush, the presidency was totally divorced from culture; how many persons in the world associate "Dubya" with an exhibit or concert (or an experimental artistic project on the Internet)? Very few, if any; indeed one of Bush's "pleasures" was to show Saddam Hussein's handgun to White House visitors. In all fairness to the Bushes, First Lady Laura the Librarian showed an interest in books; and a picture of Bush that will always be remembered is his holding a book -- yes, Bush with a book!: My Pet Goat, in front of students at Emma E. Booker Elementary School in Sarasota, Florida, on September 11, 2001, as flames ravaged the World Trade Center in New York.
Given that Americans are reluctant to support their culture overseas -- Hey, why should we? We've got Hollywood doing that! Get real! We're in the middle of a hell of a recession! First things first! -- it cannot be expected that public diplomacy will receive the funding to significantly increase its cultural programs under the new administration (but then one never knows; miracles do happen).
Meanwhile, however, instead of waiting for miracles, Americans with an appreciation for the arts -- and such Americans, many of them, do exist, more than foreigners are willing to acknowledge, as snobs among the overseas elite (many of them sending their children to colleges in the U.S. no American can afford) hold their "culture" as a superiority point over the bons sauvages in the New World -- such Americans should encourage the new president, Barack Obama, to make the White House a more culture-friendly place. As was the case during the Kennedy years, the residence of our Chief Executive should be a venue for cultural activities of all types, ranging from concerts to poetry readings, to which foreigners (including, needless to say, visiting heads of state and other official representatives, including in the field of culture) would be invited.
Non-Americans felt that the Kennedys were "one of them" because of the presidential interest in the arts. No reason why the articulate Barack and his elegant spouse cannot show the same interest in the enchanting sides of life while they serve in the White House (and they do not necessarily have to be culture-vultures to do so; after all Ian Fleming was one of JFK's favorite authors).
Bringing culture to the White House would do much to demonstrate to the world that Americans can, indeed, value the arts. True, we'll never have a Ministry of Culture (nor should we), but if our new president (a published author who has a literary bent) takes the arts seriously (and I do not mean solemnly) and shares this appreciation publicly with his fellow citizens and other inhabitants of Mother Earth, it will help show our small planet that the cowboy presidency is indeed over and that after eight xenophobic years we Americans are again trying to connect with the rest of humankind -- a humankind defined, in many ways, by its greatest cultural achievements, of infinite variety throughout the world.
And, finally, how about starting off the new administration on the right cultural footing, by having a poet (say the Library of Congress's Poet Laureate, Kay Ryan, who has written about the "idle maunderings poets feed upon") read at the Obama inauguration, just as Robert Frost (ironically, something of a Puritan himself) did when John F. Kennedy assumed the presidency?
Saturday, November 1, 2008
Persuasion and Enchantment
Public Diplomacy can't make up its mind about what it's about -- and that's fine. I'm thinking of writing a piece on the tension between persuasion and enchantment as the "goal" of public diplomacy. Pentagon/State Dept green-shades types, corporate think-tank funders, beltway heavy-hitters, etc. (sorry for the generalization) are heavily into PD being a form of "persuasion" -- i.e., make 'em do what we want 'em to do (basically, propaganda). But it's not that simple. There's a role for enchantment (you want a definition? Find it on the Internet!) in American public diplomacy: How about enchantment instead of "shock and awe" or "convincing the natives" -- i.e., the rest of the world -- about the virtues of our so-called all-American values? Ben Franklin, considered by many our first "public" diplomat, would have understood this, but few of our "strategic communications" experts even consider it (enchantment): not "serious" and "quantifiable" enough for them. We've gotta win that war on terror right now ... (as if Franklin were not "fighting" the most important American war of them all -- the War of Independence).
"Ô douce volupté, sans qui dès notre enfance Le vivre et le mourir nous deviendraient égaux."
I do eat freedom fries.
"Ô douce volupté, sans qui dès notre enfance Le vivre et le mourir nous deviendraient égaux."
I do eat freedom fries.
Second Life
What I most like about Second Life is that, at least superficially, it is a bit like a Brazilian carnival (on the other hand, it also reminds me of the lifeless Giant supermarket where I shop, so different from open-air markets throughout the world, with their real-world smells and agitation). To my parochial, earth-bound taste, there are sinister sides to SL that are not (at least to yours truly) so appealing: a kind of fake-light, all-enclosingness to SL (like the neon shadeless illumination at Giant), as if SL were more "After" Life (without Virgil!) than a door-opening "Second" Life. And could there be anything better, for the sake of humanity (at this stage of human "evolution"), than the human presence in all its breathing, sun-lit (I must also add arctic-cooled, to be PC) humanity? ... but maybe we all are "emerging" beyond this all-too-human state, like the fish leaving the deep oceans to become a terrestrial lizard (do I have the right species?), as Darwin's revelations tell us (but then man makes evolution look ridiculous, to paraphrase the famous phrase; is there really an "evolution" -- or should it be a "devolution"?). Cynics would say: with whom would you rather spend an evening: a human being or a clam?
Wednesday, October 22, 2008
First Person Plural
November 2008
An evolving approach to the science of pleasure suggests that each of us contains multiple selves—all with different desires, and all fighting for control. If this is right, the pursuit of happiness becomes even trickier. Can one self “bind” another self if the two want different things? Are you always better off when a Good Self wins? And should outsiders, such as employers and policy makers, get into the fray?
by Paul Bloom, The Atlantic
Imagine a long, terrible dental procedure. You are rigid in the chair, hands clenched, soaked with sweat—and then the dentist leans over and says, “We’re done now. You can go home. But if you want, I’d be happy to top you off with a few minutes of mild pain.”
There is a good argument for saying “Yes. Please do.”
The psychologist and recent Nobel laureate Daniel Kahneman conducted a series of studies on the memory of painful events, such as colonoscopies. He discovered that when we think back on these events, we are influenced by the intensity of the endings, and so we have a more positive memory of an experience that ends with mild pain than of one that ends with extreme pain, even if the mild pain is added to the same amount of extreme pain. At the moment the dentist makes his offer, you would, of course, want to say no—but later on, you would be better off if you had said yes, because your overall memory of the event wouldn’t be as unpleasant.
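In rough numbers, the logic works out as follows -- a minimal sketch (not Kahneman's own procedure), assuming the "peak-end" summary these studies are usually taken to support, namely that remembered intensity is roughly the average of the worst moment and the final moment; the pain scores below are invented for illustration:

def remembered_pain(pain_by_minute):
    # Remembered intensity: average of the worst moment and the final moment.
    peak = max(pain_by_minute)
    end = pain_by_minute[-1]
    return (peak + end) / 2

procedure = [6, 7, 8, 8, 7]        # the original procedure, ending on a painful note
with_taper = procedure + [3, 2]    # same pain plus a few extra minutes of mild pain

print(sum(procedure), remembered_pain(procedure))    # total 36, remembered 7.5
print(sum(with_taper), remembered_pain(with_taper))  # total 41, remembered 5.0

More total pain, yet a milder memory of it -- which is why saying "yes, please do" can be the better bet.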
Such contradictions arise all the time. If you ask people which makes them happier, work or vacation, they will remind you that they work for money and spend the money on vacations. But if you give them a beeper that goes off at random times, and ask them to record their activity and mood each time they hear a beep, you’ll likely find that they are happier at work. Work is often engaging and social; vacations are often boring and stressful. Similarly, if you ask people about their greatest happiness in life, more than a third mention their children or grandchildren, but when they use a diary to record their happiness, it turns out that taking care of the kids is a downer—parenting ranks just a bit higher than housework, and falls below sex, socializing with friends, watching TV, praying, eating, and cooking.
The question “What makes people happy?” has been around forever, but there is a new approach to the science of pleasure, one that draws on recent work in psychology, philosophy, economics, neuroscience, and emerging fields such as neuroeconomics. This work has led to new ways—everything from beepers and diaries to brain scans—to explore the emotional value of different experiences, and has given us some surprising insights about the conditions that result in satisfaction.
But what’s more exciting, I think, is the emergence of a different perspective on happiness itself. We used to think that the hard part of the question “How can I be happy?” had to do with nailing down the definition of happy. But it may have more to do with the definition of I. Many researchers now believe, to varying degrees, that each of us is a community of competing selves, with the happiness of one often causing the misery of another. This theory might explain certain puzzles of everyday life, such as why addictions and compulsions are so hard to shake off, and why we insist on spending so much of our lives in worlds—like TV shows and novels and virtual-reality experiences—that don’t actually exist. And it provides a useful framework for thinking about the increasingly popular position that people would be better off if governments and businesses helped them inhibit certain gut feelings and emotional reactions.
Like any organ, the brain consists of large parts (such as the hippocampus and the cortex) that are made up of small parts (such as “maps” in the visual cortex), which themselves are made up of smaller parts, until you get to neurons, billions of them, whose orchestrated firing is the stuff of thought. The neurons are made up of parts like axons and dendrites, which are made up of smaller parts like terminal buttons and receptor sites, which are made up of molecules, and so on.
This hierarchical structure makes possible the research programs of psychology and neuroscience. The idea is that interesting properties of the whole (intelligence, decision-making, emotions, moral sensibility) can be understood in terms of the interaction of components that themselves lack these properties. This is how computers work; there is every reason to believe that this is how we work, too.
But there is no consensus about the broader implications of this scientific approach. Some scholars argue that although the brain might contain neural subsystems, or modules, specialized for tasks like recognizing faces and understanding language, it also contains a part that constitutes a person, a self: the chief executive of all the subsystems. As the philosopher Jerry Fodor once put it, “If, in short, there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.”
More-radical scholars insist that an inherent clash exists between science and our long-held conceptions about consciousness and moral agency: if you accept that our brains are a myriad of smaller components, you must reject such notions as character, praise, blame, and free will. Perhaps the very notion that there are such things as selves—individuals who persist over time—needs to be rejected as well.
The view I’m interested in falls between these extremes. It is conservative in that it accepts that brains give rise to selves that last over time, plan for the future, and so on. But it is radical in that it gives up the idea that there is just one self per head. The idea is that instead, within each brain, different selves are continually popping in and out of existence. They have different desires, and they fight for control—bargaining with, deceiving, and plotting against one another.
The notion of different selves within a single person is not new. It can be found in Plato, and it was nicely articulated by the 18th-century Scottish philosopher David Hume, who wrote, “I cannot compare the soul more properly to any thing than to a republic or commonwealth, in which the several members are united by the reciprocal ties of government and subordination.” Walt Whitman gave us a pithier version: “I am large, I contain multitudes.”
The economist Thomas Schelling, another Nobel laureate, illustrates the concept with a simple story: “As a boy I saw a movie about Admiral Byrd’s Antarctic expedition and was impressed that as a boy he had gone outdoors in shirtsleeves to toughen himself against the cold. I resolved to go to bed at night with one blanket too few. That decision to go to bed minus one blanket was made by a warm boy; another boy awoke cold in the night, too cold to retrieve the blanket … and resolving to restore it tomorrow. The next bedtime it was the warm boy again, dreaming of Antarctica, who got to make the decision, and he always did it again.”
Examples abound in our own lives. Late at night, when deciding not to bother setting up the coffee machine for the next morning, I sometimes think of the man who will wake up as a different person, and wonder, What did he ever do for me? When I get up and there’s no coffee ready, I curse the lazy bastard who shirked his duties the night before.
But anyone tempted by this theory has to admit just how wrong it feels, how poorly it fits with most of our experience. In the main, we do think of ourselves as singular individuals who persist over time. If I were to learn that I was going to be tortured tomorrow morning, my reaction would be terror, not sympathy for the poor guy who will be living in my body then. If I do something terrible now, I will later feel guilt and shame, not anger at some other person.
It could hardly be otherwise. Our brains have evolved to protect our bodies and guide them to reproduce, hence our minds must be sensitive to maintaining the needs of the continuing body—my children today will be my children tomorrow; if you wronged me yesterday, I should be wary of you today. Society and human relationships would be impossible without this form of continuity. Anyone who could convince himself that the person who will wake up in his bed tomorrow is really someone different would lack the capacity for sustained self-interest; he would feel no long-term guilt, love, shame, or pride.
The multiplicity of selves becomes more intuitive as the time span increases. Social psychologists have found certain differences in how we think of ourselves versus how we think of other people—for instance, we tend to attribute our own bad behavior to unfortunate circumstances, and the bad behavior of others to their nature. But these biases diminish when we think of distant past selves or distant future selves; we see such selves the way we see other people. Although it might be hard to think about the person who will occupy your body tomorrow morning as someone other than you, it is not hard at all to think that way about the person who will occupy your body 20 years from now. This may be one reason why many young people are indifferent about saving for retirement; they feel as if they would be giving up their money to an elderly stranger.
One can see a version of clashing multiple selves in the mental illness known as dissociative-identity disorder, which used to be called multiple-personality disorder. This is familiar to everyone from the dramatic scenes in movies in which an actor is one person, and then he or she contorts or coughs or shakes the head, and—boom!—another person comes into existence. (My own favorite is Edward Norton in Primal Fear, although—spoiler alert—he turns out in the end to be faking.)
Dissociative-identity disorder is controversial. It used to be rarely diagnosed, then the number of reported cases spiked dramatically in the 1980s, particularly in North America. The spike has many possible explanations: the disorder was first included as a specific category in the 1980 version of the Diagnostic and Statistical Manual of Mental Disorders, just as an influential set of case studies of multiple personalities was published. And increased popular interest was fueled by the 1973 novel Sybil and its 1976 movie adaptation, which starred Sally Field as a woman with 16 different personalities.
Some psychologists believe that this spike was not the result of better diagnosis. Rather, they say it stemmed in part from therapists who inadvertently persuaded their patients to create these distinct selves, often through role-playing and hypnosis. Recent years have seen a backlash, and some people diagnosed with the disorder have sued their therapists. One woman got a settlement of more than $2 million after alleging that her psychotherapist had used suggestive memory “recovery” techniques to convince her that she had more than 120 personalities, including children, angels, and a duck.
Regardless of the cause of the spike, considerable evidence, including recent brain-imaging studies, suggests that some people really do shift from one self to another, and that the selves have different memories and personalities. In one study, women who had been diagnosed with dissociative-identity disorder and claimed to be capable of shifting at will from one self to another listened to recordings while in a PET scanner. When the recordings told of a woman’s own traumatic experience, the parts of the brain corresponding to autobiographic memory became active—but only when she had shifted to the self who had endured that traumatic experience. If she was in another self, different parts of the brain became active and showed a pattern of neural activity corresponding to hearing about the experience of a stranger.
Many psychologists and philosophers have argued that the disorder should be understood as an extreme version of normal multiplicity. Take memory. One characteristic of dissociative-identity disorder is interpersonality amnesia—one self doesn’t have access to the memories of the other selves. But memory is notoriously situation-dependent even for normal people—remembering something is easiest while you are in the same state in which you originally experienced it. Students do better when they are tested in the room in which they learned the material; someone who learned something while he was angry is better at remembering that information when he is angry again; the experience of one’s drunken self is more accessible to the drunk self than to the sober self. What happens in Vegas stays in Vegas.
Personality also changes according to situation; even the most thuggish teenager is not the same around his buddies as he is when having tea with Grandma. Our normal situation dependence is most evident when it comes to bad behavior. In the 1920s, Yale psychologists tested more than 10,000 children, giving them a battery of aptitude tests and putting them in morally dicey situations, such as having an opportunity to cheat on a test. They found a striking lack of consistency. A child’s propensity to cheat at sports, for instance, had little to do with whether he or she would lie to a teacher.
More-recent experiments with adults find that subtle cues can have a surprising effect on our actions. Good smells, such as fresh bread, make people kinder and more likely to help a stranger; bad smells, like farts (the experimenters used fart spray from a novelty store), make people more judgmental. If you ask people to unscramble sentences, they tend to be more polite, minutes later, if the sentences contain positive words like honor rather than negative words like bluntly. These findings are in line with a set of classic experiments conducted by Stanley Milgram in the 1960s—too unethical to do now—showing that normal people could be induced to give electric shocks to a stranger if they were told to do so by someone they believed was an authoritative scientist. All of these studies support the view that each of us contains many selves—some violent, some submissive, some thoughtful—and that different selves can be brought to the fore by different situations.
The population of a single head is not fixed; we can add more selves. In fact, the capacity to spawn multiple selves is central to pleasure. After all, the most common leisure activity is not sex, eating, drinking, drug use, socializing, sports, or being with the ones we love. It is, by a long shot, participating in experiences we know are not real—reading novels, watching movies and TV, daydreaming, and so forth.
Enjoying fiction requires a shift in selfhood. You give up your own identity and try on the identities of other people, adopting their perspectives so as to share their experiences. This allows us to enjoy fictional events that would shock and sadden us in real life. When Tony Soprano kills someone, you respond differently than you would to a real murder; you accept and adopt some of the moral premises of the Soprano universe. You become, if just for a moment, Tony Soprano.
Some imaginative pleasures involve the creation of alternative selves. Sometimes we interact with these selves as if they were other people. This might sound terrible, and it can be, as when schizophrenics hear voices that seem to come from outside themselves. But the usual version is harmless. In children, we describe these alternative selves as imaginary friends. The psychologist Marjorie Taylor, who has studied this phenomenon more than anyone, points out three things. First, contrary to some stereotypes, children who have imaginary friends are not losers, loners, or borderline psychotics. If anything, they are slightly more socially adept than other children. Second, the children are in no way deluded: Taylor has rarely met a child who wasn’t fully aware that the character lived only in his or her own imagination. And third, the imaginary friends are genuinely different selves. They often have different desires, interests, and needs from the child’s; they can be unruly, and can frustrate the child. The writer Adam Gopnik wrote about his young daughter’s imaginary companion, Charlie Ravioli, a hip New Yorker whose defining quality was that he was always too busy to play with her.
Long-term imaginary companions are unusual in adults, but they do exist—Taylor finds that many authors who write books with recurring characters claim, fairly convincingly, that these characters have wills of their own and have some say in their fate. But it is not unusual to purposefully create another person in your head to interact with on a short-term basis. Much of daydreaming involves conjuring up people, sometimes as mere physical props (as when daydreaming about sports or sex), but usually as social beings. All of us from time to time hold conversations with people who are not actually there.
Sometimes we get pleasure from sampling alternative selves. Again, you can see the phenomenon in young children, who get a kick out of temporarily adopting the identity of a soldier or a lion. Adults get the same sort of kick; exploring alternative identities seems to be what the Internet was invented for. The sociologist Sherry Turkle has found that people commonly create avatars so as to explore their options in a relatively safe environment. She describes how one 16-year-old girl with an abusive father tried out multiple characters online—a 16-year-old boy, a stronger, more assertive girl—to try to work out what to do in the real world. But often the shift in identity is purely for pleasure. A man can have an alternate identity as a woman; a heterosexual can explore homosexuality; a shy person can try being the life of the party.
Online alternative worlds such as World of Warcraft and Second Life are growing in popularity, and some people now spend more time online than in the real world. One psychologist I know asked a research assistant to try out one of these worlds and report on what it is like and how people behave there. The young woman never came back—she preferred the virtual life to the real one.
Life would be swell if all the selves inhabiting a single mind worked as a team, pulling together for a common goal. But they clash, and sometimes this gives rise to what we call addictions and compulsions.
This is not the traditional view of human frailty. The human condition has long been seen as a battle of good versus evil, reason versus emotion, will versus appetite, superego versus id. The iconic image, from a million movies and cartoons, is of a person with an angel over one shoulder and the devil over the other.
The alternative view keeps the angel and the devil, but casts aside the person in between. The competing selves are not over your shoulder, but inside your head: the angel and the devil, the self who wants to be slim and the one who wants to eat the cake, all exist within one person. Drawing on the research of the psychiatrist George Ainslie, we can make sense of the interaction of these selves by plotting their relative strengths over time, starting with one (the cake eater) being weaker than the other (the dieter). For most of the day, the dieter hums along at his regular power (a 5 on a scale of 1 to 10, say), motivated by the long-term goal of weight loss, and is stronger than the cake eater (a 2). Your consciousness tracks whichever self is winning, so you are deciding not to eat the cake. But as you get closer and closer to the cake, the power of the cake eater rises (3 … 4 …), the lines cross, the cake eater takes over (6), and that becomes the conscious you; at this point, you decide to eat the cake. It’s as if a baton is passed from one self to another.
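The baton-pass can be put in toy numbers -- a minimal sketch of the picture just described, assuming the dieter's strength stays flat while the cake eater's strength rises with proximity to the cake (the particular values are invented):

DIETER = 5  # the dieter's strength stays flat

def cake_eater(distance_to_cake):
    # The cake eater's strength climbs from 2 (far away) toward 6 (cake in hand).
    return min(6, 2 + (4 - distance_to_cake))

for distance in range(4, -1, -1):  # walking toward the cake
    eater = cake_eater(distance)
    in_charge = "dieter" if DIETER >= eater else "cake eater"
    print(f"distance {distance}: dieter={DIETER}, cake eater={eater} -> {in_charge} decides")

Run it and you watch the lines cross right at the cake: the dieter decides at every step until the cake eater reaches 6, and at that moment the conscious "you" is the one reaching for the plate.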
Sometimes one self can predict that it will later be dominated by another self, and it can act to block the crossing—an act known as self-binding, which Thomas Schelling and the philosopher Jon Elster have explored in detail. Self-binding means that the dominant self schemes against the person it might potentially become—the 5 acts to keep the 2 from becoming a 6. Ulysses wanted to hear the song of the sirens, but he knew it would compel him to walk off the boat and into the sea. So he had his sailors tie him to the mast. Dieters buy food in small portions so they won’t overeat later on; smokers trying to quit tell their friends never to give them cigarettes, no matter how much they may later beg. In her book on gluttony, Francine Prose tells of women who phone hotels where they are going to stay to demand a room with an empty minibar. An alarm clock now for sale rolls away as it sounds the alarm; to shut it off, you have to get up out of bed and find the damn thing.
You might also triumph over your future self by feeding it incomplete or incorrect information. If you’re afraid of panicking in a certain situation, you might deny yourself relevant knowledge—you don’t look down when you’re on the tightrope; you don’t check your stocks if you’re afraid you’ll sell at the first sign of a downturn. Chronically late? Set your watch ahead. Prone to jealousy? Avoid conversations with your spouse about which of your friends is the sexiest.
Working with the psychologists Frank Keil, of Yale University, and Katherine Choe, now at Goucher College, I recently studied young children’s understanding of self-binding, by showing them short movies of people engaged in self-binding and other behaviors and asking them to explain what was going on. The children, aged 4 to 7, easily grasped that someone might put a video game on a high shelf so that another person couldn’t get it. But self-binding confused them: they were mystified when people put away the game so that they themselves couldn’t get hold of it.
But even though young children don’t understand self-binding, they are capable of doing it. In a classic study from the 1970s, psychologists offered children a marshmallow and told them they could either have it right away, or get more if they waited for a few minutes. As you would expect, waiting proved difficult (and performance on this task is a good predictor, much later on, of such things as SAT scores and drug problems), but some children managed it by self-binding—averting their eyes or covering the marshmallow so as to subvert their temptation-prone self for the greater pleasure of the long-term self.
Even pigeons can self-bind. Ainslie conducted an experiment in which he placed pigeons in front of a glowing red key. If they pecked it immediately, they got a small reward right away, but if they waited until the key went dark, they got a larger one. They almost always went for the quick reward—really, it’s hard for a pigeon to restrain itself. But there was a wrinkle: the key glowed green for several seconds before turning red. Pecking the key while it was green would prevent it from turning red and providing the option of the small, quick reward. Some of the pigeons learned to use the green key to help themselves hold out for the big reward, just as a person might put temptation out of reach.
For adult humans, though, the problem is that the self you are trying to bind has resources of its own. Fighting your Bad Self is serious business; whole sections of bookstores are devoted to it. We bribe and threaten and cajole, just as if we were dealing with an addicted friend. Vague commitments like “I promise to drink only on special occasions” often fail, because the Bad Self can weasel out of them, rationalizing that it’s always a special occasion. Bright-line rules like “I will never play video games again” are also vulnerable, because the Bad Self can argue that these are unreasonable—and, worse, once you slip, it can argue that the plan is unworkable. For every argument made by the dieting self—“This diet is really working” or “I really need to lose weight”—the cake eater can respond with another—“This will never work” or “I’m too vain” or “You only live once.” Your long-term self reads voraciously about the benefits of regular exercise and healthy eating; the cake eater prefers articles showing that obesity isn’t really such a problem. It’s not that the flesh is weak; sometimes the flesh is pretty damn smart.
It used to be simpler. According to the traditional view, a single, long-term-planning self—a you—battles against passions, compulsions, impulses, and addictions. We have no problem choosing, as individuals or as a society, who should win, because only one interest is at stake—one person is at war with his or her desires. And while knowing the right thing to do can be terribly difficult, the decision is still based on the rational thoughts of a rational being.
Seeing things this way means we are often mistaken about what makes us happy. Consider again what happens when we have children. Pretty much no matter how you test it, children make us less happy. The evidence isn’t just from diary studies; surveys of marital satisfaction show that couples tend to start off happy, get less happy when they have kids, and become happy again only once the kids leave the house. As the psychologist Daniel Gilbert puts it, “Despite what we read in the popular press, the only known symptom of ‘empty-nest syndrome’ is increased smiling.” So why do people believe that children give them so much pleasure? Gilbert sees it as an illusion, a failure of affective forecasting. Society’s needs are served when people believe that having children is a good thing, so we are deluged with images and stories about how wonderful kids are. We think they make us happy, though they actually don’t.
The theory of multiple selves offers a different perspective. If struggles over happiness involve clashes between distinct internal selves, we can no longer be so sure that our conflicting judgments over time reflect irrationality or error. There is no inconsistency between someone’s anxiously hiking through the Amazon wishing she were home in a warm bath and, weeks later, feeling good about being the sort of adventurous soul who goes into the rain forest. In an important sense, the person in the Amazon is not the same person as the one back home safely recalling the experience, just as the person who honestly believes that his children are the great joy in his life might not be the same person who finds them terribly annoying when he’s actually with them.
Even if each of us is a community, all the members shouldn’t get equal say. Some members are best thought of as small-minded children—and we don’t give 6-year-olds the right to vote. Just as in society, the adults within us have the right—indeed, the obligation—to rein in the children. In fact, talk of “children” versus “adults” within an individual isn’t only a metaphor; one reason to favor the longer-term self is that it really is older and more experienced. We typically spend more of our lives not wanting to snort coke, smoke, or overeat than we spend wanting to do these things; this means that the long-term self has more time to reflect. It is less selfish; it talks to other people, reads books, and so on. And it tries to control the short-term selves. It joins Alcoholics Anonymous, buys the runaway clock, and sees the therapist. As Jon Elster observes, the long-term, sober self is a truer self, because it tries to bind the short-term, drunk self. The long-term, sober self is the adult.
Governments and businesses, recognizing these tendencies, have started offering self-binding schemes. Thousands of compulsive gamblers in Missouri have chosen to sign contracts stating that if they ever enter a casino, anything they win will be confiscated by the state, and they could be arrested. Some of my colleagues at Yale have developed an online service whereby you set a goal and agree to put up a certain amount of money to try to ensure that you meet it. If you succeed, you pay nothing; if you fail, the money is given to charity—or, in a clever twist, to an organization you oppose. A liberal trying to lose a pound a week, for instance, can punish herself for missing her goal by having $100 donated to the George W. Bush Presidential Library.
The natural extension of this type of self-binding is what the economist Richard Thaler and the legal scholar Cass Sunstein describe as “libertarian paternalism”—a movement to engineer situations so that people retain their choices (the libertarian part), but in such a way that these choices are biased to favor people’s better selves (the paternalism part). For instance, many people fail to save enough money for the future; they find it too confusing or onerous to choose a retirement plan. Thaler and Sunstein suggest that the default be switched so that employees would automatically be enrolled in a savings plan, and would have to take action to opt out. A second example concerns the process of organ donation. When asked, most Americans say that they would wish to donate their organs if they were to become brain-dead from an accident—but only about half actually have their driver’s license marked for donation, or carry an organ-donor card. Thaler and Sunstein have discussed a different idea: people could easily opt out of being a donor, but if they do nothing, they are assumed to consent. Such proposals are not merely academic musings; they are starting to influence law and policy, and might do so increasingly in the future. Both Thaler and Sunstein act as advisers to politicians and policy makers, most notably Barack Obama.
First Person Plural
By Paul Bloom, The Atlantic
An evolving approach to the science of pleasure suggests that each of us contains multiple selves—all with different desires, and all fighting for control. If this is right, the pursuit of happiness becomes even trickier. Can one self “bind” another self if the two want different things? Are you always better off when a Good Self wins? And should outsiders, such as employers and policy makers, get into the fray?
Imagine a long, terrible dental procedure. You are rigid in the chair, hands clenched, soaked with sweat—and then the dentist leans over and says, “We’re done now. You can go home. But if you want, I’d be happy to top you off with a few minutes of mild pain.”
There is a good argument for saying “Yes. Please do.”
The psychologist and recent Nobel laureate Daniel Kahneman conducted a series of studies on the memory of painful events, such as colonoscopies. He discovered that when we think back on these events, we are influenced by the intensity of the endings, and so we have a more positive memory of an experience that ends with mild pain than of one that ends with extreme pain, even if the mild pain is added to the same amount of extreme pain. At the moment the dentist makes his offer, you would, of course, want to say no—but later on, you would be better off if you had said yes, because your overall memory of the event wouldn’t be as unpleasant.
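To make the arithmetic behind that finding concrete, here is a minimal sketch in Python. The pain ratings are invented, and the scoring simply averages the peak and the final moment, in the spirit of the “peak-end” pattern associated with Kahneman’s studies; it is an illustration, not his actual analysis.

def remembered_pain(ratings):
    # Approximate the retrospective rating by averaging the worst moment
    # and the last moment, ignoring how long the procedure lasted.
    return (max(ratings) + ratings[-1]) / 2

procedure = [6, 8, 9, 9, 8]               # ends while the pain is still high
procedure_plus_tail = procedure + [3, 2]  # same pain, plus a mild tail

print(remembered_pain(procedure))            # 8.5 -> remembered as worse
print(remembered_pain(procedure_plus_tail))  # 5.5 -> remembered as milder
print(sum(procedure), sum(procedure_plus_tail))  # 40 vs. 45 total pain

The second procedure contains strictly more pain, yet on this scoring it leaves the milder memory, which is why saying yes to the dentist can be the better long-run bet.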
Such contradictions arise all the time. If you ask people which makes them happier, work or vacation, they will remind you that they work for money and spend the money on vacations. But if you give them a beeper that goes off at random times, and ask them to record their activity and mood each time they hear a beep, you’ll likely find that they are happier at work. Work is often engaging and social; vacations are often boring and stressful. Similarly, if you ask people about their greatest happiness in life, more than a third mention their children or grandchildren, but when they use a diary to record their happiness, it turns out that taking care of the kids is a downer—parenting ranks just a bit higher than housework, and falls below sex, socializing with friends, watching TV, praying, eating, and cooking.
The question “What makes people happy?” has been around forever, but there is a new approach to the science of pleasure, one that draws on recent work in psychology, philosophy, economics, neuroscience, and emerging fields such as neuroeconomics. This work has led to new ways—everything from beepers and diaries to brain scans—to explore the emotional value of different experiences, and has given us some surprising insights about the conditions that result in satisfaction.
But what’s more exciting, I think, is the emergence of a different perspective on happiness itself. We used to think that the hard part of the question “How can I be happy?” had to do with nailing down the definition of happy. But it may have more to do with the definition of I. Many researchers now believe, to varying degrees, that each of us is a community of competing selves, with the happiness of one often causing the misery of another. This theory might explain certain puzzles of everyday life, such as why addictions and compulsions are so hard to shake off, and why we insist on spending so much of our lives in worlds—like TV shows and novels and virtual-reality experiences—that don’t actually exist. And it provides a useful framework for thinking about the increasingly popular position that people would be better off if governments and businesses helped them inhibit certain gut feelings and emotional reactions.
Like any organ, the brain consists of large parts (such as the hippocampus and the cortex) that are made up of small parts (such as “maps” in the visual cortex), which themselves are made up of smaller parts, until you get to neurons, billions of them, whose orchestrated firing is the stuff of thought. The neurons are made up of parts like axons and dendrites, which are made up of smaller parts like terminal buttons and receptor sites, which are made up of molecules, and so on.
This hierarchical structure makes possible the research programs of psychology and neuroscience. The idea is that interesting properties of the whole (intelligence, decision-making, emotions, moral sensibility) can be understood in terms of the interaction of components that themselves lack these properties. This is how computers work; there is every reason to believe that this is how we work, too.
But there is no consensus about the broader implications of this scientific approach. Some scholars argue that although the brain might contain neural subsystems, or modules, specialized for tasks like recognizing faces and understanding language, it also contains a part that constitutes a person, a self: the chief executive of all the subsystems. As the philosopher Jerry Fodor once put it, “If, in short, there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.”
More-radical scholars insist that an inherent clash exists between science and our long-held conceptions about consciousness and moral agency: if you accept that our brains are a myriad of smaller components, you must reject such notions as character, praise, blame, and free will. Perhaps the very notion that there are such things as selves—individuals who persist over time—needs to be rejected as well.
The view I’m interested in falls between these extremes. It is conservative in that it accepts that brains give rise to selves that last over time, plan for the future, and so on. But it is radical in that it gives up the idea that there is just one self per head. The idea is that instead, within each brain, different selves are continually popping in and out of existence. They have different desires, and they fight for control—bargaining with, deceiving, and plotting against one another.
The notion of different selves within a single person is not new. It can be found in Plato, and it was nicely articulated by the 18th-century Scottish philosopher David Hume, who wrote, “I cannot compare the soul more properly to any thing than to a republic or commonwealth, in which the several members are united by the reciprocal ties of government and subordination.” Walt Whitman gave us a pithier version: “I am large, I contain multitudes.”
The economist Thomas Schelling, another Nobel laureate, illustrates the concept with a simple story: “As a boy I saw a movie about Admiral Byrd’s Antarctic expedition and was impressed that as a boy he had gone outdoors in shirtsleeves to toughen himself against the cold. I resolved to go to bed at night with one blanket too few. That decision to go to bed minus one blanket was made by a warm boy; another boy awoke cold in the night, too cold to retrieve the blanket … and resolving to restore it tomorrow. The next bedtime it was the warm boy again, dreaming of Antarctica, who got to make the decision, and he always did it again.”
Examples abound in our own lives. Late at night, when deciding not to bother setting up the coffee machine for the next morning, I sometimes think of the man who will wake up as a different person, and wonder, What did he ever do for me? When I get up and there’s no coffee ready, I curse the lazy bastard who shirked his duties the night before.
But anyone tempted by this theory has to admit just how wrong it feels, how poorly it fits with most of our experience. In the main, we do think of ourselves as singular individuals who persist over time. If I were to learn that I was going to be tortured tomorrow morning, my reaction would be terror, not sympathy for the poor guy who will be living in my body then. If I do something terrible now, I will later feel guilt and shame, not anger at some other person.
It could hardly be otherwise. Our brains have evolved to protect our bodies and guide them to reproduce, hence our minds must be sensitive to maintaining the needs of the continuing body—my children today will be my children tomorrow; if you wronged me yesterday, I should be wary of you today. Society and human relationships would be impossible without this form of continuity. Anyone who could convince himself that the person who will wake up in his bed tomorrow is really someone different would lack the capacity for sustained self-interest; he would feel no long-term guilt, love, shame, or pride.
The multiplicity of selves becomes more intuitive as the time span increases. Social psychologists have found certain differences in how we think of ourselves versus how we think of other people—for instance, we tend to attribute our own bad behavior to unfortunate circumstances, and the bad behavior of others to their nature. But these biases diminish when we think of distant past selves or distant future selves; we see such selves the way we see other people. Although it might be hard to think about the person who will occupy your body tomorrow morning as someone other than you, it is not hard at all to think that way about the person who will occupy your body 20 years from now. This may be one reason why many young people are indifferent about saving for retirement; they feel as if they would be giving up their money to an elderly stranger.
One can see a version of clashing multiple selves in the mental illness known as dissociative-identity disorder, which used to be called multiple-personality disorder. This is familiar to everyone from the dramatic scenes in movies in which an actor is one person, and then he or she contorts or coughs or shakes the head, and—boom!—another person comes into existence. (My own favorite is Edward Norton in Primal Fear, although—spoiler alert—he turns out in the end to be faking.)
Dissociative-identity disorder is controversial. It used to be rarely diagnosed, then the number of reported cases spiked dramatically in the 1980s, particularly in North America. The spike has many possible explanations: the disorder was first included as a specific category in the 1980 version of the Diagnostic and Statistical Manual of Mental Disorders, just as an influential set of case studies of multiple personalities was published. And increased popular interest was fueled by the 1973 novel Sybil and its 1976 movie adaptation, which starred Sally Field as a woman with 16 different personalities.
Some psychologists believe that this spike was not the result of better diagnosis. Rather, they say it stemmed in part from therapists who inadvertently persuaded their patients to create these distinct selves, often through role-playing and hypnosis. Recent years have seen a backlash, and some people diagnosed with the disorder have sued their therapists. One woman got a settlement of more than $2 million after alleging that her psychotherapist had used suggestive memory “recovery” techniques to convince her that she had more than 120 personalities, including children, angels, and a duck.
Regardless of the cause of the spike, considerable evidence, including recent brain-imaging studies, suggests that some people really do shift from one self to another, and that the selves have different memories and personalities. In one study, women who had been diagnosed with dissociative-identity disorder and claimed to be capable of shifting at will from one self to another listened to recordings while in a PET scanner. When the recordings told of a woman’s own traumatic experience, the parts of the brain corresponding to autobiographic memory became active—but only when she had shifted to the self who had endured that traumatic experience. If she was in another self, different parts of the brain became active and showed a pattern of neural activity corresponding to hearing about the experience of a stranger.
Many psychologists and philosophers have argued that the disorder should be understood as an extreme version of normal multiplicity. Take memory. One characteristic of dissociative-identity disorder is interpersonality amnesia—one self doesn’t have access to the memories of the other selves. But memory is notoriously situation-dependent even for normal people—remembering something is easiest while you are in the same state in which you originally experienced it. Students do better when they are tested in the room in which they learned the material; someone who learned something while he was angry is better at remembering that information when he is angry again; the experience of one’s drunken self is more accessible to the drunk self than to the sober self. What happens in Vegas stays in Vegas.
Personality also changes according to situation; even the most thuggish teenager is not the same around his buddies as he is when having tea with Grandma. Our normal situation dependence is most evident when it comes to bad behavior. In the 1920s, Yale psychologists tested more than 10,000 children, giving them a battery of aptitude tests and putting them in morally dicey situations, such as having an opportunity to cheat on a test. They found a striking lack of consistency. A child’s propensity to cheat at sports, for instance, had little to do with whether he or she would lie to a teacher.
More-recent experiments with adults find that subtle cues can have a surprising effect on our actions. Good smells, such as fresh bread, make people kinder and more likely to help a stranger; bad smells, like farts (the experimenters used fart spray from a novelty store), make people more judgmental. If you ask people to unscramble sentences, they tend to be more polite, minutes later, if the sentences contain positive words like honor rather than negative words like bluntly. These findings are in line with a set of classic experiments conducted by Stanley Milgram in the 1960s—too unethical to do now—showing that normal people could be induced to give electric shocks to a stranger if they were told to do so by someone they believed was an authoritative scientist. All of these studies support the view that each of us contains many selves—some violent, some submissive, some thoughtful—and that different selves can be brought to the fore by different situations.
The population of a single head is not fixed; we can add more selves. In fact, the capacity to spawn multiple selves is central to pleasure. After all, the most common leisure activity is not sex, eating, drinking, drug use, socializing, sports, or being with the ones we love. It is, by a long shot, participating in experiences we know are not real—reading novels, watching movies and TV, daydreaming, and so forth.
Enjoying fiction requires a shift in selfhood. You give up your own identity and try on the identities of other people, adopting their perspectives so as to share their experiences. This allows us to enjoy fictional events that would shock and sadden us in real life. When Tony Soprano kills someone, you respond differently than you would to a real murder; you accept and adopt some of the moral premises of the Soprano universe. You become, if just for a moment, Tony Soprano.
Some imaginative pleasures involve the creation of alternative selves. Sometimes we interact with these selves as if they were other people. This might sound terrible, and it can be, as when schizophrenics hear voices that seem to come from outside themselves. But the usual version is harmless. In children, we describe these alternative selves as imaginary friends. The psychologist Marjorie Taylor, who has studied this phenomenon more than anyone, points out three things. First, contrary to some stereotypes, children who have imaginary friends are not losers, loners, or borderline psychotics. If anything, they are slightly more socially adept than other children. Second, the children are in no way deluded: Taylor has rarely met a child who wasn’t fully aware that the character lived only in his or her own imagination. And third, the imaginary friends are genuinely different selves. They often have different desires, interests, and needs from the child’s; they can be unruly, and can frustrate the child. The writer Adam Gopnik wrote about his young daughter’s imaginary companion, Charlie Ravioli, a hip New Yorker whose defining quality was that he was always too busy to play with her.
Long-term imaginary companions are unusual in adults, but they do exist—Taylor finds that many authors who write books with recurring characters claim, fairly convincingly, that these characters have wills of their own and have some say in their fate. But it is not unusual to purposefully create another person in your head to interact with on a short-term basis. Much of daydreaming involves conjuring up people, sometimes as mere physical props (as when daydreaming about sports or sex), but usually as social beings. All of us from time to time hold conversations with people who are not actually there.
Sometimes we get pleasure from sampling alternative selves. Again, you can see the phenomenon in young children, who get a kick out of temporarily adopting the identity of a soldier or a lion. Adults get the same sort of kick; exploring alternative identities seems to be what the Internet was invented for. The sociologist Sherry Turkle has found that people commonly create avatars so as to explore their options in a relatively safe environment. She describes how one 16-year-old girl with an abusive father tried out multiple characters online—a 16-year-old boy, a stronger, more assertive girl—to try to work out what to do in the real world. But often the shift in identity is purely for pleasure. A man can have an alternate identity as a woman; a heterosexual can explore homosexuality; a shy person can try being the life of the party.
Online alternative worlds such as World of Warcraft and Second Life are growing in popularity, and some people now spend more time online than in the real world. One psychologist I know asked a research assistant to try out one of these worlds and report on what it is like and how people behave there. The young woman never came back—she preferred the virtual life to the real one.
Life would be swell if all the selves inhabiting a single mind worked as a team, pulling together for a common goal. But they clash, and sometimes this gives rise to what we call addictions and compulsions.
This is not the traditional view of human frailty. The human condition has long been seen as a battle of good versus evil, reason versus emotion, will versus appetite, superego versus id. The iconic image, from a million movies and cartoons, is of a person with an angel over one shoulder and the devil over the other.
The alternative view keeps the angel and the devil, but casts aside the person in between. The competing selves are not over your shoulder, but inside your head: the angel and the devil, the self who wants to be slim and the one who wants to eat the cake, all exist within one person. Drawing on the research of the psychiatrist George Ainslie, we can make sense of the interaction of these selves by plotting their relative strengths over time, starting with one (the cake eater) being weaker than the other (the dieter). For most of the day, the dieter hums along at his regular power (a 5 on a scale of 1 to 10, say), motivated by the long-term goal of weight loss, and is stronger than the cake eater (a 2). Your consciousness tracks whichever self is winning, so you are deciding not to eat the cake. But as you get closer and closer to the cake, the power of the cake eater rises (3 … 4 …), the lines cross, the cake eater takes over (6), and that becomes the conscious you; at this point, you decide to eat the cake. It’s as if a baton is passed from one self to another.
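A toy version of those crossing curves can be sketched in a few lines of Python. The numbers are my own, chosen only to echo the 5-versus-2 example above; the hyperbolic shape is in the spirit of Ainslie’s curves, not a fit to any data.

DIETER_STRENGTH = 5.0   # the dieter's steady, long-term motivation
CAKE_VALUE = 6.0        # the cake eater's pull when the cake is within reach
K = 0.5                 # how steeply distance (in minutes) discounts that pull

def cake_eater_strength(minutes_to_cake):
    # The cake's pull rises sharply as it gets closer.
    return CAKE_VALUE / (1 + K * minutes_to_cake)

for minutes in range(8, -1, -1):
    cake = cake_eater_strength(minutes)
    winner = "dieter" if DIETER_STRENGTH >= cake else "cake eater"
    print(f"{minutes} min away: cake eater = {cake:.1f} -> {winner} decides")

Eight minutes out, the cake eater musters only 1.2 and the dieter decides; with the cake in front of you, the cake eater hits 6.0, the lines have crossed, and the baton has been passed.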
Sometimes one self can predict that it will later be dominated by another self, and it can act to block the crossing—an act known as self-binding, which Thomas Schelling and the philosopher Jon Elster have explored in detail. Self-binding means that the dominant self schemes against the person it might potentially become—the 5 acts to keep the 2 from becoming a 6. Ulysses wanted to hear the song of the sirens, but he knew it would compel him to walk off the boat and into the sea. So he had his sailors tie him to the mast. Dieters buy food in small portions so they won’t overeat later on; smokers trying to quit tell their friends never to give them cigarettes, no matter how much they may later beg. In her book on gluttony, Francine Prose tells of women who phone hotels where they are going to stay to demand a room with an empty minibar. An alarm clock now for sale rolls away as it sounds the alarm; to shut it off, you have to get up out of bed and find the damn thing.
You might also triumph over your future self by feeding it incomplete or incorrect information. If you’re afraid of panicking in a certain situation, you might deny yourself relevant knowledge—you don’t look down when you’re on the tightrope; you don’t check your stocks if you’re afraid you’ll sell at the first sign of a downturn. Chronically late? Set your watch ahead. Prone to jealousy? Avoid conversations with your spouse about which of your friends is the sexiest.
Working with the psychologists Frank Keil, of Yale University, and Katherine Choe, now at Goucher College, I recently studied young children’s understanding of self-binding, by showing them short movies of people engaged in self-binding and other behaviors and asking them to explain what was going on. The children, aged 4 to 7, easily grasped that someone might put a video game on a high shelf so that another person couldn’t get it. But self-binding confused them: they were mystified when people put away the game so that they themselves couldn’t get hold of it.
But even though young children don’t understand self-binding, they are capable of doing it. In a classic study from the 1970s, psychologists offered children a marshmallow and told them they could either have it right away, or get more if they waited for a few minutes. As you would expect, waiting proved difficult (and performance on this task is a good predictor, much later on, of such things as SAT scores and drug problems), but some children managed it by self-binding—averting their eyes or covering the marshmallow so as to subvert their temptation-prone self for the greater pleasure of the long-term self.
Even pigeons can self-bind. Ainslie conducted an experiment in which he placed pigeons in front of a glowing red key. If they pecked it immediately, they got a small reward right away, but if they waited until the key went dark, they got a larger one. They almost always went for the quick reward—really, it’s hard for a pigeon to restrain itself. But there was a wrinkle: the key glowed green for several seconds before turning red. Pecking the key while it was green would prevent it from turning red and providing the option of the small, quick reward. Some of the pigeons learned to use the green key to help themselves hold out for the big reward, just as a person might put temptation out of reach.
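The logic of the green key shows up in a toy calculation as well. The grain counts and discount rate below are invented; the point is only the preference reversal that hyperbolic discounting produces.

SMALL_NOW = 2      # grains available the instant the key turns red
LARGE_LATER = 6    # grains for waiting until the key goes dark, 4 s later
K = 1.0            # hyperbolic discount rate per second

def discounted(value, delay_seconds):
    return value / (1 + K * delay_seconds)

# When the key turns red, the small reward is immediate and wins:
print(discounted(SMALL_NOW, 0), discounted(LARGE_LATER, 4))   # 2.0 vs. 1.2

# A few seconds earlier, while the key is still green, both rewards are
# distant, the large one looks better, and pecking green removes the small
# option from the later menu:
print(discounted(SMALL_NOW, 3), discounted(LARGE_LATER, 7))   # 0.5 vs. 0.75

From a distance the patient choice wins; up close the impulsive one does. The green key lets the far-sighted pigeon disarm the near-sighted one before the reversal happens.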
For adult humans, though, the problem is that the self you are trying to bind has resources of its own. Fighting your Bad Self is serious business; whole sections of bookstores are devoted to it. We bribe and threaten and cajole, just as if we were dealing with an addicted friend. Vague commitments like “I promise to drink only on special occasions” often fail, because the Bad Self can weasel out of them, rationalizing that it’s always a special occasion. Bright-line rules like “I will never play video games again” are also vulnerable, because the Bad Self can argue that these are unreasonable—and, worse, once you slip, it can argue that the plan is unworkable. For every argument made by the dieting self—“This diet is really working” or “I really need to lose weight”—the cake eater can respond with another—“This will never work” or “I’m too vain” or “You only live once.” Your long-term self reads voraciously about the benefits of regular exercise and healthy eating; the cake eater prefers articles showing that obesity isn’t really such a problem. It’s not that the flesh is weak; sometimes the flesh is pretty damn smart.
It used to be simpler. According to the traditional view, a single, long-term-planning self—a you—battles against passions, compulsions, impulses, and addictions. We have no problem choosing, as individuals or as a society, who should win, because only one interest is at stake—one person is at war with his or her desires. And while knowing the right thing to do can be terribly difficult, the decision is still based on the rational thoughts of a rational being.
Seeing things this way means we are often mistaken about what makes us happy. Consider again what happens when we have children. Pretty much no matter how you test it, children make us less happy. The evidence isn’t just from diary studies; surveys of marital satisfaction show that couples tend to start off happy, get less happy when they have kids, and become happy again only once the kids leave the house. As the psychologist Daniel Gilbert puts it, “Despite what we read in the popular press, the only known symptom of ‘empty-nest syndrome’ is increased smiling.” So why do people believe that children give them so much pleasure? Gilbert sees it as an illusion, a failure of affective forecasting. Society’s needs are served when people believe that having children is a good thing, so we are deluged with images and stories about how wonderful kids are. We think they make us happy, though they actually don’t.
The theory of multiple selves offers a different perspective. If struggles over happiness involve clashes between distinct internal selves, we can no longer be so sure that our conflicting judgments over time reflect irrationality or error. There is no inconsistency between someone’s anxiously hiking through the Amazon wishing she were home in a warm bath and, weeks later, feeling good about being the sort of adventurous soul who goes into the rain forest. In an important sense, the person in the Amazon is not the same person as the one back home safely recalling the experience, just as the person who honestly believes that his children are the great joy in his life might not be the same person who finds them terribly annoying when he’s actually with them.
Even if each of us is a community, all the members shouldn’t get equal say. Some members are best thought of as small-minded children—and we don’t give 6-year-olds the right to vote. Just as in society, the adults within us have the right—indeed, the obligation—to rein in the children. In fact, talk of “children” versus “adults” within an individual isn’t only a metaphor; one reason to favor the longer-term self is that it really is older and more experienced. We typically spend more of our lives not wanting to snort coke, smoke, or overeat than we spend wanting to do these things; this means that the long-term self has more time to reflect. It is less selfish; it talks to other people, reads books, and so on. And it tries to control the short-term selves. It joins Alcoholics Anonymous, buys the runaway clock, and sees the therapist. As Jon Elster observes, the long-term, sober self is a truer self, because it tries to bind the short-term, drunk self. The long-term, sober self is the adult.
Governments and businesses, recognizing these tendencies, have started offering self-binding schemes. Thousands of compulsive gamblers in Missouri have chosen to sign contracts stating that if they ever enter a casino, anything they win will be confiscated by the state, and they could be arrested. Some of my colleagues at Yale have developed an online service whereby you set a goal and agree to put up a certain amount of money to try to ensure that you meet it. If you succeed, you pay nothing; if you fail, the money is given to charity—or, in a clever twist, to an organization you oppose. A liberal trying to lose a pound a week, for instance, can punish herself for missing her goal by having $100 donated to the George W. Bush Presidential Library.
The natural extension of this type of self-binding is what the economist Richard Thaler and the legal scholar Cass Sunstein describe as “libertarian paternalism”—a movement to engineer situations so that people retain their choices (the libertarian part), but in such a way that these choices are biased to favor people’s better selves (the paternalism part). For instance, many people fail to save enough money for the future; they find it too confusing or onerous to choose a retirement plan. Thaler and Sunstein suggest that the default be switched so that employees would automatically be enrolled in a savings plan, and would have to take action to opt out. A second example concerns the process of organ donation. When asked, most Americans say that they would wish to donate their organs if they were to become brain-dead from an accident—but only about half actually have their driver’s license marked for donation, or carry an organ-donor card. Thaler and Sunstein have discussed a different idea: people could easily opt out of being a donor, but if they do nothing, they are assumed to consent. Such proposals are not merely academic musings; they are starting to influence law and policy, and might do so increasingly in the future. Both Thaler and Sunstein act as advisers to politicians and policy makers, most notably Barack Obama.
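The arithmetic of a default switch is simple enough to sketch. These participation numbers are hypothetical, not Thaler and Sunstein’s data; the point is that the menu of options never changes, only what happens to the people who do nothing.

def enrolled(population, default_is_enrolled, fraction_who_act=0.3):
    # People who take action choose for themselves; everyone else gets the default.
    actors = int(population * fraction_who_act)
    passive = population - actors
    count = actors // 2            # suppose half of those who act choose enrollment either way
    if default_is_enrolled:
        count += passive           # the passive majority is swept into the plan
    return count

print(enrolled(1000, default_is_enrolled=False))  # 150 participants under opt-in
print(enrolled(1000, default_is_enrolled=True))   # 850 participants under opt-out

Nobody is forced into anything; acting still gets you whichever option you prefer, but the passive majority lands wherever the default puts them.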
So what’s not to like? There is a real appeal to anything that makes self-binding easier. As I write this article, I’m using a program that disables my network connections for a selected amount of time and does not allow me to switch them back on, thereby forcing me to actually write instead of checking my e-mail or reading blogs. A harsher (and more expensive) method, advised by the author of a self-help book, is to remove your Internet cable and FedEx it to yourself—guaranteeing a day without online distractions. One can also chemically boost the long-term self through drugs such as Adderall, which improves concentration and focus. The journalist Joshua Foer describes how it enabled him to write for hour-long chunks, far longer than he was usually capable of: “The part of my brain that makes me curious about whether I have new e-mails in my inbox apparently shut down.”
It’s more controversial, of course, when someone else does the binding. I wouldn’t be very happy if my department chair forced me to take Adderall, or if the government fined me for being overweight and not trying to slim down (as Alabama is planning to do to some state employees). But some “other-binding” already exists—think of the mandatory waiting periods for getting a divorce or buying a gun. You are not prevented from eventually taking these actions, but you are forced to think them over, giving the contemplative self the chance to override the impulsive self. And since governments and businesses are constantly asking people to make choices (about precisely such things as whether to be an organ donor), they inevitably have to provide a default option. If decisions have to be made, why not structure them to be in individuals’ and society’s best interests?
The main problem with all of this is that the long-term self is not always right. Sometimes the short-term self should not be bound. Of course, most addictions are well worth getting rid of. When a mother becomes addicted to cocaine, the pleasure from the drug seems to hijack the neural system that would otherwise be devoted to bonding with her baby. It obviously makes sense here to bind the drug user, the short-term self. On the other hand, from a neural and psychological standpoint, a mother’s love for her baby can also be seen as an addiction. But here binding would be strange and immoral; this addiction is a good one. Someone who becomes morbidly obese needs to do more self-binding, but an obsessive dieter might need to do less. We think one way about someone who gives up Internet porn to spend time building houses for the poor, and another way entirely about someone who successfully thwarts his short-term desire to play with his children so that he can devote more energy to making his second million. The long-term, contemplative self should not always win.
This is particularly true when it comes to morality. Many cruel acts are perpetrated by people who can’t or don’t control their short-term impulses or who act in certain ways—such as getting drunk—that lead to a dampening of the contemplative self. But evil acts are also committed by smart people who adopt carefully thought-out belief systems that allow them to ignore their more morally astute gut feelings. Many slave owners were rational men who used their intelligence to defend slavery, arguing that the institution was in the best interests of those who were enslaved, and that it was grounded in scripture: Africans were the descendants of Ham, condemned by God to be “servants unto servants.” Terrorist acts such as suicide bombings are not typically carried out in an emotional frenzy; they are the consequences of deeply held belief systems and long-term deliberative planning. One of the grimmest examples of rationality gone bad can be found in the psychiatrist Robert Jay Lifton’s discussion of Nazi doctors. These men acted purposefully for years to distance themselves from their emotions, creating what Lifton describes as an “Auschwitz self” that enabled them to prevent any normal, unschooled human kindness from interfering with their jobs.
I wouldn’t want to live next door to someone whose behavior was dominated by his short-term selves, and I wouldn’t want to be such a person, either. But there is also something wrong with people who go too far in the other direction. We benefit, intellectually and personally, from the interplay between different selves, from the balance between long-term contemplation and short-term impulse. We should be wary about tipping the scales too far. The community of selves shouldn’t be a democracy, but it shouldn’t be a dictatorship, either.
The URL for this page is http://www.theatlantic.com/doc/200811/multiple-personalities
Monday, October 20, 2008
Silly explanations -- why readers of this blog are not getting access to it
If you make a large number of posts in a single day, you will be required to complete word verification. After 24 hours, the word verification will automatically be removed.
Sunday, October 19, 2008
Barack Obama and Fred Astaire: What a Pair!
It dawned upon me while watching the third presidential debate -- Barack Obama and Fred Astaire: Were they separated at birth?
Barack debates the way Fred dances. Svelte, smooth, self-controlled. What Barack says is -- well, interesting -- but his moves, his gestures, carry the day. Like any great politician, he's a performer.
Think of Fred's and Obama's similarities:
They both have big ears! (ok, the photoshopped image on the left isn't fair to Barack; but maybe it suggests that he listens).
Both have roots in the heartland, the Midwest. And both came from "mixed" families. We all know about Barack's African-American heritage and the Kansas grandparents that brought him up in Hawaii, but remember that Fred was originally from Nebraska and that "his mother was born in the United States to Lutheran German immigrants from East Prussia and Alsace, and Astaire's father was a Catholic of Jewish ancestry."
They make it look easy. While McCain grimaced during the debate, Obama was Mr. Cool. Unruffled. Just like Fred dancing -- it looked so "natural." But of course looking "natural" takes years of work and discipline.
All-American smile. Fred and Obama both have it. Anguish behind that smile? They sure don't show it!
Elegance. Take a look at Barack at the Alfred E. Smith Memorial Dinner held on October 17; and of course Fred is Mr. Evening Attire.
Fit and trim. Not an ounce of fat, or so it appears, on either. Is it genetic? Too much exercise?
They both look great in the arms of a beautiful woman. And Barack doesn't need to dance in movies to do so. He's married to Michelle and they have two beautiful children.
Oh, I forgot: Fred and Barack both smoke(d) (or at least Barack used to). Well, nobody's perfect.
P.S. Remember the tap-dancing lawyer in the film "Chicago"? Hey, could that be the real Barack ... That's the question that could be asked if and when he gets elected. Just askin', folks.