Republican presidential candidate Donald Trump speaks with reporters on Aug. 29, in Nashville, Tenn. (Photo: Mark Humphrey/AP)
Democracy depends on a free and independent press, which is why all tyrants try to squelch it. They use seven techniques that, worryingly, President-elect Donald Trump already employs.
1. Berate the media. Last week, Trump summoned two dozen TV news anchors and executives to the twenty-fifth floor of Trump Tower to berate them for their reporting about him during the election. For twenty minutes he railed at what he called their “outrageous” and “dishonest” coverage. According to an attendee, “Trump kept saying, ‘we’re in a room of liars, the deceitful dishonest media who got it all wrong,’” and he called CNN a “network of liars.” He accused NBC of using unflattering pictures of him, demanding to know why it didn’t use “nicer” pictures.
Another person who attended the meeting said Trump “truly doesn’t seem to understand the First Amendment. He thinks we are supposed to say what he says and that’s it.”
2. Blacklist critical media. During the campaign, Trump blacklisted news outlets whose coverage he didn’t approve of. In June he pulled The Washington Post’s credentials. “Based on the incredibly inaccurate coverage and reporting of the record setting Trump campaign, we are hereby revoking the press credentials of the phony and dishonest Washington Post,” read a post on Trump’s Facebook page.
After the election Trump agreed to meet with the New York Times and then suddenly cancelled the meeting when he didn’t like the terms, tweeting “Perhaps a new meeting will be set up with the @nytimes. In the meantime they continue to cover me inaccurately and with a nasty tone!” (He then reversed himself again and met with the Times.)
3. Turn the public against the media. Trump refers to journalists as “lying,” “dishonest,” “disgusting” and “scum.” Referring to the journalists at his rallies, Trump said, “I hate some of these people,” adding (presumably in response to allegations about Vladimir Putin’s treatment of dissident journalists), “but I’d never kill ’em.”
He questions the press’s motives, claiming, for example, that The Washington Post wrote negative things about him because its owner, Jeff Bezos, the founder of Amazon, “thinks I would go after him for antitrust.” When the New York Times wrote that his transition team was in disarray, Trump tweeted that the newspaper was “just upset that they looked like fools in their coverage of me” during the presidential campaign.
4. Condemn satirical or critical comments. Trump continues to condemn the coverage he’s received from NBC’s “Saturday Night Live.” In response to Alec Baldwin’s recent portrayal of him as overwhelmed by the prospect of being president, Trump tweeted that it was a “totally one-sided, biased show – nothing funny at all. Equal time for us?”
When Brandon Victor Dixon, the actor who plays Aaron Burr in the Broadway musical “Hamilton,” read from the stage a message to Vice President-elect Mike Pence, who was in the audience – expressing fears about the pending Trump administration on behalf of the “diverse group of men and women of different colors, creeds and orientations” in the cast – Trump responded angrily. He tweeted that Pence had been “harassed,” and insisted that the cast and producers of the show, “which I hear is highly overrated,” apologize.
5. Threaten the media directly. Trump said he plans to change libel laws in the United States so that he can have an easier time suing news organizations. “One of the things I’m going to do if I win … I’m going to open up our libel laws so when they write purposely negative and horrible and false articles, we can sue them and win lots of money.”
During the campaign, Trump specifically threatened to sue the Times for libel in response to an article that featured two women accusing him of touching them inappropriately years ago. Trump claimed the allegations were false, and his lawyer demanded that the newspaper retract the story and issue an apology. Trump also threatened legal action after the Times published and reported on part of his 1995 tax return.
6. Limit media access. Trump hasn’t had a news conference since July. He has blocked the media from traveling with him, or even knowing whom he’s meeting with. His phone call with Vladimir Putin, which occurred shortly after the election, was first reported by the Kremlin.
This is highly unusual. In 2000, President-elect George W. Bush called a press conference three days after the Supreme Court determined the outcome of the election. In 2008, President-elect Obama also met with the press three days after being elected.
7. Bypass the media and communicate with the public directly. The American public learns what Trump thinks through his tweets. Shortly after the election, Trump released a video message outlining some of the executive actions he plans to take on his first day in office.
Aides say Trump has also expressed interest in continuing to hold the large rallies that became a staple of his candidacy. They say he likes the instant gratification and adulation that the cheering crowds provide.
The word “media” comes from “intermediate”: the press is meant to stand between newsmakers and the public. Responsible media hold the powerful accountable by asking them hard questions and reporting on what they do. Apparently Trump wants to eliminate such intermediaries.
Historically, these seven techniques have been used by demagogues to erode the freedom and independence of the press. Even before he’s sworn in, Trump seems intent on doing exactly this.
Long before Stalin’s gulags, the Tsars used Siberia’s frozen wastes to bury even the most harmless ‘disreputables’, as Daniel Beer shows in horrific, gripping detail
Image from article, with caption: ‘The Road to Siberia’ by Sergei Dmitrievich Miloradovich
The House of the Dead: Siberian Exile Under the Tsars
By Daniel Beer
Allen Lane, pp.512, £30
Almost as soon as Siberia was first colonised by Cossack conquistadors in the 17th century, it became a place of banishment and punishment. As early as the 1690s the Russian state began to use Siberia as a dumping ground for its criminals, as though its vastness could quarantine evil. Katorga — from the Greek word for galley — was the judicial term for a penal sentence where inmates performed hard labour in the service of the state. The sentence was commonly imposed in place of death from the reign of Peter the Great onwards. And in many ways Siberia truly was a House of the Dead — as Daniel Beer, who borrows the title of Fyodor Dostoevsky’s prison novel for his masterful new study, recounts in horrific and gripping detail.
In a letter to his brother, Dostoevsky described his own five years as a political prisoner in Siberia as a ‘ceaseless, merciless assault on my soul… eternal hostility and bickering all around, cursing, cries, din, uproar’. Dostoevsky had initially been sentenced to death and was reprieved only as he and his fellow members of the liberal Petrashevsky Circle stood before a firing squad. The experience was to shape his life — not least because katorga had shown Dostoevsky the beast in man.
‘Whoever has experienced the power and the unrestrained ability to humiliate another human being automatically loses his own sensations,’ he wrote in The House of the Dead. ‘Tyranny is a habit; it has its own organic life; it develops finally into a disease… Blood and power intoxicate.’
But for every banished high-profile radical like Dostoevsky, thousands of unknown common criminals and their families were marched off to Siberia and into oblivion. Beer uses police reports, petitions, court records and official correspondence ‘stitched into bundles and filed away in rough cardboard folders’ to tell their story.
Exile, like transportation, its British judicial equivalent, was a deliberate act of expulsion of poison from the body politic. ‘In the same way that we have to remove harmful agents from the body so that the body does not expire, so it is in the community of citizens,’ declared the Bishop of Tobolsk and Siberia, Ioann Maksimovich, in 1708. ‘All healthy and harmless objects can abide within it, but that which is harmful must be cut out.’
But the scale of Russia’s penal migration dwarfed that of western European nations. ‘In the eight decades between 1787 and 1868 Britain transported around 160,000 convicts to Australia,’ writes Beer, a historian at Royal Holloway University of London. ‘By contrast, between 1801 and 1917, more than one million Tsarist subjects were banished to Siberia.’
The crimes for which a man could be exiled included fortune-telling, vagrancy, ‘begging with false distress’, prizefighting, wife-beating, illicit tree-felling and ‘recklessly driving a cart without use of reins’. Until the mid-18th century these exiles were always branded, usually on the face or right hand, to prevent them ever making their way back to the world. In later years half a prisoner’s head was shaved — to distinguish convicts from soldiers, whose heads were often fully shaved against lice — and prisoners were fettered for the initial part of their journey.
Before the Trans-Siberian railway was completed in 1916, the convicts walked to their place of punishment. The journey was supposed to take 30 weeks — but some men spent up to two years shuffling in columns along the great Siberian trunk road known as the Trakt. The jingle of their chains and the ritual cries of ‘Fathers, have pity on us!’ as the condemned men held out their caps for food were, for all the travellers who passed them in their high-wheeled carriages, the sound of Siberia. By tradition, at Tobolsk, 1,100 miles from Moscow, the prisoners’ leg irons were removed — a mercy, but also a sign that they had gone too far into the wilderness to survive escape. Their sentences began only once they had arrived at their designated place of exile.
Authority and discipline died over Siberia’s vast distances. If detached from European Russia, Siberia would still be the largest country in the world — it is bigger than the United States and Europe combined. Feudal Russia’s institutions — serfdom, aristocracy and the authority of the Church — all dissolved in the rough egalitarianism of the frontier. Like America’s Wild West, the empty land filled with a mismatched population of God-fearing schismatics and violent criminals. By 1897 more than 300,000 out of a total Siberian population of 5.7 million were convicts and exiles. Every spring the roads of Siberia filled with escaping prisoners, known as General Cuckoo’s Army. When recaptured, so many of them pretended to have forgotten their names that the commonest moniker in the police records was ‘Ivan Nepomnushchy,’ Ivan I-don’t-remember. Desperate for money and food, Beer tells us, the escapees terrorised the local population, butchering impoverished peasants for the smallest of sums.
Conditions varied widely. For the aristocratic rebels sent to Siberia for backing the failed Decembrist rebellion against Nicholas I in 1825, the punishment was simply never to see civilisation again — the comfortable mansions where they lived with the wives who voluntarily followed them into exile still stand in Irkutsk. For common criminals sent to the mines at Nerchinsk, conditions were so horrific that they would insert finely chopped horsehair into self-inflicted wounds on the penis to mimic the symptoms of syphilis to escape from work.
‘We have let millions of people rot in jail, and let them rot to no purpose, treating them with an indifference that is little short of barbaric,’ wrote Anton Chekhov to his editor Aleksei Suvorin at the time of his 1890 journey to Russia’s newest labour camps on Sakhalin Island. ‘We have forced them to drag themselves in chains across tens of thousands of kilometers in freezing conditions, infected them with syphilis, debauched them and hugely increased the criminal population.’ Chekhov’s blistering report on the inhuman conditions of prisoners shocked liberal society in Russia and abroad — and the American journalist and explorer George Kennan’s reports on the terrible condition of political prisoners helped make Siberia, says Beer, ‘a byword for the despotism of the Tsars’.
Beer’s fascinating book teems with human detail — mercifully not all of it grim. There’s the remarkable story of Andrei Tsybulenko, an escaped convict who served as a crewman on a famously daring voyage along the north coast of Siberia in 1877. When he arrived in St Petersburg he was rearrested — but the Tsar was so impressed by his feat of navigation that he was pardoned and given a medal. We meet ‘a vagabond juggler called Tumanov’ who organised entertainments for the guards at Tobolsk prison, with a teetering human pyramid as the star turn. Climbing to the top of the pyramid, Tumanov promptly leapt over the wall and escaped.
The most famous Tsarist-era political prisoners were of course the socialist revolutionaries who would take power in 1917. Lenin, Stalin and Trotsky all served time in Siberian exile — though none was forced to perform hard labour and all escaped easily and often — as did the future architects of the Soviet terror-state, Feliks Dzerzhinsky and Genrikh Yagoda.
In an important sense, 20th-century Russia was a creation of the Tsarist penal system. The October Revolution itself was the exiles’ revenge on their old captors. And the Soviet machinery of state repression that Alexander Solzhenitsyn called the Gulag Archipelago was a crueller, vaster and more inhuman version of pre-Revolutionary katorga.
Because of its far greater scale and brutality, the Soviet gulag has eclipsed the memory of the Tsarist penal system in the popular imagination. Beer redresses that imbalance by bringing the voices of the million-plus victims of katorga vividly to life. The House of the Dead tells the story of how ‘the Tsarist regime collided violently with the political forces of the modern world’ — and how modern Russia was born among the squalor, the cockroaches and the casual violence of the world’s largest open-air prison.
Some scholars think the field has become cynical and paranoid. By Marc Parry, November 27, 2016
In the low-budget realm of humanities grantmaking, a University of Virginia press release this May came as a shock. The Danish National Research Foundation had awarded roughly $4.2 million to a literary-studies project led by an English professor at Virginia, Rita Felski. And this wasn’t yet another big-ticket digital-humanities effort to map the social history of the United States or crunch the cultural data stored in five million books. This money would help Felski assemble a team of scholars to investigate the social uses of literature.
For Felski, the windfall validates a nearly decade-long push to change the way literature and other art forms are studied. In a series of manifestoes, she has developed a sophisticated language for talking about our attachments to literature and prodded literary scholars to reconsider their habit of approaching texts like suspicious detectives on the hunt for hidden meanings. Felski’s message boils down to prefixes. Literary critics have emphasized "de" words, like "debunk" and "deconstruct." But they’ve shortchanged "re" words — literature’s capacity to reshape and recharge perception.
"There’s actually quite a diverse range of intellectual frameworks, politically, theoretically, philosophically," says Felski, who specializes in literary theory and method. "Yet there’s an underlying similarity in terms of this mood of vigilance, wariness, suspicion, distrust, which doesn’t really allow us to grapple with these really basic questions about why people actually take up books in the first place, why they matter to people."
Though the size of her grant may be unique, Felski’s sense of frustration is not. Her work joins a groundswell of scholarship questioning a certain kind of critique that has prevailed in literary studies in recent decades. "Critique" can be a blurry word — isn’t all criticism critique? — but in Felski’s usage it carries a specific flavor. Critique means a negative commentary, an act of resistance against dominant values, an intellectual discourse that defines itself against popular understanding. Felski sketches the shake-up of literary studies that started in the ’60s as a shift from criticism ("the interpretation and evaluation of literary works") to critique ("the politically motivated analysis of the larger philosophical or historical conditions shaping these works"). Most frameworks taught today in a literary-theory class, such as feminism, Marxism, deconstruction, structuralism, and psychoanalysis, would count as variants of critique.
Contemporary literary scholarship has never lacked for detractors: Down with politics in the academy! Back to the Great Books! What’s different now is that the questioning of critique is coming from people steeped in its theories. Eve Kosofsky Sedgwick, a founder of queer theory and sexuality studies, galvanized this soul-searching with a 2003 essay arguing that theory had spawned a paranoid mood in literary studies. The debate gained momentum with a special issue of the journal Representations in 2009, when Stephen Best and Sharon Marcus challenged a method of interpretation known as symptomatic reading, in which critics read texts like psychoanalysts probing for repressed meanings.
Then, last year, came Lisa Ruddick’s essay "When Nothing Is Cool," a hand grenade lobbed at her field. Ruddick, an expert on British literature at the University of Chicago, attacked literary studies for favoring an antihumanist ideology that looks askance at inner life and, in her view, alienates scholars from their own moral intuitions. "I have spoken with many young academics who say that their theoretical training has left them benumbed," she wrote in The Point magazine. "After a few years in the profession, they can hardly locate the part of themselves that can be moved by a poem or novel. It is as if their souls have gone into hiding, to await tenure or some other deliverance."
If you exist outside the bubble of academic literary criticism, some of these ideas, like cultivating the inner life or talking about the pleasures of literature, might seem uncontroversial — obvious, even. But the recent debates over literary method have generated considerable hostility because they touch on existential questions of what English professors do. If they abandon suspicion, does that mean retreating into banal admiring description? Should criticism always have a political aim? Is it really necessary, as one of Felski’s allies puts it, for a literary critic to speak truth to power every time she reads Virginia Woolf?
Members of Felski’s circle, who sometimes publish under the banner of "postcritical reading," feel a need to emphasize that questioning critique does not mean abandoning one’s political commitments, be they Marxist, feminist, or queer. "If you challenge the idea of suspicion as the only mode of reading, you are then immediately accused of being conservative in relation to all those politics," says Toril Moi, a Duke literature professor who contributed to a forthcoming essay collection on critique. "I don’t think that’s true at all. I still think I’m a feminist." The current "revolt," she says, "is very much against the idea that we all can only read for one reason, namely political critique."
But critics of that "revolt" contend that its advocates offer a distorted picture of what’s actually happening in literary studies. These skeptics, in classic critique fashion, also see the methods fight as a displacement of larger economic concerns: an attempt to make a case for literary study as budgets are cut and career opportunities dry up. But no change of methods will appease outside detractors of literary studies, they warn.
"Graduate students who are facing an extremely bad job market — really a collapse of the job market — may look at the ordinary procedures of criticism and say, ‘How can people go on performing these critical acts, these interpretive acts, when the world has just fallen apart for us?’ " says Bruce Robbins, a professor of English and comparative literature at Columbia University. Referring to Ruddick’s essay, which got an emotional response, he adds, "It may be that — and this is perfectly legitimate — there are people who are ripe for that kind of denunciation because they feel betrayed. They were led to think that their talents could lead them into good careers, and all they had to do was keep plugging along. And then they plug along and suddenly the whole structure just collapses around them."
Rita Felski is a pillar of that structure, which gives particular weight to her analysis of what ails English departments. The British-born scholar edits New Literary History, an influential journal of theory and criticism that prides itself on redrawing the frameworks of literary studies. Her own writing balances a commitment to high theory with a sympathy for ordinary language and everyday experience. For example, her first book, Beyond Feminist Aesthetics (Harvard University Press, 1989), attempted to defend the value of popular feminist fiction of the ’70s and ’80s. It challenged efforts to anchor feminist literary criticism in a general notion of female identity or feminine poetic writing. Felski turned instead to the sociological concept of the "public sphere" — a space where people come together for critical discussion and political debate — arguing that popular feminist fiction had created a "feminist counter-public sphere" that spread new scripts and stories for women (a feminist bildungsroman, for example).
Felski’s more recent writing arose from her frustration with the limited vocabulary of literary critique, particularly its inability, in her view, to consider fundamental questions about why literature matters. What interested her was how literature creates powerful bonds across space and time: how we become attached to a 300-year-old play, or get transfixed by a novel written in a very different historical or cultural context. When theorists addressed such positive aspects of literature or art, they tended to put forward what Felski felt was "a rather narrow view of what’s going on in aesthetic experience." We enjoy art because of the elegance of its form, they might say. We take a disinterested pleasure in beauty. In Felski’s opinion, there was a lot more going on. Critics should describe the full range of motivations that drive people to take up literature.
In 2008, Felski gave that a try with a slim manifesto called Uses of Literature (Blackwell). She explored how people read fiction for recognition (its capacity to foster self-understanding); enchantment (the escapism of total absorption in an imaginary world); and shock (that emotional mix of revulsion and fascination you might get from avant-garde theater).
The book did reasonably well. Yet Felski says some people responded with statements like "What you say is very true, but this kind of argument can’t really challenge the importance of critique in literary studies." There was a widespread assumption that practicing critique was the only way to be a serious intellectual. Scholars considered it the most rigorous form of thought, Felski says, because of its persistent theoretical interrogation of ideas that are taken for granted: nature, reality, gender, the self, the human. They also saw it as the most radical way of thinking because it allowed them to challenge dominant values.
Felski was unconvinced. So last year she published a gentle polemic called The Limits of Critique (University of Chicago Press). Her book walks a rhetorical tightrope, crediting the contributions of literary theory while deflating its claims to rigor or radicalism. The book’s basic thrust is to redescribe critique rather than refute its ideas. It dwells on the mood of literary scholars, their way of relating to texts. "The barbed wire of suspicion holds us back and hems us in, as we guard against the risk of being contaminated and animated by the words we encounter," Felski writes. "The critic advances holding a shield, scanning the horizon for possible assailants, fearful of being tricked or taken in."
But these shield-wielding naysayers are prey to a predictable repertoire of tics, conventions, and assumptions, Felski argues. Like detectives, they search for clues that ordinary people miss, probe those clues for hidden meanings, and come up with a story that explains them. In one move characteristic of an older style of interpretation, feminist critics would argue that female desire was "repressed in the texts of a patriarchal culture," as Felski puts it. Digging down beneath the surface, they found gaps and contradictions that suggested this buried longing. In another trope that has found favor more recently, a feminist critic might stand back from a text to question its basic assumptions, Felski says. Now the critic shows how a text is "part of a larger system of gender conventions and power relations" that she wants to "denaturalize" (that is, to call into question).
Literary critics write a lot about the positive aspects of fiction, Felski says. But they generally root that appreciation in the subversive premises of critique. They value literature because it disrupts, because it challenges identity, because it opposes the status quo — in other words, because it’s critical. When Felski talks about the "limits" of critique, she means, in part, that this account of why art matters is inadequate. The critical aspects of creative works are "not the only reason, or the main reason, why people turn to literature or films or paintings," she says.
Felski attacks critique’s stature as the most radical form of thought. Here she draws on the work of Bruno Latour, a French anthropologist and sociologist. Latour questions the assumption that being suspicious and critical makes you a progressive thinker, in contrast to the purportedly credulous and complacent masses. He points out that conservative thinkers are now just as likely to draw on the forms of suspicious questioning associated with critique. Think of climate-change deniers, or all those Trump voters so deeply suspicious of elites.
Like Felski, Lisa Ruddick, who established herself in 1990 with a psychoanalytic study of Gertrude Stein’s writing, also takes issue with the suspicious mood in literary studies. But she emphasizes the psychological fallout.
What’s wrong, as she sees it, is literary scholars’ tendency to condemn certain ideas and beliefs as "humanist." The problem dates to the 1980s, she argues, when literary scholars became enamored of French poststructuralist theories. These ideas held that the "self" was not fully stable or autonomous — that we are formed variously by language, culture, and history. While Ruddick considers that a legitimate point, she argues that the "stigma" of humanism has gradually come to encompass more and more of what makes life meaningful, most notably our very sense of an interior world.
This antipathy to the individual has moral ramifications for the field, Ruddick says. If its initiates lose investment in their inner lives and grow alienated from their moral intuitions, the profession as a collective benefits: People throw themselves into professional satisfactions like status and praise. But the intellectual stagnation, the discouragement against following one’s moral feelings — these, in Ruddick’s view, foster a deep cynicism. Family is rejected as "provincial," home as a "disciplinary mechanism," and the inner life as a "bourgeois" luxury. At worst, they create an opening for "violent and sadistic ideas." In the most shocking example, an analysis of a Henry James story tries to make the sexual abuse of children look politically progressive. "Today’s anti-pedophile," Ruddick writes, summarizing the analysis, "perpetrates the ‘potential violence’ of ‘speaking on [children’s] behalf.’ " Such ideas violate scholars’ private convictions, Ruddick says, but they go unchallenged because they seem to mesh with the ideology of the group.
To reality-check this tale of dysfunction, Ruddick interviewed about 70 young academics, mostly Ph.D. students, at seven major research universities. She found that two types of scholars tended to be satisfied: those with a political commitment to an issue favored by the field of English, and those who, not especially stirred up by theory, study literary-historical questions. But the interviews also strongly confirmed her sense of the discouragement and constraint that students can feel adapting to the discipline. "English, without knowing it, has fallen into an intense version of this kind of professional groupthink," says Ruddick, who is writing a book that expands on her "When Nothing Is Cool" essay. "I believe that the profession can’t really move forward until we shed our fear of saying and thinking things that colleagues would call ‘humanist.’ "
On social media, many responded to Ruddick with appreciation. "This essay felt like I’d been holding my breath, waiting for it for decades," wrote Gardner Campbell, an associate professor of English at Virginia Commonwealth University.
"Stunning piece. Finally thoughtful people, long cowed into silence, are starting to speak up," wrote Terry Castle, a Stanford English professor.
Felski’s work has also been widely touted; The Times Literary Supplement called it "perhaps the most ambitious reappraisal of the discipline to appear since theory’s heyday." But other scholars are just as passionate in their criticism of Felski and Ruddick. What animates them, often, is a feeling that the reassessments of critique distort what’s actually happening in literary studies.
That was the reaction of Columbia’s Bruce Robbins, who sees himself as one of the ethical-political critics being taken to task. He dismisses Ruddick as an out-of-touch scholar bent on tarring the entire field with the worst practices of a relatively small number of people. Though Robbins considers Felski a more careful thinker, he finds her portrayal of critique unfair, too, because she represents those who do it primarily as faultfinders. "She’s not paying attention to the many varied and extremely interesting ways in which people’s positive appreciation is part of their critical practice," he says.
Felski also makes critique seem more dominant than it is, says another skeptic, Lee Konstantinou, an assistant professor of English at the University of Maryland at College Park. "It might be that I just went to graduate school at a different time" — Konstantinou earned his Ph.D. in 2009, while Felski got hers in 1987 — "but I was not told that the only valuable thing that I could be doing as a literary critic would be to debunk or expose the disavowed meanings hidden within literary texts," he says. As a doctoral student at Stanford, he learned to think of himself as a scholar engaged in literary and cultural history — a practice that, while it did involve critiquing, also put a premium on visiting archives and documenting the past. "The picture of criticism that these post-critics create seems a little bit reductive," he says, adding, "Literary critics are not handcuffed to the project of critique."
Konstantinou thinks this debate conceals bigger issues, like the dwindling numbers of English majors and the university funding crisis. He quotes Felski’s hope, in The Limits of Critique, to "articulate a positive vision for humanistic thought in the face of growing skepticism about its value." No methods shift will appease outside critics, he says. "It’s not the case that if you were just less politicized in your reading of Jane Austen, all of a sudden Scott Walker’s going to say, ‘Oh, no, I love the University of Wisconsin system.’ If the postcritical project is going to survive, it can’t just rest on the idea that we have to make literary studies comprehensible to people who don’t know a lot about it or don’t do the requisite reading."
But to talk about a "postcritical project" implies a cohesiveness that doesn’t seem to exist beyond a desire for more diversity of approaches. Among the scholars who have challenged critique — and not all of them accept the label "postcritical" — Ruddick wants to broaden the acceptable palette of psychoanalytic theories. Duke’s Moi wants to rethink prevalent notions of language as a self-contained system cut off from the world. Sharon Marcus and Stephen Best introduce "surface reading." This approach "describes works without interpreting or evaluating them," Marcus says, focusing more on what is in a text rather than what it excludes or represses.
Felski is returning to the work she began in Uses of Literature. That book partially inspired the project she’ll work on with the $4.2-million Danish grant. Spending her fall semesters at the University of Southern Denmark, she will team up with literary scholars, historians, and social scientists to tackle questions about the social dimensions of literature. For example, the relationship between literature and medicine: Could novels give us new ways of thinking about diseases? Or class: What does literature tell us about the "precariat," that growing segment of society defined by underemployment? Or welfare: Why does that word carry such negative connotations in the United States, and such positive ones in Scandinavia? How do people attach themselves to certain words, making them part of their identity, while disengaging from other ones?
That question of attachments — to novels and films, paintings and music — is at the heart of Felski’s next book. She operates from the premise that people’s everyday experience of art is much more mysterious than commonly thought. Consider the story of Zadie Smith’s changing relationship to Joni Mitchell. The novelist once dismissed Mitchell’s music as, in Felski’s words, "a white girl’s warbling." Then one day Smith could no longer listen to Mitchell’s songs without crying. Why? To think about such questions, Felski draws on the philosophical tradition of phenomenology, looking closely at first-person experience. So, in that musical epiphany, Smith is in her 30s. She and her husband are driving to a wedding in Wales, with Mitchell playing on the car radio. They bicker. They spend an afternoon at Tintern Abbey, where Smith gazes out at the green hills. And suddenly she’s humming Joni Mitchell. Felski writes about the way such different strands of experience come together to shape perceptions of art.
"Our attitudes to artworks are much more unpredictable and surprising than a lot of social theories allow for," she says. "And therefore we need to look at these specific examples of a relationship to an artwork. A lot of specific examples are going to explode our theories rather than confirm them."
Among the many and various claims put forward by the opposing camps in the election just past, one was made by both sides: they, and only they, had a prescription that would deal with America’s less-than-stellar economic condition, the years of slow growth and the lack of sustained real improvements in the standard of living of ordinary Americans, like the improvements that had occurred back in Eisenhower and Kennedy’s time. With American productivity resurgent under the future president’s new policies, each side promised, the country would once again thrive.
Yet what if, due to deeply entrenched historical and structural factors, that future is highly unlikely to happen? What if those Eisenhower boom times have to be understood as a one-off phenomenon? What if we should all just get used to modest (that is, 1%-2%) long-term growth as “normal”? What will our politicians’ promises be worth then?
AN EXTRAORDINARY TIME
By Marc Levinson
Basic, 326 pages, $27.99
Before rejecting this as too gloomy a prospect to be contemplated, readers might want to ponder Marc Levinson’s “An Extraordinary Time.” This is a smoothly written account of the U.S. and the world economy during the 1970s and parts of the 1980s as told by the one-time economics editor of the Economist magazine and author of other important works (such as “The Box,” about the emergence and implications of the shipping container revolution).
Mr. Levinson’s book devotes only its first two chapters to the years of the great, world-wide economic expansion (1950-73). The rest relates what happened when those fabulous times were no more: “And then the boom was over,” begins the final paragraph of Chapter Two, before a third chapter that is appropriately titled “Chaos.” Here readers might be encouraged, justly, to hold onto their seats.
Employing appropriate extracts from personal memoirs as well as official economic reports, Mr. Levinson takes us back to the whirlwind of the very early 1970s, when a plunging-dollar crisis was soon followed by even worse shocks: oil and gas prices going through the roof; the global economy shuddering; the collapse of banks and firms; and then (here the author is very good) the half-hearted recovery that would defy the policy prescriptions of one administration after another.
This was not a temporary tripping-up but a new economic era. All of the growth that consumers, politicians, government planners and economists had taken for granted over the preceding quarter-century was at an end. Factories closed up. Boom towns became bust towns. Weird things happened, now lost to public memory. Anyone living in Europe in, say, 1973, can remember “car-free Sundays,” when the population was encouraged to stroll with their families, or play pick-up soccer games, along the closed-off highways and autobahns. The idea was that one driving-free day a week would save one-seventh of the scarce supply of gasoline. It was funny, but wasn’t it a bit scary and ominous, like some scene from a science-fiction novel?
Mr. Levinson is a smart enough author not to be tempted into some breathless mono-causal account of either the earlier “boom” or the later slowdown. He is excellent at description, though he is not an economic theorist. He comprehends that all modernizing economies seem to experience a huge surge in worker productivity as they move from being agrarian to industrial societies (Victorian Britain, between the 1840s and 1870s, showed the way here). In America’s case, huge wartime economic activity provided an additional boost.
The big point that Mr. Levinson is making is that historically special circumstances were at work that pushed America into abnormally high rates of growth that wouldn’t last forever. Interestingly, then, while the 1973 “oil shock” caused the economy to fall back, it wasn’t the chief culprit. The United States was coming to the end of a special period economically; it was becoming a mature society.
If that is so, and this is the hard conclusion to be drawn, then presumably nothing will get us back to the golden age. A change of administration has little real effect on such things. When it comes to the key measure, total-factor productivity, the post-1973 American economy has been equally unresponsive—this may surprise some readers—to newer technologies in the workplace. Faster communications, improved construction materials, better energy use, the transformation of the firm, increased population demand, even the internet: Nothing has done the trick. The Clinton years in the mid-1990s looked pretty good, at least for a while. But nothing ever got things back to that “extraordinary time” when productivity went up and up without missing a beat. Mr. Levinson is, moreover, not alone in this argument. The respected economic historian Robert Gordon, at the end of “The Rise and Fall of American Growth” (2016), suggested as much, even more emphatically.
Mr. Levinson ends his book abruptly, rather too abruptly for this reviewer’s taste. How nice it would have been to have this very smart writer tease out more implications from his story. Instead he closes with a 1981 reflection by the great Nobel laureate economist Paul Samuelson: “The third quarter of the Twentieth Century was a golden age of economic progress. It surpassed any reasoned expectations. And we are not likely to see its equivalent soon again.” But is there not more to say? What we have here is a giant economic claim, that modernizing societies (America included) have but one special period of great productivity growth and then return to normal. Either it’s true, and we have troubled times ahead, or it’s not yet proved. But what could bring us a second sustained boom?
Mr. Kennedy is a professor of history and director of International Security Studies at Yale. His many works include “The Rise and Fall of the Great Powers: Economic Change and Military Conflict From 1500 to 2000.”
John Brown, a Princeton PhD, was a U.S. diplomat for over 20 years, mostly in Central/Eastern Europe, and was promoted to the Senior Foreign Service in 1997. After leaving the State Department in 2003 to express strong reservations about the planned U.S. invasion of Iraq, he shared ideas with Georgetown University students on the tension between propaganda and public diplomacy. He has given talks on "E Pluribus Unum? What Keeps the United States United" to participants in the "Open World" program. Among Brown’s many articles is his latest piece, “Janus-Faced Public Diplomacy: Creel and Lippmann During the Great War,” now online. He is the compiler (with S. Grant) of The Russian Empire and the USSR: A Guide to Manuscripts and Archival Materials in the United States (also online). In the past century, he served as an editor/translator of a joint U.S.-Soviet publication of archival materials, The United States and Russia: The Beginning of Relations, 1765-1815. His approach to "scholarly" aspirations is poetically summarized by Goethe: "Gray, my friend, is every theory, but green is the tree of life."