
Why ‘We’ Lovehate ‘You’

Paul Smith is professor of cultural studies at George Mason University and chair in media studies at the University of Sussex, and author most recently of Millennial Dreams (Verso).

“The reaction to the events of 11 September–terrible as they were–seems excessive to outsiders, and we have to say this to our American friends, although they have become touchy and ready to break off relations with accusations of hard-heartedness.”

‘We’ and ‘you’

Doris Lessing’s rueful but carefully aimed words, published in a post-9/11 issue of Granta magazine where a constellation of writers had been asked to address “What We Think of America,” doubtless have done little to inhibit the progress of American excess in the time since the terrorist attacks. The voices of even the most considerable of foreign intellects were hardly alone in being rendered inaudible by the solipsistic noise that immediately took over the American public sphere after 9/11. All kinds of voices and words, from within America and without, immediately lost standing and forfeited the chance to be heard, became marginalised or simply silenced, in deference to the media-led straitening of the possible range of things that could be said. And even after the initial shock of 9/11 had receded, it seems that one’s standing to speak depended largely upon the proximity of one’s sentiments to the bellicose sound-bites of the American president as his administration set sail for retaliatory and pre-emptive violence and promoted a Manichean worldview where one could only be either uncomplicatedly for or uncomplicatedly against America, even as it conducted illegal, immoral, and opportunistic war.

The peculiar American reaction to 9/11 was always latent in the discursive and cultural habits of this society where, as Lessing pointedly insists, “everything is taken to extremes.” Such extremism is perhaps not often enough considered, she suggests, when ‘we’ try to understand or account for the culture (Lessing, p. 54). I’m not sure that it’s the case that that extremism has exactly gone unnoticed; it is, after all, the motor and at the same time the effect of the sheer quotidian brutality of American social relations. But the sudden shock to the American system delivered by the terrorists certainly facilitated a certain kind of extremism, a certain kind of extreme Americanism.

That extremist Americanism is foundational to this culture. America is, as Jean Baudrillard has said, the only remaining primitive society…a utopia that is in the process of “outstripping its own moral, social and ecological rationale” (1988, p. 7). And this is, moreover, a primitivism awash with its own peculiar fundamentalisms–not quite the fundamentalisms that America attacks elsewhere in a kind of narcissistic rage, but fundamentalisms that are every bit as obstinate. This is, after all, a society where public discourse regularly pays obeisance to ancient texts and their authors, to the playbook of personal and collective therapy, to elemental codes of moral equivalency, and so on. And this is to leave aside the various Christian and populist fundamentalisms that are perhaps less respectable but nonetheless have deep influence on the public sphere. But in its perhaps most respectable fundamentalism–always the most important one, but now more than ever in this age of globalisation–the society battens on its own deep devotion to a capitalist fundamentalism. Thus it is a primitive society in a political-economic sense too: a society completely devoted to the upkeep of its means of consumption and means of production, and thus deeply dependent upon the class effects of that system and ideologically dependent upon ancient authorities, which remain tutelary and furnish the ethical life of the culture.

It is to these kinds of fundamentalism that America appealed after 9/11, by way of phrases such as ‘our values,’ ‘who we are,’ ‘the American way of life,’ and so on; or when Mayor Giuliani and others explicitly promoted consumption as a way of showing support for America. None of that was perhaps terribly surprising, however disturbingly crass it might have been, and it was clear how much it was necessary for the production of the forthcoming war economy in the USA. But the construction of such extremist platitudes (endlessly mediatised, to be sure) was surprisingly successful in effecting the elision of other kinds of speech in this nation where the idea of freedom of speech is otherwise canonised as a basic reflex ideology.

But (as de Tocqueville was always fond of repeating) this is also a nation where dissidents quickly become pariahs, strangers. The voices, the kinds and forms of speech that were silenced or elided in the aftermath of 9/11 are, of course, the dialectical underbelly to the consolidation of a fundamentalist sense of America, and to the production of an excessive cultural ideology of shared values. They go some way to constituting, for the sake of what I have to say here, a ‘we’–strangers both within the land and beyond it. This is not, of course, a consistent ‘we,’ readily located either beyond or within the borders of the USA, that could be called upon to love or hate or to lovehate some cohesive ‘you’ that until recently sat safely ensconced inside those same borders. It goes without saying that nobody within or without those boundaries can be called upon individually to conform seamlessly, or closely, or for very long, to a discourse of putative national identity. So in the end there is no living ‘you’ or ‘we’ here, but only a vast range of disparate and multifarious individuals, living in history and in their own histories, imperfectly coincident with the discursive structure of “America.”

And yet imaginary relations are powerful. The ‘you’ whose sense of belonging to, or owning, that fundamentalist discourse has for the time being asserted or constructed itself qua America; but it is of course unclear who ‘you’ really are. It has never been clear to what extent a ‘you’ could be constructed on the ground by way of ideological and mediatised pressure. It’s certainly unclear how much the mainstream surveys could tell us, conducted as they are through the familiar corporate, university, and media channels. And it would be grossly simplistic to try to ‘read’ the nation’s ideology through its mediatised messages and simply deduce that people believe (in) them.1 So the question of “who are ‘you’?” remains opaque in some way. At the same time, there is a discursive space where the everyday people that American subjects are coincides with the ‘you’ that is now being promulgated as fundamental America.

By the same token, there is also some kind of ‘we’ that derives from the fact that the identities and the everyday lives of so many outside the USA are bound up with the USA, with what the USA does and says, and with what it stands for and fights for. The ways in which ‘our’ identities are thus bound up is different for some than for others, obviously, and ‘we’ are all in any case different from one another. I share nothing significant, I think, with the perpetrators of the attacks on the Trade Towers or on the tourists in Bali. Some of us find ourselves actually inside the boundaries of the USA. That’s where I speak from right now, a British subject but one whose adult life has been shaped by being an alien inside America and thus to some large extent shaped by ‘you.’ And there are many in similar positions, some killed in the WTC attacks, others Muslims, others illegals, and so on. And there are of course, also the internal ‘dissenters’–those who speak and find ways to be heard outside the channels that promote the construction of a ‘you.’ All of ‘us,’ then, inside and outside the borders of the US, are not ‘you’–a fact that ‘you’ make clear enough on a daily basis.

The ‘we’ is in fact a construct of the very ‘you’ I have just been talking about. This ‘we’ is generated through the power of the long, blank gaze emanating from the American republic that dispassionately, without empathy and certainly without love, refuses to recognise most of the features of the world laid out at its feet; a gaze that can acknowledge only that part of the world which is compliant and willing to act as a reservoir of narcissistic supply to the colossus.

Appropriately (in light of the events of 9/11, certainly, and probably before that) it is to the World Trade Center that Michel de Certeau pointed when he wanted to describe the ideological imposition that such a gaze exerts over the inhabitants of a space. In his famous essay, “Walking in the City” (1984), he begins his disquisition from the 110th floor of the World Trade Center, meditating on the ichnographic gaze that the tower (then) enabled, looking down over a city that becomes for him a “texturology” of extremes, “a gigantic rhetoric of excess in both expenditure and production” (p. 91). That gaze is for him essentially the exercise of a systematic power, or a structure in other words. Its subjects are the masses in the streets, all jerry-building their own relation to that structure as they bustle and move around the spaces of this excessive city.

De Certeau doesn’t say so, but one could suspect that he reads the tower and the view it provides by reference to the mystical eye sitting atop the pyramid on the US dollar bill–another trope in American fundamentalist discourse, the god who oversees ‘your’ beginnings. But at any rate, it’s hard not to be struck in his account by the way the relationship between the ichnographic and systematic gaze and the people below replicates a much more Hegelian dialectic: the master-slave dialectic. De Certeau’s sense of power relations never quite manages to rid itself of that Hegelian, or even Marxist, sense that the grids of power here are structural rather than untidily organic in some more Foucauldian sense. The gaze he interprets, then, is in that sense the colossal gaze of the master, surveying the slaves. It is the gaze of a ‘you’ for whom the real people, foraging below and finding their peculiar ways of living within the ichnographic grids that are established for them, can be seen only as subjects and judged only according to their conformity. And when the structure feels itself threatened by the agitation and even independence of its subjects below (as, in De Certeau’s analysis, the city structure begins to decay and its hold on the city dwellers is mitigated), it tries to gather them in again by way of narratives of catastrophe and panic (p. 96). One boon of the 9/11 attacks for the colossus was of course the opportunity to legitimise such narratives.

I cite De Certeau’s dense essay in part because it has been strangely absent from the many efforts of sociological and cultural studies to ‘re-imagine’ New York after 9/11; one might have imagined a text as important as this one to have something to teach about the intersections of power and control in a modern city. But I cite it more for the reminder it offers–beginning from the same place, as it were, as the terrorist attacks themselves–of the way that the spatial structure of the city “serves as a totalizing and almost mythical landmark for socioeconomic and political strategies.” Part of the lesson of this conceit is the knowledge that in the end the city is “impossible to administer” because of the “contradictory movements that counterbalance and combine themselves outside the reach of panoptic power” (p. 95). De Certeau’s New York City and its power grid act as a reasonable metaphor for the way in which ‘our’ identities are variously but considerably construed in relation to ‘you.’ ‘Your’ identity is the master’s identity in which ‘we’ dialectically and necessarily find ‘our’ own image, ‘our’ reflection, and ‘our’ identity. The master’s identity is inflected to the solipsism of self-involvement and entitlement while emanating a haughty indifference to ‘us.’

The situation is familiar, then. In the places, histories, and structures that ‘we’ know about, but of which ‘you’ always contrive to be ignorant, it is a situation that is historically marked by the production of antagonism and ressentiment. What the master cannot see in the slave’s identity and practice is that ressentiment derives not from envy or covetousness but from a sense of injustice, a sense of being ignored, marginalised, disenfranchised, and un-differentiated. That sort of sense of injustice can only be thickened in relation to an America whose extremist view of itself depends upon the very discourse of equality and democracy that the slave necessarily aspires to. Ressentiment is in that sense the ever-growing sense of horror that the master cannot live up to the very ideals he preaches to ‘us.’

It is a kind of ressentiment that Baudrillard, in his idiosyncratic (but nonetheless correct) way, installs at the heart of his short and profound analysis of the events of 9/11. Whatever else can be located in the way of motivation for the attacks, he suggests, they represented an uncomplicated form of ressentiment whose “acting-out is never very far away, the impulse to reject any system growing all the stronger as it approaches perfection or omnipotence” (2002, p. 7). Moreover, Baudrillard is equally clear about the problem with the ‘system’ that was being attacked: “It was the system itself which created the objective conditions for this retaliation. By seizing all the cards for itself, it forced the Other to change the rules” (p. 9). In a more prosaic manner, Noam Chomsky notes something similar in relation to the 9/11 attacks when he says that the attacks marked a form of conflict qualitatively different from what America had seen before, not so much because of the scale of the slaughter, but more simply because America itself was the target: “For the first time the guns have been directed the other way” (2001, pp. 11-12). Even in the craven American media there was a glimmer of understanding about what was happening; the word ‘blowback’ that floated around for a while could be understood as a euphemism for this new stage in a master/slave narrative.

As the climate in America since 9/11 has shown very clearly, such thoughts are considered unhelpful for the construction of a ‘you’ that would support a state of perpetual war, and noxious to the narratives of catastrophe and panic that have been put into play to round up the faithful. The notion, in any case, that ressentiment is not simply reaction, but rather a necessary component of the master’s identity and history, would always be hard to sell to a ‘you’ that narcissistically cleaves to “the impossible desire to be both omnipotent and blameless” (Rajagopal, p. 175). This is a nation, after all, that has been chronically hesitant to face up to ressentiment in its own history, and mostly able to ignore and elide the central antagonisms of class. This is and has been a self-avowed ‘classless’ society, unable therefore to acknowledge its own fundamental structure, its own fundamental(ist) economic process (except as a process whereby some of its subjects fail to emulate the ability of some of the others to take proper advantage of level playing fields and equality of opportunity). For many of ‘us’ it has been hard to comprehend how most Americans manage to remain ignorant about class and ignorant indeed of their own relationship to capital’s circuits of production and consumption. At least it’s hard to understand how such ignorance can survive the empirical realities of America today. The difficulty was by no means eased when it became known that families of 9/11 victims would be paid compensation according to their relatives’ value as labour, and this somehow seemed unexceptionable to ‘you.’ The blindness of the colossal gaze as it looks on America itself is replicated in the gaze outward as it looks on ‘us.’ This is a nation largely unseeing, then, and closed off to the very conditions of its own existence–a nation blindly staring past history itself.

“Events are the real dialectics of history,” Gramsci says, “decisive moments in the painful and bloody development of mankind” (p. 15), and 9/11, the only digitised date in world history, can be considered an event that could even yet be decisive. It would be tempting, of course, to say that once the ‘end of history’ had supposedly abolished all Hegelian dialectics–wherein ‘our’ identities would be bound up with ‘yours’ in an optical chiasmus of history–it was inevitable that history itself should somehow return to haunt such ignorance of historical conditions. Yet, from 9/11 and through the occupation of Iraq, America appears determined to remain ex-historical and seems still unable to recognise itself in the face of the Other–and that has always and will again make magisterial killing all the more easy.

Freedom, equality, democracy

If this dialectic of the ‘you’ and the ‘we’ can claim to represent anything about America’s outward constitution, it would necessarily find some dialectical counterpart in the inward constitution of this state. At the core of the fundamental notions of ‘the American way of life’ that ‘you’ rallied around after 9/11 and that allow ‘you’ to kill Iraqis in order to liberate them, there reside the freighted notions of freedom, equality and democracy that, more than a century and a half ago, de Tocqueville deployed as the central motifs of Democracy in America. De Tocqueville’s central project is hardly akin to my project here, but it wouldn’t be far-fetched to say that his work does in fact wage a particular kind of dialectical campaign. That is, Democracy in America plots the interaction of the terms freedom and equality in the context of the new American republic that he thought should be a model for Europe’s emerging democracies. His analysis of how freedom, equality, and democratic institutions interact and, indeed, interfere with one another still remains a touchstone for understanding the peculiar blindnesses that characterise America today. One of its main but largely under-appreciated advantages is that it makes clear that freedom, equality and democracy are by no means equivalent to each other–and one might even say, they are not even preconditions for one another, however much they have become synonyms in ‘your’ vernacular. While de Tocqueville openly admires the way in which America instantiates those concepts, he is endlessly fascinated by exactly the untidiness and uncertainty of their interplay. That interplay entails the brute realities of everyday life in the culture that is marked for him by a unique dialectic of civility and barbarity. In the final analysis de Tocqueville remains deeply ambivalent about the state of that dialectic in America, and thus remains unsure about the nature and future of the civil life of America.

Unsurprisingly, his ambivalence basically devolves into the chronic political problem of the relationship of the individual to the state. One of the effects of freedom and equality, he suggests, is the increasing ambit of state functions and an increasing willingness on the part of subjects to allow that widening of influence. This effect is severe enough to provoke de Tocqueville to rather extreme accounts of it. For example, his explanation of why ordinary citizens seem so fond of building numerous odd monuments to insignificant characters is that this is their response to the feeling that “individuals are very weak; but the state…is very strong” (p. 443). His anxiety about the strength of such feelings is apparent when he discusses the tendency of Americans to elect what he calls “tutelary” government: “They feel the need to be led and the wish to remain free” and they “leave their dependence [on the state] for a moment to indicate their master, and then reenter it” (p. 664).

This tendency derives, he says, from “equality of condition” in social life and it can lead to a dangerous concentration of political power–the only kind of despotism that young America had to fear. It would probably not be too scandalous to suggest that de Tocqueville’s fears had to a great degree been realised by the end of the 20th century. And the current climate, where the “tutelary” government threatens freedom in all kinds of ways in the name of a war that it says is not arguable, could only be chilling to de Tocqueville’s sense of the virtues of democracy. The (re)consolidation of this kind of tutelary power is figured for me in the colossal gaze that I’ve talked about, a gaze that construes a ‘you’ by way of narratives of catastrophe and panic while extending the power of its gaze across the globe by whatever means necessary.

But at the centre of this dialectic of freedom and equality, almost as their motor, de Tocqueville installs the idea that American subjects are finally “confined entirely within the solitude of their own heart,” that they are “apt to imagine that their whole destiny is in their own hands,” and that “the practice of Americans leads their minds to fixing the standards of judgement in themselves alone” (pp. 240-241). It’s true that for de Tocqueville this kind of inflection is not unmitigatedly bad: it is, after all, a condition of freedom itself. But nonetheless the question remains open for him: whether or not the quotidian and self-absorbed interest of the individual could ever be the operating principle for a successful nation. He is essentially asking whether the contractual and civil benefits of freedom can in the end outweigh the solipsistic and individualistic effects of equality. Or, to put the issue differently, he is asking about the consequences of allowing a certain kind of narcissism to outweigh any sense of the larger historical processes of the commonwealth–a foundational question, if ever there was one, in the history of the nation.2

Jean Baudrillard’s America, a kind of ‘updating’ of de Tocqueville at the end of the 20th century, is instructive for the way that it assumes that de Tocqueville’s questions are still alive (or at least, it assumes that Americans themselves have changed very little in almost two hundred years [p. 90]). Baudrillard is in agreement with de Tocqueville that the interplay of freedom and equality, and their relation to democratic institutions, is what lies at the heart of America’s uniqueness. He’s equally clear, however, that the 20th century has seen, not the maintenance of freedom (elsewhere he is critical of the way that tutelary power has led to regulation and not freedom [2002]), but the expansion of the cult of equality. What has happened since de Tocqueville is the “irrepressible development of equality, banality, and in-difference” (p. 89). In the dialectic of freedom and equality, such a cult necessarily diminishes the extent of freedom, and this is clearly a current that the present US regime is content to steer. But Baudrillard, like de Tocqueville before him, remains essentially enthralled by the “overall dynamism” in that process, despite its evident downside; it is, he says, “so exciting” (p. 89). And he identifies the drive to equality rather than freedom as the source of the peculiar energy of America. In a sense, he might well be right: certainly it is this “dynamism” that ‘we’ love, even as ‘we’ might resist and resent the master’s gaze upon which it battens.

Love and contradiction

The “dynamism” of American culture has been sold to ‘us’ as much as to ‘you’–perhaps even more determinedly in some ways. Brand America has been successfully advertised all around the world, in ways and places and to an extent that most Americans are probably largely unaware of. While Americans would probably have some consciousness of the reach of the corporate media, or of Hollywood, and necessarily some idea of the reach of other brands such as McDonald’s, most could not have much understanding of how the very idea of America has been sold and bought abroad. For many of ‘us,’ of course, it is the media and Hollywood that have provided the paradigmatic images and imaginaries of this dynamic America. It is in fact remarkable how many of the writers in the issue of Granta in which Doris Lessing appears mention something about the way those images took hold for them, in a process of induction that ‘we’ can be sure most Americans do not experience reciprocally.

The dynamism of that imaginary America is a multi-faceted thing, imbuing the totality of social relations and cultural and political practices. It begins, maybe, with a conveyed sense of the utter modernity of American life and praxis, a modernity aided and abetted by the vast array of technological means of both production and consumption. The unstinting determination of the culture to be mobile, to be constantly in communicative circuits and to be open day and night, along with the relative ease and efficiency of everyday life and the freedom and continuousness of movement, all combine to produce a sense of a culture that is endemically alive and happening. This is ‘our’ sense of an urban America, at least, with its endless array of choices and the promised excitement and eroticism of opportunity. The lure of that kind of urbanity was always inspissated by the ‘melting pot’ image of the USA, and is further emphasised in these days of multiculturalism and multi-ethnicity. Even beyond the urban centres, of which there are so many, this dynamic life can be taken for granted, and the realm of the consumer and the obsessive cheapness of that realm reflect the concomitant sense of a nation fully endowed with resources–material and human–and with a standard of living enjoyed by most people but achieved by very few outside the USA–even these days, and even in the other post-industrial democracies. ‘We’ can also see this vitality of the everyday life readily reflected in the institutional structures of the USA: for instance, other ways in which we are sold America include the arts, the sciences, sports, or the educational system, and ‘we’ derive from each of those realms the same sense of a nation on the move. As ‘our’ American friends might say, what’s not to like?

Beyond the realms of culture and everyday life, ‘we’ are also sold the idea of America as a progressive and open political system the like of which the world has never seen before. The notions that concern de Tocqueville so much are part of this, of course: freedom, equality, and democratic institutions are the backbone of ‘our’ political imaginary about the USA. In addition, ‘we’ are to understand America as the home of free speech, freedom of the press and media, and all the other crucial rights that are enshrined in the Constitution and the Bill of Rights. Most importantly, ‘we’ understand those rights to be a matter for perpetual discussion, fine-tuning, and elaboration in the context of an open framework of governance, legislation, and enforcement. Even though those processes are immensely complex, ‘we’ assume their openness and their efficacy. Even the American way of doing bureaucracy seems to ‘us’ relatively smooth, efficient and courteous as it does its best to emulate the customer-seeking practices of the service industries. And all this operates in the service, less of freedom and more, as I’ve suggested, in the service of “equality of condition”–and ultimately in the service of a meritocratic way of life that even other democratic nations can’t emulate. And on a more abstract level, I was struck recently by the words of the outgoing Irish ambassador to the US, Sean O’Huiginn, who spoke of what he admired in the American character: the “real steel behind the veneer of a casual liberal society…the strength and dignity [and] good heartedness of the people” and the fact that America had “brought real respect to the rule of law.”3

These features, and I’m sure many others, are what go to constitute the incredibly complex woof and weave of ‘our’ imaginaries of the United States. The reality of each and any of them, and necessarily of the totality, is evidently more problematic. The words of another departing visitor are telling: “The religiosity, the prohibitionist instincts, the strange sense of social order you get in a country that has successfully outlawed jaywalking, the gluttony, the workaholism, the bureaucratic inflexibility, the paranoia and the national weakness for ill-informed solipsism have all seemed very foreign.”4 But still those imaginaries are nonetheless part of ‘our’ relation to America–sufficiently so that in the 9/11 aftermath the question so often asked by Americans, “Why do they hate us?”, seemed to me to miss the point quite badly. That is, insofar as the ‘they’ to whom the question refers is a construct similar to the ‘we’ I’ve been talking about, ‘we’ don’t hate you, but rather lovehate you.

Nor is it a matter, as so much American public discourse insists, of ‘our’ envying or being jealous of America. Indeed, it is another disturbing symptom of the narcissistic colossus to constantly imagine that everyone else is jealous or envious. Rather, ‘we’ are caught in the very contradictions in which the master is caught. For every one of the features that constitute our imaginary of dynamic America, we find its underbelly; or we find the other side of a dialectic–the attenuation of freedom in the indifferentiation of equality, or the great barbarity at the heart of a prized civility, for instance. Equally, accompanying all of the achievements installed in this great imaginary of America, there is a negative side. For instance, while on the one hand there is the dynamic proliferation of technologies of communication and mobility, there is on the other hand the militarism that gave birth to much of the technology, and an imperious thirst for the oil and energy that drive it. And within the movement of that dialectic–one, it should be said, whose pre-eminence in the functioning of America has been confirmed once more since 9/11–lies the characteristic forgetting and ignorance that subvents the imaginary. That is, such technologies come to be seen only as naturalised products of an ex-historical process, and their rootedness in the processes of capital’s exploitation of labour is more or less simply elided. And to go further, for all the communicative ease and freedom of movement there is the extraordinary ecological damage caused by the travel system. And yet this cost is also largely ignored–by government and people alike–even while the tension between capital accumulation and ecology comes to seem more and more the central contradiction of American capitalism today.5

One could easily go on: the point is that from every part of the dynamic imaginary of America an easy contradiction flows. Despite, for example, the supposed respect for the rule of law, American citizens experience every day what Baudrillard rightly calls “autistic and reactionary violence” (1988, p. 45); and the ideology of the rule of law does not prevent the US being opposed to the World Court, regularly breaking treaties, picking and choosing which UN resolutions need to be enforced, and illegally invading and occupying another sovereign nation. The imaginary of America, then, that ‘we’ are sold–and which I’m sure ‘you’ believe–is caught up in these kinds of contradictions–contradictions that both enable it and produce its progressive realities. These contradictions in the end constitute the very conditions of this capitalism that is fundamentalist in its practice and ideologies.

So, ‘our’ love for America, either for its symbols and concepts or for its realities, cannot amount to some sort of corrosive jealousy or envy. It is considerably more complex and overdetermined than that. It is, to be sure, partly a coerced love, as we stand structurally positioned to feed the narcissism of the master. And it is in part a genuine admiration for what I’m calling for shorthand the “dynamism” of America. But it is a love and admiration shot through with ressentiment, and in that sense it is ‘about’ American economic, political, and military power and the blind regard that those things treat ‘us’ to. It is the coincidence of the contradictions within America’s extremist capitalism, the non-seeing gaze of the master, and ‘our’ identification with and ressentiment towards America that I’m trying to get at here. Where those things meet and interfere is the locus of ‘our’ ambivalence towards ‘you,’ to be sure, but also the locus of ‘your’ own confusion and ignorance about ‘us.’ But the ‘yea or nay,’ positivist mode of American culture will not often countenance the representation of these complexities; they just become added to the pile of things that cannot be said, especially in times of catastrophe and panic.

What is not allowed to be said

It’s easy enough to list the kinds of things that could not be said or mentioned after 9/11, or enumerate the sorts of speech that were disallowed, submerged, or simply ignored as the narratives of panic and catastrophe set in to re-order ‘you’ and begin the by now lengthy process of attenuating freedom.

What was not allowed to be said or mentioned: President Bush’s disappearance or absence on the morning of the attacks; contradictions in the incoming news reports about not only the terrorist aeroplanes but also any putative defensive ones (it’s still possible to be called a conspiracy theorist for wondering about the deployment of US warplanes that day, as Gore Vidal discovered when he published such questions in a British newspaper);6 the idea that the attacks would never have happened if Bush had not become president; and so on. Questions like those will not, one assumes, be addressed by the governmental inquiry into 9/11, especially many months later, when the complexities of 9/11 have been obliterated by the first stages of the perpetual war that Bush promised. In addition, all kinds of assaults were made on people who had dared say something “off-message”: comedians lost their jobs for saying that the terrorists were not cowards, as Bush had said they were, if they were willing to give up their lives; college presidents and reputable academics were charged with being the weak link in America’s response to the attacks; and many other incidents of the sort occurred, including physical attacks on Muslims simply for being Muslim. And in the many months since the attacks, many questions and issues have been passed over in silence by the media and therefore have not come to figure in the construction of a free dialogue about ‘your’ response to the event.

Many of ‘us’ were simply silenced by the solipsistic “grief” (one might like to have that word reserved for more private and intimate relationships) and the extreme shock of Americans around us. David Harvey talks about how impossible it was to raise a critical voice about the role bond traders and their ilk in the towers might have had in the creation and perpetuation of global social inequality (p. 59). Noam Chomsky was rounded upon by all and sundry for suggesting, in the way of Malcolm X, that the chickens had come home to roost. The last thing that could be suggested was the idea that, to put it bluntly, these attacks were not unprovoked; anybody who thought there could be a logic to them beyond their simple evilness was subjected to the treatment Lessing describes at the head of this piece. The bafflement that so many of ‘you’ expressed at the idea that someone could do this deed, and further that not all of ‘us’ were necessarily so shocked by it, was more than just the emotional reaction of the moment.

This was an entirely predictable inflection of a familiar American extremism, soon hardening into a defiant–and often reactionary–refusal to consider any response other than the ones ‘you’ were being offered by political and civic leaders and the media. Empirical and material, political and economic realities were left aside, ignored, not even argued against, but simply considered irrelevant and even insulting to the needs of a “grief” that suddenly became national–or rather, that suddenly found a cohesive ‘you.’ And that “grief” turned quickly into a kind of sentimentality or what Wallace Stevens might have called a failure of feeling. But much more, it was a failure, in the end, of historical intelligence. A seamless belief that America can do no wrong and a hallowed and defiant ignorance about history constitute no kind of response to an essentially political event. Even when the worst kinds of tragedy strike, an inability to take any kind of responsibility or feel any kind of guilt is no more than a form of narcissistic extremism in and of itself.7


On 9/11 there was initially some media talk about how the twin towers might have been chosen for destruction because of their function as symbols of American capitalist power in the age of globalisation. David Harvey suggests that in fact it was only in the non-American media that such an understanding was made available, and that the American media talked instead about the towers simply as symbols of American values, freedom, or the American way of life (p. 57). My memory, though, is that the primary American media, in the first blush of horrified reaction, did indeed talk about the towers as symbols of economic might, and about the Pentagon as a symbol of military power. But like many other things that could not be said, or could no longer be said at that horrible time, these notions were quickly elided. Strangely, the Pentagon attack soon became so un-symbolic as to be almost ignored. The twin towers in New York then became the centre of attention, perhaps because they were easier to parlay into symbols of generalised American values than the dark Pentagon, and because the miserable deaths of all those civilians were more easily identifiable than those of the smaller number of military workers in Washington.

This was a remarkable instance of the way an official line can silently, almost magically, gel in the media. But more importantly, it is exemplary of the kind of ideological movement that I’ve been trying to talk about in this essay: a movement of obfuscation, essentially, whereby even the simplest structural and economic realities of America’s condition are displaced from discourse. As Harvey suggests, the attacks could hardly be mistaken for anything but a direct assault on the circulatory heart of financial capital: “Capital, Marx never tired of emphasizing, is a process of circulation…. Cut the circulation process for even a day or two, and severe damage is done…. What bin Laden’s strike did so brilliantly was [to hit] hard at the symbolic center of the system and expose its vulnerability” (pp. 64-5).

The twin towers were a remarkable and egregious architectural entity, perfectly capable of bearing all kinds of allegorical reading. But there surely can be no doubt that they were indeed a crucial “symbolic center” of the processes through which global capitalism exercises itself. Such a reading of their symbolism is more telling than Wallerstein’s metaphorical understanding that “they signalled technological achievement; they signalled a beacon to the world” (2001). And it is perhaps also more telling than (though closer to) Baudrillard’s understanding of them: “Allergy to any definitive order, to any definitive power is–happily–universal, and the two towers of the World Trade Center were perfect embodiments, in their very twinness, of that definitive order” (2002, p.6). It is certainly an understanding that not only trumps, but exposes the very structure of the narcissistic reading of them as symbols of ‘your’ values and ‘your’ freedom.

That narcissism was, however, already there to be read in these twin towers that stared blankly at each other, catching their own reflections in an endless relay. They were, that is, not only the vulnerable and uneasy nerve-centres of the process of capital circulation and accumulation; they were also massive hubristic tributes to the self-reflecting narcissism they served. Perhaps it was something about their arrogant yet blank, unsympathetic yet entitled solipsism that suggested them as targets. The attacks at the very least suggested that someone out there was fully aware of the way that the narcissist’s identity and the identity of those the narcissist overlooks are historically bound together. It’s harder to discern whether those people would have known, too, that the narcissist is not easy to cure, however often targeted; or whether they predicted or could have predicted, and perhaps even desired, the normative retaliatory rage that their assault would provoke.

What ‘we’ know, however, is that ‘we’ cannot forever be the sufficient suppliers of the love that the narcissist finds so necessary. Indeed, ‘we’ know that it is part of the narcissistic disorder to believe that ‘we’ should be able to. So long as the disorder is rampant ‘we’ are, in fact, under an ethical obligation not to be such a supplier. In that sense (and contrary to all the post-9/11 squealing about how ‘we’ should not be anti-American), ‘we’ are obliged to remind the narcissist of the need to develop “the moral realism that makes it possible for [you] to come to terms with existential constraints on [your] power and freedom” (Lasch, p. 249).

But Christopher Lasch’s final words in a retrospective look at his famous work, The Culture of Narcissism, are not really quite enough. This would be to leave the matter at the ethical level, hoping for some kind of moral conversion–and this is not an auspicious hope when the narcissistic master is concerned. At the current moment when we all–‘we’ and ‘you’–have seen the first retaliation of the colossus and face the prospect of extraordinary violence on a world scale, too much discussion and commentary (both from the right and the left) remains at the moral or ethical levels. This catastrophic event and the perpetual war that has followed it have obviously, in that sense, produced an obfuscation of the political and economic history that surrounds them and of which they are part. Such obfuscation serves only the master and does nothing to satisfy the legitimate ressentiment of a world laid out at the master’s feet. At the very least, in the current conjuncture, ‘we all’ need to understand that the fundamentalisms and extremisms that the master promulgates, and to which ‘you’ are in thrall, are not simply moral or ethical, or even in any sense discretely political; they are just as much economic, and it is that aspect of them that is covered over by the narcissistic symptoms of a nation that speaks through and as ‘you.’


Baudrillard, J. (1988), America (Verso).

Baudrillard, J. (2002), The Spirit of Terrorism (Verso).

Chomsky, N. (2001), 9-11 (Seven Stories Press).

De Certeau, M. (1984), The Practice of Everyday Life (U. California).

De Tocqueville, A. (2000), Democracy in America (U. Chicago).

Gramsci, A. (1990), Selections from Political Writings, 1921-1926 (U. Minnesota).

Harvey, D. (2002), “Cracks in the Edifice of the Empire State,” in Sorkin and Zukin, eds., After the World Trade Center (Routledge), 57-68.

Lasch, C. (1991), The Culture of Narcissism (Norton).

Lessing, D. (2002), Untitled article, Granta 77 (spring 2002), 53-4.

Rajagopal, A. (2002), “Living in a State of Emergency,” Television and New Media, 3:2, 173 ff.

Wallerstein, I. (2001), “America and the World: The Twin Towers as Metaphor.”


  1. This is the error of otherwise worthy work like Sardar, Z. & M.W. Davies (2002), Why Do People Hate America? (Icon Books).
  2. A classic, but largely ignored, statement of American history in these terms is William Appleman Williams (1961), The Contours of American History (World Publishing Company).
  3. “Departing Irishman Mulls ‘Glory of America’,” Washington Post 12 July 2002.
  4. Matthew Engel, “Travels with a trampoline,” The Guardian 3 June, 2003.
  5. See Ellen Wood (2002), “Contradictions: Only in Capitalism?” in Socialist Register 2002 (Monthly Review Press).
  6. Gore Vidal, “The Enemy Within,” The Observer 27 October 2002.
  7. A longer version of this article–forthcoming in Ventura, P. (ed.), Circulations: ‘America’ and Globalization and planned to be part of my forthcoming Primitive America (U. Minnesota)–elaborates on the concept of narcissism that I have been deploying here. I distinguish my use from that of Christopher Lasch in The Culture of Narcissism in order to be able to describe a narcissistic (and primitive) structuration of America, rather than imputing narcissistic disorders to individuals or, for that matter, to classes.

Expert Economic Advice and the Subalternity of Knowledge: Reflections on the Recent Argentine Crisis

Ricardo D. Salvatore is professor of modern history at Universidad Torcuato di Tella in Buenos Aires. He is author of Wandering Paysanos: State Order and Subaltern Experience in Buenos Aires During the Rosas Era (1820-1860) and coeditor of Crime and Punishment in Latin America: Law and Society since Late Colonial Times and Close Encounters of Empire: Writing the Cultural History of U.S.-Latin American Relations, all published by Duke University Press.

Act 1. Taking the Master’s Speech as Proper

President Duhalde and Minister Remes Lenicov went to Washington and Monterrey to speak with the key figures in the US Treasury, the IMF and the World Bank. They carried with them a message: that with hyperinflation impending it was necessary to check the devaluation of the peso; that, to strengthen the reserves of the central bank and avoid the collapse of financial institutions, IMF funding was needed; that, for the moment, provincial and federal deficits were difficult to control. After some preliminary talks, their arguments crashed against a wall of technical reason. They were rejected as outmoded views or erroneous reasoning. And the visitors heard a new and unexpected set of arguments: the Fund and the Treasury would accept no more false promises from Argentina. In the view of their experts, the Argentine representatives presented no credible and “sustainable plan” for macro-economic stability and growth. Instead of promises of financial assistance, they issued a warning: abandoning free-market reforms and altering contracts and compromises would lead Argentina onto an isolationist path that would do more damage to its people. Half-persuaded by these strong arguments, President Duhalde and Minister Remes Lenicov returned to Argentina and started to speak with the voice of the IMF and the US Treasury. Hence, they started to spread the new gospel: a free-floating exchange rate, inflation targeting, and macro-economic consistency. Back in Buenos Aires, President Duhalde explained to the TV audience: sometimes a father, in order to keep feeding his family, has to “bend his head” and accept, against his best judgement, the truth of the Other (to take its “bitter medicine”). This subaltern gesture, of course, contradicted his prior statements, because the prescribed policies fundamentally questioned the validity of the belief system that united the main partners of the governing alliance (Peronistas and Radicales).

After a long negotiation that seemed to go nowhere, Argentine functionaries found out that the rhetoric of the knowledgeable empire was harsh. Conservative voices were beginning to argue that neither the IMF nor the US should continue to pour money into a corrupt and unviable economy. Secretary of the Treasury Paul O’Neill spoke of Argentina as a land of waste where the savings of American “plumbers and carpenters” could easily be washed away by wrong policy decisions.1 The suspicion of the American worker was the basis of the duress of the new US policy towards Unwisely Overindebted Countries (UOCs). In this conservative viewpoint, popular common sense taught government to be wary of international bankers and their advisers, who would willingly waste other people’s money in order to protect their own interests.

Act 2. Two Ideologies In Conflict

A long political and ideological conflict brought about the fall of President De la Rua on December 20, 2001. With him fell the so-called modelo económico implemented first by Minister Domingo Cavallo during the Menem administration (free convertibility between the peso and the dollar, free international capital mobility, government without discretionary monetary policy, privatization of government enterprises, opening of the economy to foreign competition). The defeat of De la Rua-Cavallo was read as the end of the hegemony of a way of understanding economic policy and its effect on economic development (the so-called “Washington Consensus,” which in Argentina translated into a “convertibility consensus”). The politico-ideological terrain has long been divided between supporters of economic integration and free-market policies and supporters of regulation, protectionism, state-led development, and re-distributionist policies. De la Rua and Cavallo tried to defend the former model until the end, while their successors (Rodriguez Saa for a week, and Duhalde) promised policies that seemed to satisfy the expectations of the latter camp. Thus, the events of December 19-20 anticipated the re-emergence and potential hegemony of what, for simplicity, I shall call “nationalist-populist reason” at the expense of “neo-liberal reason” and the Washington Consensus.

Rodríguez Saa’s announcement of the Argentine default, his promise of one million new jobs, and his televised embrace with union leaders gave Argentines the impression that a great REVERSAL was under way. The same could be said of President Duhalde. His devaluation in early January, his promises to distribute massive unemployment compensation, his announcement of a new “productivist alliance” against the interests of speculators, foreign banks and privatized utilities, and the interventionist policies to “compensate” for the effects of the devaluation all created the impression that things would turn around. That, at last, the people had defeated the modelo económico and its rationale and that, consequently, it was time for a redistribution of economic gains and for strong government. The new productivist alliance–it seemed–would increase employment, re-industrialize the country, and hold in check the rapacity of foreign companies. If this was so, the winners of the past “neo-liberal age” would have to accept losses for the benefit of the poor, the unemployed, and the economic renaissance of the interior provinces. As it has turned out (so far), this public expectation has been disappointed by the sudden “conversion” of the politicians to the refurbished Washington Consensus.

Act 3. New Faces on TV

Momentarily at least, mainstream economic experts (most of them trained at top US universities) have disappeared from the TV screens. Although they continue to be invited to join discussion panels on news-and-commentary TV programs, many of those associated with the words “liberal,” “neo-liberal” or “ortodoxo” refuse to participate in these programs. Their space has been occupied by heterodox economists–some of them neo-Keynesian, others simply expressing the views of the unions or industries they represent, others speaking for opposition and leftist parties, still others carrying the credential of having participated in the cabinets of governments of the 1980s or before. Unlike the US-trained experts, these other economists studied in local universities and display a technical expertise and common wisdom that some might find insufficient or old-fashioned. Some of these economists are making their first appearances on TV programs, while others are re-appearing after decades of ostracism and neglect. Although they represent a diversity of perspectives, they agree that the modelo económico of the 1990s is over and that their positions (income redistribution, active or expansionary policies, more regulation, taxes on privatized utilities, price controls, and even the nationalization of foreign-owned banks and oil companies) need to be heard. Their greater popularity speaks of a displacement in public discourse towards positions closer to what I have called “nationalist-populist reason.” The economists now appearing on TV screens are putting into debate whether Argentina should listen to the advice of the IMF and the US Treasury; whether it is worth giving up so much (policy autonomy) for so little (fresh loans and tons of compelling advice).

This change in the type of economist now exposed to TV audiences speaks of the crisis of legitimacy of the “Washington Consensus” and its Argentine variant, the “convertibility consensus.” In part the retrenchment of orthodox or neo-liberal economists (which, I repeat, is only temporary) rests on solid grounds: the lack of reception in government circles for their advice, the evident failure of the modelo to generate sustained economic growth, and a bit of fear. Some economists (Roberto Aleman) have been assaulted by small groups of demonstrators, others (Eduardo Escasani) have been subjected to public escraches, and still others have been advised to remain uncritical. And, as we all know, Domingo Cavallo is now in prison, arrested on charges of contraband. This Harvard-trained economist may be unfairly paying for felonies that he did not commit, but in the eyes of many of his countrymen he is guilty of the increased unemployment, poverty, and inequality that resulted from the application of policies that were part and parcel of the Washington Consensus.

Act 4. The Washington Consensus Reconsidered

To understand the meaning of this crisis of legitimacy for the US-trained economist, we must look at the “policy community” in the US and at its changing consensus. Since the Asian crisis (summer of 1997), the Washington Consensus and its disseminating agencies (the IMF and the World Bank) have faced severe criticism.2 Leading economists such as S. Fischer, A. Krueger, J. Stiglitz, and R. Barro have begun to openly criticize IMF policies, calling for major reforms if not the Fund’s abolition. Economists and financial experts have argued that opening commodities and financial markets simultaneously was bound to generate explosive financial crises. That macroeconomic stability was not enough to guarantee the stable and sustained growth of “emerging markets.” That the IMF (with its enhanced credit facilities, high-premium loans, and its insistence on fiscal austerity) had pushed countries into debt traps that led to financial crises and severe depressions. Attacks from left and right have led experts to re-examine the role that the IMF must play in the new global economy. Key experts have argued that large IMF loans to unreliable countries only encourage irresponsible private lending. Others have suggested that, in the future, the IMF should play a much more limited role in the world economy, restricted to “crisis prevention” and “crisis management”; that is, to collecting data, giving warnings, and providing policy advice (Barro and Reed 2002).

Albeit not the only one, Joseph Stiglitz has been perhaps the most vociferous in this criticism (North 2000; Stiglitz 2001). Since the 1997-98 Asian crisis, he has been exposing IMF policies as “fire added to the fire” (as the main reason for economic disaster and social distress). Instead of stimulating economies with increased social expenditures and an expansion of credit, the IMF had consistently recommended budget cut-backs, monetary restriction, and further de-regulation. These policies have sunk economies that had a chance of rapid recovery into deep and prolonged recessions. His book Globalization and its Discontents (2002) has circulated widely among critics of globalization and international financial institutions. Translated into Spanish in the same year, it has been enthusiastically received in Argentina by supporters of industrial protection, Keynesian economic policies, and the reconstruction of a “national” and “popular” economy. Radio and TV programs have familiarized the Argentine public with Stiglitz’s criticism of the IMF, emphasizing his authority as a Nobel-prize winner. Readers, of course, take from a text what strengthens their own views. Thus, Stiglitz has been locally portrayed as a detractor of the IMF and as a defender of “active industrial policies”–others have gone further, presenting him as a crusader against free-market ideology–while, in actuality, his positions have been more conservative. (True, he has accused the IMF of “lack of intellectual coherence” and has called for major limitations on the role of the IMF; but his understanding of the world economy stops well short of the “nationalist-populist” camp.)

Over time, this criticism has been eroding the basis of the Washington Consensus. Though many US-trained experts still believe that macroeconomic stability and free-market reforms should not be abandoned, many now see that these two conditions are not sufficient. The consensus has shifted towards the terrain of institutions. Now scholars and policy makers agree that, in addition to free markets and macroeconomic stability, emerging economies need good government, reliable financial systems, and exemplary judiciaries. In particular, given the frequency of external shocks, countries need to have good bankruptcy laws and (some recommend) some degree of control over short-term speculative capital. Some ambiguities of the earlier consensus (between alternative exchange regimes) have been closed: now only flexible exchange regimes are considered acceptable. The experiences of Mexico, Asia and Brazil have given the Fund grounds for arguing that exchange rate devaluations can prove successful. And arguments about the inefficacy of IMF loans (“giving fish to the sharks,” as Stiglitz has put it, or, in the conservative variety, a “moral hazard” argument) have gained widespread support in the policy community.

Act 5. US Economists Give Opinion of the Argentine Crisis

Expert economic opinion in the US is divided regarding the current Argentine crisis. There are those who blame Argentine policy-makers and politicians for the economic and financial collapse. Among them are Professor Martin Feldstein at Harvard, Professor Gary Becker at Chicago, and Professor Charles Calomiris at Columbia. On the other side are those who fault the IMF for the Argentine crisis: Professors Paul Krugman, Mark Weisbrot, and Arthur MacEwan, among others. Those who blame the IMF focus on its unhelpful or wrong economic advice and its ill-timed financial assistance. The IMF’s sins are limited to not telling Argentina to abandon its fixed exchange system sooner, or to advising orthodox austerity measures in the middle of a depression. Those who place the blame on Argentine policy-makers (and minimize the IMF’s responsibility) point to the transformation of fiscal deficits into mounting public debt and to the inability to complete the structural reforms needed to make the currency board system work. Their interventions tend to distinguish between economic liberalization (not to be blamed) and ill-advised monetary and fiscal policy (guilty as charged). The positions in the debate are two-sided: either the patient did not follow the doctor’s prescription completely, or the doctor, willingly or through ignorance, provided the patient with the wrong medicine.

In either of these two extreme scenarios, Argentina stands in a subaltern position vis-à-vis expert (economic) knowledge. Its policy-makers are conceived, in both perspectives, as dependent upon the authorized word of the IMF or the “policy analysis” community. Or, put in other terms, Argentina appears always as an “experiment” or an “example” (some use the phrase “textbook case”) of a theory or policy paradigm that is under discussion. Argentina, whether it is the “poster child” of neo-liberal reforms or the “basket case” of IMF folly, stands always as a passive object of knowledge, providing mostly data to feed a vigorous academic and policy debate that goes on elsewhere. Leading scholars and policy-makers in Argentina can argue against the current, but can hardly avoid the protocols of authorization governing “voice” in the US academic and policy community. Yes, at the end of a persistent effort, Minister Cavallo persuaded many of his peers in the United States that convertibility had been a success and could be sustained over time. But in order to do this, he had to publish his views in the most traditional and respected journal of the profession: the American Economic Review (Cavallo and Cottani 1997).3 Cavallo’s arguments, to the extent that he used the master’s idiom (he spoke of the strength of an internationalized banking system, of the soundness of macro-economic fundamentals, and of the resilience of the national economy to global crises), were considered valid, though not completely persuasive.

Few in the US tribunal of expert economic opinion connect Argentina’s failure with economic knowledge or the way this translates into authoritative economic advice. Few of these economic commentators are aware of the impact of their own universities’ teaching upon the policy-makers of developing economies.4 After all, their students go on to occupy key positions of responsibility as ministers of finance or presidents of central banks. Their students are the ones who generally sit at the other side of the table when IMF experts dispense advice–and they are the ones who communicate to the population the “bitter medicine” prescribed by the money doctors. Profound disagreement within the US academy (about the causes of the Argentine crisis) stands in sharp contradiction with the single-voice advice dispensed by IMF experts.

After the crisis of November 2001-March 2002, Argentina is back at the center of interest of economic opinion as a “leading case.” Why has it failed? What were the forces that triggered the collapse? Has confidence in free-market policies been severely damaged by this event? With the images of supermarket riots (“saqueos”), middle-class citizens banging pots and pans (“cacerolazos”) and youths throwing stones at bank tellers in Buenos Aires, US economists return to the laboratory of Economic Science to re-think and re-establish their preconceptions. The essay that Martin Feldstein wrote for Foreign Affairs immediately after these events is symptomatic. Here we encounter not the doubt of the researcher but the certainty of the judge. The case (the financial collapse of Argentina) calls for an attribution of guilt. Without much supporting evidence, Feldstein rushes to indict two potential suspects: the overvalued peso and excessive foreign debt (Feldstein 2002). If this were so, then liberalizing policies are not to blame. Only the Argentine government is to blame, for it promised something that became impossible to deliver: convertibility at a fixed exchange rate. In the end, the “case” (Argentina) helps to reinforce orthodoxy. If the government had been able to lower real wages (twisting labor’s arms) or to maintain the golden rule of convertible regimes (reducing the money supply and raising the interest rate) in order to discipline the economy into cost-reduction “competitiveness,” Argentina would have been able to maintain its convertibility.

What failed, in Feldstein’s opinion, was the political ability and the vision of the Argentine government to adapt national reality to the conditions of the world market. Argentine reality proved stubborn. Even with 15 percent unemployment rates, real wages did not decline. After the Brazilian devaluation, Argentine wages became unrealistically high. But instead of accepting the painful reality (cutting wages to world-market levels), Argentine politicians opted for the easy road: increasing indebtedness. Feldstein draws three “lessons” from the Argentine experience: 1) that a fixed exchange system combined with a currency board is a bad idea; 2) that substantial foreign borrowing is unsustainable in the long run (it is a high-risk development strategy); and 3) that free-market policies continue to be desirable, in spite of this sad experience. If in the 1980s Argentina stood as a classic example of what could go wrong in the land of “financial restriction,” closed economies, and populist policies, in 2002 the country was again an example–now of the failure of the currency board in countries with too much debt and inflexible labor laws.

On the other side of the argument, there are also prominent public figures and renowned scholars. Best-known among the critics of the IMF is Professor Joseph Stiglitz. But other economists have joined this position, arguing that the “Argentine case” is further proof of the failure of IMF policies. University of Massachusetts Professor Arthur MacEwan is in this camp. In his analysis (MacEwan 2002), he presents Argentina as a victim of misguided policies, policies that insisted on increasing debt to sustain an outmoded system (the currency board). Bad economic advice comes not from good economic science but from interest. The IMF, trying to defend powerful US corporations and global banking firms, continued to funnel loans to an already bankrupt economy. For Lance Taylor, a monetary policy expert and economic historian, the fall of Argentina must be viewed as a consequence of wrong policy choices (Interview 2001). The fixed exchange rate was good for taming inflationary expectations, but disastrous as a development strategy. In the end, the Argentine case demonstrates the foolishness of opening commodity and capital markets at the same time. Economic growth becomes dependent upon the flow of capital. If successful, the inflow of fresh capital creates price inflation, and this generates an overvalued exchange rate that works against local industry. If unsuccessful, capital outflows create expectations of devaluation.

Curiously, in the end, this liberal view coincides with the orthodox one: there is nothing wrong with economic expertise, only with the institutions that represent powerful interests. Liberal and conservative opinion clash over the desirability of unregulated markets, but they are one in defending the citadel of knowledge. And, what is more important, in either perspective Argentina continues to be a “case,” an experimental site of policy and knowledge.

How does economic “doxa” pass for knowledge? How is economic theory able to sustain its orthodox core under a storm of counter-evidence? What makes an opinion emanating from Harvard a dominant view? Why is Argentina always a subject of study and an object of advice, and never a producer of knowledge? These are questions this essay cannot attempt to answer. Nevertheless, it is useful to reflect on the unevenness of the situation. Certain locations (the subaltern ones) provide the data for experiments in policy, while other locations (the dominant ones) provide the theory to understand the success or failure of those experiments. Here lies a condition of subalternity that cannot be solved by improving the quality of the national administration with foreign-trained economists, for the last word remains on the side of those who produce knowledge.

Act 6. Imperial Economics

Informal empires can treat their areas of influence with more or less duress, with more or less affection. It depends upon international political conditions and upon the vision and convictions of a US president (or his administration). Thus, when Paul O’Neill assumed a key position as US Secretary of the Treasury (January 2001), it seemed that the empire would get tougher with countries that followed “bad policies.” Comply with the economic advice of the IMF and the policy community, or else suffer isolation: this seemed to be the image the new administration wanted to project (“Tough-Love”). This, combined with recurrent statements by high-level IMF officials that Argentine negotiators were unable to produce a plan (meaning that their technical experts could not draw up a consistent macro-economic program), created the impression that the time of “carnal relations” between the US and Argentina was over. From then on, distrust, derision, and distance would characterize relations between the two countries. This was, to a certain degree, to be expected. Few could have anticipated, nonetheless, that misunderstandings about economic policy and economic goals would be at the basis of this imperial duress, and, more importantly, that the hegemony of economic discourse would be the source of contention.

Perhaps part of Argentina’s neo-colonial condition is given by the fact that the country has been taken as a site of experiment for economic policy. In the early 1990s, Argentina pioneered neo-liberal reforms in the region, becoming the “poster child” of free markets, privatization, and macro-economic stability. Since the Asian crisis, Argentina has turned into a “basket case” of poor fiscal management, rising country risk, and bad international loans. Curiously, few have examined the location from which the judgment of “success” and “failure” emanates: US universities, think tanks, and multilateral credit institutions, the same places where enormous quantities of economic and financial advice are produced on a daily basis. Local economists engage in the debates proposed by these centers of knowledge and policy, shifting their opinions about policy and doctrinal trends gradually or suddenly. They are not equal contributors to the world of knowledge and policy: like President Duhalde, they abide by the authorized word of US experts. Otherwise, they are displaced into the territory of the “dysfunctional.”

Is economics an imperial science? We know that the discipline has tried to colonize the other social and human sciences with its maximizing principles and its implicit rationality. But economic science could be dubbed “imperial” in a more fundamental sense. New work on international finance and economics deals with the question of global governance. Larry Summers, the current president of Harvard University, is an expert on this subject. He has been arguing that the US is the first non-imperialist empire, and that US primacy in the field of economics will assure US leadership in the management of the global economy.

In 1999, Summers celebrated the globalization of US economics. US-trained economists were taking control of key positions in the governments and central banks of emergent economies (Summers 1999): Berkeley-trained economists in Indonesia, Chicago alumni in Chile, MIT and Harvard graduates in Mexico and Argentina, and so on. These economists were spreading the knowledge of how to manage national economies in a globalized environment and providing the rationale for the transformation under way. They were called to assume a central role in completing the globalization process in the terrain where corporations alone could not progress: the reform of government. Only in this terrain could the imperatives of greater international integration be made compatible with the demands of national communities.

US-trained economists would be the ones facing the challenges of a globalized world: they would have to find innovative solutions to the problems of reducing financial volatility and the contagion of financial crises across countries. Since the late 1990s, Summers has been arguing for the building of a new “international financial architecture” for the world economy. His view of a pacified global economy is one in which experts dominate and help the rest of humanity cushion the effects of inevitable “market failures” (Summers 1999). What role does the United States play in this imagined world scenario? To Summers, the US is the “indispensable nation,” the only power that can lead a movement towards international economic integration without causing a major disruption (restructuring) in the nation-state system.5 How could this be accomplished? By the persuasive power of economic knowledge. In the end, only the diffusion of the economic rationale (“economists want their fellow citizens to understand what they know about the benefits of free trade”) can produce a compromise between the promises of democratic governance (widespread public goods) and the recurrent constraints imposed by global financial crises.

The United States has changed the rhetoric of empire, for it is the first “outward-looking,” “non-imperialist” superpower with the energy (its own system of government and its economic expertise) to lead the world to cooperative solutions to its problems (Summers 1998). The new “civilizing mission” is to spread to the four winds the rationale of responsible and transparent government. The new promised land is a financial architecture that resists the pressure of periodic financial crises and a new regulatory system that neither stifles the forces of capitalist enterprise nor destroys the belief in democratic government. The new ideal is a novel compromise between government and market, one that can only be imagined and disseminated by economic experts.

Act 7. Post-Devaluation Blues (Universidad Di Tella)

Financial and economic crises take a heavy toll on the university systems of peripheral countries (“emergent economies”) such as Argentina. An abrupt devaluation (one that triples the value of the dollar) makes it almost impossible to continue study-abroad programs, forces the library to cut dramatically its foreign subscriptions and purchases of books in other languages, makes it quite difficult for professors to attend conferences and congresses overseas or to invite foreign colleagues, and leaves demands for new computers or better internet connections in the basket of Utopia. If the change in exchange regimes is accompanied by a dramatic fall in GDP and employment and by rampant inflation (as is now the case in Argentina), the conditions are given for a dispersion of the faculty, attracted by better employment possibilities elsewhere. In short, international crises strike at the very foundation of developing universities. A small but growing university such as Universidad Torcuato Di Tella faces paralysis, in terms of human and physical resources, even if it manages to withstand the collapse of the economy.
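The arithmetic behind that threefold jump is simple but devastating for dollar-denominated budgets; the subscription price below is a hypothetical figure chosen only for illustration:

```latex
% A foreign journal subscription priced at USD 500:
\[
\underbrace{500 \times 1 = 500 \text{ pesos}}_{\text{at 1 peso per dollar}}
\qquad\longrightarrow\qquad
\underbrace{500 \times 3 = 1{,}500 \text{ pesos}}_{\text{at 3 pesos per dollar}}
\]
% The library's peso budget is unchanged, so its dollar purchasing
% power falls to one third of its former value overnight.
```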

Curiously, our university has one of the best economics schools in the country. Our professors have been advisers to governments, if not themselves government officials in areas related to economic policy. They were trained at UCLA, Chicago, Yale, MIT, and other leading economics schools in the US. They themselves fell prey to the trap of the “convertibility consensus” and now form part of the “mainstream economists” who are very cautious about speaking out in a context of economic meltdown and high political volatility. Our departments of Economics and Business were at the center of the university.

Now that old-line economic reasoning has been called into question, it may be time to re-think priorities. Perhaps the curricula of economics majors should be re-configured to include alternative modes of thinking about “equilibria,” “incentive structures,” or “economic performance.” Maybe it is time for greater exchange among the social sciences and the humanities. Maybe it is time to challenge the master discourse elaborated in US economics schools about what constitutes “sound economic policy” and to re-think the position of authority (and the scarcity of evidence) from which international economists dispense advice to “emerging markets.” Perhaps, as Paul Krugman has recently suggested, if IMF advice were offered on the market, the price of this service would be very low, due to insufficient demand.

Conclusion. The Subalternity of Knowledge (and How to Turn it Around)

One of the problems associated with a peripheral location in the world of expertise is not knowing the right answer at the right time. In March 1999 (in an address at the Inter-American Development Bank), Summers suggested that the keys to preventing financial crises in Latin America were transparent accounting, corporate governance, and effective bankruptcy regimes. This was the vocabulary of the new science of global governance. Those who did not pay attention to the relationship between information, judicial institutions, and markets were simply out of tune with history. This is what happened to President Duhalde and his Minister Remes in their encounters with the IMF, the US Treasury, and other experts. Had they been listening to the word of the experts in “global finance” and “crisis management,” they would have better understood the non-cooperative stance and duress of US experts. For (guided by an outdated policy agenda) they were enacting policies that were exactly the opposite of those the experts recommended.

The current crisis is first a crisis of governability and public confidence, but it is also a crisis of legitimacy for the policy expert. Bad economic advice and bad government policy have contributed to deepening the economic depression and to dividing society. The discredit of past governments that were unable to fulfill their promises of economic improvement and lesser social inequality drags along with it the discredit of economic advice. The gigantic struggle between supporters and detractors of convertibility has now turned into another gigantic struggle between nationalist-populist and neo-liberal policy solutions. There is a profound disbelief in “expert (economic) reason.” People watch their TV screens in astonishment as representatives of local expertise (home-grown economists) pile up criticism against neo-liberal reforms and orthodox economics. People are beginning to realize that economic predictions are quite frequently in error, that technically sound advice is often politically and socially unviable, and that economists often represent not independent thought but certain narrow corporate interests.

How costly is economic experimentation in peripheral economies? Will a greater dose of economic advice from the center (or a greater number of US-trained economists) spare us the effects of globalization? Are we teaching our economists to speak with their own voice? Is their knowledge integrated into a broader conception of the world and of the human and social sciences? How should we feed the minds of those who will be managing the global economy from these peripheral outposts? Should they only be producing data for the center and applying policy solutions developed at the center? We need to examine seriously the bi-polarities created by this knowledge structure. Why are arguments about poverty and social inequality unable to penetrate the wall of technical neo-liberal reason? Why is fiscal responsibility anathema to heterodox economists? We need to re-consider the formation and circulation of economic expert advice as a constitutive moment of global governance, and to challenge its foundational precepts. We need to examine the broader implications of universal financial and economic recipes, and of the denial of locally situated and socially embedded policy solutions.


In May 2002, President Duhalde appointed a new economy minister, Roberto Lavagna, an expert who had made a career in the European policy community. Against the double talk of Minister Remes Lenicov (who tried to appear tough as a hyper-regulator but accepted the views of the IMF on every policy issue), the new minister distanced himself from the IMF and its policies. He let the Central Bank intervene in the exchange market so as to stabilize the currency (something the IMF experts opposed), then started to accumulate foreign-exchange reserves, delaying the payment of financial obligations to the IMF and the World Bank (a decision that provoked the anger of experts in these institutions). Soon, having obtained some minor achievements in terms of declining inflation and a halt in the fall of real output, the minister began to pursue a new negotiating strategy with the Fund: “we have to be responsible, not obedient.” In Congress, this translated into a series of delaying tactics that avoided “resolving” the problems the Fund wanted resolved (restructuring of the banking system, an immediate rise in the prices of public services, the abolition of provincial bonds, and some commitment to re-start negotiations over government bonds), complemented with new bills (postponing bankruptcies and housing evictions) that went against the wishes of the IMF. As the president soon discovered, distancing himself from the IMF brought important political gains. So he began to excel in the practice of appearing committed to a successful negotiation with the IMF while, at the same time, boycotting every possibility of success.

To be fair, one has to acknowledge that, on the other side of the window, the IMF leadership (like the Secretary of the Treasury) became increasingly alienated from Argentina as it saw President Duhalde taking the wrong turn towards a “nationalist-populist” agenda. In fact, they started to consider that he was missing altogether the train that led to “capitalist development” and “good government.” In the end, perhaps, President Duhalde did not understand (and could not understand) the rules of the economy. The initial misreading of the reasons of empire, and an inexplicable rejection of the reasons of the local policy-maker, turned into alienation and mutual distrust. Perhaps, reasons Duhalde, the IMF does not want to sign an agreement. Perhaps, reasons the IMF leadership, Duhalde is no longer truly committed to reaching an agreement. Once the child has gone back to its rebellious state, the father will step up the threat of punishment. And the imperial father will speak through the voice of local and international experts: if Argentina does not negotiate with the IMF and does not fulfill its international commitments, it will “fall out of the world.”

December, 2002


Barro, Robert and Jay Reed (2002), “If We Can’t Abolish IMF, Let’s at Least Make the Big Changes,” Business Week, April 10.

Becker, Gary (2002), “Deficit Spending Got Argentina into This Mess,” Business Week, February 11.

Broad, Robin and John Cavanagh (1999), “The Death of the Washington Consensus?” World Policy Journal 16:3 (Fall), 79-88.

Cavallo, Domingo F. and Joaquín Cottani (1997), “Argentina’s Convertibility and the IMF,” American Economic Review 87:2 (May), 17-22.

Feldstein, Martin (2002), “Argentina’s Fall: Lessons from the Latest Financial Crisis,” Foreign Affairs (March-April).

Fischer, Stanley (2001a), “Exchange Rate Regimes: Is the Bipolar View Correct?” Finance & Development 38:2 (June), 18-21.

______. (2001b), “The IMF’s Role in Poverty Reduction,” Finance & Development 38:2 (June), S2-S3.

______. (1997), “Applied Economics in Action: IMF Programs,” The American Economic Review 87:2 (May), 23-27.

Taylor, Lance (2001), “Argentina: Poster Child for the Failure of Liberalized Policies?” Challenge (November-December).

Keaney, Michael (2001), “Consensus by Diktat: Washington, London, and the ‘modernization’ of modernization,” Capitalism, Nature, Socialism 12:3 (September), 44-70.

Levinson, Mark (2000), “The Cracking Washington Consensus,” Dissent 47:4 (Fall), 11-14.

MacEwan, Arthur (2002), “Economic Debacle in Argentina: The IMF Strikes Again,” Dollars & Sense (March-April).

Naim, Moises (2000), “Fads and Fashion in Economic Reforms: Washington Consensus or Washington Confusion?” Third World Quarterly 21:3 (June), 505-528.

North, James (2000), “Sound the Alarm,” Barron’s, April 17.

Stiglitz, Joseph E. (2002), Globalization and Its Discontents (New York: W.W. Norton).

________. (2001), “Failure of the Fund: Rethinking the IMF Response,” Harvard International Review 23:2 (Summer), 14-18.

Summers, Lawrence (1999), “Distinguished Lecture on Economics in Government,” Journal of Economic Perspectives, 13:2 (Spring), 3-18.

_______. (1998), “America: The First Nonimperialist Superpower,” New Perspectives Quarterly, April 1st.

“Too Little, Too Late? The IMF and Argentina,” (2001) The Economist, August 25.

“Tough-Love Ya to Death,” (2001) Newsweek, May 28.

“Unraveling the Washington Consensus: An Interview with Joseph Stiglitz,” Multinational Monitor 21:4 (April 2000), 13-17.

Weisbrot, Mark, “Another IMF Crash” (2001), The Nation, December 10.

Weisbrot, Mark and Thomas I. Palley (1999), “How to Say No to the IMF,” The Nation, June 21.


  1. Many articles and commentaries picked up on O’Neill’s phrase. See for example The Economist (2001).
  2. See Krueger 1998; Stiglitz 2001; Fischer 2001b, among others. Even a supporter of IMF policies such as Stanley Fischer (1997) had to acknowledge that IMF programs had a limited effect upon the domestic side of developing economies (“few countries seeing significant increases in growth or sharp reductions in inflation”), although the Fund’s policies did produce improvements in the external sector and momentary reductions in fiscal deficits.
  3. In this essay, it is clear that Cavallo was running against the current. He acknowledged that key people in the policy and theory community were skeptical about the currency board. But they were ready to accept price stability and good fiscal figures as “proofs” of success. In fact, as Cavallo conceded, no one in the “policy community” raised the issue of rising unemployment at a time in which Argentina seemed to have weathered bravely the aftermath of the Mexican crisis (1996-97).
  4. Mark Weisbrot is perhaps an exception in this regard. He argues that Argentina had been mismanaged by US-trained economists and subjected, for too long, to ill advice from the IMF. In the end, a failed experiment (Argentine convertibility), disguised as success, could only bring discredit to those economists who supported it (Weisbrot 2001).
  5. Summers is aware of the unevenness implicit in this conception of the world system. Whereas the US government claims absolute sovereignty to control domestic economic policy, other countries must content themselves with a limited sovereignty. Subject to the financial surveillance of multilateral institutions, they cannot entertain the dream of ever issuing world-money. Even if they become “fiscally and monetarily responsible,” the Federal Reserve System will never allow other countries to become new members of the board of directors.

The Role of The United States in the Global System after September 11th

Dani Nabudere is an Executive Director at Afrika Study Center, Mbale, Uganda. Most recently Nabudere has edited Globalisation and the African Post-colonial state (AAAPS, Harare, 2000) and is author of Africa in the New Millennium: Towards a post-traditional renaissance (forthcoming, Africa World Press).


Power relations in the global system have been severely tested since the events of September 11, 2001, so much so that it has become fashionable to argue that the world changed irrevocably with those events. The terrorist attacks on the World Trade Center in New York and the Pentagon in Washington were calculated moves to test the standing and the political and economic position of the world’s sole superpower. They were aimed at delivering a blow that could carry several messages around the world at once. Indeed, this fateful event was a manifestation of the contradictions of the modern world system since its foundation some five hundred years ago, and the messages the attacks were calculated to transmit were meant to convey those contradictions to all and sundry.

The first of these messages was the expression of anger by disaffected social and political forces that felt mistreated, marginalized, and oppressed by U.S. global power relations. The second was to demonstrate to the U.S. that those global power relations were vulnerable and could be attacked at the very heart of the system at any time. The third was to signal to other disaffected groups opposed to U.S. dominance of the world that it was possible to weaken this power in such a way that their grievances could be addressed through the overthrow of that system. The fourth was to demonstrate, by attacking these two pillars of U.S. economic and military power, that the U.S. was not as powerful as it thought, and that its economic and military power could be broken down by well-organized and well-manned attacks.

These messages carried other interpretations as well. To U.S. neo-conservative forces, as well as to some in the right-wing liberal political establishment, the attacks signaled an attempt by fundamentalist political Islam to overthrow Western civilization at its core; in this respect, the attacks were interpreted as a threat not just to the U.S. as a country but to the whole Christian, Western civilizational project. This was in fact what President Bush dubbed an “attack on civilization” in his condemnation of the strikes. This interpretation shaped the way the world looked at the attack and at the U.S. response to it. Even governments that did not necessarily accept the interpretation were, with very few exceptions, pressed to side with the U.S. ideologically on the issue. Thus, in addition to the overwhelming humanistic outpouring of sympathy for the victims, it enabled the Bush administration to arm-twist governments and individuals throughout the world into siding with its response, on the grounds that the attacks were not on the U.S. as such but on “civilization” in general, backed by the accompanying threat: “Either you are with us, or you are against us.”

At the same time, the attacks bore other interpretations. The generalization of the consequences of the attack also put emergent “anti-globalization” activists on the spot, since any attempt by them to ask that the causes of the attacks be examined and addressed was interpreted as an “unpatriotic” expression of sympathy with “the enemy.” For this reason, the attacks had the effect of dampening, at least for a time, the activities of the global solidarity movement, which had made a strong showing at the Seattle WTO demonstrations in 1999. The same interpretation was used to crack down on the democratic and civil rights of U.S. citizens and to reinforce authoritarian regimes throughout the world. Thus, the event and the reactions surrounding it were turned from a political discourse into a moral-religious one, in which “the enemy” was equated with evil and barbarism while the victim was equated with virtue and civilization.

Nevertheless, these interpretations have begun to have the opposite effect, in that the widening of the net in “the war against terrorism” with the attack against Iraq has caused many countries to pose questions that were not posed earlier. Questions are being asked about whether the tragic events of September 11 are being misinterpreted to advance the narrow political agenda of certain cliques within the U.S. political establishment. Something like a return to a political discourse is beginning to emerge, with calls to address the real causes that led to the September 11th attacks against the headquarters of the “Free World” and for the United Nations to resume its responsibilities for international peace and security. President Bush’s threats that the United Nations must act according to his will “or become irrelevant” are being taken as the rantings of a president whose unilateralism has gone wild. The war against Iraq has again undermined the hope of a return to a multilateral world.

In many ways, therefore, these events, and particularly the unilateral launching of the war against Iraq with the support of Britain and the so-called “coalition of the willing,” have confirmed a hegemonic trend in U.S. foreign policy that has been predictable since the end of World War 2. This trend has afflicted all great hegemonic powers in history. The role of the U.S. in international relations since the end of that war has nevertheless confirmed the traditional realist and hegemonic stability theories, which argue that for stable institutions of global public good to prevail, there must be a hegemonic power able to enforce certain rules of behavior in international relations, because the hegemon can afford the short-run costs of achieving the long-run gains, which also happen to be in its national interest. These theories have been challenged by institutional stability theorists, who argue that the model of institutionalized hegemony, which explains the functioning of multilateral arrangements based on the cooperation of a number of core countries to overcome “market failures,” is preferable to the hegemonic-power model [Keohane, 1980].

The U.S. in the Post-War Order

The hegemonic stability theories seem to be backed by the evidence of the early post-World War 2 period, in which the U.S. was able to push the former European imperial powers to accept a multilateral economic system, existing alongside the United Nations system, with the U.S. playing the leading role. This Bretton Woods system was predicated on the coincidence of three favorable political conditions: first, the concentration of both political and economic power in the hands of a small number of (western) states; second, the existence of a cluster of important (economic and political) interests shared by those states; and third, the presence of a dominant power “willing and able” to assume a leadership role in the new situation [Spero, 1977: 29].

It is the evolution of the contradictions within this combination described by Spero that has created the present U.S. predicament. U.S. domination of global economic and strategic institutions such as the North Atlantic Treaty Organization (NATO), the South East Asia Treaty Organization (SEATO), the Central Treaty Organization (CENTO), and the Bretton Woods institutions characterized the Cold War period. These institutions expressed the interests of the western powers (at first hostile to the Japanese emergence on the world economic scene) as the culmination of the western modern system based on liberal-monopolistic capitalism. They also expressed the political and military power that the western countries wielded throughout the world. Western systems of economic, political, and military power in fact protected those economic interests that were threatened by “communism” and, as time passed, by the emergent nationalism of what came to be called “Third World” or “developing” countries.

Indeed, as Paul Kennedy argued in his book The Rise and Fall of the Great Powers [1988], economic power is always needed to underpin military power, while the latter is necessary in order to acquire and protect the wealth that superpower status demands. The problem arises when a disproportionate share of a hegemon’s economic resources is diverted from wealth-creation and allocated to military purposes. The result, in the long run, is the weakening of the economic backbone of the hegemon’s military power, which often leads to its eventual collapse.

This reality took some time to come through in the case of the U.S. The rise of U.S. transnational corporations in the world economy for a time reinforced U.S. economic, political, and strategic power, which many states in the world were obliged to comply with due to the imperatives of the situation. Having suffered from its isolationism of the interwar years, which contributed to the eventual collapse of the economic system and of the peace that followed World War 1, the U.S. in the period following World War 2 was prepared for an outward push through the Bretton Woods multilateral system and the NATO alliance.

Having settled into the role of a superpower challenged only by the Soviet Union, the U.S. began to pursue a series of policies in the international arena that tended to undermine its own professed belief in the independence of states against European colonialism. To some extent, this was prompted by U.S. determination to resist “communism.” But that consideration was only marginal. The major consideration was the need to defend a western system of values built around Christianity, liberal democracy, and world capitalism. It is these values and interests that now appear threatened by the al Qaeda attack on the U.S.

As for its relations with Third World countries, many of them originally considered the U.S. a “progressive” and friendly power because of its opposition to the European colonial system, especially in the interwar years and the immediate post-war period. Partial U.S. support for the right of self-determination of colonial countries, articulated in Wilson’s “Fourteen Points” speech during World War 1, symbolized this “progressive” image. But soon the U.S.’s own economic and strategic interests compelled it to structure the post-war multilateral system in such a manner that its hegemonic interests were taken care of globally. It was therefore not surprising that its role as a neo-colonial power emerged in the course of this historical process. This reality was revealed in its dealings with the former colonial powers, as both came to rely on NATO to suppress the struggles for self-determination in the British and Portuguese colonies in Southern Africa and elsewhere.

The same happened in other parts of the Third World, in Asia and Latin America. The existence of the U.S.S.R. as an opposing hegemonic power implied the need to confront it not only on its own home ground, but also in the now politically independent countries of Asia, Africa, and Latin America. From confronting Cuba in the U.S.’s own back yard, the “anti-communist” crusade spread to all regions of the world. The United States came increasingly to rely on right-wing military rulers as “comrades in arms” in the fight “against communism” in Third World countries, supporting military dictators such as Ferdinand Marcos in the Philippines, General Suharto in Indonesia, General Mobutu in Congo, and General Pinochet in Chile.

These dictators represented the rear-guard of United States policy in the Cold War period in Third World countries. A stage was reached when the fight against the U.S.S.R. was equivalent to the fight for control of the world’s natural and human resources for the benefit of the “Free World” against those of the East led by the U.S.S.R. Oil, strategic materials, and mineral wealth as well as trade and investment outlets became vital strategic areas to defend.

The Oil Crisis of the mid-1970s signaled the heightening of the United States’ political and strategic stake in the Middle East, as we have seen, while the survival of Israel in the sea of Arab nationalism also determined the shape of U.S. foreign policy in that area. Arab nationalism and the Palestinian struggle against Israel appeared to contradict United States global policy, and this set the environment for the September 11th events. Indeed, the U.S. has viewed the Middle East as an “arc of crisis” since the late 1970s.

It will be remembered that in 1979 President Carter signed Presidential Directive 18 to order the creation of the Rapid Deployment Force (RDF), composed of some 250,000 men and women, designed to meet contingencies after the Iranian revolution. The force was supposed to protect U.S. interests in 19 countries stretching all the way from Morocco through the Persian Gulf up to Pakistan, which the Pentagon regarded as the “cockpit of global crisis in the 1980s.” In fact the real purpose was the protection of the oil fields in the area.

With the fall of the Shah of Iran and the Soviet invasion of Afghanistan in 1979, the RDF was expanded. By 1984, the force had grown to 400,000 men and women on standby for action in the “worst case scenario” of a possible Soviet invasion of Iran. This understanding was based on the calculation that by 1985, the Soviet Union would have become a net importer of oil and would therefore constitute a serious competitor to the U.S. monopoly of Arab oil. These calculations were sharpened by the second oil crisis of 1979 and by the instability in Iran, which, as far as the U.S. is concerned, has never ended. All these developments are interlinked and therefore provide a necessary background to understanding pre- and post-September 11 developments.

The U.S. in the Post-September 11 World

Old and New Alliances

The above background clearly demonstrates that the U.S. has throughout the period of its hegemony used its power to bolster its interests, which in many cases in effect meant the U.S. standing against the interests of the peoples of the Third World. Its support for reactionary and authoritarian regimes has not abated even in the post-Soviet period. Clearly, the collapse of the Soviet empire very much eased its strategic pressures, but the much vaunted and expected “peace dividend” never materialized. This is because the U.S. has continued to face military challenges to its power, and its major concern now is how it can rein in the “rogue” and “terrorist” states that constitute the “axis of evil.” The enemy image has shifted from the U.S.S.R. to these “rogue” states in the Third World. The events of September 11th must, therefore, in our view, be seen as part of this strategic problem facing the U.S. since its assumption of leadership of western interests against the rest of the world. Having played a role in the collapse of the U.S.S.R., it finds itself faced with an even stronger enemy within the ranks of Third World nationalism, which in its judgment constitutes many terrorist and “rogue” states and groups.

In comprehending the issues at stake, it is important to focus on the year 1979 as the watershed in the emergence of this new U.S. dilemma. This watershed was marked by the decline of Soviet power, which was especially weakened by its defeats in the war in Afghanistan; at the same time, 1979 signaled the beginnings of challenges to U.S. power in the Muslim world, starting with the Iranian revolution of that year. It should also be noted that that year and the following one signaled a shift of western political power to the right–with the rise of Margaret Thatcher in the U.K. and Ronald Reagan in the U.S.

In supporting Muslim forces against the U.S.S.R. in Afghanistan in its effort to “contain” Soviet influence in the Middle East, the U.S. created a temporary convergence of interests with radical Islamist groups in its anti-Soviet confrontation, while at the same time creating the conditions for the emergence of radical political Islamism. For a time, the convergence of interests was beneficial to the U.S., but a divergence of interests began to emerge with the collapse of the Soviet Union. This force grew and assumed political importance, and eventually turned against U.S. expansionism in the Middle East. In this sense, it can be said that the collapse of the U.S.S.R. marked at the same time the beginning of the U.S.’s problems with the Muslim world in the Middle East, and in the Third World in general. In that scenario, it can be said that the seeds that germinated and forced their way out of the ground on September 11th were sown in the Afghan anti-Soviet war.

Samuel Huntington, in his book, The Clash of Civilizations and the Remaking of World Order [1997], located the rise of radical Islamism in the squalor of the marginalized Muslim masses in the Arab world in the mid-seventies. It is also well known that the Iranian Islamic revolution was, to a great extent, fueled by the worsening economic conditions in Iran that led to mass discontent and eventual rebellion. The discontent was clearly linked to western (imperialist) dominance in the region, where foreign oil corporations exploited local oil resources in alliance with the traditional ruling families against the interests of the masses of the people. These contradictions are still at the core of the conflicts in the region, which the U.S. continues to ignore.

One consequence of this development was to put radical and militant Islam at the center of the Muslim states, whose leaders were increasingly challenged to abandon western symbols of power. The enemy was the cultural imperialism of the west led by the U.S. From that broad anti-imperialist strategy, the Islamic radicals were able to win support for their cause from non-Muslim Third World peoples. In working for the defeat of communism in Afghanistan and the world as a whole, the U.S. played on the Muslim and Christian fundamentalist fear of communism as a “godless creed.” The U.S. worked closely with Islamic fundamentalists so long as this served its global hegemonic ambitions in defending its oil bases in the Persian Gulf region. At the same time, it professed the secular values of democracy, freedom, and justice, a stance its allies increasingly perceived as hypocritical.

With the collapse of communism in 1989, the U.S. in its triumphalism, symbolized by the new drive for globalization, began to be viewed by the Islamic forces as an equally “godless creed” with its emphasis on empty materialism and consumerism. This was seen as a soulless and nihilistic cultural imperialism, which was being imposed on the Arab and Muslim peoples. It was a challenge to the Islamic belief in a non-secular state system as well as to the values of western-style nationalism. The U.S. could no longer invoke the Cold War in its support, since the Soviet Union was now also becoming a capitalist and secular system. Its earlier alliance with radical Islam, which had enabled the U.S. to recruit people like Osama Bin Laden to its anti-communist cause, began to wane. Its support among the Taliban could only be maintained by bribery and corruption in pursuit of its materialist creed and ambitions.

Still in Search of Oil

So in order to understand the September 11th events without conjuring up conspiracy theories, it is important to note that the issue of a change of the Taliban government in Afghanistan was uppermost in the minds of certain business and political interests in the U.S. at the material time. In testimony before the Subcommittee on Asia and the Pacific Region of the Committee on International Relations of the House of Representatives on February 12, 1998, John J. Maresca, the UNOCAL vice-president for international relations, argued that there was need for multiple pipeline routes for Central Asian oil and gas resources, as well as the need for the U.S. to support international and regional efforts aimed at achieving balanced and lasting political settlements to the conflicts in the region, “including Afghanistan.” He also pointed out that there was need for U.S. “structured assistance” to encourage economic reforms and the development of appropriate investment climates in the region. Therefore, in his view, one major problem yet to be resolved was how to get the region’s vast energy resources to the markets where they were needed.

At this time, there was a consortium of 11 foreign oil companies, including four American companies, Unocal, Amoco, Exxon and Pennzoil, which were involved in the exploration in the region. This consortium conceived of two possible routes, one line angling north and crossing the north Caucasus to Novorossiysk; the other route across Georgia to a shipping terminal on the Black Sea, which could be extended west and south across Turkey to the Mediterranean port of Ceyhan. But even if both pipelines were built, they would not have had enough total capacity to transport all the oil expected to flow from the region in the future. Nor could they have had the capability to move it to the right markets.

The second option was to build a pipeline south from Central Asia to the Indian Ocean. One obvious route south would cross Iran, but this was foreclosed for American companies because of U.S. sanctions legislation against Iran. In Maresca’s view, the only other possible route was across Afghanistan, which of course had its own unique challenges. The country had been involved in bitter warfare for almost two decades and was still divided by civil war. He emphasized: “From the outset, we have made it clear that construction of the pipeline we have proposed across Afghanistan could not begin until a recognized government is in place that has the confidence of governments, lenders, and our company” [Emphasis added].

These developments indicate that the whole situation around September 11th can now be seen to have been part of a wider geo-strategic process of U.S. economic and political interests. While not conjuring up conspiracy theories, one can surmise that there was more to the incidents than meets the eye. It is reported that senior U.S. officials in mid-July 2001 told Niaz Naik, a former Pakistani Foreign Secretary, that military action was planned against the Taliban by mid-October 2001. Bush declared war against Afghanistan, though the Taliban did not order the attack on the U.S. It was alleged by the U.S. government that Osama Bin Laden, a Saudi national residing in Afghanistan, ordered the attack. The U.S. action against Afghanistan resulted in the ouster of the Taliban regime and a change of government. Was this a calculated move or was it a genuine war against terrorism? Within a few months of the ouster of the Taliban regime, the U.S. government under President Bush quietly announced on January 31, 2002 that it would support the construction of the Trans-Afghanistan pipeline. Then on February 2, 2002 the Irish Times announced that President Musharraf of Pakistan (now popularly known as Busharraf) and the new Afghan leader, Hamid Karzai, had “announced an agreement to build the proposed gas pipeline from Central Asia to Pakistan via Afghanistan.” Although September 11th might have been an event that took place independently of the wishes of the U.S. oil interests in the area, the issues connected with the event were clearly interlinked [Onyango-Obbo: 2002:8].

Africa in the `New World Order’

The events of September 11th have had a spectacular impact on the African continent. Although the terrorist attacks against the U.S. embassies in Kenya and Tanzania had signaled a new development for these countries in terms of the security risk that the U.S. presence posed, the issue was nevertheless seen as a distant threat. In the new situation, and due to pressures from the U.S. government, the Organization of African Unity (OAU) in October 2001 quickly adopted a Declaration Against Terrorism, which had different connotations from the earlier initiatives by the African States themselves. At the same time, efforts were exerted to propose a Treaty on Terrorism in terms of the new definitions emanating from the U.S. Before September 11th, the OAU had in July 1999 adopted the Convention on the Prevention and Combating of Terrorism, which in article 1 condemned “all forms of terrorism” and appealed to member states to review their national legislation to establish criminal offences against those engaged in such acts. The Convention had gone a step further to define terrorism and to distinguish it from the legitimate use of violent struggle by individuals and groups. The Convention pointed out that political, philosophical, ideological, racial, ethnic, religious or other motives could not be used as justifiable defense for terrorism. Nevertheless, in article 3 (1) it declared:

Notwithstanding the provisions of article 1, the struggle waged by peoples in accordance with the principles of international law for their liberation or self-determination, including armed struggle against colonialism, occupation, aggression and domination by foreign forces shall not be considered as acts of terrorism.

It can be seen here that the African states had made some attempt to be objective on what constituted terrorism. But the events of September 11th seem to have turned the clock back. Soon after the attacks on the U.S., the U.S. National Security Adviser, Condoleezza Rice, reminded the African States that:

One of the most important and tangible contributions that Africa can make now is to make clear to the world that this war is one in which we are all united. … We need African nations, particularly those with large Muslim populations, to speak out at every opportunity to make clear … that this is not a war of civilizations. … Africa’s history and geography give it a pivotal role in the war. … Africa is uniquely positioned to contribute, especially diplomatically through your nations’ memberships in African and Arab and international organizations and fora, to the sense that this is not a war of civilizations. This is a war of civilizations against those who would be uncivilized in their approach towards us [Emphasis added].

Following this appeal, the OAU Central Organ in November 2001 issued a Communiqué on terrorism in which the organization “stressed that terrorism is a universal phenomenon that is not associated with any particular religion, culture or race.” It added that terrorism “constitutes a serious violation of human rights, in particular, the rights to physical integrity, life, freedom and security.” The Communiqué also added that terrorism “poses a threat to the stability and security of States; and impedes their socio-economic development.” The Communiqué further stressed that terrorism cannot be justified under any circumstances and consequently, it “should be combated in all its forms and manifestations, including those in which states are involved directly or indirectly, without regard to its origin, causes, and objectives.”

This Communiqué demonstrated sensitivity to the problem of terrorism because of the multiethnic, multireligious, multiracial, and multicultural composition of the continental organization. It specifically excluded the religious connotations that terrorism was having in the U.S. It included, to some extent, state-sponsored terrorism as part of the evils to be combated, “without regard to its origins, causes or objectives.” But in another sense, many states now began to respond to the dictates of the Bush administration in their understanding of the problem in order to curry favor with the U.S. Some African States initiated legislation directed at their internal opposition in terms of the new U.S. definitions of terrorism. Malawi, Zimbabwe, and Uganda were the first ones to do so.

Uganda, in particular, emphasized the fact that it had been fighting terrorism even before the U.S. began to do so consistently. It rushed legislation through parliament, which was aimed at the legitimate opposition as well as at groups fighting the government by way of “armed struggle.” These groups fighting the government “in the bush” were listed and their names sent to the U.S. and the UNO to be included among terrorist organizations. The Lord’s Resistance Army (LRA) and the Allied Democratic Forces (ADF), fighting in different parts of Uganda, were now listed internationally as terrorist organizations. At the same time, a law against terrorism was also rushed through parliament, which the opposition regarded as being targeted against them. Soon, the government listed its opponents as “terrorists” to be treated as criminals in any part of the world.

These negative developments indicated the real impact on world affairs initiated by the U.S. response to terrorism. The statement by Condoleezza Rice demonstrated the concerns of the U.S. government as to the role Africa could play in the “war.” But it missed the very important point that Africa was largely a Christian and Muslim continent, where these two civilizations met and intermingled with African traditional religions and civilizations. This combination has created a more racially, religiously, and culturally tolerant continent. Indeed, it is said that American officials in Guinea were extremely impressed by the fact that on the very day of the attack against the U.S., the entire Cabinet of the government of Guinea, a predominantly Muslim country, went in one body to the U.S. Embassy in Conakry to deliver their condolences to the American people. This single incident demonstrated that African Islam was important to the U.S. in moderating Islamic radicals on the continent.

The Pursuit of Oil in Africa

But the U.S.–in its usual way of “divide and rule” to maintain its hegemonic position in the world–has already seized on this positive African approach and tried to pit Africans against the Arabs on the issue of oil, in order to break the solidarity among the Organization of the Petroleum Exporting Countries (OPEC). It is this hegemonic “divide and rule” imperialist strategy that turns “friends” into enemies whenever it pleases the U.S. government. It is this same approach that, in an earlier phase and in the U.S. interest in oil, used Saudi Arabia as a “friend” of the U.S. in order to weaken the Arab peoples’ cause for nationhood, but then turned against it when it no longer suited those interests.

On 25th January 2002, the State Department released information at a breakfast seminar sponsored by the Institute for Advanced Strategic and Political Studies (IASPS), entitled “African Oil: A Priority for U.S. National Security and African Development,” about the projected U.S. strategies on oil and the growing importance of African oil to the U.S. economy. The U.S. officials, among them the Assistant Secretary of State for African Affairs, Walter Kansteiner, added: “It is undeniable that this (oil) has become of national strategic interest to us.”

According to James Dunlop, an assistant to Kansteiner, who also spoke at the meeting, the United States was already getting 15 per cent of its oil imports from the African continent, and the figure was growing. A U.S. Air Force Lt. Colonel, Karen Kwiatkowski, a political/military officer assigned to the Office of the Secretary of Defense for African Affairs, confirmed that Africa was important to U.S. national security. She authoritatively added that she spoke as “a U.S. government policymaker in the area of sub-Saharan Africa and national security interests.” She tried to justify the shift in U.S. interests by pointing out that the U.S. relationship to African countries was non-colonial, based on a generally positive history. In this, she did not refer to the past relationship of the slave trade, which had a negative impact on today’s development prospects for Africa. What was important to the U.S. at this juncture was to try to woo African states in the new strategic game of U.S. “security interests,” and Africa’s oil had now become important to U.S. security interests because the availability of Arab oil could no longer be relied upon. According to the U.S. National Intelligence Council’s “Global Trends 2015” report, which came out in December 2001 after the September attack, 25 per cent of U.S. oil imports in 2015 were projected to come from sub-Saharan Africa. The prime energy location sites were in West Africa, Sudan, and Central Africa.

In this respect, Africa was seen as being important for the “diversification of our sources of imported oil” away from the “troubled areas of the Middle East and other politically high-risk areas.” In fact this drive to diversify sources of oil was behind the U.S. policy to bring about “regime change” in Iraq. In this context, the vast oil and gas reserves of Africa, Russia and the Asian Caspian regions had become critical for U.S. hegemony. The proven reserves of the African continent were said to be well over 30 billion barrels of oil, and over 40 different types of crude were available. Under current projections, the U.S. expected to import over 770 million barrels of African petroleum by the year 2020. U.S. investments in this direction were expected to increase so that by 2003 they would exceed $10 billion a year. Between two-thirds and three-fourths of U.S. direct investment in Africa would be in the energy sector, and this was expected to contribute to Africa’s economic development.

The U.S. has vigorously begun to pursue this policy in the Sudan and Nigeria. Recent U.S. peace moves in the Sudan are linked to this strategy. At a dinner honoring Reverend Leon Sullivan on the 20th of June 2002, President Bush stated that the U.S. would continue the search for peace in the Sudan, while at the same time seeking to end its sponsorship of terrorism. He added:

Since September the 11th there is no question that the government of Sudan has made useful contributions in cracking down on terror. But Sudan can and must do more. And Sudan’s government must understand that ending and stopping its sponsorship of terror outside Sudan is no substitute for efforts to stop war inside Sudan. Sudan’s government cannot continue to block and manipulate U.N. food deliveries, and must not allow slavery to persist.

It was therefore imperative to put an end to the war in the Sudan in order to exploit the vast oil resources of the whole country. It was estimated that 3-4 billion barrels of oil lay in the Southern Sudd area of the country, which was under the control of the SPLM. The new anti-terrorism policy in Sudan, combined with the shift of U.S. strategic considerations away from the Middle East in terms of oil production, required that a peace settlement be worked on as a matter of priority, and this explains the role the U.S. played in bringing about the Machakos Peace Agreement between the government of Sudan and the SPLM in July 2002. Recently, the Sudanese government in the North reported that it had discovered a new oil source in the Northern parts of the country. This suggested that the U.S. would in the future play the South against the North in order to assure itself of energy supplies. Hence its efforts to bring about peace in the Sudan were not wholly genuine.

As regards Nigeria, the U.S. government is said to be targeting the Gulf of Guinea as a replacement for the Persian Gulf as the future main source of U.S. oil imports. This region is now dubbed the “African Kuwait” in the U.S. strategic lexicon. A White Paper submitted to the U.S. Government by the African Oil Policy Initiative Group (AOPIG) pointed to the growing fear of insecurity in the continued supply of crude oil from the troubled Persian Gulf. According to Dr. Paul Michael Wihbey, a leading member and Fellow of the Institute for Advanced Strategic and Political Studies (IASPS), the U.S. expected to double its oil imports from Nigeria from 900,000 barrels per day to around 1.8 million barrels per day in the next five years. He pointed out that one major lesson of the September 11th terrorist attack was that the U.S. needed to diversify its major sources of oil away from the Persian Gulf. A Lagos newspaper quoted him as saying:

Statistics from the US Department of Energy showed African oil exports to the US will rise to 50 percent of total oil supply by 2015. Nigeria is the energy super power of Africa. The private sector, small and major operators, administration and officials, have come to realize that Nigeria and the Gulf of Guinea are of strategic importance to the US.

The U.S. government had in fact already begun discussions on the new initiative with the Nigerian government. An important factor creating a greater focus on its oil was that Nigeria had created an atmosphere of stability since the democratically elected government of President Olusegun Obasanjo had come to power. U.S. President George W. Bush visited Nigeria and four other African countries in 2003. In fact all this made a lot of sense at the very time when the U.S. was distancing itself from Saudi Arabia, its former ally. A briefing to a Pentagon defense panel described Saudi Arabia as a “kernel of evil.” The Washington Post of August 6, 2002 reported that the briefing had described Saudi Arabia as the enemy of the U.S. Laurent Murawiec, in his July 10th 2002 briefing, is said to have stated: “The Saudis are active at every level of the terror chain, from planners to financiers, from ideologists to cheerleaders.” He added that Saudi Arabia supported U.S. enemies and also attacked U.S. allies. He described Saudi Arabia as “the kernel of evil, the prime mover, the dangerous opponent” in the Middle East. The Washington Post added that although the briefing did not reflect official U.S. policy, these views represented a “growing currency” within the Bush administration. Yet in trying to play Africa against the Arab world, the U.S. was exploiting certain weaknesses within the African polity created by the European colonial strategy of “divide and rule.” The U.S. reasoned that its reliance on African sources of oil was better assured in Africa than in the Arab world. One official argued that it would be difficult to find a Saddam Hussein in Africa. The reason was Africa’s political disunity, a result of the African political elite having accepted former colonial boundaries as sacrosanct. The U.S. could exploit these divisions even more, especially when it came to the “Anglophone” and “Francophone” divisions, which the U.S. and France could exploit to advance their interests. Moreover, it could also exploit the democracy and good governance cards to topple regimes that put roadblocks in its way.

It is clear that the U.S. had gained wide acceptance of its anti-terrorist policies among the majority of African States. There is also an indication that although at the G8 Summit at Kananaskis (Alberta, Canada) the U.S. did not offer much by way of financial backing to the New Partnership for Africa’s Development (NEPAD), the U.S. and other members of the G8 had placed great importance on the NEPAD initiative, if only because it gave Nigeria and South Africa a predominant voice in Africa’s affairs. It was believed that these two countries would bring the other African leaders under disciplined control through the Peer Review Mechanism on Good Governance, which the leaders had imposed on themselves as a condition for financial support for NEPAD.

Indeed, one of the very first “projects” under NEPAD was a project to fight terrorism. During his whistle-stop tour of West Africa in April 2002, the British Prime Minister, Tony Blair, acknowledged that the September 11th attacks on the United States had effected a real change in the way everybody looked at the world. In his address to the Ghana Parliament, Blair argued that increased financial support to Africa was part of the process of fighting “terrorism” because engaging African states could reduce the risk of their becoming “breeding grounds for the kind of people who carried out the U.S. attacks.” He further argued: “If we leave failed states in parts of Africa, the problems sooner or later end up on our door step.” So the African countries are part and parcel of the September 11th alliance against terrorism, but Africa’s continued support will depend on how the U.S. plays its game, which is a very dicey one.

Africa Must Pursue Ubuntu Policy

The U.S. attack on Iraq has altered the situation somewhat. South Africa played a key role in developing an anti-Iraq war position for the African Union and the Non-Aligned Movement, both of which came out with strong statements against the war. President Mbeki of South Africa is chairperson of both organizations. Almost all the African states took a position against the war. The only exceptions were the so-called “New Breed” of African leaders from Uganda, Ethiopia, Eritrea and Rwanda. All these countries are embroiled in internal and cross-border conflicts themselves, and so it suits them to try to woo the U.S. in their wars against each other. Moreover, the anti-terrorism rhetoric of President Bush and the U.S. government also seems to help them fight one another on the basis that each is against terrorism promoted by the other. This is unsustainable.

The U.S. also did not play its Iraq war game well with some African states. According to the investigative journalist Seymour Hersh, the C.I.A. Chief, George Tenet, told a closed-door session of the Senate Foreign Relations Committee that between 1999 and 2001, Iraq had sought to buy 500 tonnes of uranium oxide from the African state of Niger, which would have been enough to build 100 nuclear bombs. This so-called connivance of Niger with Iraq was later used in the British government’s “Iraq Dossier” to prove that Iraq had weapons of mass destruction. The same “fact sheet” was cited by President Bush in his State of the Union Address on the issue of Iraq. It was used to “prove” that since Iraq had tried to “cover up” this purchase, it was also lying about its program for developing weapons of mass destruction. According to Hersh, this story about Iraq’s attempted purchase of uranium from Niger was used as “evidence” to convince the U.S. Congress to endorse military action against Iraq.

Less than two weeks before the initial U.S. bombing of Iraq, the head of the International Atomic Energy Agency, Mohamed ElBaradei, decisively discredited the accusations, which had all along been denied by Niger, with no one paying attention. The documents allegedly exchanged between the governments of Niger and Iraq confirming the deal were proved to have been forged. The documents consisted of a series of faked letters between Iraqi and Nigerien officials. One letter, dated July 2000, bore an amateurish forgery of the Niger president’s signature. Another letter was sent over the name of a person identified as Niger’s foreign minister, when that person had left the position ten years prior to the date of the letter!

The selection of Niger–a poor African country with little voice internationally–as the fall guy was also intentional. According to Hersh, the forgers assumed that it would be much more credible to implicate a poor African country rather than any one of the other three leading exporters of uranium oxide: Canada, Australia and Russia. While these countries could have proved the charges false, Niger, on the other hand, lacked the means of persuading the world that the accusations were false.

It is impressive that despite Africa’s marginalisation and poverty, very few African states have been wooed into the “alliance of the willing.” Most impressive was the refusal of Cameroon, Guinea, and Angola, at the time the African non-permanent members of the Security Council, to yield to U.S. bullying and bribery in support of the alliance against Iraq. These examples show that small states can stand up to great-power pressure and maintain a new human morality based on a democratic world order. What the U.S. wanted to achieve in Iraq with high-tech “smart weapons” was to demonstrate to all that whatever the U.S. says “goes.” This kind of political behavior would produce not a world order but world disorder.

Africa should therefore stand firm in support of the United Nations and in solidarity with the Arab world in these testing times, despite the fact that some Arab countries participated in the enslavement of African people and, indeed, continue to do so in Mauritania and Sudan, where Africans still suffer at the hands of Arab enslavers committing acts of genocide against them. It is the duty of Africans to unite, to continue to resist these acts of inhumanity, and to pursue claims for reparations against those Arab countries that participated in the slave trade and in its continuation up to the present moment. At the same time Africa must insist that these and similar acts, including acts of terrorism and state terrorism against other peoples, be resolved on the basis of internationally agreed solutions grounded in principles of international law and Ubuntu.

These principles include truth, acceptance of responsibility, compensation and reparation for wrongs against other human beings, justice, and reconciliation. Ubuntu draws deeply from African civilisational values. According to Archbishop Desmond Tutu, who later became chairman of the Truth and Reconciliation Commission of South Africa:

Africans have this thing called UBUNTU… the essence of being human. It is part of the gift that Africans will give the world. It embraces hospitality, caring about others, willing to go the extra mile for the sake of others. We believe a person is a person through another person, that my humanity is caught up, bound up and inextricable in yours. When I dehumanize you I inexorably dehumanize myself. The solitary individual is a contradiction in terms and, therefore, you seek to work for the common good because your humanity comes into its own in community, in belonging [Mukanda & Mulemfo, 2000: 52-62].

These philosophic values called Ubuntu also draw from other cultures and civilizations. Ubuntu offers the only civilized way to manage problems and handle disputes in the twenty-first century, which should be a century of peace. Africa has therefore acted correctly in refusing to side with the U.S. in its war against Iraq. It is an unfair war, one that will harm the interests of the Arab peoples and have adverse consequences for international security, affecting African countries as well. Africa should also dissociate itself from the actions of the Bush administration in declaring Iran, Iraq, and North Korea an “Axis of Evil.” African states should maintain contacts and relations with all these countries. Mzee Mandela gave a lead in responding to what the U.S. regarded as terrorism when it tried to isolate Libya over the Lockerbie aircraft-bombing affair. Mandela broke the blockade against Libya by visiting Tripoli over U.S. protestations. By so doing he strengthened the African states, which also resolved to end the blockade through the Organization of African Unity (OAU). This African action made it possible for Libya to cooperate more willingly with the international community in resolving the dispute through the courts; Libya is now part of the alliance in the fight against terrorism under UN resolutions.

Furthermore, Mzee Nelson Mandela correctly refrained from endorsing Bush’s blanket concept of “terrorism,” qualifying it so that it would not apply to genuine expressions of peoples’ discontent. He argued that the right to self-determination and other peoples’ rights should not be confused with terrorism, and that it is by ignoring these rights, as in the case of Palestine, that acts of violence occur which some may prefer to describe as terrorism. Such violence, he explained, results from frustrations arising out of the non-recognition of peoples’ demands for the right to self-determination and for their democratic rights. Later, when Bush dismissed Iraq’s unconditional acceptance of the return of United Nations weapons inspectors, Mandela called him a “bully” and called on the U.S. to respect the United Nations. He also condemned those leaders in the world who kept quiet “when one country wants to bully the whole world.”

This is the way forward. We cannot keep quiet in the face of the gimmicks of an outlaw behaving as if he were in the “Wild West” when it comes to the responsibility of states to maintain peace and security in the world. While Saddam Hussein might himself have behaved like a bully, that is not the way he should be treated. The philosophy of “an eye for an eye, a tooth for a tooth” leaves all of us blind and toothless. We need a humane way of handling human affairs and a reasonable system of conflict management, control, and resolution, which the Ubuntu philosophy offers. The only civilized way of dealing with these issues is through the principles and spirit of Ubuntu in international relations.

The U.S. should emulate this African Ubuntu approach instead of following the path of violent confrontation with the Arab countries and with Muslim political groups engaged in violence against it for causes that need to be addressed in a humane way. Violence begets violence, and those who are more powerful should be more guarded in resorting to its use. As the English proverb has it, “those who live in glass houses should not throw stones.” This truism holds for the U.S. as well. Instead the U.S. should acknowledge the right of all peoples to self-determination, including that of the Palestinian people, for whom the Bush administration has had little regard. We cannot afford one set of rules for the Palestinians and another for the Israelis. A completely new approach to the problems of the 21st century is required, and the answer lies in ensuring security for all, in all its manifestations.

We agree with Francis Kornegay of the Centre for Africa’s International Relations at the University of the Witwatersrand, South Africa, when he suggests that Africa should be declared a zone of peace, which the African Union could monitor. This would be part of a doctrine in international relations based on the philosophy of Ubuntu, in which African states and peoples commit themselves to uniting all the world’s people by insulating the continent from becoming a battleground in the war against terrorism, as has already happened in Kenya and Tanzania. The U.S. has already named a number of countries in the Horn of Africa as part of its strategy of fighting terrorism on the African continent. African states should not collaborate in this scheme and should instead declare the continent “terrorism free” and a “zone of peace.” But to do this, Africa would have to return to a strong commitment to the Non-Aligned Movement, in solidarity with the Arab world as well as other parts of the oppressed world.


In conclusion, it should be pointed out that the attack on the U.S. on September 11th, 2001 was directed at U.S. strategic interests, which it has developed since the end of World War II. The analysis here has shown that this policy has been developed against the interests of Third World peoples, whose resources are subjected to U.S. control and exploitation. The U.S. believes that, as leader of the “Free World,” it has the responsibility to ensure global peace and security, and that to do so it needs to develop the resources of the entire world on a “free trade” basis. But, as we have seen, this has been achieved through manipulation and through the use and threatened use of force against its weaker opponents in the Third World. The U.S. claims that its actions are motivated by the interests of the whole world, while at the same time claiming to defend “civilization,” a code word for western civilization and western interests.

Therefore, while it calls on the whole world not to permit al Qaeda to turn the present war against terrorism into a war of civilizations, it actually creates conditions that could ultimately turn such a conflict into a generalized conflict between civilizations on a global scale. The only answer therefore lies in insisting that all problems between countries, cultures, and civilizations be resolved through dialogue and negotiations that recognize the interests of all as equally important. We have to use organs of global dialogue such as the United Nations and global summits and conferences through which agreements can be reached and implemented. It is for this reason that the UN Secretary-General, Kofi Annan, called for a dialogue between civilizations as a task of this century, if indeed the century is to be a peaceful one.

For the U.S. therefore to insist that the war against terrorism is not a civilisational one, while at the same time calling on the African states to agree that it is “a war of civilizations against those who would be uncivilized in their approach towards us,” is to take Africa for granted and to try to use Africans against other peoples who may have genuine grievances against the U.S. It should be remembered that, up to this point, the U.S. government still regards Africans and their descendants in the United States as less than human beings and still treats them as uncivilized. Why? Because, alongside the other western powers and some of the Arab world, it refuses to consider demands for reparations for the exploitation and suffering of those Africans who were enslaved and exploited as sub-humans in the building of the wealth it now enjoys. Africa must push for dialogue on all these issues. The U.S. cannot have its cake and eat it: it cannot expect Africans to defend its civilization while at the same time refusing to compensate them for acts of inhuman behavior against them.

Global security in the 21st century requires that the security of one country become the security of another, and security in this new understanding must be understood in its broadened sense to mean human security for all. As the Social Science Research Council has come to recognize, security concerns should no longer be seen in the context of the geopolitics of the Cold War period. The field of security studies has changed greatly since the early 1980s, with the increasing realization that threats to the security of individuals, communities, and states around the world originate from a variety of sources other than the military dimension of great-power competition and rivalry that characterized the Cold War. Such ‘small events’ as localized wars, small-arms proliferation, ethnic conflicts, environmental degradation, international crime, and human rights abuses are all now recognized as central to the understanding of security at local, national, regional, and global levels.

The U.S., like all countries of the world, must adjust to this new reality and address all these different concerns of security in order to create the conditions for security for all. It must now be realized and accepted by all of us on this planet that security for ‘us’ must mean security for ‘them’ as well; otherwise there cannot be security for all. That must be the lesson we learn from the events of September 11th, 2001. In short, September 11th requires us to embrace and enhance a holistic security consciousness that should inform global security policy based on Ubuntu.


References

Huntington, S [1997]: The Clash of Civilizations and the Remaking of World Order, Touchstone, New York.

Kennedy, P [1988]: The Rise and Fall of the Great Powers: Economic Change and Military Conflict From 1500 to 2000, Unwin-Hyman, London.

Keohane, R O [1980]: “The Theory of Hegemonic Stability and Changes in International Economic Regimes, 1967-1977,” in Holsti, O. R, Siverson, R. M, and George, A. L (eds.) [1980]: Change in the International System, Westview Press, Boulder.

Mukanda and Mulemfo, M [2000]: Thabo Mbeki and the African Renaissance: The Emergence of a New African Leadership, Actua Press (Pty), Pretoria, South Africa.

Nabudere, D W [1979]: Essays in the Theory and Practice of Imperialism, Onyx Press, London.

Nabudere, D W [1990]: The Rise and Fall of Money Capital, AIT, London.

Onyango-Obbo, C [2002]: “Is USA that ignorant? So what do its young men in white shirts want here?” Ear to the Ground column, The Monitor, Wednesday July 3, 2002.

Rangarajan, L [1984]: “The Politics of International Trade” in Strange, S (ed.) [1984]: Paths to International Political Economy, George Allen & Unwin, London.

Raghavan, C [1990]: Recolonisation: GATT, The Uruguay Round & The Third World, Zed/Third World Network, London.

Rashid, A [2000]: Taliban: Militant Islam, Oil and Fundamentalism in Central Asia, Yale University Press, New Haven.

Spero, J. E [1977, 1985]: The Politics of International Economic Relations, George Allen & Unwin, London.

Anti-Americanism: A Revisit

Rob Kroes, chair and professor of American studies at the University of Amsterdam, is the author of If You’ve Seen One You’ve Seen the Mall: Europeans and American Mass Culture, and Them and Us: Questions of Citizenship in a Globalizing World.

I. What “ism” is anti-Americanism?

What kind of “ism” is anti-Americanism? Like any “ism,” it refers to a set of attitudes that help people structure their world view and guide their actions. It also implies a measure of exaggeration, a feverish over-concentration on one particular object of attention and action. Yet what is the object in the case of anti-Americanism? The word suggests two different readings. It could refer to anti-American feelings taken to the heights of an “ism,” representing a general rejection of things American. It could also be seen as a set of feelings against (anti) something called Americanism. In the latter case, we need to explore the nature of the Americanism that people oppose. As we shall see, the word has historically been used in more than one sense. Yet whatever its precise meaning, Americanism – as an “ism” in its own right – has always been a matter of a concise and exaggerated reading of some characteristic features of an imagined America, seen as a country and a culture crucially different from places elsewhere in the world. In that sense Americanism can usefully be compared to nationalism.

In much the same way that nationalism implies the construction of the nation, usually one’s own, in a typically inspirational vein, causing people to rally around the flag and other such emblems of national unity, Americanism helped an anguished American nation define itself in the face of the millions of immigrants who aspired to citizenship. Particularly in the period following World War I, it became known as the “one hundred percent Americanism” movement, confronting immigrants with a demanding list of criteria for inclusion. Americanism in that form represented the American equivalent of the more general concept of nationalism. It was carried by those Americans who saw themselves as the guardians of the integrity and purity of the American nation. There is, however, another historical relationship of Americanism to nationalism. This time it is not Americans who are the agents of definition, but others in their respective national settings. Time and again, other peoples’ nationalisms have not only cast their own nations in a particular inspirational light; they have also used America as a counterpoint, a yardstick that other nations might either hope to emulate or feel bound to reject.

Foreigners as much as Americans themselves, therefore, have produced readings of America, condensed into the ideological contours of an “ism.” Of course, this is likely to happen only in those cases where America has become a presence in other peoples’ lives, as a political force, as an economic power, or through its cultural radiance. The years following World War I were one such watershed. Through America’s intervention in the war and the role it played in ordering the post-war world, through the physical presence of its military forces in Europe, and through the burst of its mass culture onto the European scene, Europeans were forced in their collective self-reflection to try and make sense of America, and to come to terms with its impact on their lives. Many forms of Americanism were then conceived by Europeans, sometimes admiringly, sometimes in a more rejectionist mood, often in a tenuous combination of the two. The following exploration will look at some such moments in European history, high points in the American presence in Europe, and at the complex response of Europeans.

Americanism and anti-Americanism

“Why I reject ‘America.'” Such was the provocative title of a piece published in 1928 by the young Dutch author Menno ter Braak, who was to become a leading intellectual light in the Netherlands during the 1930s. The title is not a question but an answer, staking out his position toward an America in quotation marks, a construct of the mind, a composite image based on the perception of current dismal trends which the author then links to America as the country and the culture characteristically – but not uniquely – displaying them. Nor is it only outsiders who are struck by such trends and reject them. Indeed, as Ter Braak himself admits, anyone sharing his particular sensibility and intellectual detachment he is willing to acknowledge as a European, “even if he happens to live on Main Street.” It is an attitude for which he offers the striking parable of a young newspaper vendor he saw one day standing on the balcony of one of those pre-World War II Amsterdam streetcars, surrounded by the pandemonium of traffic noise, yet enclosed in a private sphere of silence. Amid the pointless energy and meaningless noise the boy stood immersed in the reading of a musical score, deciphering the secret code which admitted entrance to a world of the mind. This immersion, this loyal devotion to the probing of meaning and sense, to a heritage of signs and significance, are for Ter Braak the ingredients of Europeanism. It constitutes for him the quintessentially European reflex of survival against the onslaught of a world increasingly geared toward the tenets of rationality, utility, mechanization, and instrumentality, yet utterly devoid of meaning and prey to the forces of entropy. The European reaction is one that pays tribute to what is useless and unproductive, defending a quasi-monastic sphere of silence and reflexiveness amidst the whirl of secular motion.

This reflex of survival through self-assertion was of course a current mood in Europe during the interwar years, a Europe in ruins not only materially but spiritually as well. Amid the aimless drift of society’s disorganization and the cacophony of demands accompanying the advent of the masses on to the political agora, Americanism as a concept had come to serve the purpose of focusing the diagnosis of Europe’s plight. The impulse toward reassertion – toward the concentrated retrieval of meaning from the fragmented score of European history – was therefore mainly cultural and conservative, much as it was an act of protest and defiance at the same time. Many are the names of the conservative apologists we tend to associate with this mood. There is Johan Huizinga, the Dutch historian, who upon his return from his only visit to the United States at about the time that Ter Braak wrote his apologia, expressed himself thus: “Among us Europeans who were traveling together in America … there rose up repeatedly this pharisaical feeling: we all have something that you lack; we admire your strength but do not envy you. Your instrument of civilization and progress, your big cities and your perfect organization, only made us nostalgic for what is old and quiet, and sometimes your life seems hardly to be worth living, not to speak of your future” – a statement in which we hear resonating the ominous foreboding that “your future” might well read as “our [European] future.” For indeed, what was only implied here would come out more clearly in Huizinga’s more pessimistic writings of the late 1930s and early ’40s, when America became a mere piece of evidence in Huizinga’s case against contemporary history losing form.

Much as the attitude involved is one of a rejection of “America” and Americanism, what should strike a detached observer is the uncanny resemblance with critical positions that Americans had reached independently. Henry Adams of course is the perfect example, a prefiguration of Ter Braak’s “man on the balcony,” transcending the disparate signs of aimlessness, drift and entropy in a desperate search for a “useless” and highly private world of meaning. But of course his urgent quest, his cultural soul-searching, was much more common in America, was much more of a constant in the American psyche than Europeans may have been willing to admit. Cultural exhortation and self-reflection, under genteel or not-so-genteel auspices, were then as they are now a recurring feature of the American cultural scene. During one such episode, briefly centered around the cultural magazine The Seven Arts, James Oppenheim, its editor, pointed out that “for some time we have seen our own shallowness, our complacency, our commercialism, our thin self-indulgent kindliness, our lack of purpose, our fads and advertising and empty politics.” In this brief period, on the eve of America’s intervention in World War I, there was an acute awareness of America’s barren landscape, especially when measured by European standards. Van Wyck Brooks, one of the leading spokesmen of this group of cultural critics, pointed out that “for two generations the most sensitive minds in Europe – Renan, Ruskin, Nietzsche, to name none more recent – have summed up their mistrust of the future in that one word – Americanism.” He went on to say: “And it is because, altogether externalized ourselves, we have typified the universally externalizing influences of modern industrialism.”

Yet, in spite of these similarities, the European cultural critics may seem to argue a different case and to act on different existential cues: theirs is a highly defensive position in the face of a threat which is exteriorized, perceived as coming from outside, much as in fact it was immanent to the drift of European culture. What we see occurring is in fact the retreat toward cultural bastions in the face of an experience of a loss of power and control; it is the psychological equivalent of the defense of a national currency through protectionism. It is, so to speak, a manipulation of the terms of psychological trade. A clear example is Oswald Spengler’s statement in his Jahre der Entscheidung (Years of Decision): “Life in America is exclusively economic in its structure and lacks depth, the more so because it lacks the element of true historical tragedy, of a fate that for centuries has deepened and informed the soul of European peoples….” Huizinga made much the same point in his 1941 essay on the formlessness of history, typified by America. Yet Spengler’s choice of words is more revealing. In his elevation of such cultural staples as “depth” and “soul,” he typifies the perennial response to an experience of inferiority and backwardness of a society compared to its more potent rivals. Such was the reaction, as Norbert Elias has pointed out in his magisterial study of the process of civilization in European history, on the part of an emerging German bourgeoisie vis-à-vis the pervasive radiance of French civilization. Against French civilisation as a mere skin-deep veneer it elevated German Kultur as more deep-felt, warm and authentic. It was a proclamation of emancipation through a declaration of cultural superiority. Americanism, then, is the twentieth-century equivalent of French eighteenth-century civilisation as perceived by those who rose up in defense against it. 
It serves as the negative mirror image in the quest for a national identity through cultural self-assertion. Americanism in that sense is therefore a component of the wider structure of anti-Americanism, paradoxical as this may sound.

Americanism, un-Americanism, anti-Americanism

Let us dwell briefly on the conceptual intricacies of such related terms as Americanism, un-Americanism, and anti-Americanism. Apparently, as we have seen, Americanism as a concept can stand for a body of cultural characteristics deemed repugnant. Yet the same word, in a different context, can have a highly positive meaning, denoting the central tenets of the American creed, or of “American scripture,” as Michael Ignatieff would have it. Both, however, duly deserve their status of “isms”: both are emotionally charged code words in the defense of an endangered national identity. In the United States, as “one hundred percent Americanism,” it raised a demanding standard before the hordes of aliens aspiring to full membership in the American community while threatening the excommunication of those it defined as un-American. Americanism in its negative guise fulfilled much the same function in Europe, serving as a counterpoint to true Europeanism. In both senses, either positive or negative, the concept is a gate-keeping device, a rhetorical figure, rallying the initiates in rituals of self-affirmation.

Compared to these varieties of Americanism, relatively clear-cut both historically and sociologically, anti-Americanism appears as a strangely ambiguous hybrid. It never appears to imply – as the word suggests – a rejection across the board of America, of its society, its culture, its power. Although Huizinga and Ter Braak may have inveighed against Americanism, against an America in quotation marks, neither can be considered a spokesman of anti-Americanism in a broad sense. Both were much too subtle minds for that, in constant awareness of contrary evidence and redeeming features, much too open and inquiring about the real America, as a historical entity, to give up the mental reserve of the quotation mark. After all, Ter Braak’s closing lines are: “‘America’ I reject. Now we can turn to the problem of America.” And the Huizinga quotation above, already full of ambivalence, continues thus: “And yet in this case it must be we who are the Pharisees, for theirs is the love and the confidence. Things must be different than we think.”

Now where does that leave us? Both authors were against an Americanism as they negatively constructed it. Yet that does not meaningfully make their position one of anti-Americanism. There was simply too much intellectual puzzlement and, particularly in Huizinga’s case, too much admiration and real affection, too much appreciation of an Americanism that had inspired American history. Anti-Americanism, then, if we choose to retain the term at all, should be seen as a weak and ambivalent complex of anti-feelings. It applies only selectively, never extending to a total rejection of both Americanisms. Thus we can have either of two separate outcomes: an anti-Americanism rejecting cultural trends seen as typically American, while allowing for admiration of America’s energy, innovation, prowess, and optimism; or an anti-Americanism in reverse, rejecting an American creed that for all its missionary zeal is perceived as imperialist and oppressive, while admiring American culture, from its high-brow to its pop varieties. These opposed directions in the critical thrust of anti-Americanism often go hand in hand with opposed positions on the political spectrum. The cultural anti-Americanism of the inter-war years was typically a conservative position, whereas the political anti-Americanism of the Cold War and the war in Vietnam typically occurred on the left. Undoubtedly the drastic change in America’s position on the world stage since World War II has contributed to this double somersault. Since that war America has appeared in a radically different guise, as much more of a potent force in everyday life in Europe than ever before. This leads us to explore one further nexus among the various concepts.

The late 1940s and ’50s may have been a honeymoon in the Atlantic relationship, yet throughout the period there were groups on the left loath to adopt the unfolding Cold-War view of the world: nostalgics of the anti-Nazi war alliance with the Soviet Union, a motley array of fellow travelers, third roaders, Christian pacifists, and others. Their early critical stance toward the United States revealed yet another ambivalent breed of anti-Americanism. In their relative political isolation at home, they tended to identify with precisely those who in America were being victimized as un-American in the emerging Cold-War hysteria of loyalty programs, House Un-American Activities Committee (HUAC) inquiries, and McCarthyite persecution. In their anti-Americanism they were the ones to rally to the defense of Alger Hiss, of Ethel and Julius Rosenberg, and of their many American supporters. Affiliating with dissenters in America, their anti-Americanism combined with the alleged un-Americanism of protest in the United States to form a sort of shadow Atlantic partnership. It is a combination that would recur in the late sixties, when political anti-Americanism in Europe, occasioned by the Vietnam War, found itself in unison with a generation in the United States engaged in anti-war protest and the counter-culture of the time, burning US flags along with their draft cards in so many demonstrations of a domestic anti-Americanism that many among Nixon’s “silent majority” may have deemed un-American. As bumper stickers at the time reminded protesters: America, Love It Or Leave It.

The disaffection from America during the Vietnam War may have appeared to stand for a more lasting value shift on both sides of the Atlantic. The alienation and disaffection of this emerging adversary culture proved much more short-lived in America, however, than in Europe. The return to a conservative agenda, cultural and political, in America since the end of the Vietnam War never occurred in any comparable form in Europe. There, indeed, the disaffection from America has become part of a much more general disaffection from the complexities and contradictions of modern society. The squatters’ movement in countries such as Germany, Denmark, and the Netherlands, the ecological (or Green) movement, the pacifist movement (particularly in the 1980s during the Cruise Missile debate), and more recently the anti-globalization movement have all become the safe havens of a dissenting culture, highly apocalyptic in its view of the threat that technological society poses to the survival of mankind. And despite the number and variety of the anti-feelings of these adversary groups, America can once again serve each and all of them as a symbolic focus. Thus, in this recent stage, it appears that anti-Americanism can not only be too broad a concept, as pointed out before – a configuration of anti-feelings that never extends to all things American – it can also be too narrow, in that the “America” now rejected is really a code word – a symbol – for a much wider rejection of contemporary society and culture. The more diffuse and anomic these feelings are, the more readily they seem to find a cause to blame. Whether or not America is involved in an objectionable event – and given its position in the world it often is – there is always a nearby McDonald’s to bear the brunt of anger and protest, and to have its windows smashed. If this is anti-Americanism, it is of a highly inarticulate, if not irrational, kind.

II. Cultural Anti-Americanism: Two French Cases

“Nous sommes tous américains.” We are all Americans. Such was the rallying cry of Jean-Marie Colombani, editor-in-chief of the French newspaper Le Monde, published two days after the terrorist attack against symbols of America’s power. He went on to say: “We are all New Yorkers, as surely as John Kennedy declared himself, in 1962 in Berlin, to be a Berliner.” If that was one historical resonance that Colombani himself called forth for his readers, there is an even older use of this rhetorical call to solidarity that may come to mind. It is Jefferson’s call for unity after America’s first taste of two-party strife. Leading the opposition to victory in the presidential election of 1800, he assured Americans that “We are all Republicans, we are all Federalists,” urging his audience to rise above the differences that many at the time feared might divide the young nation against itself. There would clearly be no need for such a ringing rhetorical call if there were not at the same time an acute sense of difference and division. So it was with Colombani’s timely expression of solidarity with an ally singled out for vengeful attack solely because it, more than any of its allies, had come to represent the global challenge posed by a shared Western way of life. An attack against America was therefore an attack against common values held dear by all who live by standards of democracy and the type of open society that it implies. But as in Jefferson’s case, the rhetorical urgency of the call for solidarity suggests a sense of difference and divisions now to be transcended, or at least temporarily shunted aside.

As we all know, a long history illustrates France’s abiding affinity with America’s daring leap into an age of modernity. France shared America’s fascination with the political modernity of republicanism, of democracy and egalitarianism, with the economic modernity of progress in a capitalist vein, and with an existential modernity that saw Man, with a capital M and in the gender-free sense of the word, as the agent of history, the molder of his social life as well as of his own individual identity and destiny. It was after all a Frenchman, Crèvecoeur, who on the eve of American independence pondered the question of “What, then, is the American, this new Man?” A long line of French observers have, in lasting fascination, commented on this American venture, seeing it as a trajectory akin to their own hopes and dreams for France. Similarly, French immigrants in the United States, in order to legitimize their claims to ethnic specificity, have always emphasized the historical nexus of French and American political ideals, elevating Lafayette alongside George Washington to equal iconic status.

But as we also know, there is an equally long history of French awareness of American culture taking directions that were seen as a threat to French ways of life and views of culture. Whether it was Tocqueville’s more sociological intuition of an egalitarian society breeding cultural homogeneity and conformism, or later French views that sought the explanation in the economic logic of a free and unfettered market, their fear was of an erosion of the French cultural landscape, of French standards of taste and cultural value. As I have argued elsewhere, the French were not alone in harboring such fears, but they have been more consistently adamant in making the case for a defense of their national identity against a threatening process of Americanization. The very word is a French coinage. It was Baudelaire who, on the occasion of the 1855 Exposition Universelle de Paris, spoke of modern man, set on a course of technical materialism, as “tellement américanisé … qu’il a perdu la notion des différences qui caractérisent les phénomènes du monde physique et du monde moral, du naturel et du surnaturel” (so Americanized … that he has lost the notion of the differences that characterize the phenomena of the physical world and the moral world, of the natural and the supernatural). The Goncourt brothers’ Journal, from the time of the second exposition in 1867, refers to “L’exposition universelle, le dernier coup à ce qui est l’américanisation de la France” (The universal exposition, the final blow to what is the Americanization of France). As these critics saw it, industrial progress ushered in an era where quantity would replace quality and where a mass culture feeding on standardization would erode established taste hierarchies. There are echoes of Tocqueville here, yet the eroding factor is no longer the egalitarian logic of mass democracy but the logic of industrial progress. In both cases, however, whatever the precise link and evaluating angle, America had become the metonym for unfettered modernity, like a Prometheus unbound.

Europeans, French observers included, have always been perplexed by two aspects of the American way with culture – two aspects that to them represented the core of America’s cultural otherness – one its crass commercialism, the other its irreverent attitude of cultural bricolage, recycling the culturally high and low, the vulgar and the sublime, in ways unfamiliar and shocking to European sensibilities. As for the alleged commercialism, what truly strikes Europeans is the blithe symbiosis between two cultural impulses that Europeans take to be incompatible: a democratic impulse and a commercial one. From early on American intellectuals and artists agreed that for American culture to be American it must needs be democratic. It should appeal to the many, not the few. Setting itself up in contradistinction to Europe’s stratified societies and the hierarchies of taste they engendered, America proclaimed democracy for the cultural realm as well. That in itself was enough to make Europeans frown. Could democratic culture ever be anything but vulgar, ever be more than the largest common denominator of the people’s tastes? Undeniably, there were those in Europe who agreed with Americans that cultural production there could not simply follow in the footsteps of Europeans, and who were willing to recognise an American Homer in Walt Whitman, America’s poet of democracy. But even they were aghast at the ease with which the democratic impulse blended into the commercial. What escaped them was that in order to reach a democratic public, the American artist found himself in much the same situation as a merchant going to market. If America was daring in its formation of a mass market for goods that it produced en masse, it was equally daring in its view that cultural production in a democratic vein needed to find its market, its mass audience. 
In the absence of forms of European cultural sponsorship, it needed to make its audiences, to create its own cultural market, if only with a view to recouping the cost of cultural production. Particularly in the age of mechanical reproduction when the market had to expand along with the growth in cultural supply, American culture became ever more aware of the commercial calculus. And by that same token, it became ever more suspect in the eyes of European critics. Something made for profit, for money, could inherently never be of cultural value. This critical view has a long pedigree and is alive and well today.

The other repertoire of the European critique of American mass culture focuses on its spirit of blithe bricolage, of its anti-canonical approach to questions of high culture versus low culture, or to matters of the organic holism of cultural forms. Again, some Europeans were tempted, if not convinced, by Whitmanesque experiments in recognising and embracing the elevated in the lowly, the vulgar in the sublime, or by his experiments in formlessness. They were willing to see in this America’s quest for a democratic, if not demotic, culture. But in the face of America’s shameless appropriation of the European cultural heritage, taking it apart and re-assembling it in ways that went against European views of the organic wholeness of their hallowed heritage, Europeans begged to differ. To them, the collage or re-assemblage attitude that produced Hearst Castle, Caesar’s Palace, or the architectural jumble of European quotations in some of America’s high-rise buildings seemed proof that Americans could only look at European culture in the light of contemporaneity, as if it were one big mail-order catalog. It was all there at the same time, itemized and numbered, for Americans to pick and choose from. It was all reduced to the same level of usable bits and pieces, to be recycled, re-assembled, and quoted at will. Many European critics have seen in this an anti-historical, anti-metaphysical, or anti-organicist bent of the American mind. When Huizinga was first introduced, in the 1920s, to the Dewey Decimal System used to organize library holdings, he was aghast at the reduction of the idea of a library, an organic body of knowledge, to the tyranny of the decimal system, to numbers. Others, like Charles Dickens or Sigmund Freud, more facetiously, saw American culture as reducing cultural value to exchange value, the value of dollars. Where Europeans tend toward an aesthetics that values closure, rules of organic cohesion, Americans tend to explode such views. 
If they have a canon, it is one that values open-endedness in the re-combination of individual components. They prefer constituent elements over their composition. Whether in television or American football, European ideas of flow and continuity get cut up and jumbled, in individual time slots as on tv, or in individual plays as in football. Examples abound, and will most likely come to your mind “even as I speak” (to use American television lingo).

Now, potentially, the result of this bricolage view of cultural production might be endless variety. Yet what Europeans tended to see was only spurious variety, fake diversity, a lack of authenticity. A long chorus of French voices, from Georges Duhamel and François Mauriac in the interwar years, to Jean-Paul Sartre and more particularly Simone de Beauvoir after World War II, in the 1940s and ’50s, kept this litany resounding. At one point Simone de Beauvoir even borrowed from David Riesman, the American sociologist, to make a point she considered her own. She referred to the American people as “un peuple de moutons” (a people of sheep), conformist, and “extéro-conditionnés,” French for Riesman’s “other-directed.” At other points she could see nothing but a lack of taste, if not slavishness, in American consumerism.

Such French views are far from dated. They still inform current critiques of contemporary mass culture. Yet, apparently, the repertoire is so widespread and well-known that often no explicit mention of America is needed anymore. America has become a subtext. In what follows I propose to give two examples, both of them French. One illustrates the dangers of commercialism in the production of culture, the other the baneful effects of America’s characteristic modularizing mode of cultural production, its spirit of bricolage.

Commercialism and culture

In our present age of globalization, with communication systems such as the Internet spanning the globe, national borders have become increasingly porous. They no longer serve as cultural barriers that one can raise at will to fend off cultural intrusions from abroad. It is increasingly hard to erect them as a cultural “Imaginot” line (forgive the pun) in defense of a national cultural identity. Yet old instincts die hard. In a typically preemptive move, France modernized its telephone system in the 1980s, introducing a communication network (the Minitel) that allowed people to browse and shop around. It was a network much like the later World Wide Web. The French system was national, however, and stopped at the border. At the time it was a bold step forward, but it put France at a disadvantage later on, when the global communications revolution got under way. The French were slower than most of their European neighbors to connect to the Internet. And that may have been precisely the point.

At every moment in the recent past when the liberalization of trade and flows of communication was being discussed in international meetings, the French raised the issue of cultural protection. They have repeatedly insisted on exempting cultural goods, such as film and television, from the logic of free trade. They do this because, as they see it, France represents cultural “quality” and therefore may help to maintain diversity in the American-dominated international market for ideas. The subtext for such defensive strategies is not so much the fear of opening France’s borders to the world but rather fear of letting American culture wash across the country. Given America’s dominant role in world markets for popular culture, as well as its quasi-imperial place in the communications web of the Internet, globalization to many French people is a Trojan horse. For many of them, globalization means Americanization.

Not too long ago the French minister of culture published a piece in the French daily newspaper Le Monde, again making the French case for a cultural exemption from free-trade rules. A week later one of France’s leading intellectual lights, Pierre Bourdieu, joined the fray with a piece published in the same newspaper. It was the text of an address delivered on October 11, 1999, to the International Council of the Museum of Television and Radio in Paris. He chose to address his audience as “representing the true masters of the world,” those whose control of global communication networks gives them not political or economic power but what Bourdieu called “symbolic power,” that is, power over people’s minds and imaginations gained through the cultural artifacts – books, films, and television programs – that they produce and disseminate. This power is increasingly globalized through international market control, mergers and consolidations, and a revolution in communications technology. Bourdieu briefly considered the fashionable claim that the newly emerging structures, aided by the digital revolution, will bring endless cultural diversity, catering to the cultural demands of specific niche markets. Bourdieu rejected this out of hand; what he saw was an increasing homogenization and vulgarization of cultural supply, driven by a logic that is purely commercial, not cultural. Aiming at profit maximization, market control, and ever larger audiences, the “true masters of the world” gear their products to the largest common denominator that defines their audience. What the world gets is more soap operas, expensive blockbuster movies organized around special effects, and books whose success is measured by sales, not by intrinsic cultural merit.

It is a Manichaean world that Bourdieu conjured up. True culture, as he saw it, is the work of individual artists who view their audience as posterity, not the throngs at the box office. In the cultural resistance that artists have put up over the centuries against the purely commercial view of their work, they have managed to carve out a social and cultural domain whose organizing logic is at right angles to that of the economic market. As Bourdieu put it: “Reintroducing the sway of the ‘commercial’ in realms that have been set up, step by step, against it means endangering the highest works of mankind.” Quoting Ernst Gombrich, Bourdieu said that when the “ecological prerequisites” for art are destroyed, art and culture will not be long in dying. After voicing a litany of cultural demise in the film industries of a number of European countries, he lamented the fate of a cultural radio station about to be liquidated “in the name of modernity,” a victim of Nielsen ratings and the profit motive. “In the name of modernity” indeed. Never in his address did Bourdieu rail against America as the site of such dismal modernity, yet the logic of his argument is reminiscent of many earlier French views of American culture, a culture emanating from a country that never shied from merging the cultural and the commercial (or, for that matter, the cultural and the democratic). Culture, as Bourdieu defended it, is typically high culture. Interestingly, though, unlike many earlier French criticisms of an American culture that reached Europe under commercial auspices, Bourdieu’s defense was not of national cultures, more specifically the French national identity, threatened by globalization. No, he argued, the choice is between “the kitsch products of commercial globalization” and those of an international world of creative artists in literature, visual arts, and cinematography, a world that knows many constantly shifting centers. Yet blood runs thicker than water. 
Great artists, and Bourdieu listed several writers and filmmakers, “would not exist the way they do without this literary, artistic, and cinematographic international whose seat is [present tense!] situated in Paris. No doubt because there, for strictly historical reasons, the microcosm of producers, critics, and informed audiences, necessary for its survival, has long since taken shape and has managed to survive.” Bourdieu thus managed to have his cake and eat it too, arrogating a place for Paris as the true seat of a modernity in high culture. In his construction of a global cultural dichotomy lurks an established French parti pris. More than that, however, his reading of globalization as Americanization by stealth blinded him to the way in which French intellectuals and artists before him have discovered, adapted, and adopted forms of American commercial culture, such as Hollywood movies.

In his description of the social universe that sustains a cultural international in Paris, Bourdieu mentioned the infrastructure of art-film houses, of a cinémathèque, of eager audiences and informed critics, such as those writing for the Cahiers du cinéma. He seemed oblivious to the fact that in the 1950s precisely this potent ambience for cultural reception led to the French discovery of Hollywood movies as true examples of the “cinéma d’auteur,” of true film art showing the hand of individual makers, now acclaimed masters in the pantheon of film history. Their works are held and regularly shown in Bourdieu’s vaunted cinémathèque and his art-film houses. They were made to work, like much other despised commercial culture coming from America, within frameworks of cultural appropriation and appreciation more typically French, or European, than American. They may have been misread in the process as works of individual “auteurs” more than as products of the Hollywood studio system. That they were the products of a cultural infrastructure totally at variance with the one Bourdieu deemed essential may have escaped French fans at the time. It certainly escaped Bourdieu.

The modularizing mind and the World Wide Web

Among other dreams the Internet has inspired those of a return to a world of total intertextuality, of the reconstitution of the full body of human thinking and writing. It would be the return to the “City of Words,” the labyrinthine library that, like a nostalgic recollection, has haunted the human imagination since the age of the mythical library of Babel. Tony Tanner used the metaphor of the city of words to describe the central quest inspiring the literary imagination of the 20th century. One author who, for Tanner, epitomizes this quest is Jorge Luis Borges. It is the constructional power of the human mind that moves and amazes Borges. His stories are full of the strangest architecture, including the endless variety of lexical architecture to which man throughout history has devoted his time – philosophical theories, theological disputes, encyclopaedias, religious beliefs, critical interpretations, novels, and books of all kinds. While having a deep feeling for the shaping and abstracting powers of man’s mind, Borges has at the same time a profound sense of how nightmarish the resultant structures might become. In one of his stories, the library of Babel is referred to by the narrator as the “universe,” and one can take it as a metaphysical parable of all the difficulties of deciphering man’s encounters in existence. On the other hand Babel remains the most famous example of the madness in man’s rage for architecture, and books are only another form of building. In this library every possible combination of letters and words is to be found, with the result that there are fragments of sense separated by “leagues of insensate cacophony, of verbal farragos and incoherencies.” Most books are “mere labyrinths of letters.” Since everything that language can do and express is somewhere in the library, “the clarification of the basic mysteries of humanity … was also expected.” The “necessary vocabularies and grammars” must be discoverable in the lexical totality. 
Yet the attempt at discovery and detection is maddening; the story is full of the sadness, sickness and madness of the pathetic figures who roam around the library as around a vast prison.

What do Borges’s fantasies tell us about the Promethean potential of a restored city of words in cyberspace? During an international colloquium at the Bibliothèque Nationale de France in Paris, held on June 3rd and 4th, 1998, scholars and library presidents discussed the implications of a virtual memory bank on the Internet, connecting the holdings of all the great libraries in the world. Some saw it as a dream come true. In his opening remarks Jean-Pierre Angremy referred to the library of Babel as imagined by Borges, while ignoring its nightmarish side: “When it was proclaimed that the library would hold all books, the first reaction was one of extravagant mirth. Everyone felt like the master of an intact and secret treasure.” The prospect, as Angremy saw it, was extravagant indeed. All the world’s knowledge at your command, like an endless scroll across your computer screen. Others, like Jacques Attali, spiritual father of the idea of digitizing the holdings of the new Bibliothèque Nationale, took a similarly positive view. Whatever the form of the library, real or virtual, it would always be “a reservoir of books.” Others weren’t so sure. They foresaw a mutation of our traditional relationship to the written text, in which new manipulations and competences would make our reading habits as antiquated as the reading of papyrus scrolls is to us.

Ironically, as others pointed out, texts as they now appear on our screens are like a throwback to the reading of scrolls, and may well affect our sense of the single page. In the printed book every page comes in its own context of pages preceding and following it, suggesting a discursive continuity. On the screen, however, the same page becomes the interchangeable element of a virtual data bank that one enters by means of a keyword that opens many books at the same time. All information is thus placed on the same plane, without the logical hierarchy of an unfolding argument. As Michel Melot, long-time member of the Conseil supérieur des bibliothèques, pointed out, randomness becomes the rule. The coherence of traditional discursive presentation will tend to give way to what is fragmented, incomplete, disparate, if not incoherent. In his view, the patchwork or cut-and-paste approach will become the dominant mode of composition.

These darker views are suggestive of a possible American imprint on the Internet. They are strangely reminiscent of an earlier cultural critique in Europe of the ways in which American culture would affect European civilization. In particular, the contrast drawn between reading traditional books and reading texts downloaded from the Net recalls a contrast between Europe and America that is a staple in the work of many European critics of American culture. Europe, in this view, stands for organic cohesion, for logical and stylistic closure, whereas America tends towards fragmentation and recombination, in a mode of blithe cultural bricolage, exploding every prevailing cultural canon in Europe. Furthermore we recognise the traditional European fear of American culture as a leveling force, bringing everything down to the surface level of the total interchangeability of cultural items, oblivious to their intrinsic value and to cultural hierarchies of high versus low.

Yet in the views expressed at the Paris symposium, we find no reference to America. Is this because America is a subtext, a code instantly recognised by French intellectuals? Or is it because the logic of the Internet and of digital intertextuality has a cultural impact in its own right, similar to the impact of American culture but this time unrelated to any American agency? I would go no further at this point than to suggest a Weberian answer. It seems undeniably the case that there is a Wahlverwandtschaft – an elective affinity – between the logic of the Internet and the American cast of mind, which makes for an easier, less anguished acceptance and use of the new medium among Americans than among a certain breed of Europeans.

Having reviewed these two exhibits of cultural anti-Americanism as a subtext, taking French attitudes as its typical expression, what conclusions can we draw? One is that fears of an American way with culture, due either to its commercial motives or to its modularizing instincts, are too narrow, too hidebound. Discussing Bourdieu’s views, I mentioned counter-examples where compatriots of his, in the 1950s, thoroughly re-evaluated the body of cinematography produced in Hollywood. They moved it up the French hierarchy of taste and discovered individual auteurs where the logic of established French views of commercial culture would have precluded the very possibility of their existence. This is a story that keeps repeating itself. Time and again French artists and intellectuals, after initial neglect and rejection, have discovered redeeming cultural value in American jazz, in American hard-boiled detective novels, in rap music, in Disney World, and in other forms of American mass culture. What they did – and this may have been a typical achievement of French, or more generally European, intellectuals – was to develop critical lexicons, constructing canonic readings of American cultural genres. It is a form of cultural appropriation, making forms of American culture part of a European critical discourse, measuring them in terms of European taste hierarchies. It is a process of subtle and nuanced appropriation that takes us far beyond any facile, across-the-board rejection of American culture on account of its commercial agency.

How about the second ground for rejection, America’s blithe leveling of cultural components to the level of interchangeable bits and pieces? As I argued in my review of the second exhibit, America may have been more daring in venturing out into this field, yet we can find parallels and affinities within Europe’s cultural traditions. A catalytic disenchantment of the world, as part of a larger secularization of Europe’s Weltanschauung, had been eating away at traditional views of a God-ordained order before Americans joined in. Again, facile rejections of what many mistakenly see as Americanization by stealth, when confronted with more radical manifestations of the modularization of the world, miss the point. I suggested the possibility that what the World Wide Web brings us in terms of endless digital dissection and re-assemblage of “texts” may have more to do with the inherent logic of the digital revolution than with any American agency. A more or less open aversion to this development should be seen, therefore, as anti-modernity rather than as anti-Americanism. It reveals a resentment against the relentless modernization of our world that has been a continuing voice of protest in the history of Western civilization.

It is a resentment, though, that should make us think twice. Clearly, we are not all Americans. We do not all freely join their Promethean exploration of the frontier of modernity. This is not the same as saying that those who are not “Americans” are therefore the Bin Ladens in our midst. But their resentment is certainly akin to what, in other parts of the world, has turned into blind hatred of everything Western civilization stands for.

III. Anti-Americanism and American power

New York, 9/11, one year later. While I am writing this, the events of a year ago are being remembered in a moving, simple ceremony. The list is being read of the names of all those who lost their lives in the towering inferno of the World Trade Center. Their names appropriately reflect what the words World Trade Center conjure up; they are the names of people from all over the world – from Africa, the Middle East, the Far East, the Pacific, Latin America, Europe, and, of course, North America – people of many cultures and many religions. Again the whole world is watching, and I suddenly realize that something remarkable is happening here. The American mass media record an event staged by Americans. They powerfully re-appropriate a place where a year ago international terrorism was in charge. They literally turn the site into a lieu de mémoire. They are, in the words of Lincoln’s Gettysburg Address, read again on this occasion, consecrating the place. They imbue it with the sense and meaning of a typically American scripture. It is the language that, for over two centuries, has defined America’s purpose and mission in the ringing words of freedom and democracy.

I borrow the words “American scripture” from Michael Ignatieff. He used them in a piece he wrote for a special issue of Granta. He is one of twenty-four writers from various parts of the world who contributed to a section entitled “What We Think of America.” Ignatieff describes American scripture as “the treasure house of language, at once sacred and profane, to renew the faith of the only country on earth (…) whose citizenship is an act of faith, the only country whose promises to itself continue to command the faith of people like me, who are not its citizens.” Ignatieff is a Canadian. He describes a faith and an affinity with American hopes and dreams that many non-Americans share. Yet, if the point of Granta’s editors was to explore the question of why others hate Americans, Ignatieff’s view is not of much help. In the outside world after 9/11, as Granta’s editor, Ian Jack, reminds us, there was a widespread feeling that “Americans had it coming to them,” that it was “good that Americans now know what it’s like to be vulnerable.” For people who share such views, American scripture deconstructs into hypocrisy and willful deceit.

There are many signs in the recent past of people’s views of America shifting in the direction of disenchantment and disillusionment. Sure enough, there were fine moments when President Bush rose to the occasion and used the hallowed words of American scripture to make it clear to the world and his fellow-Americans what terrorism had truly attacked. The terrorists’ aim had been more than symbols of American power and prowess. It had been the very values of freedom and democracy that America sees as its foundation. These were moments when the president literally seemed to rise above himself. But it was never long before he showed a face of America that had already worried many long-time friends and allies during Bush’s first year in office.

Even before September 11th, the Bush administration had signaled its retreat from the internationalism that had consistently inspired US foreign policy since World War II. Ever since Woodrow Wilson, American scripture had also implied the vision of a world order that would forever transcend the lawlessness of international relations. Many of the international organizations that now serve to regulate inter-state relations bear a markedly American imprint and spring from American ideals and initiatives. President Bush Sr., in spite of his avowed aversion to the “vision thing,” nevertheless deemed it essential to speak of a New World Order when, at the end of the Cold War, Saddam Hussein’s invasion of Kuwait seemed to signal a relapse into a state of international lawlessness. Bush Jr. takes a narrower, national-interest view of America’s place in the world. With an unabashed unilateralism he has moved United States foreign policy away from high-minded idealism and the arena of international treaty obligations. He is actively undermining the fledgling International Criminal Court in The Hague, rather than taking a leadership role in making it work. He displays a consistent unwillingness to play by internationally agreed rules and to abide by decisions reached by international bodies that the United States itself has helped set up. He squarely places the United States above or outside the reach of international law, seeing himself as the sole and final arbiter of America’s national interest.

After September 11th this outlook has only hardened. The overriding view of international relations in terms of the war against terrorism has led the United States to ride roughshod over its own constitutional protections of civil rights, as well as over its treaty obligations under the Geneva Conventions, in the way it handles individuals, US citizens among them, suspected of links to terrorist networks. Seeing anti-terrorism as the one test of who is with America or against it, President Bush takes forms of state terrorism, whether in Russia against the Chechens or in Israel against the Palestinians, as so many justified anti-terrorist efforts. He gives them his full support and calls Sharon a “man of peace.” If Europeans beg to differ and wish to take a more balanced view of the Israeli-Palestinian conflict, the Bush administration and many op-ed voices blame European anti-Semitism.

This latter area is probably the one where the dramatic, if not tragic, drifting apart of America and Europe comes out most starkly. It testifies to a slow separation of the terms of public debate. Thus, to give an example, in England the chief rabbi, Jonathan Sacks, said that many of the things Israel did to the Palestinians flew in the face of the values of Judaism. “(They) make me feel very uncomfortable as a Jew.” He had always believed, he said, that Israel “must give back all the land (taken in 1967) for the sake of peace.” Peaceniks in Israel, like Amos Oz, take similar views. And so do many in Europe, Jews and non-Jews alike. Yet it would be hard to hear similar views expressed in the United States. There is a closing of ranks, among American Jews, the religious right, opinion leaders, and Washington political circles, behind the view that everything Israel does to the Palestinians is done in legitimate self-defense against acts of terrorism. Yet, clearly, if America’s overriding foreign-policy concern is the war against terrorism, one element tragically lacking in public statements of its Middle East policy is the attempt to look at itself through the eyes of Arabs, or more particularly Palestinians. A conflation seems to have occurred between Israel’s national interest and that of the United States. Both countries share a definition of the situation that blinkers them to rival views that are more openly discussed in Europe.

Among the pieces in Granta is one by a Palestinian writer, Raja Shehadeh. He reminds the reader that “today there are more Ramallah people in the US than in Ramallah. Before 1967 that was how most Palestinians related to America – via the good things about the country that they heard from their migrant friends and relations. After 1967, America entered our life in a different way.” The author goes on to say that the Israeli occupation policy of expropriating Arab land to build Jewish settlements and roads to connect them, while deploying soldiers to protect settlers, would never have been possible without “American largesse.” But American assistance, Shehadeh continues, did not stop at the funding of ideologically motivated programs. In a personal vignette, more telling than any newspaper reports, Shehadeh writes: “Last July my cousin was at a wedding reception in a hotel on the southern outskirts of Ramallah when an F16 fighter jet dropped a hundred-pound bomb on a nearby building. Everything had been quiet. There had not been any warning of an imminent air attack. … Something happened to my cousin that evening. … He felt he had died and was surprised afterwards to find he was still alive. … He did not hate America. He studied there. … Yet when I asked him what he thought of the country he indicated that he dismissed it as a lackey of Israel, giving it unlimited assistance and never censoring its use of US weaponry against innocent civilians.” The author concludes with these words: “Most Americans may never know why my cousin turned his back on their country. But in America the parts are larger than the whole. It is still possible that the optimism, energy and opposition of Americans in their diversity may yet turn the tide and make America listen.”

The current Bush administration, with its pre-emptive strategy of taking out opponents before they can harm the US at home or abroad, in much the same way that Israeli fighter jets execute alleged Palestinian terrorists in their cars, homes, and backyards, without regard for due process or collateral damage, does not present an America that one may hope “to make listen.” Who is not for Bush is against him. Well, so be it. Many Europeans have chosen not to be bullied into sharing the Bush administration’s view of the world. They may not command as many divisions as Bush; they surely can handle the “division” that Bush has brought to the Atlantic community.

There has been a resurgence of open anti-Americanism in Europe and elsewhere in the world. Not least in the Middle East, the area that has brought us Osama Bin Laden and his paranoid hatred of America, and the West more generally. If he can still conflate the two, why can’t we? If Raja Shehadeh still holds hopes of an America that one can make listen, why don’t we? Let us face it: We are all Americans, but sometimes it is hard to see the Americans we hold dear in the Americans that hold sway.

This may remind Europeans that anti-Americanism is not the point. We may believe we recognize Americanism in any particular American behavior, be it cultural or political. Yet the range of such behavior is simply too wide – ranging in culture from the sublime to the vulgar, and in politics from high-minded internationalism to narrow nationalism – to warrant any across-the-board rejection.


1 M. ter Braak, “Waarom ik ‘Amerika’ afwijs,” (Why I reject America), De Vrije Bladen, V, 3, 1928; repr. in: Verzameld Werk, (Collected Works) Amsterdam: G.A. van Oorschot, 1950/1, Vol I, 255-65.

2 J. Huizinga, Amerika levend en denkend – Losse opmerkingen. (America living and thinking – Loose observations) (Haarlem: H.D. Tjeenk Willink & Zoon, 1926), 162.

3 The Seven Arts, (June 1917), 199.

4 The Seven Arts, (March 1917), 535.

5 O. Spengler, Jahre der Entscheidung, (Muenchen: Beck, 1933) 48.

6 Michael Ignatieff, “What We Think of America,” Granta, The Magazine of New Writing, 77 (Spring 2002), 47-50.

7 I may refer the reader to my survey of such French views of American modernity. See Rob Kroes, Them and Us: Questions of Citizenship in a Globalizing World (University of Illinois Press, 2000), chapter 9.

8 See, e.g., Annick Foucrier, Le rêve californien: Migrants francais sur la côte Pacifique (XVIIIe-XXe siècles). (Paris, 1999).

9 See my If You’ve Seen One, You’ve Seen the Mall: Europeans and American Mass Culture (University of Illinois Press, 1996).

10 Quoted in: D. Lacorne, J Rupnik, and M.F. Toinet, eds., L’Amérique dans les têtes (Paris, 1986), 61.

11 Quoted in ibid., 62.

12 Pierre Bourdieu, “Questions aux vrais maîtres du monde” Le Monde, Sélection Hebdomadaire, October 23, 1999, pp. 1,7.

13 Tony Tanner, City of Words: American Fiction 1950-1970 (New York, 1971).

14 For the Borges quotations, see Tanner, City of Words, 41.

15 For my summary of the proceedings at the Paris colloquium, I have used a report published in Le Monde, Sélection Hebdomadaire, 2589, June 20th, 1998, p.13.

16 For a fuller analysis of the metaphorical deep structure underlying the European critique of American culture, I may refer the reader to my If You’ve Seen One, You’ve Seen the Mall.

17 I argue this more at length in the concluding chapter of my If You’ve Seen One, You’ve Seen the Mall, entitled “Americanization: What are we talking about?”

18 In an interview in the Guardian on August 27th, 2003.

“Americanization”: An East Asian Perspective

Akio Igarashi is a professor of law and politics at Rikkyo University, Tokyo, Japan. He is editor in chief of The Journal of Pacific Asia and author of a number of books and articles, including Japan and a Transforming Asia (Henyousuru Asia to Nippon [Seori Shobo, 1998]).

The Unconscious Reversal of Americanization

For the past few years, Hollywood war films such as “Saving Private Ryan” (1998), “The Thin Red Line” (1998), “Stalingrad” (2001) [released in the United States as “Enemy at the Gates”-ed.], “Pearl Harbor” (2001), and “Black Hawk Down” (2002) have appeared regularly in Tokyo movie theaters. War movies are one of Hollywood’s key genres, and Japanese moviegoers have had the opportunity to see a large number of them. While watching such movies, it is not uncommon for Japanese viewers to suddenly realize that, unknowingly, they have stepped onto the side of the United States Army. There are probably some instances when viewers have experienced discomfort upon recognizing Japan as the “enemy.” In such films, American notions of justice and heroism, as well as of freedom and democracy, are deeply embedded, and these ideologies have influenced unsuspecting Japanese audiences.

The Vietnam War, however, which Americans themselves had difficulty justifying, changed the genre of the Hollywood war movie. Representative works include “The Deer Hunter” (1978), “Apocalypse Now” (1979), “Platoon” (1986), and “Full Metal Jacket” (1987). An undercurrent of protest against the brutalities of war and deep skepticism toward American military policies runs through these films. At the time they appeared, Vietnam War movies had a great impact, and their influence still remains.1 Japanese viewers who have seen the Hollywood war films mentioned above correctly observe, as one put it, “Although [Hollywood war films] seem to emphasize America’s viewpoint…these films are basically anti-war movies.” There is no mistaking that, since the Vietnam War, American values and overt ideological messages in Hollywood war movies have subsided, and this has been acknowledged among the Japanese viewing public.

After 9/11 and the war in Afghanistan, Japanese viewers’ perspectives on Hollywood war movies have changed even more, as is especially clear in their responses to “Black Hawk Down.” The film is based on real-life events of 1993, when American troops were sent into Somalia on a U.N. peacekeeping mission. Their assignment to capture two lieutenants of a Somali warlord failed when their helicopters were shot down and they were attacked by Somali militias and civilians. Viewers who watched this film gave the following comments: “It felt as though I was on the battleground, that this was what war must be like” (25-year-old woman); and “I learned that it was an unfair battle. The problem was not at the level of those fighting the battle, but was a problem at the administrative level. Who knows when Japan will be pulled into a similar situation while taking part in peacekeeping efforts?” (54-year-old man). The misery of a battle in which members of the force lose their friends, one by one, slowly draws the viewer into the perspective of the American soldiers: “It was shocking when the final death count appeared just before the movie credits, stating the toll to be 19 Americans and 1000 Somalis. Even though so many Somalis had died, while watching the film my sympathies were drawn toward the American soldiers. I think that’s what’s so frightening about films” (man in his 30s).2

In his review, Saito Tadao, a veteran film critic who has been writing film criticism since the end of World War II, stepped away from the conventional review and offered his thoughts on the state of America’s global strategy:

The director Ridley Scott concentrates on portraying the American soldiers’ feelings, whether of fear or of solidarity, and keeps the causes of the civil war, the role of the peacekeeping forces, the pros and cons of America’s actions, and any explanation of the Somalis’ circumstances or feelings to a bare minimum. For the American soldiers placed at the heart of the danger, such information was surely irrelevant. Yet, at the present moment, with the publicized possibility of an American attack against Somalia because of suspicions of terrorist activity, it is necessary to be aware of such things.

“Black Hawk Down” touchingly depicts the camaraderie among the American soldiers on the one hand, while casually devaluing the lives of the Africans on the other. Viewers are left with a strong impression of poor Africans, but only because their situation is depicted as utterly miserable. Though it is considered an American war film, for the Japanese people the military actions of the United Nations and the peacekeeping forces cannot simply be disregarded as someone else’s problem.3

This film does not manage to instill an American value system in the viewer; rather, it is an example of a reversal in which a Hollywood film leads to a critical analysis of the American government’s global policy. Thus, viewers are overcoming the message of the classic Hollywood war film. The fact that current Hollywood films themselves are bringing about such reversals is an important factor to keep in mind. Hollywood films have been at the center of the spread of American value systems and their manifestations throughout the world, a cultural Americanization that began in Japan after World War II. These films have now lost that capacity, as can be clearly observed in the responses of Japanese film critics and audiences alike. It is unclear, however, to what extent America is conscious of this reversal in the reception of its own cultural exports.

During the Cold War, Americanization actively transformed the liberal world bloc. Countries affiliated with this bloc were incorporated into the global policies of American military and diplomatic efforts, and were simultaneously placed under the influence of a culture infused with American ideologies. Among the various types of cultural Americanization, popular culture captured the hearts of those in this “liberalist” world, and the acceptance of American popular culture helped these countries also embrace the military and diplomatic forms of Americanization. Around the time the socialist bloc was losing its economic power, however, countries in the liberalist bloc, particularly in East Asia, were beginning to escape the one-sided influence of Americanization. Owing to economic growth, a popular culture unique to the region was being formed. This in turn altered Japan’s relations with the U.S. and yielded a relativistic perspective toward the United States’ role in the areas of military policy and diplomacy. This essay will consider the influence and transformation of Americanization on the cultural front, focusing primarily on Japan but looking at East Asia as a whole.

1. The American Dream in Post-World War II Japan

For nearly one hundred years since World War I–the century at times even referred to as “America’s Century”–the United States has wielded incredible influence, not only on diplomatic and military fronts, but also on the cultural front. The spread of culture led by such notions as the Christian tradition, liberal expansionism, and Wilsonian internationalism was contemporaneous with the spread of democracy around the world in the 20th century.4 Particularly during the Cold War, as the eastern and western blocs faced off in fierce ideological warfare, cultural Americanization was deeply imbricated with an American value system. The influence of Americanization had begun its work in Japan in the 1930’s. Hollywood films such as “Stagecoach” (1939), directed by John Ford, were being shown in movie theaters, and Filipino bands playing on cruise ships sailing to foreign destinations introduced jazz to Japanese passengers. Japanese musicians traveled to Shanghai, then the jazz mecca of Asia, to learn from American jazz musicians touring there. In daily life, homes boasting western-style or private rooms and American-style modern roofs were called “culture homes (bunka jyutaku),” and there was a tendency to associate the luxury and convenience of American life with “cultural living,” or “progress.”5

Yet it is no surprise that Americanization’s greatest period of influence on Japan occurred during the American occupation after World War II. Japanese people who only the day before had been crying out “kichiku beiei,” or “American and British devils,” now faced with the “generosity” of the victor, began to take a more obsequious stance toward their vanquisher.6 The overwhelming authority of the American military occupation enforced these ideological policies throughout the country, even while emphasizing the demilitarization and democratization of the state. Democracy and pacifism were spread extensively and popularized through these forms of American ideological endorsement.7 The new constitution, which took effect in 1947, was built upon the two concepts of democratization and demilitarization, which were quickly adopted by many Japanese.8 Whatever hold this “imposed” democracy took in Japanese society and culture, however, was undermined by the frequent incidents provoked by conservative politicians, which bred general skepticism among the populace toward Japan’s “democracy.” Pacifism, a concept entrenched in the Japanese people’s memory of themselves as victims of war, became unstable as the memory of war began to fade. Moreover, the fact that Japan remains largely unconscious of its role as aggressor against other Asian countries in the war, and of the sacrifice of Okinawa to the American military as the price paid for postwar peace, makes the ideology of Japanese postwar pacifism quite fragile.9

In the immediate postwar period, what a majority of Japanese hoped for was the realization of a rational and affluent society. It was a hope for escape from a past of prewar and wartime control by imperial rule and militarism, and from utter poverty.10 What was particularly alluring about American culture for such Japanese were the prospects of freedom and material abundance. The spacious rooms and the big white refrigerator in the comic strip, Blondie, helped people to imagine the affluence of the American lifestyle. The flat side of a ham hock peering from the open refrigerator door was a source of wonder for a people who had only ever seen an entire hock of ham in a butcher’s showcase. For Japanese at the time, America’s prosperous culture of consumption, symbolized by chewing gum, chocolate, and women’s fashion, represented “the American Dream.”

With the Allied occupation, jazz performances were resurrected in areas near bases, and with the ban lifted on NHK radio programs and dancehalls, jazz became accessible to the average Japanese listener. As part of its public relations efforts during the Cold War era, the American government promoted overseas concert tours by black jazz musicians, and in 1952 Louis Armstrong visited Japan. Along with jazz in the 1950’s came rock-n-roll, and in the 1960’s came Bob Dylan’s folk music. Songs representing “freedom” arrived one after the other; the electric boom, Group Sounds, and the music of a common global language among youths came pouring in from America.

Hollywood films were the most successful anti-communist propaganda tools and received powerful backing from the American government. These films exceeded the American government’s expectations by depicting the various circumstances of American society. The crowds that filled the movie theaters to capacity feasted on the freedom and affluence of American society in these films. The children who watched “Dumbo,” “Bambi,” and “Mickey Mouse” were captivated by the colorful and expressive Disney animations.

When television programming first aired in Japan in 1953, it soon surpassed movies as a means for propagating ideas. Since production techniques and capital were still inadequate in the early stages of the new television industry, American television shows were often directly imported. Family dramas like “Father Knows Best,” “The Donna Reed Show,” and “I Love Lucy” were aired, and the image of an idealized American middle-class family life, without racism or the shadow of poverty, stuck in people’s minds. These shows would later become models for Japanese home dramas. The Western boom also brought “Laramie” and “Rawhide,” implanting in Japanese society the image of Americans who were simple yet cheerful, who burned with the fire of justice and lived in the vast countryside.11 At a time when Japan was just entering an era of rapid economic growth, viewers envisioned their future lives as bright and affluent as the lives of the characters in the home dramas, and experienced the humanism of American society through the westerns.

In postwar Japanese society there were many, however, who saw Americanization from a much more critical viewpoint. Those involved in left-wing or liberal politics recognized Americanization as the cultural analog of the U.S. geopolitical role in East Asia and other developing areas. They saw the U.S. as an oppressor that suppresses and exterminates those who actually seek freedom, democracy, or humanism, in order to protect its own profits. The student movements of this era were marked by persistent denunciations of “American imperialism.” Although these students, leftists, and progressives were perhaps unable to avoid completely the effects of Americanization on their day-to-day lives, their view of America continues to hold an authority that cannot be ignored.12

2. The Development of Japanization, and the Decline of Americanization

Even now in the post-Cold War era, American culture continues to hold tremendous power. Coca-Cola quenches the thirst of people around the world. KFC and McDonald’s franchises occupy the street corners of major cities, satisfying people’s hunger. Jazz and rock-n-roll are played in clubs and on street corners around the world, and the popularity of the Hollywood movie is alive and well.13

However, in the late 1980’s, another powerful popular cultural force joined American culture in East Asia: Japanization appeared on the scene and captured people’s hearts. In Seoul and Bangkok, “izakaya,” or Japanese-style bars, outnumber KFC and McDonald’s franchises, and Japanese cuisine, such as yakitori and sushi, is all the rage. Pop music made in Japan (“J-Pop”) has swept East Asia, and in Thailand there is even a domestic magazine that specializes in Japanese celebrities and singers. In this part of the world, the Japanese “invention,” karaoke, is an essential part of the entertainment scene. Among the selections are many Japanese songs that have been reproduced with native lyrics and vocalists, and that are often believed to be songs originating in that country. Japanese animated films and television shows are watched by children in this region and beyond, and surpass even Disney in popularity. The NHK (Japan Broadcasting Corporation) TV drama “Oshin” depicts the life of a girl who, born into a poor farming family, finally finds success after years of hardship. The show was a huge hit, first among audiences in the developing countries of Asia and then elsewhere, and represented both Oshin’s and Japan’s success story as the realization of the “Japanese Dream.”14 Japanese TV dramas that depict urban life are widely shown in East Asia. College women in Seoul clutch Japanese fashion magazines as they walk about town, while youths in Thailand fixate on “character goods.”15 Tokyo is now the fashion center of Asia.

Several factors underlie the spread of Japanization in East Asia. First, the Japanese pop culture industry had accumulated capital and techniques over a considerable period of time. Second, in East Asia, which achieved rapid economic growth in the 1980’s and 1990’s, the new middle-class living in major cities created a heightened demand for pop culture. A third factor might be attributed to the common culture and consciousness of the people within the region, which may have helped push the growth and dissemination of Japanization as the popular culture of East Asia.

Since the 1920’s, Japanese society has sustained a considerable domestic market for popular culture, and the industry has accumulated capital. With that as a foundation, it fell under the influence of Americanization on the one hand, and on the other developed a popular culture all its own. In the music world, as western music was introduced alongside modernization, unique Japanese melodies and lyrics were developed for popular consumption. The postwar era also saw the absorption of music from America, such as jazz and rock-n-roll, and from all over the world, with translated songs set to “Japan-made” melodies or lyrics. Such hybrid songs even now dominate the top of the Japanese hit charts, over and above worldwide hits, while also spreading throughout the entire East Asian region.

Film production has a long history dating from the prewar era. At one point after the end of World War II there were nearly 7000 movie theaters in Japan, some even in small towns on the furthest outskirts of the country; they entertained over 1.1 billion spectators annually and ushered in the golden age of Japanese cinema. It is well known that this era gave birth to such directors as Kurosawa Akira, who has greatly influenced movie-making in the west. Toei, Japan’s largest film company, began to work in animation early on. While Disney boasted an unmatched share of the theatrical animation market, Toei built a foundation in made-for-TV animation and gradually increased its share of the animated film industry. These days, children all over East Asia spend their afternoons glued to their TV sets watching Japanese TV animation.

Toei Film Studios was the training ground for Miyazaki Hayao, whose animated films have captured the hearts of many fans. Sen to chihiro no kamikakushi (released in English as “Spirited Away”) was a runaway hit that broke Titanic‘s box office record in Japan. Japanese animation, or “Japanimation,” tends to depict stories that “have roots in real life,” and is a new experience for western viewers.16 Moreover, its expression of certain subtle psychological responses resonates with East Asian viewers, who share common cultural characteristics with the Japanese. With increased import demand, the Japanese animation industry reduced overt Japanese national traits in its images and began constructing “nationlessness,” or non-nationalist texts and images.

In the comic book industry, the power of Japan’s market and capital is unsurpassed. Japanese comic, or manga, books and magazines together account for some 600 billion yen in annual sales. This is the largest market of its kind in the world, and one of high quality and maturity. There are no examples outside of Japan of major publishing companies taking part in comic book publishing, and editors have accumulated a wealth of experience. Only in Japan are there popular manga writers with yearly incomes of over 100 million yen, and over 100 cartoonists, ranging from those with only a junior high school education to those with master’s degrees, who earn more than the presidents of major publishing companies. Growing out of such a “system,” manga commands a wide-ranging readership from children to adults. While European comics like the French bande dessinée (graphic-novel comic) are considered too highbrow, American comics, by comparison, generally cater to children. Japanese manga with children as protagonists depict a world of honest truths that have universal appeal, and have secured popularity among a diverse range of readers overseas. The allure of the Japanese manga lies in “storytelling that can capture the imagination of adults” and in “a manifold power of expression.”17

Thus shaped and refined in the domestic market, manga are now coveted by many international readers. For example, the comic book series, Dragonball Z, sold over 50 million copies in Asia and 10 European countries. In Thailand and Hong Kong, manga appearing in “Jump,” the weekly manga magazine with a circulation of 6 million copies, are translated and printed alongside the works of native cartoonists. Korean comic magazines have such a large number of Japanese imports that they must make a special effort to print as many Korean cartoonists as possible.

These samples of Japan-made popular culture entered the heart of the major cities of East Asia in the 1980’s and 1990’s. Enjoying materially prosperous lives as a result of economic growth, the “new middle class” of these cities carried on lifestyles similar to those of people living in Tokyo, the birthplace of Japanization, making the spread of Japanization that much more rapid. It is also likely that a longing for the lifestyle of Tokyo, a major global city, helped to encourage the process.

Most importantly, unlike Americanization, Japanization carries no ideology like Wilsonian internationalism. Japan, as yet unable to leave behind its historical responsibility for colonization and wartime aggression, has no desire to convey such ideas. Accordingly, Japanization is unequivocally “materialistic” cultural dissemination. Yet, as previous experiences of Americanization’s influence on Japanese society make clear, a “faith” in an affluent society, or in this case the desire to capture the “Japanese dream,” is the greatest motivation for the spread and influence of Japanization.

Notwithstanding the “materialism” of this cultural dissemination, the influence wielded by the behavior of the protagonists depicted in manga and animation, and by the thought processes behind their actions, is undeniable, particularly in the case of children. An example of such influence is found in the use of the word “HITACHI,” the name of a major Japanese company, which in Thailand has come to mean “an individual who responds quickly and perceptively to situations.”

The Japanese influence in the East Asian region is not restricted to popular culture. Following the Plaza Accord in 1985, the Japanese economy advanced even farther into East Asia. This kind of economic advancement produced people who were both fascinated by the products of an “advanced” capitalistic society and also overwhelmed and awestruck by Japan’s economic and technological power. Japanese products, conceived with a wealth of capital and state of the art technology, are elaborate and fashionable and thus are trusted and valued as “luxury items” in various areas of Asia. Furthermore, those who witnessed the success of companies that had incorporated Japanese technology and established capital partnerships could not help but be drawn to Japan’s economic prowess.

Japan’s biggest department stores and supermarkets have opened branches in most major cities in East Asia, and utilizing techniques for pristine window displays, they tangibly demonstrate the “cutting edge” of consumer culture. Imported Japanese department stores and supermarkets outshine the traditional local establishments, and have brought about a “consumer revolution.” With ever-increasing power and allure, Japanese products are entering local societies through these Japanese department stores. The dissemination and assimilation of Japanese popular culture has not come without protest or resistance from Japan’s neighbors. Particularly in Korea, Japan’s direct neighbor and a former victim of colonization, the government associated Japanese culture with the Korean experience of oppression and strongly opposed it, even prohibiting the importation of Japanese popular culture.18 Yet above the apartments of Seoul’s middle class grows a forest of antennas tuned to Japanese satellite broadcasts, and pirated tapes of J-pop circulate the city. These circumstances illustrate the difficulty of intercepting the invasion of popular culture at national boundaries, and they led the Kim Dae-jung administration to ease the regulations in the late 1990s.

Meanwhile, economic growth in East Asia has brought about capital gains, and the consumer market has grown with a newly emergent middle class, allowing these societies to produce their own popular culture. Inevitably, Japan’s popular culture is consulted as an archetype, and Japan’s experience in manga and animation production is copied. In Korea, where not only Japanese films but also Western films were restricted in order to protect the domestic film industry, Hollywood film techniques were nevertheless mastered, and Korean production companies have recently released a slew of international hits. There are also films produced through legitimate partnerships with Japan. In addition, films, music, and fashion from Hong Kong, Thailand, and India are widely distributed. Furthermore, Japan’s popular culture industry now considers all regions of East Asia as a market, and while aggressively promoting Japanese popular culture it has also begun to scout out talent in China. In this way, East Asia has formed a borderless world of popular culture with Japanization at its center. The fifteen or so satellite broadcasts that travel through the airwaves of this region attest to this.

American popular culture in this region is alive and well; however, it no longer exerts an absolute influence. With the end of the Cold War, rapid economic growth, and the resulting spread of globalization, East Asian society is undergoing great change. Popular culture and consumer culture are not the only means by which people within the region share common experiences and deepen their mutual understanding. There is increased travel within the region for the purposes of business and tourism, and through the development of mass media networks and a heightened interest in other countries of the region, the amount of information made available through television and newspapers has also grown. Parallel to this heightened interest and interaction, there is a stronger tendency toward a “unified” East Asian region at the level of international relations. Malaysian Prime Minister Mahathir Mohamad, who has taken up these issues with the most fervor, proposed the “Look East” policy in 1981, designating the economic development of Japan and Korea as models for his own country. In 1990, he proposed an East Asian economic community, the East Asian Economic Grouping (EAEG), which was to include only the countries of Asia, excluding the United States and the countries of Oceania; because of strong U.S. objections at the time, it could not be realized. This community was conceived to counter the formation of the Asia-Pacific Economic Cooperation (APEC) forum, which is made up of 21 members, including the United States, Australia, New Zealand, Japan, Korea, and China. With the history of Malaysia’s colonial experience always at the forefront of his thoughts, Mahathir maintains a strong anti-Western stance. In 1992, the Southeast Asian countries, which have held together the unity of the Asian region, established the ASEAN Free Trade Area (AFTA), which aimed to lower tariffs among participating countries to 5% or less by 2003.
In 1994, the ASEAN countries established the ASEAN Regional Forum (ARF), which deals with confidence-building among participating countries and provides a forum for preventive diplomacy and the peaceful resolution of regional disputes. ASEAN has had considerable success in setting the terms for a dialogue among 22 countries, including Japan, the U.S., China, Australia, New Zealand, Russia, India, and the EU.

In the wake of the currency crisis of 1997, the harsh intervention of the IMF incited growing distrust toward the IMF and the United States, its most powerful supporter. The Japanese government proposed an Asian Monetary Fund (AMF) to prevent future Asian currency crises. The proposal, however, was withdrawn after strong U.S. opposition over its potential obstruction of IMF functions. In October 1998, the Japanese government proposed the “New Miyazawa Initiative,” which would carry out the distribution of funds on a bilateral basis. Under this initiative, Japan subsequently distributed funds to Indonesia, Korea, Malaysia, the Philippines, and Thailand, and was highly applauded within the region. In May 2000, “ASEAN+3” (the ASEAN countries along with Japan, China, and Korea) agreed that member countries would carry out bilateral currency swaps in order to prevent currency crises. This agreement reinforced the New Miyazawa Initiative and is also related to the AMF concept. In this way, there is talk of regional unification, with frequent comparisons to the European Union.

In East Asia, a new middle class arose along with general economic growth, and by raising its societal voice it has rapidly realized democratization since the mid-1980s. In the Philippines, the “People Power Revolution” of February 1986 gave birth to the Aquino administration. Following the “Bloody May” incident in Thailand in 1992, the civilian Chuan administration replaced the military administration. In 1987, Korea’s democratic movement brought the long years of military rule to an end. In 1988, after the death of President Chiang Ching-kuo, Lee Teng-hui’s succession as the new president propelled democratization in Taiwan. In Indonesia, 1998 was the year in which President Suharto resigned, following a huge popular mobilization. The confidence arising from democratization thus signified independence from American influence, and the image of “America, the land of freedom and democracy,” which had been implanted through Americanization, came to represent only one of many perspectives.

3. “9.11” to the Afghan War: Responses and Criticism of the U.S.

The blow to Japanese society, tuning into late-night programs on September 11, 2001, and witnessing coverage of the 9.11 terrorist attacks, was great. Military bases in Okinawa and elsewhere were put on full alert, and tension enveloped the Japanese archipelago. Apprehension over tensions between North and South Korea crossed the minds of the populace. Needless to say, as the state of the victims and the grief of their families were reported day after day, compassion for the American people deepened.

Listening to President Bush’s “This is war” statement, the Japanese government must have immediately recalled its “defeat” in the Persian Gulf War–a still-recent memory of rebuke. Despite $13 billion in aid to the U.S., Japan was excluded from Kuwait’s thank-you letter printed in the Washington Post, its contribution completely disregarded after the end of the war because of Japan’s refusal to comply with repeated demands for the deployment of Self-Defense Forces, which would have been an outright violation of Japan’s constitution. Prime Minister Jun’ichirô Koizumi swiftly departed for the U.S. to promise Japan’s “cooperation.”19

The attitude of a majority of the Japanese people, however, was far from approving of Prime Minister Koizumi’s actions. Many Japanese, while sympathizing with the victims and feeling anger toward terrorism, felt uneasy with the image of American society draped in the stars and stripes and with the American government’s race toward “war” as a solution. Sakamoto Yoshikazu, a leading postwar progressive scholar of international politics, writes the following:

President Bush’s congressional address includes the following sentence: “Americans are asking, ‘Why do they hate us?’… [The terrorists] hate our freedoms: our freedom of religion, our freedom of speech, our freedom to vote and assemble and disagree with each other.”

Upon hearing this, I was astounded. I wondered how he could believe that such words would be acceptable within the international community. Among terrorists, there may be those who fit such a description. Yet, there are also many people within the developing nations who, to some extent, harbor some sympathy for the terrorists and think, “the actions taken by the terrorists were wrong but their motives and intentions are understandable.” Is not the very reason these people hate America because America crushes and silences those very people who seek to realize the “freedom, human rights, and democracy” of which America speaks? Furthermore, is it not also because the “global standard” on which American civilization is based is perceived as increasing the gap between the world’s rich and poor and eroding that “other culture,” different from America? Japan’s “civilization,” which has been in continual alignment with that of the U.S., is no less guilty.20

Sakamoto’s critique of Bush represents a widely held criticism of the U.S. government and its people: their arrogance in believing in the universality of their kind of democracy, freedom, and human rights, their erroneous understanding of themselves, and their ignorance of the rest of the world. At the base of such views is a condemnation of the American government’s recent unilateralism, which includes its shelving of the Comprehensive Test Ban Treaty (CTBT) and the Anti-Ballistic Missile (ABM) Treaty, its rejection of the Kyoto Protocol, its disabling of international regulation of small arms, and its objections to the verification of the Biological Weapons Convention. Such attitudes of the U.S. government deviate from the aforementioned concept of Americanization as “Christian tradition, liberalistic expansionism, and Wilsonian internationalism.” Furthermore, each time the Japanese, or East Asian people in general, witness on television and in the newspapers the large numbers of casualties arising from the “collateral damage” of U.S. bombings, which are not widely publicized in the U.S., these opinions only grow stronger.

The former journalist turned critic and writer, Henmi Yô, who among the Japanese media has most aggressively and independently spearheaded the discussion on 9.11 and its aftermath, emphasizes the need to move away from the perspective of “a world seen through the eyes of America”:

The more we attempt to focus our vision, the more we see through the smoke and raining bullets a despairing and inequitable world system. It cannot be as simplistic as a clash between the “madness” of Islamic extremists and the “sanity” of the rest of the world. Behind Osama bin Laden lies not several thousand armed men, but the hatred of over a hundred million poverty-stricken people toward the United States. And counter to this stands President Bush, who not only carries out the vengeance of the WTC terrorist attacks, but also exhibits the irrepressible arrogance of the privileged.

…It is time for us to reexamine the true identity of the United States. Since its founding, it has repeatedly carried out over 200 foreign military campaigns, including nuclear bombings. Have we yielded ultimate arbitration to a country that has shown almost no official remorse for its militaristic actions? Perhaps we have for too long been “looking at the world through the eyes of America.” This time, however, we must reexamine these terrible war casualties through our own eyes and come to our own conclusions based on fundamental moral codes. For the U.S. is showing vigorous signs of a new form of imperialism.

The U.S. counterattack was supported by an absolute majority of “nations,” however it was an act that defied the conscience of an absolute majority of “people.” The problem is not whether one “is with the U.S., or with [the terrorists].” Now is the time for us to stand, not on the side of the state, but on the side of those people who are being bombed.21

What Henmi emphasizes is a move from one perspective to several overlapping perspectives by moving from “North” to “South,” from the powerful in war to the weak, from the state to the individual. These are ways to move away from the perspective of the American side and from the image of the world seen “through the eyes of America” fashioned by Americanization.


  1. Setogawa Shuta, “‘Burakku hôku daun’ to hariuddo sensô eiga” (‘Blackhawk Down’ and the Hollywood war film), from the “Blackhawk Down” movie program.
  2. Asahi Shinbun, evening edition, April 19, 2002.
  3. Saito Tadao, “Blackhawk Down,” Asahi Shinbun.
  4. Emily S. Rosenberg, Spreading the American Dream: American Economic Cultural Expansion, 1890-1945, New York: Hill and Wang, 1982.
  5. Kiyomizu Sayuri, “Bunka kôryu toshite no nichibei kankei” (Japan-American relations as cultural exchange), in Masuda Hiroshi and Tsuchiya Jitsuo, eds., Nichibei kankei kiwado (keywords of Japan-America relations), Tokyo: Yuhikaku Sôsho, 2001.
  6. Rinjirou Sodei, Dear General MacArthur: Letters from the Japanese during the American Occupation, Lanham, Maryland: Rowman & Littlefield, 2001. Released in 1946, the year immediately following defeat, Oka Haruo’s hit song, “Tokyo hana uri musume” (Tokyo flower selling girl), has the following verses: “jazz flows, light and shadows on the platform, ‘would you like a flower,’ ‘a flower for you’/ A real jacket of an American G.I., a sweet breeze that chases away the shadows/ oh Tokyo flower girl.” (lyrics: Sasaki Sho; music: Uehara Gen). In Sakurai Tetsuo, America wa naze kirawarerunoka (why America is hated), Tokyo: Chikuma Shobô, 2002. 123.
  7. Igarashi Akio, Sengo seinendan undo no shisô: kyôdô shutaisei wo motomete (the concepts and activism of postwar youth organizations: searching for subjectivity), in Rikkyo Hougaku 42 (July 1995).
  8. John Dower, Embracing Defeat: Japan in the Wake of World War II, New York: W.W. Norton & Co., 2000.
  9. Koseki Shoichi, “Heiwa kokka”: Nihon no saikentô (‘peaceful nation’: a reexamination of Japan), Tokyo: Iwanami Shoten, 2002.
  10. Takabatake Michitoshi, “Taishu undô no tayôka to henshitsu” (the diversification and transformation of mass movements), Nihon Seijigakkai ed., 55nen taisei no keisei to hakai (the development and destruction of the 1955 position), Tokyo: Iwanami Shoten, 1977.
  11. Kiyomizu, Ibid.
  12. John W. Dower, “Peace and Democracy in Two Systems; External Policy and Internal conflict,” in Andrew Gordon, ed., Postwar Japan as History, Berkeley: University of California Press, 1993. See also, Chalmers Johnson, Blowback: The Costs and Consequences of American Empire, New York: Metropolitan Books, Henry Holt, 2000.
  13. The American film industry earns 40% of its profits from overseas sales. 75% of the movies or TV shows viewed by people worldwide are made in America. From Alfredo Valladão, Jiyu no teikoku: American shisutemu no seiki (Le XXIe siècle sera américain), trans. Itô Gô, Murashima Yuichirô, and Tsuru Yasuko, Tokyo: NTT Shuppan, 2000.
  14. The International Symposium Organizing Committee, The World’s View of Japan Through “Oshin,” Tokyo: NHK International Inc., 1991.
  15. Akio Igarashi, ‘From Americanization to Japanization in East Asia,’ The Journal of Pacific Asia, Vol. 4, 1997. The Committee for Research on Pacific Asia. This volume of the journal is dedicated entirely to the topic of Japanization. Akio Igarashi, ed., Henyosuru ajia to nihon: ajia shakai ni shintosuru nihon no popular culture, (a changing Asia and Japan: the infiltration of Japanese pop culture into Asian society), Tokyo: Seori Shobô, 1998, was edited for this special volume and was published in Japanese. In the late 1990’s, there was growing interest in this topic, and many research texts and articles have been published since. In the U.S. as well as other areas overseas, students taking Japanese studies courses have preferred it to the ever-popular subjects of Japanese economics and accounting.
  16. “Japanimation,” Nihon Keizai Shinbun, Nov. 18, 1995.
  17. Mainichi Shinbun, April 18, 1996.
  18. For further discussion see: Arjun Appadurai, Modernity at Large: Cultural Dimensions of Globalization, Minnesota: University of Minnesota Press, 1996. 27-47.
  19. Kunimasa Takeshige, Wangan senso to iu tenkaiten (the Persian Gulf War as a turning point), Tokyo: Iwanami Shoten, 1999.
  20. Sakamoto Yoshikazu, “Tero to ‘bunmei’ no seijigaku” (the political science of terrorism and “civilization”) in Terogo: sekai wa dô kawattaka (after the terror: how the world changed), Tokyo: Iwanami Shinsho, 2002.
  21. Henmi Yô, Tandoku hatsugen 99nen no handô kara afugan hôfuku sensô made (independent remarks on the resistance, from 1999, to the war in Afghanistan), Tokyo: Kadokawa Shobô, 2001. 39-41.

Colombia’s Conflict and Theories of World Politics

Ann Mason, Political Science, University of the Andes, Bogotá, Colombia1

Among the multiple critiques of International Relations theory, its limited relevance for understanding the Third World’s place in global affairs has gained increasing attention during the past decade.2 First, the end of the Cold War revealed a more complex world stage with a plurality of actors, problems and interests that had little to do with traditional interstate power relations. September 11 drove home like a sledgehammer the point that the world is about far more than the high politics of Western nations. Today, IR theory’s poor ability to describe and explain, much less predict, the behavior of states in the global South is recognized as one of its primary shortcomings. This in part accounts for the tepid reception that this body of theory has received within countries not counted among the great powers. Both academic and policy-making circles in the developing and less developed world are skeptical about a theoretical tradition whose claims to universalism not only ignore them, but also act to reify a global order within which they are destined to draw the short straw.3

The Andean Region exemplifies this breach between contemporary IR theorizing and the multifarious problems besetting peripheral states and societies. Until very recently, the violence and social conflicts found in nearly every corner of the Andes were not even on IR’s radar screen. The 40-year plus armed conflict in Colombia, the violent opposition to Hugo Chavez’s populism, massive social protests in Bolivia and Peru, and Ecuador’s persistent political and social instability have all been branded domestic issues, and thus not within the purview of systemic IR thinking. Worldwide transformations that have blurred the internal-external dichotomy, however, have prompted some to recognize what has long been common knowledge in the region: local conflicts and problems are completely enmeshed with complex global economic, social and political processes. Colombia’s conflict is a case in point. Global markets for illicit drugs, links between Colombian armed actors and international criminal organizations, regional externalities of Colombian violence, the massive level of migration to the North, the explosion of the global third sector’s presence in Colombia, increasing U.S. military involvement, and growing concerns of the international community about the deteriorating Colombian situation all illustrate the international face of this crisis.

What light might IR theory shed on a conflict that is estimated to result in 3,500 deaths a year, two thirds of which are civilian, that is responsible for 2.7 million displaced people and another 1 million plus international refugees, whose political economy is such that an average of seven kidnappings occur daily, and that has the country awash in numbing levels of violence and human rights abuses?4 IR theory is in the business of explaining and predicting violent conflict, as well as the behavior of the world’s member states in relation to conflict and stability. Although critical and second-order theories of international relations have fundamentally different concerns5, substantive theorizing must address what Michael Mann calls IR’s “most important issue of all: the question of war and peace.”6 Indeed, realist and liberal theories within the classical paradigm, which share a similar ontology, assumptions and premises, purport to do just that. Given that Kal Holsti’s latest figures estimate that 97% of the world’s armed conflicts between 1945 and 1995 took place in either the traditional or the new Third World, a viable theoretical framework of world politics must be able to integrate the global periphery.7

In this short essay I will discuss what contemporary IR scholarship may or may not offer in its treatment of the Andean Region, and of the armed conflict in Colombia in particular. My commentary will be limited to three issues familiar to the developing world, as seen through the lens of Colombia’s current crisis: the correlation of state weakness with violence and instability, the post-territorial nature of security threats, and the North-South power disparity. I will conclude with some observations on what this may tell us about the adequacy of the theories themselves.

State Weakness

The sovereign state that lies at the heart of the Westphalian model is the building block of mainstream IR theory. Most theorizing about international politics characterizes the state in terms of power, understood as the capability of achieving national interests related to external security and welfare. Realist and liberal perspectives, and some versions of constructivism, are all concerned with explaining conflictual and cooperative relations among territorially distinct political units, even while their causal, or constitutive, arguments are quite different. Although Kenneth Waltz was taken to task for blithely claiming that states under anarchy were always “like units” with similar functions, preferences and behavioral patterns, much of international relations scholarship persists in a top-down, juridical view of statehood largely abstracted from internal features.8

But international legal sovereignty may be the most that the advanced industrialized states have in common with states on the global periphery such as Colombia. First of all, Colombia’s priority is internal security, not its power position relative to other states. Threats to the state originate within Colombian territory, not in neighboring countries. In spite of some longstanding border tensions and historical rivalries within the Andean Region, Colombia and its neighbors tend to be more concerned with the strength of domestic social movements and armed actors than they are with the international balance of power. Indeed, even in the absence of a regional balancer, strong democratic institutions, dense economic and political networks, or multilateral governance structures, inter-state wars in the region during the 20th century have been extremely rare. This no-war zone, or negative peace according to Arie Kacowicz, appears to be best explained by a shared normative commitment to maintaining a society of states and to peaceful conflict resolution, contradicting both material and systemic explanations of interstate behavior.9

State strength in much of the developing world is not measured in terms of military capability to defend or project itself externally, but rather according to the empirical attributes of statehood: the institutional provision of security, justice and basic services; territorial consolidation and control over population groups; sufficient coercive power to impose order and to repel challenges to state authority; and some level of agreement on national identity and social purpose.10 States in the Andean Region all receive low marks for the very features that mainstream IR theory accepts as unproblematic and immaterial. Although Colombia is in no immediate danger of collapse, most indications point to a state that has become progressively weaker: the basic functions required of states are poorly and sporadically performed, central government control is non-existent in many jurisdictions, social cohesion is poor, and the fundamental rules of social order and authority are violently contested.11 Most importantly, the Colombian state fails the basic Weberian test of maintaining a monopoly over the legitimate use of force and providing security for its citizens.12

Internal state weakness, ranging from impairment to outright collapse, is the common denominator of post-Cold War global violence and insecurity.13 It is also the permissive condition of Colombia’s security emergency. Reduced state capacity underlies the more proximate causes of the violent competition with and among contending subnational groups, namely the FARC, the ELN, the paramilitaries, and narcotrafficking organizations. Recent efforts by the Alvaro Uribe administration to build up Colombia’s military suggest movement toward state strengthening, although effective consolidation must go far beyond this one component of stateness. It remains to be seen whether in the long run Colombia’s bloody conflict becomes a force for state creation in the Tillyan tradition,14 or on the contrary a structure that has ritualized violent discord as a normal part of Colombian social life.

This erosion in capacity and competence has taken its toll on what is perhaps a state’s most valuable asset–legitimacy. The Colombian state’s mediocre performance and problem-solving record degrade central authority, reducing public compliance and policy options and leading to a further deterioration in internal order as para-institutional forms of security and justice emerge. This dynamic has been exacerbated by new mechanisms of global governance and by the proliferation within domestic jurisdictions of global actors increasingly perceived as legitimate alternatives to sovereign state authority. What Jessica Matthews describes as a “power shift” away from the state–up, down, and sideways–to suprastate, substate, and nonstate actors as part of the emergent world order may also involve a relocation of authority.15 This is particularly apparent in the post-colonial and developing world, where the state is less equipped to respond to internal challengers and sovereignty’s norm of exclusivity is more readily transgressed. In Colombia, alternative political communities such as transnational NGOs, church and humanitarian associations, and global organizations, as well as insurgent and paramilitary groups, are increasingly viewed as functional and normative substitutes for the state.

Global Security Dynamics

At the same time that Colombia’s security crisis is in great measure attributable to the empirical weakness of the state, it also highlights another dimension of the emerging global order: the complex interplay between domestic and international security domains. The globalization of security puts into sharp relief the growing discontinuity between fixed, territorial states and the borderless processes that now prevail in world politics. While Realists would point out that current events in North Korea and Iraq are eloquent reminders of the applicability of a traditional national security model in which state-on-state military threats predominate, concerns in Colombia reflect a somewhat different security paradigm.

First of all, insecurity in Colombia is experienced by multiple actors, including the state, the society at large, and particular subnational groups. Security values, in turn, vary according to the referent: national security interests, both military and nonmilitary, exist alongside societal and individual security concerns. Colombian society not only seeks security against attacks, massacres, torture, kidnapping, displacement, and forced conscription, but also in the form of institutional guarantees related to democracy and the rule of law, and access to basic services such as education, employment and health care. Many of the internal risks that Colombia confronts are also enmeshed with regional, hemispheric and global security dynamics that are dominated by state and non-state actors.

While Colombia is typically viewed as being in the eye of the regional storm, the Colombian crisis is itself entangled with transregional and global security processes, including drug trafficking, the arms trade, criminal and terrorist networks, and U.S. security policies.16 The remarkable growth in the strength of Colombia’s most destabilizing illegal groups during the 1990s, for instance, is directly attributed to their ability to generate revenue from activities related to the global market for illegal drugs.17 Both the FARC and the paramilitaries capture rents from the cultivation, production and trade of cocaine and heroin, which finances their organizations, keeps them well stocked with arms also traded on regional and worldwide black markets, and sustains a pernicious conflict. These transactions occur within complex transnational criminal associations within and at the edges of the Andean region, which in turn are involved in global financial, crime, and even terrorist networks.18 Seen from this perspective, Colombia’s war is not so internal after all: it actively involves dense transborder networks composed of an array of global actors.19 Such a post-sovereign security setting underscores the necessity for mainstream IR theorizing to go beyond its state-centered vision of world politics and to develop conceptual tools better equipped to deal with global realities.

Power and Authority on the Periphery

IR theory’s notion of formal anarchy coexists uneasily with relations of inequality and domination that pervade world order. While most states in the South would tell you that the exclusive authority with which the institution of sovereignty endows them is not quite equal to that of their more powerful northern associates, neorealism and neoliberalism insist that the evident discrepancies among states are mere power differentials within a decentralized international system that lacks a central political authority. Thus hegemony and asymmetrical interdependence as such do not contradict the fundamental IR distinction between anarchy and hierarchy.

Some dominant-subordinate structures, such as the U.S.-Colombian relationship, may indeed be about more than material differences, however. The immense disparity in economic, political and military power has permitted Washington to impose its will in Colombia on a wide range of issues, in a manner resembling a coercive hegemonic project. Nevertheless, Colombian observance of American preferences in its foreign and internal security policy is not exclusively related to overt threats or quid pro quos. The rules of what Alexander Wendt and Daniel Friedheim call “informal empire” are such that inequality can also be characterized as a de facto authority structure.20 Authority implies that the U.S. exercises a form of social control over Colombia, and that in turn Colombian compliance cannot always be explained by fear of retribution or self-interest, but rather suggests some acceptance, no matter how rudimentary, of the legitimacy of U.S. power.21 Ongoing practices that become embedded in institutional structures can create shared behavioral expectations and intersubjective understandings reflected in identities and preferences. Colombia’s anti-drug posture, for example, which was in great measure shaped by Washington’s militarized war on drugs and aggressive extradition policy, has over time become internalized.22 Colombia has appropriated the prohibitionist discourse of the United States, and become an active agent in reproducing its own identity and interests vis-à-vis the illegality and danger of drugs.23

It would be an exaggeration, however, to conclude that Colombia’s behavior on its shared agenda with the U.S. is completely consensual: the underlying power configuration is a constant reminder that Washington calls the shots. The U.S. reconstruction of Colombia’s internal conflict into part of its war on global terror, with great uncertainty within the country about its implications for a negotiated settlement, is illustrative. U.S. preponderance can also lead to “increased incentives for unilateralism and bilateral diplomacy,” at times directly against Colombian interests.24 Recent arm-twisting to grant immunity to American citizens and military in Colombia from prosecution for human rights violations under the International Criminal Court is a case in point. Still, material inequalities can obscure how third-dimensional power also operates in the informal authority relations between the United States and Colombia.


The Colombian situation suggests various themes which theories of world politics would be well advised to take into consideration. IR theory has been largely silent on the issues of state-making and state-breaking that reside at the heart of the Third World security problematic. In neglecting domestic contexts more broadly speaking, this body of theory is inadequate for explaining the relationship between violent “internal” conflicts and global volatility at the start of the 21st century. These theories also have a blind spot when it comes to non-state actors in world politics. In overemphasizing states, realist theories in particular are hard pressed to adequately account for the countless sources of vulnerability of states and societies alike. Security threats, from terrorism to drug trafficking to AIDS, defy theoretical assumptions about great power politics and the state’s pride of place in world order. Similarly, non-territorial global processes such as Colombia’s security dynamics are not well conceptualized by conventional IR levels of analysis that spatially organize international phenomena according to a hierarchy of locations.25 Colombia’s experience with sovereignty also calls into question the logic of anarchy in realist and liberal IR theorizing. Seen from a peripheral point of view, the notion of formal equality is little more than a rhetorical device that camouflages deep and persistent material and social inequalities in the international system. We thus arrive at the conundrum of a “stable” world order, in IR terms defined by the absence of war among the world’s strongest states, wracked by violent conflict and immeasurable human suffering in peripheral regions. Perhaps most importantly, today’s global security landscape should prompt us to rethink theories that by and large bracket the non-Western, developing domains and suppress their narratives.

The heterogeneity of the IR discipline cautions us against jettisoning the entire canon as flawed when it comes to the Third World, however. Constructivists’ incorporation of a social dimension into an analysis of state identities and interests is a promising research agenda for analyzing non-material aspects of North-South relations. Institutionalist theory has also contributed to our understanding of the role of global institutions and norms in conflict resolution and cooperation in the Third World, and may offer insight into seemingly intractable conflicts such as Colombia’s. Paradoxically, certain realist precepts also have utility for analyzing the international politics of developing states. The distribution of global economic, political and military power has an enormous impact on center and peripheral states alike. As we have seen, the inequality in U.S.-Colombian relations poses a serious challenge to multilateralism and mechanisms of global governance. In spite of the ongoing reconfiguration of the state in response to global transformations, the sovereign state has proved to be highly adaptive and resilient. Colombia’s internal weakness, for example, is to be contrasted with the state’s increasingly successful political and diplomatic agenda within the international community, even in the face of increasing global constraints. The complexities of Colombian security dynamics, which vividly illustrate a non-realist security landscape, nevertheless require that public policies prioritize, delineate and specify threats and responses largely in conventional, military terms. And finally, Colombia’s efforts to recuperate state strength, or complete its unfinished state-making process, as the case may be, suggest that state power remains pivotal to internal, and thus global, order.

Rather than dismissing IR theory outright for its shortcomings in explaining the problems of countries such as Colombia, we may be better advised to look toward peripheral regions for what they can contribute to testing, revising, and advancing our theories of international politics. Perhaps the explosion of war-torn societies in the Third World and the implications this has for global order will inspire critical analysis of where the theories fail and what they have that is germane to analysis of international relations in the South. Just as there is no single theoretical orthodoxy in IR, neither is the Third World a homogeneous unit. With any luck, the diversity of these experiences will lead to new theorizing about world politics.


  1. I would like to thank Arlene Tickner for her helpful comments on this article.
  2. For an introduction to this line of analysis, see Stephanie Neuman, ed., International Relations Theory and the Third World (New York: St. Martin’s Press, 1998).
  3. On how Latin American scholarship has incorporated Anglo-American IR thought, see Arlene Tickner, “Hearing Latin American Voices in International Studies”, International Studies Perspectives, 4, 4, 2003 (forthcoming).
  4. While the number of conflict-related deaths is high, the overall figures on violence in Colombia are nothing short of alarming. In 1999 there were 22,300 violent deaths, representing a homicide rate of 53.66 per 100,000 individuals, according to Alvaro Camacho, “La política colombiana: los recorridos de una reforma,” Análisis Político, No. 41, 2000, pp. 99-117. Displaced population figures are as of September 2002, according to the NGO Consultoría para los Derechos Humanos y el Desplazamiento (Codhes); see Boletín Codhes, September 4, 2002. The number of international refugees is taken from the website of the U.S. Embassy in Bogotá, last updated April 4, 2002, and from the U.N. High Commissioner for Refugees’ website, last updated January 24, 2003. Colombia has the dubious distinction of having the highest kidnapping rate in the world, with 2,304 cases reported in 2001 according to País Libre, a local NGO dedicated to the problem of kidnapping. See “Total Secuestros en Colombia 1997-2002”, May 14, 2002.
  5. My comments will not engage Marxist approaches, critical theory, or post-positivist constructivism, but will rather focus largely on first-order problem-solving theories of international relations.
  6. Michael Mann, “Authoritarianism and Liberal Militarism: A Contribution from Comparative and Historical Sociology,” in Steve Smith, Ken Booth and Marysia Zalewski, eds., International Theory: Positivism and Beyond (NY: Cambridge University Press, 1996), p. 221.
  7. Kalevi Holsti, The State, War, and the State of War (NY: Cambridge University Press, 1996), pp. 210-24.
  8. Kenneth Waltz, Theory of International Politics (Boston: Addison-Wesley, 1979). Both constructivism and liberal democratic theory are important exceptions.
  9. Arie Kacowicz, Zones of Peace in the Third World: South America and West Africa in Comparative Perspective (Albany: State University of New York Press, 1998).
  10. On empirical versus juridical definitions of statehood, see Robert Jackson, Quasi-States: Sovereignty, International Relations and the Third World (Cambridge: Cambridge University Press, 1990).
  11. For an excellent overview of Colombian state weakness and the “partially failed” thesis, see Ana María Bejarano and Eduardo Pizarro, “The Coming Anarchy: The Partial Collapse of the State and the Emergence of Aspiring State Makers in Colombia,” paper prepared for the workshop “States-Within-States,” University of Toronto, Toronto, Ontario, October 19-20, 2001.
  12. Max Weber, Economy and Society, in G. Roth and C. Wittich, eds., E. Fischoff et al., trans. (Berkeley: University of California Press, 1978).
  13. For a sampling of the large literature on this topic, see Joel Migdal, Strong Societies and Weak States: State-Society Relations and State Capabilities in the Third World (Princeton: Princeton University Press, 1988); Robert Jackson, Quasi-States: Sovereignty, International Relations and the Third World (Cambridge: Cambridge University Press, 1990); Barry Buzan, People, States and Fear: An Agenda for International Security Studies in the Post-Cold War Era (Boulder, CO: Lynne Rienner, 1991); Brian Job, ed., The Insecurity Dilemma: National Security of Third World States (Boulder, CO: Lynne Rienner, 1992); William Zartman, ed., Collapsed States: The Disintegration and Restoration of Legitimate Authority (Boulder, CO: Lynne Rienner, 1995); Kalevi Holsti, The State, War, and the State of War (Cambridge: Cambridge University Press, 1996); Ali Mazrui, “Blood of experience: the failed state and political collapse in Africa,” World Policy Journal, 9, 1, 1995: 28-34; and Lionel Cliffe and Robin Luckham, “Complex political emergencies and the state: failure and the fate of the state,” Third World Quarterly, 20, 1, 1999: 27-50.
  14. Charles Tilly, Coercion, Capital, and European States, AD 990-1992 (Cambridge, MA: Blackwell, 1990).
  15. Jessica Matthews, “Power Shift,” Foreign Affairs, 76, 1, 1997: 50-66.
  16. For an elaboration of the transregional security model see Arlene B. Tickner and Ann C. Mason, “Mapping Transregional Security Structures in the Andean Region,” Alternatives, 28, 3, 2003 (forthcoming).
  17. The relationship between rents from illegal drugs and the internal conflict in Colombia is well established in Nizah Richani, Systems of Violence: The Political Economy of War and Peace in Colombia (Albany: State University of New York Press, 2002).
  18. See Tickner and Mason (2003) and Bruce Bagley “Globalization and Organized Crime: The Russian Mafia in Latin America and the Caribbean,” School of International Studies, University of Miami, 2002, unpublished paper.
  19. Colombia’s conflict increasingly resembles the new war nomenclature. See Mary Kaldor, New and Old Wars: Organised Violence in a Global Era (Cambridge: Polity Press, 1999).
  20. Alexander Wendt and Daniel Friedheim, “Hierarchy under Anarchy: Informal Empire and the East German State,” International Organization, 49, 4, 1995: 689-721. See also Nicolas Onuf and Frank Klink, “Anarchy, Authority, Rule,” International Studies Quarterly, 33, 1989: 149-173, on the paradigm of rule as an alternative to anarchy.
  21. On the different methods of social control, see Ian Hurd, “Legitimacy and Authority in International Politics,” International Organization, 53, 2, 1999: 379-408.
  22. This process is, nevertheless, highly uneven, and can be mediated by multiple factors. For an analysis of how domestic considerations led Colombia to adopt a confrontational position toward U.S. demands on extradition during the Gaviria administration, see Tatiana Matthiesen, El Arte Político de Conciliar: El Tema de las Drogas en las Relaciones entre Colombia y Estados Unidos, 1986-1994 (Bogotá: FESCOL-CEREC-Fedesarrollo, 2000).
  23. Curiously, even while various states in the U.S. are considering the decriminalization of drug use for medicinal purposes, Colombia’s current proposed political reform includes eliminating the “personal dosis” of illicit substances, which had been legalized by the Constitutional Court in 1994. On the construction of an anti-drug national security identity see David Campbell, Writing Security: United States Foreign Policy and the Politics of Identity (Minneapolis: University of Minnesota Press, 1992).
  24. Robert Keohane, “The Globalization of Informal Violence, Theories of World Politics, and ‘The Liberalism of Fear,’” in Craig Calhoun, Paul Price and Ashley Timmer, eds., Understanding September 11 (New York: The New Press, 2002), p. 85.
  25. Barry Buzan, “The Level of Analysis Problem in International Relations Reconsidered,” in Ken Booth and Steve Smith, eds., International Relations Theory Today (University Park: The Pennsylvania State University Press, 1995).

Institutions in Turbulent Settings

Francisco Gutiérrez1

This paper critiques some applications of the neoinstitutionalist program (NI) to the study of Latin American and Andean polities, and tries to develop some aspects of an alternative framework.

The critique develops on two levels. First, the “turbulent” institutions of several Latin American and Andean countries highlight some of the shortcomings of NI tout court. Since turbulent settings are the norm, rather than the exception, both theoretically and empirically there is a need in each case to explain the specific sense in which institutions can be taken as an independent variable. Second, the variant of NI most frequently applied to Latin America–with its heavy and almost always implicit normative and theoretical assumptions2–is deeply flawed and fails to address the very core of political conflict and change in these countries. I hope that the context will indicate at which level my criticism is developing. I contend that the shortcomings of standard NI models can be overcome with a kind of political analysis that “brings society back in” and incorporates learning in a nontrivial way. This discussion is linked to the “philosophy of history” of NI (path dependency), sensitivity to initial conditions and the transition from laminarity to turbulence.

In the first part of the essay, I flesh out some basic definitions. In the second, I contrast how institutions work in stable (or “laminar”) and turbulent settings. In the third, I stress the importance of nontrivial learning from a Schumpeterian perspective. In the fourth, I show that path dependency is only part of the story. Each section leaves some unanswered and (hopefully) interesting questions.

1. Definitions

I understand NI in a very conventional way: a theory that takes institutions, broadly understood, as a relatively fixed set of incentives that explain differential social outcomes (as in North and Thomas’s rendering of “the rise of the Western World”, 1976). How broadly understood is open to debate. A typical gambit of NI as applied to Latin American contexts is to resort to so-called “informal institutions” when the proposed correlation between institutions proper and outcomes fails to show up. Thus, the concept of institutions would contain all the rules of the political game. The problem of fancifully broad definitions of institutions is that they are all-encompassing in nature, identifying institutions with any stable pattern of human interaction. This is an open door for circular argumentation, tautology, and programmatic degeneracy (in Lakatos’s [1970] jargon). Specifically, the notion of “informal institutions” is open to two criticisms. First, contrary to institutions proper, the so-called informal ones are identified by an external observer without the conscious acceptance of the protagonists of the interaction. They are an analytic device, at another level of reality than explicit rules. How can one demonstrate empirically that the set of informal rules that supposedly were observed actually regulates a given universe of interactions? Second, informal rules have a very vigorous (and unstudied) life, below and above formal institutions. Below: conventions, for example, are potent devices for solving coordination problems, which largely live in a world prior to explicit rule-establishment (Lewis, 1986) and which certainly do not require a two-person interaction.3 Above: meta-agreements are not necessarily well behaved. In countries with weak polities, it can be the case that the only valid rule is the (informal) motto “the rule is that no rule will be observed”.
This kind of (paradoxical) order does not orient actors in their everyday life–and so the typical inference of the NI program (such and such set of institutions generates such and such social outcomes) is out of place. Thus, it is better to stick to something like the more restrictive but sensible definition of Eggertsson: “Let us define institutions as sets of rules governing interpersonal relations, noting that we are talking about formal and organizational practices” (Eggertsson 1990, 70); see also Tsebelis (1990).

What is the explanatory power of institutions? In a world of (semi)rational agents, stories of failure and success have to be accounted for: Why do people (sometimes) take the wrong paths? Because they are sent the wrong signals, which encourage suboptimal behavior. And why do wrong signals occur? Mainly because of unspecified property rights, imperfect information and transaction costs. NI offers microfoundations where “organic culturalism” (à la Putnam) does not.4 Suboptimal outcomes can be explained by poor institutional design–which, in turn, is maintained by coalitions that benefit from it. Transaction costs are the prototype of such a mechanism. Market and state failures create social forces that help to maintain those very failures. From positive feedback and the decisive role of institutional design, path dependency follows. This provides for a theory of social change.

Note that the microfoundations provided by NI make sense if and only if institutions constitute a system of incentives that explains the behavior of the agents. On the other hand, NI does not entail hard-nosed rationalism; it should suffice that institutional signaling be the most salient feature of the incentive landscape, the other incentives playing the–perhaps important but logically subordinate–role of noise.

2. The Neoinstitutionalist Framework in Turbulent Settings: Does It Work?

Does NI provide an adequate toolkit for understanding conflict and change in countries with high levels of instability and where “noise” is stronger than institutional signaling? I think not. I will not dispute that in such countries “institutions matter”–an assertion that is trivially true (Harris, Hunter and Lewis 1995)–nor that a broad family of social and political problems can be captured by the rich conceptual framework of NI. I start instead with the defense of the (once again, very conventional) claim that such matters as contemporary political change in Colombia, Ecuador or Venezuela cannot be accounted for by standard NI.5 I believe there is a fundamental difference, from the point of view of the role and status of institutions in social and political life, between those turbulent Andean cases and the core capitalist countries.

a. My main defense of the conventional claim is based on the fact that in a very important sense countries like Colombia, Perú or Venezuela fit too well the theories that constitute the formal backbone of NI. To see this, let us use the powerful dichotomy between “policy politics” (playing the game) and “institutional politics” (debating the rules of the game), common to many variants of social choice theory. Buchanan (1986) argued that in all developed democracies (of his time), policy politics was the dominant practice, because basic institutions were taken for granted. This is perplexing, because if institutions are decisive for outcomes of efficiency and distributional problems, they should be a permanent object of political strife among (even boundedly) rational agents. The Tullock question (“Why so much stability?”) has no easy answer; in this regard, I can do no better than quote Eggertsson in extenso:

If institutions are somewhat chosen as we want to argue [rationally], then we are back to the disequilibrium outcomes of majority-rule voting, and the choice among institutions will not lead to stable or equilibrium institutions. And it does not help to argue that the choice of institutions is prescribed by higher rules, written or unwritten constitutions, because this only pushes the argument one step back, requiring us to predict unstable constitutions. . . . Just as in the case of voting outcomes, empirical observations tell us that the institutional structures in democratic countries are relatively stable, that they tend to be [in] equilibrium. . . . Shepsle argues that political institutions are ex ante agreements about cooperation among politicians. According to this view, institutions can be seen as a capital structure designed to produce a flow of stable policy outcomes, and institutional change is a form of investment. One of the costs of institutional change is the uncertainty about which outcomes the new regime will produce. Uncertainty implies that a given structure may ex ante be associated with a set of structure-induced equilibrium points. Ex post, this uncertainty is gradually reduced as the operational qualities of a new institutional structure become known. Finally, Shepsle argues that this uncertainty about the impact of structural change on equilibrium outcomes is enough reason to stabilize institutions and prevent continuous institutional change. He maintains, thus, that the calculations of agents in decisions involving policy choices are qualitatively different from calculations regarding institutional change (Eggertsson 1990, 71-72).
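The “disequilibrium outcomes of majority-rule voting” that Eggertsson invokes can be made concrete with a minimal sketch (not from the essay; the voters and preference orderings below are hypothetical): three voters with cyclic preferences over three institutional options produce no stable pairwise-majority winner, the classic Condorcet cycle.

```python
# Condorcet-cycle sketch of majority-rule instability over institutional
# choices. Voter preference orderings are hypothetical, listed best-first.

voters = [
    ["A", "B", "C"],  # voter 1 prefers option A over B over C
    ["B", "C", "A"],  # voter 2
    ["C", "A", "B"],  # voter 3
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Pairwise majority votes: A beats B, B beats C, yet C beats A.
# No option survives every pairwise contest, so no equilibrium choice exists.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True -- closes the cycle
```

Because every proposed rule of the game can be defeated by another under majority rule, auxiliary arguments (such as Shepsle’s appeal to uncertainty) are needed to explain the observed stability of institutions in developed democracies.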

Stability is a major anomaly, for which auxiliary arguments had to be developed. In contrast, in the Andean area we have fully normal cases. The Andean constitutional wave of the 80s and 90s was one of the two biggest in the contemporary world, together with the post-socialist Central-East European wave. Colombia issued one new constitution and several important reforms related, among other matters, to property rights and the judiciary; Ecuador produced two constitutions (1979 and 1998), and a couple of major reforms in the meantime; Perú also enacted two constitutions; and Venezuela and Bolivia one each. In all these countries, “writing a new charter” has been the main motive of politics for long periods, a phenomenon backed by a strong historical tradition. Furthermore, large-scale institutional change is the main objective of everyday political squabbling: decentralization; electoral, judicial and congressional reform; and so on. The legislative inflation in the Andes is monstrous; not even specialists can comfortably follow the unending stream of change in the rules of the game in practically all of the basic areas of life.6

In normal Andean countries, then, institutional politics is dominant (and policy politics is rather poor). But this creates a real problem for NI. First, the basic rules of the game are not stable. Rather, they are the main object of contention and change very fast. Now, “unstable institutions” is rather an oxymoronic expression–whatever sense one can give to it, it affords only weak independent explanatory power to institutions. It also calls into question evolutionary arguments, one of the best alternatives to hyperrationality assumptions (Axelrod 1986; Young 2001), because the slow pressure against maladaptations does not have time to unfold (I will return to this). Thus, we have to ask in what sense we are speaking about a system of incentives that actually affects agents’ behavior. If, as is the case in countries like Colombia or Ecuador, agents are conscious of the velocity of institutional change, their expectations are not strongly linked to any clearly specified (present) system of incentives. More subtly, we cannot maintain simultaneously both the theory and the explanations for the anomaly; both are stated on a general plane, and each one will hold for all cases or for none.

The obvious empirical question is: In such a setting, do institutional arrangements act as independent variables? And, then: How do agents respond to this family of noisy and fluid incentives?

b. Although in the above sense turbulent countries are too normal,7 in another sense they are odd. Instability, war and violence give rise to self-sustaining patterns of human interaction, which in turn generate explanatory problems for the concept of rational calculus and thus of “systems of incentives.”

In Colombia, for example, war and elections have coexisted for a very long time–more or less twenty years, if we fix the start of the present wave of armed conflict in the 1982-84 period.8 Violence has become an everyday component of political action, making it a high-risk activity. This changes the menu of options for politicians, as well as their mindset. It also creates a rationality problem.

Due to war, Colombian politicians have the option–often the need–of switching between institutional systems: the jurisdiction of the state or the jurisdiction of the warlords. It is important to stress that warlords are involved in electoral politics in their territories, so the modal politician will participate in different and contradictory institutional worlds. This is another type of institutional politics (with changes in space, not in time). Agents can choose among competing rules of the game, indicating what kind of game they prefer in a given time period, but they pay the price of prohibitively high risk taking.

Colombian politicians of all political parties and families are presently taking very high risks of being kidnapped or killed.9 Oddly, there has been a strong increase both in risk and political participation (in the sense that there are many more candidates and lists) in Colombia in the last twenty years.10 Why? Whatever the answer, I would argue that at the limit, when one is playing Russian roulette (i.e., when the loss of one’s life is one of the prizes), using the principle of revealed preferences is simply not sound. But in what sense then are we speaking of utility functions and systems of incentives? Risk can be coped with under the incentive system up to a certain point–after that, incommensurability appears.

c. Turbulent countries tend to be weak and vulnerable nation-states. A fundamental part of their decision making and political life is transnational. War, narcotrafficking, decentralization, economic adjustment, to name just some of the dominant motifs in the Colombian case, involve extended webs of national and foreign actors. This would not be a major problem except that transnational governance systems, and their corresponding distributional problems, have generally escaped the gaze of NI analysis–in part, indeed, because they are institutionally mis-specified,11 in part because the mechanism of “micromotives and macrobehavior” (Schelling 1978) implies a change of scale: when one moves from the national to the global, the unit of analysis moves from individuals to states and organizations.

On the one hand we have, then, crucial globally driven processes–that is, narcotrafficking, development models, technological change–and, on the other hand, a national institutional framework.12 The result is that, empirically, one can observe a strong link between the “small” everyday practices of local politicians and the “big” arrays of transnational phenomena, but NI gives no way of capturing it.13 When one poses simple questions–such as why democracies are unstable in the Andean area or why the Colombian political system has changed in the specific sense it has–this tension becomes particularly uncomfortable. In the new wave of institutional studies on Latin America, novel and interesting aspects of political life were analyzed successfully, but a quaint dichotomy took center stage: adequate problem specification versus strong methods. One can have one of these goods, but not both simultaneously. Geddes (1994), for example, in her neat and intelligent analysis of state reform and the trajectory of bureaucracy-technocracy in Brazil, overlooks the role of international financial institutions. This description of a “purely national dilemma” defies credibility. An even more extreme example concerns the interpretation of political change in Colombia: analysts have focused on the niceties of electoral legislation, or more generally on institutional design, forgetting about such small details as narcotrafficking, war or the changes caused by television in political life (e.g., Nielson and Shugart 1999). Old institutionalism was not free of such difficulties, as denounced by Eckstein (2000), who criticized it for its focus on the small print of electoral design and its failure to give adequate context to the fall of the Weimar Republic.14 This draws our attention to the problem of metrics. What is the set of institutions relevant to the specific problem at hand?
Can, for example, the electoral fragmentation of several Andean polities be explained simply by wrong electoral rules, or are other, more distant, institutions relevant?15

In brief, society (with its “long networks” of political action) has to be brought back in. This applies, if the previous arguments are correct, especially to normal countries, where the institutional framework is unstable.

Conflicts around basic institutional design, the breakdown and creation of coalitions, and national–transnational agendas–this is the very matter of everyday politics in “states in crisis”. Is it at all possible to speak about it, displaying at least part of NI’s rigor and using some of its methodological tools?

3. Innovation and Learning

The question merits a positive answer; there are now interesting programmatic reflections (from Hedström and Swedberg [1998] to McAdam, Tarrow and Tilly, [1997]) as well as appealing empirical works that address many of these challenges. Since the adaptation of the framework of “the vast majority of analyses produced by political economists” (using Hall’s [1997] expression) to these concerns will necessarily be incremental and piecemeal, I will offer a different, Schumpeterian twist.

Schumpeter’s name in political science is bound to the so-called elitist perspective, a contested though fertile view of electoral competition. But here I use another aspect of Schumpeter’s work: his notion of entrepreneurship and innovation. My belief is that studying political innovation allows the researcher to trace the links of the chain that go from transnational processes to small local coalitions and conflicts. It may also allow for a better understanding of the nature of political change and a better fit to empirical data than standard accounts.

The intuition of viewing the politician or social leader as an entrepreneur is already well established and has been particularly successful in the study of social movements (see, for example, McAdam, McCarthy and Zald 1996). However, the typical definition of the entrepreneur in the social movement literature is as a resource mobilizer, thus failing to address what Schumpeter considered the distinctive nature of entrepreneurship–innovation.16 Perhaps because social movements tend to be short-lived, the study of the repertoires of contention has not led to the analysis of innovation and its long-lasting effects on organisms and systems. More than in risk taking or mobilizing resources, entrepreneurs (by Schumpeter’s definition) are engaged in innovation, defined as “technological change in the production of commodities already in use, the opening up of new markets or of new sources of supply, Taylorization of work, improved handling of material, the setting up of new business organizations . . . in short, any ‘doing things differently’ in the realm of economic life” (Schumpeter 1939, 84).

What causes innovation? Endogenous change and exogenous shocks.17 Schumpeter focused on the former, actually considering the latter of little interest to economics. Whatever the merits of his reasoning, both types of forces seem crucial to the study of political systems. In politics, exogenous shocks occur frequently and are of indisputable interest: the ways in which wars, changes in tastes and new technologies give rise to ways of “doing things differently” are not well understood, yet they are crucial for the interpretation of political change.

The import of applying the Schumpeterian view of innovation is that we can study explicitly the interaction between processes of innovation and “exogenous shocks” caused by distant drivers of political change. We can do this without abandoning the basic tools that give NI its analytical force, particularly some notion of rationality and informational economy (signals, incentives and constraints)–that is, microfoundations.18 In other words, our agents will remain basically the same (for example, politicians who want to win elections) but now they are open to many different incentives and constraints. Exogenous shocks exist, so the system and its environment are moving simultaneously. The social landscape will be the vector that results from aggregated microinteractions, institutions and exogenous forces.

It is important to stress that the idea of innovation also allows for the study of both moments of the interaction between institutions and agents: how institutions restrict agents and orient them in specific directions, and how agents, through small changes, transgressions and adaptations, perturb and finally transform institutions.19 In this sense, as addressed specifically by Schumpeter, the study of innovation is evolutionary. But if narrow rationalism is replaced by an evolutionary perspective, we have a "syncopated evolution," because the environment changes very fast, creating juxtaposed layers of adaptive practices. We do not just have a society of limited, myopic agents struggling in (sometimes very) noisy environments. Evolution can be imperfect and allow for the survival of maladaptations to previous incentive systems, because exogenous shocks can accumulate, truncating the evolutionary process. One of the typical results of this kind of evolution is thus "mixed types" and heterogeneous coalitions. Keeping track of the exogenous shocks, their unfolding and their effects on the political system, is consequently an antidote to the uncritical adoption of the modern-backward dichotomy that is so evident in several NI analyses.

For example, elsewhere I have shown (Gutiérrez 2001) that to explain the type of relations between narcotraffickers and politicians in Colombia in the last twenty years I needed two dimensions: a principal-agent model that accounted for the contractual conflicts between criminals; and a new insertion of Colombia in the international system, marked by the 1991 Constitution, that changed the role of the state vis-à-vis illegality. On both dimensions the characteristic of social interaction was strong ambiguity and a large amount of noise. The 1991 Constitution was considered in its time a modernizing landmark, and in fact it provided a wealth of institutional resources to expel organized criminals from political life; but at the same time it gave in to the main demand of narcotraffickers (a ban on extradition for Colombians).

Schumpeter's analysis made learning the driving force of change, though it focused only on imitation. The "new ways of doing things" spread in waves, with early imitators replicating successful practices. But entrants keep imitating even after the marginal benefits of the innovation have reached zero, while other practitioners simply can't catch the beat of the new rhythms. Thus, a typical result of Schumpeterian evolution is that learning entails overreacting. Herd effects and congestion lead innovation-prone rational agents to suboptimal behavior. Agglomeration around successful devices results in catastrophes and organizational destruction. Schumpeter could identify the specific (market) mechanisms behind organizational breakdown and catastrophes.
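The overshoot mechanism can be sketched in a toy simulation (an illustrative model of my own, not one taken from Schumpeter; the functional form and parameters are assumptions): imitators respond to the innovation's payoff observed one period late, so adoption shoots past the zero-profit level and a shakeout follows.

```python
# Toy model of a Schumpeterian imitation wave (illustrative sketch only;
# capacity, adjustment speed and the lag structure are assumptions).
# Entry responds to the payoff observed one period LATE, so adoption
# overshoots the zero-profit capacity and organizational mortality follows.

def imitation_wave(capacity=100.0, k=0.8, periods=40):
    """Adopters adjust toward capacity, but on stale information."""
    a = [1.0, 1.0]  # the lone innovator, before imitation starts
    for t in range(2, periods):
        stale_profit = capacity - a[t - 2]           # lagged success signal
        a.append(max(a[t - 1] + k * stale_profit, 0.0))
    return a

path = imitation_wave()
peak = max(path)
trough = min(path[path.index(peak):])
print(f"peak adoption {peak:.1f} vs zero-profit level 100.0")
print(f"post-peak trough {trough:.1f}: herding, then shakeout")
```

Because the signal is stale, individually rational imitators keep entering past the point where marginal benefits are zero, and the same lag then drives the exit wave–a mechanical analogue of "learning entails overreacting."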

We are nowhere near being able to do the same, because in politics there is no equivalent of markets, but I suggest this is a fundamental task. There has been a big bang in the political system in the Andes, but we do not know why it took place or why outcomes differed (an organizational earthquake in Perú, Ecuador and Venezuela; relative stability in Bolivia; change with important organizational invariants in Colombia). Indisputably, these differential outcomes are related to the diverse ways in which agents adapted to rather similar processes. Several interesting questions spring from this simple statement: Which concrete mechanisms explain the differential reactions? When and why do the differential reactions imply divergence or convergence of outcomes?

Innovation and learning are the core of technical change. Note that, as in economics, in politics the latter expression has two meanings: the introduction of new devices; and the development of new forms of organization, discourses and practices. Both of them are very important. Television, for example, has had a very serious impact on Latin American political systems, giving an advantage to individuals over organizations20–a circumstance that can hardly be overcome by changing electoral statutes.21 On the other hand, there is a rich menu of purely “soft” technological innovations that trigger long-range political change. The following is a list of basic innovations, with illustrations of the decisions and processes involved:

Finding new financial resources. With the growing importance of organized crime in political life, a major decision is whether or not to accept illegal funds, and, if so, how to do it and how to justify (or deny) it publicly. The diverse ways in which these decisions are made–and rebuked by adversaries–produce a specific technology of public debate over the legal-illegal and formal-informal divide (which, by the way, is crucial in the contemporary world, not only for states in crisis).

Finding new languages, political discourses and symbols. In the Andean area, the second half of the 1980s and the 1990s saw the upsurge of the so-called antipolíticos, who sought to capture the votes of the citizens by staging an involved public performance of denial (“elect us because we are not politicians”). This complicated ritual produced a new technology of political symbolism.

Finding new ideas and new forms of interest aggregation and articulation. New ideas are decisive in political experience. In this regard, Hall has made two extremely important points. First,

[T]he vast majority of analyses produced by political economists take the same general form, which is to say that they identify a fixed set of variables, whether composed of interests, institutions or ideas, given exogenously to the process of political conflict, and then show how these structure the situation so as to produce the relevant outcomes. This kind of analysis can have real value, but what it misses is the extent to which the outcomes may be created via processes of political conflict and not generated entirely by the antecedents of that conflict.22 (Hall 1997, 197)

Second, ideas are a basic dimension of the political, because “politics is not only a contest for power. It is also a struggle for the interpretation of interests . . . politics is more open than most political economists see it” (ibid.; see also Hall 1992). Ideas, and ideals, migrate, suffer counterintuitive adaptations and are articulated (to use another Hall category) with others in an evolutionary fashion.

Finding new dimensions of political practice. Shall parties resort to violence or not? Will they significantly change their traditional repertoire of political action? Will they publicly change their ideology?

Highlighting innovations allows one to exhibit the remarkable, and often neglected, technical content of political struggle and change. The technical is not only a way of presenting interests; it is a way of building them (Hall 1997). At the same time, it shows some stark contrasts with standard NI. From the NI perspective, learning is "transparent." Indeed, the pathbreaking analyses of Akerlof (1970), Stiglitz and others (e.g., Kreps [1990]) develop exquisite models that enhance our understanding of informational problems, and offer potent tools to spell them out. However, their translation into political terms remains doubtful. For example, principal-agent structures have been applied to the people-government relation, losing sight of the fact that "the people" is not an actor but a space crossed by cleavages and fractures. Moreover, NI political scientists have systematically ignored the extremely simple–and intelligent–observation of Hirschman: politics is also (fundamentally) about speaking, and thus we need a theory of voice and signaling.

Learning through technical innovation suggests a different picture. First, agents create and adapt along many different dimensions, and are frequently struck by endogenous waves of innovation and/or exogenous shocks. This has two types of consequences. On the static side, the concept of dimensions of evaluation precedes that of systems of incentives. The best example I know of the dramatic changes in the model when the dimensions of evaluation are methodically and explicitly introduced is the classical critique by Hirschman (1970) of the Schumpeterian-Downsian model of elections. On the dynamic side, as stressed above, this evolution is punctuated by frequent large-scale changes in the environment, a factor that limits the power of the basic mechanism of the slow and gradual elimination of unfit agents.

Second, learning advances in waves, so that even a movement toward Pareto-optimal situations can cause catastrophic organizational mortality. There is a clear analogy, then, between herding behind a successful innovator and the classical collective action dilemmas. A polity that learns well can end in a state of constant disarray. Third, signals are not transparent–they have to be read. This argument can be developed in several stages. To start with, innovators can successfully override institutional considerations. Institutions themselves can be inconsistent: to take a typical contemporary situation, for example, answering simultaneously to a national and an international constituency with contradictory interests and concerns. Additionally, agents are exposed to lumpy stimuli–not one signal from one institution, but an institutional score, if I may, which the agent has to learn to interpret and play in front of different audiences. In turbulent settings, where institutions are unstable, short lived and inconsistent, and coexist with other very strong systems of incentives, the ambiguity and polysemy of institutional signals reach a point where the (buried) assumption of the communicational transparency of contractual incentives is untenable.

4. Whither Path Dependency?

The previous discussion is intimately related to the notions of equilibrium and change. To simplify: while neoclassical models predict convergence independently of historical contingency, NI establishes divergence and irreversibility as core concepts that entail path dependency (single versus multiple equilibria outcomes). Taken together, both perspectives raise many questions. The first concerns the level of resolution of the explanatory mechanisms. How "small" can an event be and still trigger a "big" change? For example, Eckstein (2000) argues that the analysis of the Weimar Republic by old institutionalists was wrong because they highlighted the finesse of electoral legislation, forgetting the huge historical tragedy behind Nazism. The criticism is intuitively appealing, but at the same time one would want an argument that addressed the very real fact that there need not be congruence between the size of the "cause" and the size of the "effect."

The second question is related to the fact that the nature of change can itself change. Suppose there are two types of systems, those oriented toward an outcome (convergent) and those with several degrees of freedom (path dependent). It is clear that path-dependent systems can at a given moment become convergent–and it can even happen that a convergent system becomes path dependent if, for example, it is subjected to a strong enough exogenous shock. This means that "original sin" accounts such as those discussed at the beginning of this paper fail because they ignore these second-order processes of change. This sounds much less involved when grounded in the experience of the Andean countries. Was the "third wave" of democratization a genuine wave (toward a convergent system), or rather a turn in a cycle of regime change related to the form of insertion of these countries in the global market? Did these democracies depend for their existence on national assets or on international constraints? And what kind of counterfactual could be built by removing one or another constraint?

The intuition that there is an internal dynamic of the system, different from its observed path, might be important in two senses. First, the basic concepts of equilibrium in economics and game theory (e.g., Nash equilibrium) coincide tautologically with the notion of stability. Once the system arrives at a state, the task of the analyst is to explain why it is actually an equilibrium given the nature of the agents. By definition, there is no out-of-equilibrium outcome (Eggertsson [1990] notes this difficulty). However, with the simple idea of exogenous shocks and perturbations, there is the theoretical possibility of stability without equilibrium–when a system is continually perturbed up to a critical point–and thus of self-organizing structures in far-from-equilibrium situations. Political systems in states in crisis–especially those that show some kind of stable institutional life–would correspond to this description. Second, very different trajectories exposed to similar perturbations may have the same end point. The Andean countries, for example, show similar patterns of political problems, despite their very different traditions–the same can be said about South and Central European nations. Convergence takes place because of many factors, one of the most neglected being that agents learn about what is happening in neighboring nations. Once again the technical domain–innovation–is a driving force for learning: in the collapse of the Central and East European centrally planned systems, the "round table" technique used for the first time in Poland was later applied successfully in very different contexts; in the Andean countries the wave of antipolítica was propagated through the explicit appropriation of the motifs of electorally successful leaders. Space still counts–perhaps it counts more than ever. All this highlights that historicism does not entail path dependency. Systems can be absorbent (a single end state, as in neoclassical models), periodic (cyclical) or neither (path dependent)–and large-scale historical changes can imply "second-order change," passing from one type of system to another.
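The path-dependent case in this typology can be illustrated with a standard toy model (my illustration, not the author's): a Polya urn, in which each draw reinforces the color drawn, so early accidents are locked in and different runs settle at persistently different long-run states–whereas an absorbent system would forget its starting point.

```python
# Illustrative sketch of path dependence via a Polya urn (a textbook toy
# model; the connection to the typology above is my own gloss).
# "Success breeds success": each draw adds a ball of the drawn color,
# so different runs converge to persistently different long-run shares.
import random

def polya_share(steps=5000, seed=0):
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1    # a red draw makes future red draws more likely
        else:
            blue += 1
    return red / (red + blue)

shares = [round(polya_share(seed=s), 2) for s in range(8)]
print("long-run red shares across eight runs:", shares)
# The runs do not converge to a common value: history matters.
```

An absorbent system, by contrast, would print nearly identical shares for every seed; here the "outcome" is itself a product of the trajectory.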

This takes us back to the "big effects"/"small causes" motive. If we take path dependency seriously, its meaning is very near the concept of "sensitivity to initial conditions." When dynamic systems show sensitivity to initial conditions, given the trajectories of two distinct "particles," very small differences in the starting point can entail enormous differences in the outcome. In historical analysis, however, this type of argument is riddled with difficulties. How can one determine the "starting point" of the trajectory? Seldom is this question posed explicitly. Will one accept the theological notion that all future generations are determined by the (original) sin(s) of their great-great-grandfathers? Putnam (1994) has shown an iron consistency in this regard, and goes as far back as possible to find the reasons for differential outcomes–a real and explicit concern for "time zero" in the historical trajectory. Though his effort is basically flawed (Putzel 1997; Tarrow 1996), it clearly and honestly reveals that path dependency and sensitivity to initial conditions are kindred notions. However, if original-sin determinism is introduced into the analytical framework, in what sense do institutions (or culture, for that matter) count? Institutions would be only the demiurge that expresses the (perhaps tiny) differences at time zero.
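What sensitivity to initial conditions means formally can be seen in the standard textbook example, the chaotic logistic map (my illustration, not an example from the text): two trajectories starting a billionth apart end up macroscopically different.

```python
# Sensitivity to initial conditions in the chaotic logistic map
# (a standard dynamical-systems illustration; r=4.0 is the classic
# chaotic parameter value, chosen here as an assumption).

def gap_history(x0, eps=1e-9, r=4.0, steps=60):
    """Iterate two nearby starting points and record their distance."""
    x, y = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        x, y = r * x * (1 - x), r * y * (1 - y)
        gaps.append(abs(x - y))
    return gaps

gaps = gap_history(0.3)
print(f"initial gap 1e-09, largest gap over 60 steps: {max(gaps):.3f}")
```

The analytical difficulty the paragraph points to is visible here: to "explain" the final state one would need the starting point to nine decimal places–precisely the kind of knowledge about "time zero" that historical analysis almost never has.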

All this has implications for the very foundations of an evolutionary perspective. In “abnormal” (stable) situations, institutions constitute a general framework and outcomes can be seen as the result of the interactions of individuals.23 The “micromotives and macrobehavior” mechanism works well. Thus, stable worlds are transparent in yet another sense: given the rules, outcomes can be studied as if they were the result of aggregated microdecisions. But vulnerable and unstable (turbulent) systems are not transparent, because agents are “shooting at a moving target”–the environment changes faster than the system and the boundaries between the system and the environment are ill defined.

5. Conclusions

I very much agree with Rubinstein's (2000) assertion that the abstract analysis of the interaction of rational agents has independent value in itself. However, as soon as a theory, or threads of it, is used as a tool for empirical statements about the state of the world, the categories of the theory should be carefully evaluated to see if they capture the basic content of the system under study. What are the conditions for institutions to be the basic "system of incentives" in a social world?

NI literature applied to Latin America has plainly neglected this question. When systems of incentives are taken for granted, the interests of agents can be fixed deductively. The basic question then becomes cui bono? (Elster 1997): "Who is the beneficiary of this rule of the game?" Modernizers are, and should be, interested in liberalization (Diamond et al. 1997), clientelists in statism, and so on. If what I have been saying about "mixed types" is true, the cui bono question, however attractive, is very nearly the death of good empirical research for countries like those in the Andean region.

A focus on learning and innovation would help to alter the analytical landscape (interests, by the way, are also learned and discovered) and take society back in. It is also a good antidote against narrow (economic, cultural, original-sin) determinism. Innovation and learning are embedded in sequences of political-socioeconomic-military structures and events. I believe these Hirschmanian sequences give a better understanding than explaining politics through economics, explaining politics through politics, or explaining politics through time-zero events.

Contrary to Schumpeter's framework for economics, in politics exogenous shocks are also of analytic interest. I have used the expression "exogenous shocks" here to mean neither "international" nor "uncalled for," but "part of the environment, not of the system." Political change cannot be understood without taking into account the open-ended nature of political systems (Hall) and the exogenous shocks to which they are exposed, especially in turbulent settings. Exogenous shocks also stress the contingent and nontransparent nature of institutional signaling, a fundamental aspect of empirical political conflict in the Andean countries.


Akerlof, G. 1970. "The market for 'lemons': Quality uncertainty and the market mechanism." Quarterly Journal of Economics 84, no. 3:488-500.

Ames, Barry. 1999. “Approaches to the study of institutions in Latin American politics.” Latin American Research Review 34, no .1:221-27.

Aoki, Masahiko. 2001. Toward a Comparative Institutional Analysis. Cambridge, Mass.: MIT Press.

Axelrod, Robert. 1986. La evolución de la cooperación. Madrid: Alianza.

Buchanan, James. 1986. Liberty, Market and State: Political Economy in the 1980s. New York: New York University Press.

Collier, Paul, and Anke Hoeffler. 1998. "On economic causes of civil war." Oxford Economic Papers 50:563-73.

Diamond, Larry, Marc Plattner, Yun-han Chu, and Hung-mao Tien, eds. 1997. Consolidating the Third Wave Democracies. Baltimore: Johns Hopkins University Press.

Eckstein, Harry. 2000. “Unfinished business.” Comparative Political Studies 33 (6/7):505-35.

Eggertsson, Thráinn. 1990. Economic Behavior and Institutions. Cambridge, England; New York: Cambridge University Press.

Elster, Jon. 1997. Egonomics. Barcelona: Gedisa.

______. 1992. El cambio tecnológico. Investigaciones sobre la racionalidad y la transformación social. Barcelona: Gedisa.

______. 1984. Ulises y las sirenas. Estudios sobre racionalidad e irracionalidad. México: Fondo de Cultura Económica.

Fleischer, David. 1996. "Las consecuencias del sistema electoral brasileño: partidos políticos, poder legislativo y gobernabilidad." Cuadernos de Capel-IIDH, no. 39.

Geddes, Barbara. 1994. Politician's Dilemma: Building State Capacity in Latin America. Berkeley: University of California Press.

Gutiérrez, Francisco. 2001. “Organized crime and the political system in Colombia.”

Hall, Peter. 1997. “The role of interests, institutions and ideas in the comparative political economy of the industrialized nations.” Pp. 174-207 in Comparative Politics. Rationality, Culture and Structure, edited by Mark Lichbach and Alan Zuckerman. Cambridge, England; New York: Cambridge University Press.

______. 1992. "The movement from Keynesianism to monetarism: Institutional analysis and British economic policy in the 1970s." Pp. 90-113 in Structuring Politics: Historical Institutionalism in Comparative Analysis, edited by Sven Steinmo, Kathleen Thelen, and Frank Longstreth. Cambridge, England; New York: Cambridge University Press.

Harriss, John, Janet Hunter, and Colin Lewis, eds. 1995. The New Institutional Economics and Third World Development. London; New York: Routledge.

Hedström, Peter, and Richard Swedberg. 1998. Social Mechanisms: An Analytic Approach to Social Theory. Cambridge, England; New York: Cambridge University Press.

Hirschman, Albert. 1970. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, Mass: Harvard University Press.

Kreps, David. 1990. A Course in Microeconomic Theory. Princeton, NJ: Princeton University Press.

Lakatos, Imre. 1970. “Falsification and the methodology of scientific research programmes.” Pp. 91-195 in Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave. Cambridge, England: Cambridge University Press.

Mainwaring, Scott. 1999. Rethinking Party Systems in the Third Wave of Democratization. The Case of Brazil. Stanford, Calif.: Stanford University Press.

McAdam, Doug; John McCarthy; and Mayer Zald, eds. 1996. Comparative Perspectives on Social Movements. Political Opportunities, Mobilizing Structures and Cultural Framings. Cambridge, England; New York: Cambridge University Press.

McCarthy, John. 1996. “Constraints and opportunities in adopting, adapting and inventing.” Pp. 141-52 in Comparative Perspectives on Social Movements. Political Opportunities, Mobilizing Structures and Cultural Framings, edited by Doug McAdam, John McCarthy, and Mayer Zald. Cambridge, England; New York: Cambridge University Press.

Nielson, Daniel, and Matthew Shugart. 1999. "Constitutional change in Colombia: Policy adjustment through institutional reform." Comparative Political Studies 32, no. 3:313-42.

North, Douglass, and Robert Thomas. 1976. The Rise of the Western World: A New Economic History. Cambridge, England: Cambridge University Press.

Putnam, Robert, Robert Leonardi, and Raffaella Nanetti. 1994. Making Democracy Work: Civic Traditions in Modern Italy. Princeton, NJ: Princeton University Press.

Putzel, James. 1997. "Accounting for the dark side of social capital." Journal of International Development 9, no. 7:939-49.

Rubinstein, Ariel. 2000. Economics and Language: Five Essays. New York: Cambridge University Press.

Schelling, Thomas. 1978. Micromotives and Macrobehavior. New York: Norton.

Schumpeter, Joseph. 1934. The Theory of Economic Development: An Inquiry into Profits, Capital, Credit, Interest, and the Business Cycle. Cambridge, Mass.: Harvard University Press.

______. 1939. Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process. New York; London: McGraw-Hill.

Tarrow, Sidney. 1996. "Making social science work across space and time: A critical reflection on Robert Putnam's Making Democracy Work." American Political Science Review 90, no. 2.

Tsebelis, George. 1990. Nested Games. Rational Choice in Comparative Politics. Berkeley: University of California Press.

Young, Peyton. 2001. Individual Strategy and Social Structure. An Evolutionary Theory of Institutions. Princeton, NJ: Princeton University Press.


  1. Researcher, Instituto de Estudios Políticos y Relaciones Internacionales, Universidad Nacional de Colombia. This paper was sponsored by the London School of Economics – DFID Crisis States Program. I wish to thank James Putzel, Eric Hershberg, Juanita Villaveces, Juan Camilo Cárdenas and Paul Price for their extremely valuable input.
  2. See, for example, the excellent review by Ames (1999).
  3. I can develop conventions to protect myself from my own weakness of will; see Elster (1984).
  4. I would note, however, that Northian and Putnamian tales have in common being narratives of an original sin, a pretty strong symptom of their obliteration of power-asymmetry motives. This aside means that I do NOT take sensitivity to initial conditions as given, a subject to which I will return.
  5. I hope to do so by not-too-conventional means.
  6. One among many possible examples: during the first three years of the Andrés Pastrana (1998-2002) administration in Colombia, five important tax reforms took place. Even the rules that affect property, that apparently cozy haven of stability, are flexible.
  7. And indeed there are simply more of them than stable ones.
  8. 1982 was the year in which the main guerrilla force (Fuerzas Armadas Revolucionarias de Colombia-FARC) declared itself the People's Army. Collier and Hoeffler (1998) give 1984 as the initial date of the war in Colombia.
  9. An ominous symptom of this fact is that insurance companies withdrew coverage for Colombian mayors.
  10. Violence affects the political parties differentially. Controlling for size, proportionally more left-wing than traditional-party members are killed; but all suffer massive bloodletting.
  11. For example, Aoki's (2001) interesting and comprehensive work doesn't treat the subject.
  12. This mismatch between national institutional frameworks and patterns of power distribution appeared long before the present wave of globalization; indeed, it is at the heart of the “rise of the Western world.”
  13. Certainly, this is a motive explicitly posed by the Crisis States Program.
  14. Eckstein, though, seems to take for granted that “big” objects always have “big” causes, a point of view that can´t be shared.
  15. And, once again, a meta-institutional problem arises here: the constant change in the rules of the game–and not one set of rules valid at a given period–can decisively shape the nature of political conflict. See the excellent Fleischer (1996) on the successive waves of electoral reformism in authoritarian Brazil.
  16. McCarthy (1996) comes close to this idea, but then drifts away.
  17. It is important to take into account that for Schumpeter these expressions don't correspond to the national-international dichotomy, but rather to "internal or external to the economic system." I adopt this usage, replacing "economic" with "political."
  18. On the other hand, endogenous innovation will not be successful unless it is robust relative to external shocks, especially if these are strong and repeated.
  19. Here the existence of transnational forces is critical: agents can short-circuit institutions by resorting to transnational coalitions.
  20. Unlike the majority of countries in Latin America, parties in Colombia were institutionalized long before the introduction of TV. In countries in which both came more or less together, like Brazil, the impact must have been stronger (Mainwaring 1999).
  21. As soon as the majority of voters starts to entertain the notion that individuals are better than "machines," stringent statutes that give an edge to parties over ambitious politicians don't have a chance to survive. Ecuador is a good example of this.
  22. That is, the assumption of the exogenous character of institutions can be wrong.
  23. Of course groups and organizations enter the analysis, but they in turn are a result of the interaction between individuals.