The History of Lessons: Power and Rule in Imperial Formations

Emmanuelle Saada is a sociologist and historian whose work on the French Empire bears on colonial legal categories and their articulation with race and citizenship. She currently teaches at the Ecole des Hautes Etudes en Sciences Sociales in Paris. Her book Les Enfants de la colonie: les métis de l’Empire français entre sujétion et citoyenneté is forthcoming from La Découverte.

American Colonial Empire: The Limit of Power’s Reach

Julian Go is an assistant professor of sociology at the University of Illinois and was recently an Academy Scholar at the Harvard Academy for International and Area Studies. He is the co-editor of The American Colonial State in the Philippines: Global Perspectives (Duke University Press, 2003) and the author of various articles on the United States colonial empire in the early twentieth century.

Modernizing Colonialism and the Limits of Empire

Frederick Cooper is professor of history at New York University. His latest work has focused on 20th century African history, theoretical and historical issues in the study of colonial societies, and the relationship of social sciences to decolonization. In addition to critical essays on the concepts of “identity” and “globalization,” his recent publications include Africa Since 1940: The Past of the Present (2002) and Decolonization and African Society: The Labor Question in French and British Africa (1996).

Counter-Insurgency on the Cheap

Alex de Waal is a fellow of the Global Equity Initiative at Harvard University, and programme director at the Social Science Research Council in New York. He is the author of Famine that Kills: Darfur, Sudan (revised edition, Oxford University Press 2005) and, jointly with Julie Flint, of Darfur: A Short History of a Long War (forthcoming, Zed Press, September 2005).

Chasing Ghosts: Alex de Waal on the rise and fall of militant Islam in the Horn of Africa

Alex de Waal is a fellow of the Global Equity Initiative at Harvard University, and programme director at the Social Science Research Council in New York. He is the author of Famine that Kills: Darfur, Sudan (revised edition, Oxford University Press 2005) and, jointly with Julie Flint, of Darfur: A Short History of a Long War (forthcoming, Zed Press, September 2005).

Three of the suspects in the attempted bombings in London on 21 July were born in the Horn of Africa. One, Yasin Hassan Omar, was born in Somalia; a second, Osman Hussain, in Ethiopia; and a third, Muktar Said Ibrahim, in Eritrea. Ten years ago, when Osama bin Laden lived in Khartoum, the Horn of Africa could plausibly have been described as both the headquarters and the front line of international jihadism. American analysts have argued that Africa’s porous borders and ineffectual policing make the continent attractive to groups like al-Qaida, and the Pentagon has two major anti-terrorist operations in sub-Saharan Africa: a base in the tiny Red Sea state of Djibouti (sandwiched between Somalia and Eritrea) monitors the movements of suspected terrorists, and the Pan Sahel Initiative is intended to hunt down jihadists in the Sahara. But they are chasing ghosts, mopping up the remnants of a jihad that had already failed in the late 1990s. It’s unlikely that the attempted bombings alleged to have been committed by Yasin Hassan Omar, Osman Hussain and Muktar Said Ibrahim can be traced back to Islamism in their respective homelands. It is much more probable that their jihadism belongs to a new militant manifestation nurtured in European cities over the last few years.

The rise and wane of political Islam in the Horn has left deep imprints on the region and on jihadism itself. In 1990-91, as the anti-Saddam coalition triumphed in Kuwait, Islamists took solace from the collapse of three of the most disliked secular dictatorships in Africa: Hissène Habré in Chad in December 1990, Siad Barre in Somalia in January 1991 and Mengistu Haile Mariam in Ethiopia in May 1991 (precipitating Eritrea’s secession). In the networks of the Islamist international, Sudan claimed credit for all this. Khartoum’s new radical Islamist government had thrown open its doors to militants from across the Muslim world. They had counted on Islamist revolutions sweeping the Arab world in 1990, after Saddam Hussein’s invasion of Kuwait had dramatically shown up the rottenness and dependency of the Gulf monarchies as they turned to America to save them. When this didn’t happen, the jihadists instead congregated in Khartoum, where Islamists had staged a coup d’état in 1989, and their sheikh – Hassan al-Turabi – had created a Popular Arab and Islamic Congress to rival the conservative Organisation of the Islamic Conference and the moribund Arab League. The PAIC meetings attracted people as disparate as the old leftist Palestinian George Habash, members of Hamas, Algerian jihadists and Iraqi Baathists – not to mention Osama bin Laden and Ayman al-Zawahiri.

Then, in December 1992, President Bush dispatched the US army to Somalia on what he described as a humanitarian mission. The Islamists didn’t believe that for a moment: for them it was another invasion of a Muslim country. But Operation Restore Hope made them realise the importance of the African front in the struggle for a new caliphate. Bin Laden rented a villa in Khartoum, bought up several businesses and opened training camps. And Abu Ubaidah al-Banshiri, an Egyptian and a senior commander of al-Zawahiri’s Tanzim al-Jihad, was sent to establish an African Muslim army, beginning in Somalia.

Years later, after 11 September, when the litany of al-Qaida terrorist outrages was compiled, the Black Hawk Down episode in Mogadishu in October 1993 was included. It shouldn’t have been. Al-Banshiri’s deputy, Mohammed Atef, was in town, but as a student of the Somali method of urban insurgency, not as a planner or instructor. He cheered on Aidid’s militia, but they didn’t need any help from him. (Atef later became al-Qaida’s military commander in Afghanistan.) Al-Banshiri was said to have been impressed by the bravery and military prowess of the Somalis, but saddened by their factionalisation and unwillingness to acknowledge that Islam offered an alternative future for the country. Only the small outpost in Luuq, set up by Somalia’s Islamist party, al-Itihaad al-Islaamiya (‘The Islamic Union’), showed potential, and there the jihadists made their headquarters.

Luuq sits on a narrow neck of land between two horseshoe bends of the Jubba river. It’s an old trading centre, founded in the Middle Ages by the first Muslim merchants in the Horn. The single gate to the town is flanked by steep riverbanks. The Jubba flows south from the Ethiopian highlands, a ribbon of blue and green in a flat reddish plain. Before Somalia’s collapse in the late 1980s, the floodplains were farmed by the Gabwing, a small clan who had the misfortune to live next to the home district of the president, Siad Barre, whose Marehan clansmen greedily eyed the fertile alluvial land. In 1988, when the country was on the brink of civil war, I asked the chief of the Gabwing in Luuq about the workings of his customary court. Waiting until we were out of earshot of any government functionaries, he explained: ‘No one comes to my court now. It is total war.’ Week by week, his villagers were losing their land at gunpoint to well-connected Marehan merchants and army officers.

Five years later, in the depths of the civil war, I returned to the Jubba Valley. Most of the Gabwing villages along the banks of the river had vanished. Asked about the Gabwing, the Marehan warlords who now controlled the region laughed and denied that a people with such a name had ever lived there. But in Luuq it was different. Here, a recently arrived band of nervous young men was trying to run a local administration based on Koranic principles, without reference to clan distinction. Most were students; a few were teachers and professionals who had lived abroad. A few poorly armed militiamen guarded the town gate. Gabwing farmers had gathered on the land protected by the meanders of the river and were farming maize and tomatoes.

In the chaos after Siad Barre was overthrown in January 1991, the Islamists had made several attempts to gain a foothold in Somalia. But the country’s major clans were heavily armed and determinedly independent and none would cede any territory. So the cadres of al-Itihaad targeted the leftover places, those belonging to the forgotten minorities who were the main victims in the war. The Itihaad followers were a mixed lot: some idealistic students, some criminals and mercenaries, a couple of businessmen with links to Arab countries, militiamen aggrieved by land seizures and exclusion from local power. Their first base was the seaport of Merca, just south of Mogadishu, where the local Hamar people were traders with a long urban history and no militia. They welcomed the earnest young men who came promising honesty, equality and respect for women. When the US marines of Operation Restore Hope fanned out from Mogadishu in January 1993, however, the Islamists thought it wise to leave Merca, and headed for Luuq. The Americans wouldn’t go to the remote Jubba Valley.

For al-Banshiri, Luuq’s attractions included its proximity to Ethiopia and Kenya, where al-Qaida’s next operations were planned. Al-Banshiri drowned in a ferry accident on Lake Victoria in May 1996, while trying to set up a Ugandan battalion, but not before he had established a cell in Nairobi. Run by a Lebanese American and a man from the Comoros Islands, and made up mainly of disaffected Kenyan Muslim youths, this cell went on to bomb the US embassies in Nairobi and Dar es Salaam in August 1998, killing 225 people. The Luuq outpost also channelled arms to Somali insurgents in south-eastern Ethiopia who were fighting under the banner of the Ogaden National Liberation Front, piggybacking the jihadist agenda on the age-old grievance of Muslim lowlanders against the Christian highland domination of the country.

There are as many Muslims in Ethiopia and Eritrea as there are Christians, and they have been there since the beginnings of Islam. One of Islam’s holiest cities, Harar, lies on the eastern slopes of the Ethiopian highlands – though it has never been the site of religious conflict. There are also substantial Muslim minorities in the other East African countries, populations not discovered by Islamic intellectuals until the 1970s. Further to the west, Islam has dominated the Sahel and savanna for centuries, and extremist Islamist political groups have recently emerged in northern Nigeria. All these Muslims are overwhelmingly Sufi, and their version of Islam incorporates mysticism and the cult of saints. Puritan evangelists and political Islamists have long tried to make African Muslims more orthodox, pressing for adherence to Islamic law and female dress codes and trying to end such ‘un-Islamic’ practices as drinking alcohol and building tombs.

Two months after al-Banshiri’s death, assassins trained in Luuq loitered outside the central post office in Addis Ababa. Their target was Ethiopia’s minister of transport and communications, Abdul Mejid Hussein. An ethnic Somali and veteran of Ethiopia’s left-leaning student movement, Hussein symbolised the new Ethiopian government’s commitment to ethnic and religious pluralism. He was also leading painstaking efforts to resolve the extraordinarily complicated internal conflicts that were preventing the ethnic Somali region of Ethiopia from achieving stability. Leaving his office next to the post office, Hussein took six bullets but survived. One of his bodyguards was killed.

Ethiopia’s retribution was swift. In August 1996, helicopters supported by armoured columns crossed the border and attacked Luuq. The overwhelmed al-Itihaad militia ran for it, but 18 foreign al-Qaida members – suspected to be Pakistanis and Egyptians – fought to the last man, who drowned himself in the Jubba river rather than surrender when his ammunition ran out. The Ethiopians captured thousands of documents, which they handed over to the Americans, although the Americans, short of Arabic translators, didn’t begin to examine them for 18 months. Ethiopia’s chief of staff warned the Somali factions that if there was another terrorist attack traced to them, he would not hesitate to go as far as Mogadishu. Further smaller raids drove the point home.

Since the Ethiopian attacks, al-Qaida has had no military base in the Horn. Chastened, al-Itihaad dismantled its militia and its attempt to build an Islamic micro-state, focusing instead on setting up Islamic law courts and schools, and buying influence with the main clan factions. A civil Islamism is alive and well in Somalia. The Islamists run most of the schools and clinics, and Islamic law is enforced in many courts. Women cover themselves far more than their mothers ever did.

For a while Somali jihadists continued to take whatever chances came their way, acquiring arms from Eritrea during its bloody border war with Ethiopia, for example, and shipping them on to their affiliates inside Ethiopia. Other radical Islamist groups have cropped up on the margins of Somalia’s factional politics, occasionally sparking rumours of secret training camps for international terrorists. The Ethiopians use these stories to justify their continuing interventions. Such lurid claims tend to become less credible the closer one gets to them. After 9/11, the State Department, bemused about how to handle this stateless territory, stumbled on the simplest and most effective measure: it published the names of those it suspected of al-Qaida links. Every political leader in Somalia is also a businessman, and all business there involves financial and trade links with East Africa, the Arabian Gulf, Europe and America, so even a hint that assets might be frozen or money transfers blocked is enough to cause any named suspect to be shunned.

Although jihadism has been reduced to the political margins, its tiny number of adherents still pose a danger. An al-Qaida cell, operating from Mogadishu, bombed Mombasa’s Paradise Hotel and shot at an Israeli airliner in November 2002. Several cell members – including a Tanzanian, a Sudanese and a Yemeni – are still at large. Aden Hashi Ayro, a former al-Itihaad fighter who trained in Afghanistan, runs a nameless network that has no political programme and makes no proselytisation efforts: he is interested only in killing. His thugs murdered four foreign aid workers, including an elderly British couple who ran a school. Raids and abductions sponsored by the Ethiopians and Americans continue, although they are clumsy, brutal, often net the wrong people and serve only to add to Mogadishu’s lawlessness.

While recuperating from his bullet wounds, Hussein, together with his wife, Anab, was invited by the Egyptian president, Hosni Mubarak, for a Nile cruise. The two men had been the target of another assassination attempt a year earlier. In June 1995, Mubarak had taken the precaution of flying his armour-plated car to Addis Ababa for the twenty-minute drive from the airport to the conference hall where the summit of the Organisation of African Unity was to be held. Halfway along one of the city’s boulevards, gunmen leapt from the crowds lining the streets and opened fire. A block downhill, a truck filled with explosives burst through. Mubarak had been accompanied from the airport by Hussein, who didn’t know that the car was bullet-proof and threw himself to the floor. He and Mubarak clutched each other as the president’s driver spun the car round and sped back to the airport.

News of the assassination attempt reached Ethiopia’s chief of security, Kinfe Gebre Medhin, within a minute. He was on the steps of Africa Hall, inside the United Nations compound, awaiting the procession of heads of state. Instantly he confronted his Sudanese counterpart, Nafie Ali Nafie. Gebre Medhin knew that Khartoum was supporting terrorist cells. In Sudan itself the al-Qaida network included al-Gamaa al-Islamiya (‘The Society of Islam’), led by Sheikh Omar Abdel Rahman, later convicted of conspiracy for his part in the 1993 World Trade Center bombing, whose operatives were intent on murdering Mubarak. Other sponsored groups across the Horn included Eritrea Jihad, al-Itihaad and, bizarrely, the Lord’s Resistance Army in northern Uganda, which takes inspiration from self-proclaimed Christian prophets. Nafie had assured the Ethiopian intelligence chief that the summit would pass without incident. It’s still not clear whether he was lying, or had been kept in the dark. At any rate, Gebre Medhin had his revenge twice over. Acting against orders, he led the commando squad that cornered the terrorist cell in an Addis house. And he also commanded a tank unit that crossed the Sudanese border close to the Blue Nile river, defeated the army garrisons there, and then handed over the territory to the rebel Sudan People’s Liberation Army.

The Eritreans thought Gebre Medhin had been naive to believe Sudanese assurances. As early as January 1994, Eritrea’s president, Isseyas Afewerki, had declared that since Khartoum was backing jihadists from Eritrea’s Muslim population who were planting landmines on Eritrean roads, he would respond by supporting the Sudanese opposition. ‘President Omer al-Bashir will be overthrown within a year,’ Afewerki announced. Bashir is still in power, but Eritrean-backed guerrillas from the Beja tribe, who live on both sides of the Eritrea-Sudan frontier, soon sealed off the border and prevented further Sudanese infiltration. They are still insurgent in Sudan, threatening another war on its eastern flank.

While militant Islamism in Somalia was a local affair, buffeted by the currents of civil war, in Sudan it enjoyed the backing of the most powerful individuals in the state. Hassan al-Turabi, a lawyer and philosopher, was the visionary, with a grand plan for an Islamic state in Sudan and Islamist revolutions in neighbouring states. His student cadres spread across Sudan, setting up schools, clinics and micro-credit schemes, seeking practical solutions to the pressing problems faced by ordinary Sudanese. Most of these Islamist social projects didn’t work, but at least they tried. Turabi’s security cabal ran clandestine training camps, sometimes under the cover of Islamic philanthropic agencies, and smuggled al-Qaida operatives onto Sudan Airways flights bound for destinations across Africa and the Middle East. Khartoum airport even had a terrorist protocol unit responsible for meeting and greeting these men when they returned, ensuring that they bypassed customs and immigration and went straight to safe houses. A terrorist cell can survive without state sponsorship, but its capability is infinitely greater if it has a government to facilitate its every move.

In the three years following the assassination attempt on Mubarak, a military coalition of Eritrea, Ethiopia and Uganda, with the discreet endorsement of the US government, brought Sudanese jihadism to a halt. They launched an undeclared regional war in which all three states sent troops across Sudan’s border. They also co-ordinated action with the new Rwandan government, which was facing a similar problem from Congo. The Sudanese president repeatedly protested that his country had been invaded. The invaders denied it, crediting their victories to Sudanese rebels. With this knife at its throat, Sudan rapidly closed down militant bases, expelled bin Laden, and reined in its jihadist security agencies. The world saw a raft of UN and American sanctions – a Clintonite version of regime change – and, a few days after the East African embassy bombings, Clinton anticipated the Bush doctrine of pre-emptive war by precisely but mistakenly destroying Khartoum’s al-Shifa pharmaceutical factory with cruise missiles. A soil sample containing traces of a precursor to the deadly chemical agent VX was evidence that something illegal had been going on there, but most analysts agree that Sudan’s chemical weapons were stored elsewhere. Sudanese diplomats now turned the tables on the US, threatening to demand a UN investigation if the US raised Sudan at the Security Council. The firestorm in the night sky had been frightening, but what really worried Sudan’s security chiefs were the foreign battalions that secretly threatened to capture major towns. As the Islamist project began to fall apart, President Bashir turned on his mentor, al-Turabi, and eventually jailed him. Bashir and his powerful deputy, Ali Osman Taha, gambled that if they made peace with the Sudan People’s Liberation Army in the south, then at least they would stay in power. That improbable scenario came to pass earlier this year, and on 9 July the SPLA commander in chief, John Garang, flew to Khartoum to be sworn in as vice president. Garang died in a helicopter crash only 22 days later, but his successor, Salva Kiir, is likely to consummate the peace deal. But even if Sudan returns to war, it will not be a renewed jihad aimed at founding an Islamic state but a nasty struggle for power.

Throughout East Africa, the fortunes of political Islamism rose and fell in the 1990s. By the time of 9/11, it was already into its endgame. The attacks on the World Trade Center and Pentagon occurred just two days before the UN Security Council was due to debate lifting the sanctions on Khartoum which had been imposed in the wake of the Mubarak assassination attempt. The US had decided to abstain, having concluded that Sudan was no longer sponsoring terrorism. But the Global War on Terror made Bashir understandably nervous: his regime’s history made it a soft target. He offered more counter-terrorist co-operation. His security chief, Salah Gosh, widely suspected of command responsibility for the extreme violence in Darfur and elsewhere, has visited both London and Washington DC to discuss Khartoum’s extensive files on the terrorists it once hosted. Bashir, Gosh, Nafie and others live on their nerves: they know they are implicated, and will sacrifice anything except themselves to stay in power. Last month, al-Qaida added the Sudan government to its list of targets, accusing it of selling out.

Khartoum had been saved from certain military defeat when its adversaries fell out among themselves. In May 1998, Eritrea and Ethiopia went to war over their disputed border, and a couple of months later Ugandan and Rwandan troops fought each other in the occupied Congolese city of Kisangani. As the attempt to found an Islamic state was running into the sand, so was the rival left-wing project of revolutionary militarism. The guerrillas-turned-governments in the four ‘frontline states’ were, like the Sudanese leaders, concerned only with staying in power.

This parallel is more than a neat coincidence. The radical Islamists and their regional enemies shared ideological fervour and organisational discipline. Both believed that enduring problems of state and society could be overcome by revolutionary change; and, as this failed, both reverted to simple power politics. Like other political creeds, jihadist Islamism is shaped by the contours of local politics – and sometimes it vanishes into the landscape.

The demise of grand ideology in the Horn did not mean the end of violence or militancy. Various ideologies have emerged from the ruins of the Islamic state project. Most are regional or tribal. In Darfur and Chad, Arab supremacism took over. Leaders of the infamous Janjawiid militia adhere to the philosophy of Qoreish, according to which the lineal descendants of the Prophet Muhammad and his Qoreish tribe are entitled to rule Muslim lands. This supposedly gives the Arabic-speaking Saharan Bedouin of the Juhayna confederation the right to dominate all the land between the Nile and Lake Chad. While US Special Forces chase a handful of jihadists in the mountains of the central Sahara, they have overlooked this vicious and archaic ideology, which has wreaked far more havoc just a few miles to the south.

A century ago, the first fundamentalists saw their task as challenging the imperial powers and their modern rationalism. Hassan al-Banna, the Egyptian schoolteacher who invented Islamism as a socio-political movement in the 1920s, saw his Muslim Brothers as a party comparable to the Fascists and Communists he contended with. For the next generation, the struggle was with secular pan-Arabism, Communism and, in Africa, leftist liberation movements. For Muslims in the Horn, 9/11 came at a moment when the Islamist project had been overtaken by the politics of exhaustion. By declaring his War on Terror, President Bush provided a convenient new enemy, but resisting America is so remote from the real problems faced by ordinary Muslims as to be meaningful only to a handful of misfits and criminals. Luuq was a real and courageous attempt to build an Islamic community in Somalia’s ruins, though it was fatally hijacked by al-Qaida. Ayro’s murders, by contrast, are utterly meaningless.

Today, East African Muslims are more likely to be radicalised in Finsbury Park or Brixton than in Khartoum or Luuq. Personal bitterness, a search for affirmation in membership of small exclusive groups, and the endless news stories about Muslims being victimised in Palestine, Iraq and Europe are all more significant influences on young Muslims in England than al-Itihaad or Eritrea Jihad. What we have learned so far about Yasin Hassan Omar, Osman Hussain and Muktar Said Ibrahim suggests that this is their story.

Review of Gerard Prunier, Darfur: The Ambiguous Genocide, Hurst and Co.

Alex de Waal is a fellow of the Global Equity Initiative at Harvard University, and programme director at the Social Science Research Council in New York. He is the author of Famine that Kills: Darfur, Sudan (revised edition, Oxford University Press 2005) and, jointly with Julie Flint, of Darfur: A Short History of a Long War (forthcoming, Zed Press, September 2005).

Ahmat Acyl Aghbash is known to few, and then mostly for his grisly end—he stepped backwards into the spinning propellers of his Cessna aeroplane in 1982. His last words can only be guessed. His legacy is the Janjawiid militia, now infamous for genocidal atrocity in Darfur.

The plane was a gift from Libya’s Colonel Muammar Gaddafi and for ten years Ahmat Acyl was both a commander in Libya’s multinational pan-Sahelian ‘Islamic Legion’ and leader of a Chadian Arab militia known as the Volcano Brigade. Today, Acyl’s fighters from the Salamat of south-central Chad and the Sudanese intermediaries who smuggled their weapons can stake a good claim to be the original Janjawiid. Acyl’s name crops up in most histories of the long-running wars between Libya, Chad and Sudan. His supplier’s name doesn’t. It was Sheikh Hilal Mohamed Abdalla, whose Um Jalul clan’s yearly migration routes took them from the pastures on the edge of the Libyan desert in northern Darfur to the upper reaches of the Salamat River where it crosses from Sudan into Chad. Renowned for their traditionalism, vast herds of camels and the huge reach of their semi-nomadism, the Um Jalul were a logical intermediary for Libya’s gun-running. Their encounter with the Salamat militia, first social, then commercial (the Jalul sold camels) and finally military, forged the Janjawiid, which is now headed by the Sheikh’s younger son Musa Hilal.

Acyl’s gifts to Darfur also included an Arab supremacist ideology which holds that the lineal descendants of the Prophet Mohamed and his Qoreish tribe are entitled to rule Muslim lands. Specifically, the Juhayna Arabs, a group that includes both Salamat and Um Jalul, should control the territories from the Nile to Lake Chad. Darfur, an independent sultanate until just under ninety years ago, lies in the centre of this land, its massif providing both the most fertile land and the headwaters of the Salamat river. The Qoreishi ideology, mobilized through a shadowy group known as the ‘Arab Alliance’ or ‘Arab Gathering’, motivates some of those involved in the vicious war to control this land. Understanding the hideous violence in Darfur demands an understanding of complex local histories that is possessed by few Sudanese and fewer foreigners. Generally relegated to a footnote of Sudanese history, as Gerard Prunier explains, Darfur warrants its own political ethnography.

Darfur’s is an ambiguous genocide indeed. The crudity of its violence belies fine-grained particularities of motive that only make sense within the unique history of Darfur and its neighbours. Theirs is no centralized blueprint for racial annihilation, but rather a shading of different agendas and opportunistic alliances. The pivot of these is the Um Jalul, and its aspiring leaders’ links with Chad, Libya and—more recently—Khartoum. The Um Jalul are a clan of the Mahamid, who are in turn a section of the Abbala (‘camel-herding’) Rizeigat of Northern Darfur and Chad. Their Bedouin roots can be traced back at least five centuries, to when their patrilineal Juhayna ancestors crossed the Libyan desert, entering Darfur from the north-west. Juhayna Arabs were already present in Darfur when the Fur Sultanate emerged in the early 17th century and were part of its bilingual Arab-Fur identity from the outset. In the mid-18th century, the Sultan granted the Baggara (‘cattle-herding’) Rizeigat jurisdiction over a huge area of land south-east of the Sultanate’s heartlands. Known as ‘hawakir’ (sing.: hakura), such grants are the basis of Darfur’s land tenure today, and who controls them is the subject of bitter political struggle. The Baggara’s northern cousins, more mobile and living in the more densely-administered northern lands, were less fortunate. To this day, many Abbala Rizeigat ascribe their role in the current conflict to the fact that they weren’t allocated a hakura a quarter of a millennium ago.

Others also didn’t receive hawakir. After annexing Darfur on 1 January 1917—almost the last territory to be added to the Empire—British colonial officials began tidying up the splendid confusion of Darfur’s ethnic geography. Another Northern Darfur Arab group, the Beni Halba, were collected in one district, which was then allocated to them in a latter-day hakura. The Abbala Rizeigat had their eyes on a territory that forms a ‘U’ shape north of the mountainous centre of the region. But the leading families of the two main sections—Mahamid (including Um Jalul) and Mahariya—couldn’t agree on who should be paramount chief, or nazir. Since 1925 there have been at least six attempts at unifying the different sections. None has succeeded. One stratagem used by the rival sheikhs to increase their chances was to enlarge their numbers by attracting followers from Chad. The Um Jalul had an advantage here: there are more Mahamid than Mahariya clans in Chad, and in the 1970s they were armed by Libya and organized by Ahmat Acyl, the warlord who began to enmesh Darfur in Chad’s racial war.

Turning this ethno-political lens on the conflict, we find that the contours of Janjawiid mobilization correspond to the political fractures within the Abbala Rizeigat. Heads of Mahamid lineages have key positions while most leading Mahariya families are uninvolved. A third section, the Ereigat, plays a different but equally critical role. Historically impoverished and marginal, Ereigat men found employment at the colonial police stables. Living adjacent to towns, their sons obtained an education and joined the police and army. One of these boys, Abdalla Safi el Nur, rose to become an airforce general and was Governor of Northern Darfur at the time when the Janjawiid coalesced from a tribal militia tolerated by the government into a proxy for military intelligence. Another became an army general and, now retired, heads the parliamentary defence committee. Meanwhile, the Baggara Rizeigat—far more numerous and powerful—are themselves divided. Several are leading lights in the Arab Gathering. But the paramount chief, Nazir Saeed Madibbu, is trying to steer a neutral course through Darfur’s mayhem, hoping to negotiate peace and clear his tribe’s name.

Historians of mass atrocity will be unsurprised to learn that much of the dynamic of escalation can be attributed to extremely local power-struggles, and that even at the lowest levels of ethnic aggregation, such as the sub-sections of the Rizeigat (themselves one of a half dozen large Arab tribes in Darfur), extreme violence is the choice of a minority. Such is the poor state of basic documentation on Darfur that these facts have yet to be set out in detail. Prunier’s account makes not a single mention of Ahmat Acyl, Hilal Abdalla or his son, the Qoreish ideology and its manifesto, or indeed the Abbala Rizeigat and the Um Jalul, though all are essential to understanding Darfur’s descent into war and atrocity.

The Darfur rebels’ history is equally important and also little documented. They spring from convergent resistance movements based among Darfur’s three largest non-Arab groups, the Fur, Zaghawa and Masalit. Multiple versions exist of the origins of the Sudan Liberation Army (SLA) and the Justice and Equality Movement (JEM), not least among the members of the two groups themselves. All concur that the SLA has sympathies with the Southern Sudan-based Sudan People’s Liberation Army (SPLA), and took both arms and advice from the latter in 2003, but that it emerged two years earlier from an alliance of Fur militiamen and Zaghawa desert fighters, independently of the SPLA. Until 2003—when SPLA members assisted in writing the SLA manifesto—the main SPLA role had been training Masalit volunteers who crossed Sudan’s eastern border to its camps in Eritrea. (Denied economic opportunities at home, many Masalit have migrated across the entire breadth of Sudan looking for work.) A couple of battalions of these Darfurians were then transferred to Southern Sudan, from where they planned to return home to bolster local self-defence units. Thwarted by the government, many deserted and went back home in 2001. The SPLA then lost interest in Darfur, while the local rebellion quietly gathered force. Since reconnecting in January 2003, the SPLA leader John Garang and Darfur’s guerrillas have regarded each other with ambivalence. The SLA could indeed become part of a grand alliance of Sudan’s marginalized peoples and thereby a springboard for Garang to take power on behalf of an ‘African’ majority. But Darfurian leaders are fearful that they will be manipulated, and with good cause. The SLA was catapulted to prominence before it could develop internal political institutions, so that it is an amalgam of village militia and rural intellectuals marshaled by indigenous warrior tradition and the discipline of former army NCOs. The Fur and Zaghawa wings have often disagreed and on one occasion even fought each other.

The origins of JEM are even more controversial. The leadership is drawn from the ranks of Darfurian Islamists, and they are widely believed to have received funds from Islamists abroad. In contrast to the amateur public relations machinery of the SLA, JEM runs a sophisticated political bureau. JEM’s roots lie in the fragmentation of Sudan’s Islamist movement in the late 1990s, as the twin dreams of national development as an Islamic state and the emancipation of all Muslims as equal citizens, regardless of colour, disintegrated into internal squabbling. The implosion of the Islamic project was clear when, in December 1999, President Omar al Bashir dismissed the government’s eminence grise, Hassan al Turabi, sheikh of the Sudanese Islamists, and later put him in gaol. Darfur’s Islamist leaders were already disaffected. Handicapped by the latent Arabist racism of the leadership, which hails almost entirely from Khartoum and the middle Nile Valley, few Darfurians had risen to the top ranks of the government or the civil service. A clandestinely-published ‘Black Book’ documented the racial and regional domination of the Sudanese state.

There are many conspiracy theories concerning the origins of the SLA and JEM, but Prunier’s account—that the Darfur rebellion emerged as a direct consequence of a memorandum of understanding between John Garang and Turabi in 2001—is among the unlikeliest. Putting forward such a claim requires strong supporting documentation, of which Prunier provides none.

The critique in the Black Book was aimed, in fact, at Turabi as well as Bashir. Following the 1999 split and Turabi’s imprisonment, Bashir and his lieutenant Ali Osman Taha relied more and more on their own kinsmen, security officers and Islamist cadres drawn from precisely the same Nile Valley tribes fingered in the Black Book. Alarmed at its haemorrhage of support in Darfur, Khartoum’s security cabal turned to one of the few senior military figures from Darfur, Gen. Abdalla Safi el Nur, who responded by putting his kinsmen into key local security posts. The alliance between Khartoum and the Saharan Bedouins is one of convenience. Accustomed to seeing Sudan through an ‘Arab-African’ lens, many observers have missed the fact that the riverine Arabism of Bashir and Taha, coloured by the Islamic movement’s orientation to Arab civilization, is a far cry from the Qoreishi beliefs of Acyl’s Bedouin acolytes. Khartoum’s ruling elites regard the Darfur Arabs as no less backward than their non-Arab neighbours. True adherents of the Qoreish ideology reciprocate by dismissing the riverine tribes as half-caste ‘Arabized Nubians.’

Lacking local knowledge about what is actually driving the Darfur conflict, many have given it their own spin. The debate over the label ‘genocide’ is an example of high-velocity spinning. Both diagnosis of ‘genocide’ and the question of what to do about it are fraught with ambiguity.

One approach was followed by the U.S. government. Following a Congressional resolution in May 2004, the State Department dispatched a team of investigators to refugee camps in Chad to ascertain whether the Sudan Government was committing genocide. On 9 September, Secretary of State Colin Powell reported that ‘genocide has been committed in Darfur and that the government of Sudan and the Jingaweit bear responsibility—and genocide may still be occurring.’

A determination of genocide should demonstrate both that a crime has been committed that fits the definition in the 1948 Genocide Convention (actus reus) and that there was specific intent on the part of the perpetrator (mens rea) ‘to destroy, in whole or in part, a national, ethnical, racial or religious group.’ Powell had good evidence for a pattern of atrocities that looked like genocide. He had no proof of intent. But to equivocate—as his predecessor Madeleine Albright had done over Rwanda a decade earlier—risked being pilloried. State Department lawyers were encouraged by the reasoning of the International Criminal Tribunal for Rwanda. Faced with the problem of proving genocidal intentions when the accused denied them, the tribunal’s judges ruled that it was legitimate to infer intent from the ‘general context’ of extreme violence directed against a group. There was another good reason for making this inference: it is almost impossible to reach a conclusion about genocide while it is actually occurring, which would mean that the Genocide Convention would only be good for prosecutions after the fact. Powell’s phrasing was, however, curiously passive. He did not say that Khartoum’s leaders and their militia were genocidal criminals. And in the next breath he said that U.S. policy would not change.

A different approach to determining genocide was adopted by the International Commission of Inquiry on Darfur, set up by the UN Security Council, which reported in January 2005. The ICID detailed the same pattern of abuses as in the State Department report. ‘However,’ it continued,

the crucial element of genocidal intent appears to be missing, at least as far as the central Government authorities are concerned. Generally speaking the policy of attacking, killing and forcibly displacing members of some tribes does not evince a specific intent to annihilate, in whole or in part, a group . . . Rather it would seem that those who planned and organized attacks on villages pursued the intent to drive the victims from their homes, primarily for the purposes of counter-insurgency warfare.1

In short, the killings in Darfur looked like genocide but were actually a byproduct of defeating the rebellion. The Commissioners, all of them veteran independent human rights specialists, had shied away from the fence that the Americans had so readily jumped. But Khartoum—despite trumpeting the ‘no genocide’ finding—could take no solace from a report that found that ‘the crimes against humanity and war crimes that have been committed in Darfur may be no less serious and heinous than genocide,’ and which noted that individuals—including government officials—may have possessed genocidal intent. And unlike Powell, the ICID recommended a specific course of action, namely referral to the International Criminal Court. In March, the U.S. swallowed its longstanding opposition to the ICC, and allowed the Security Council to refer the Darfur case to The Hague. The ICC is currently examining a sealed list of 51 individuals identified by the ICID. Although indictments are many months away, the prospect of extradition to face prosecution in The Hague has prompted a shiver of fear among Khartoum’s security chiefs. The blades are whirring just behind them.

The ICID determination is based on a higher standard of proof than the State Department’s. It is open to interesting and important legal dispute. Much hinges on primary purpose and double effect. On the one hand, it can be argued that genocide is a predictable corollary of counter-insurgency conducted in a certain manner, and that the previous two decades of warfare in Sudan exemplify this. All modern genocides, it may be noted, occur during war. On the other hand, the 1948 Convention is precise in what constitutes intent, and legal work needs to be done if that is to be broadened to include genocidal outcomes as a secondary impact of other aims. Moreover, if the purpose of the determination is to prosecute individuals for known crimes, then beginning with a charge of genocide is surely fruitless: the case is better made by building up from multiple instances of mass murder and group-directed war crimes and then deducing that these cumulatively amount to genocide. Both empirically and legally, the ICID has taken a serious and thoughtful position, which will be scrutinized and contested.

Prunier gives a sketch of the debate over genocide, opening by characterizing the ICID report as part of ‘a coordinated show of egregious disingenuousness.’ ‘The semantic play,’ he writes, ‘ended up being an evasion of reality. The notion that this was probably not a “genocide” in the most strict sense of the word seemed to satisfy the Commission that things were not too serious after all.’ Given the title of the book and Prunier’s previous work on the Rwanda genocide, this is a disappointingly inadequate conclusion.

After the question of genocide, the most controversial issue in Darfur is the death rate. The question of how many people have died in Darfur is important, but desktop demography is hazardous when the methods of data collection are varied and have not been fully scrutinized. Prunier is not alone in hinging strong claims on the fact that in one survey of refugees, 61% said that they had seen a ‘family member’ killed. As a general index of horror this is a compelling statistic. But it is impossible to make any numerical inference until one knows what the investigators meant by ‘family’. Demographers distinguish the household (usually defined as those who eat together daily, and usually used as the unit of enumeration) from the family, which commonly stretches far wider than those five or six individuals. Until there is a thorough population-based survey of mortality in Darfur, all estimates for deaths from violence, disease and hunger will remain conjecture.
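To see how much turns on that definition, consider a rough illustration (the arithmetic is mine, and rests on artificial assumptions: that deaths strike group members independently, that respondents’ families do not overlap, and that the group sizes are invented for the example). If each respondent reports on a group of n people, each of whom has died with probability m, the chance of having seen at least one death is P = 1 − (1 − m)^n. Setting P = 0.61: a ‘family’ of six implies m = 1 − 0.39^(1/6), roughly 15 per cent; a ‘family’ of thirty implies m = 1 − 0.39^(1/30), roughly 3 per cent. The same survey figure is thus consistent with implied death rates differing by a factor of five.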

Prunier has some good sources but often treats them casually. In his catalogue of international neglect of the conflict, for instance, he says that Justice Africa failed to mention Darfur in its October 2003 briefing (p. 126). There were in fact four paragraphs on Darfur that month, and it had been covered in every issue since March, including a warning on 27 May that the strategy of ‘arming local militia’ would, if followed, ‘run the risk of creating a vicious internecine war targeting civilians.’

Darfur: The Ambiguous Genocide provides a competent sketch of the history of Darfur and the position of the conflict within the politics of Sudan and the region. The account is valuable in locating Darfur within the politics of the central Sahara and the long-running three-cornered wars between Libya, Chad and Sudan. Prunier correctly describes pre-colonial Darfur as an ‘ethnic mosaic’ rather than a region with a binary polarized ‘Arab’-‘African’ identity divide and notes the ambiguity of the term ‘Arab’ (though he doesn’t explore the varieties of Arabism). He makes useful points on the politics of the Umma Party, the main party in the ruling coalition toppled by the current government, and the Darfur Development Front in the 1960s and 1980s, and on Libyan-Sudanese relations in the 1970s and 1980s. Errors and omissions are inevitable in any analytical narrative of Darfur: the chief difficulty of this book is that the author omits entirely the central protagonists.

International efforts to find a solution to Darfur’s agony are now in the hands of the African Union. Prunier dismisses this as ‘the politically correct way of saying “We do not really care”.’ But American, British and other international support to the Kenyan-headed North-South peace process followed a similar formula of ad hoc multilateralism, and did bring an end to twenty years of comparably vicious war. Darfur’s peace process is in some respects more challenging. There is no cohesive leadership on either side and the political issues that divide the belligerents have yet to be thrashed out—the agenda for negotiations is itself a matter of acrimony. Meanwhile, the best hopes for a settlement may come from connecting external peacemaking to internal initiatives. Darfur’s own provincial aristocrats, the paramount chiefs—including the ruling Arab families—are seeking an exit from their predicament, one that restores a conservative social order and salvages their tribes’ reputation. If the Janjawiid are to be politically decapitated, it may be through the efforts of these hardened old tribal chiefs, arguing that for the government and its allies to submit to their mediation is a better option than extradition to The Hague and a cell in a Dutch basement.

  1. Report of the International Commission of Inquiry on Darfur to the United Nations Secretary General, Pursuant to Security Council Resolution 1564 of 18 September 2004, Geneva, 25 January 2005, p. 4.

Who are the Darfurians? Arab and African Identities, Violence and External Engagement

Alex de Waal is a fellow of the Global Equity Initiative at Harvard University, and programme director at the Social Science Research Council in New York. He is the author of Famine that Kills: Darfur, Sudan (revised edition, Oxford University Press 2005) and, jointly with Julie Flint, of Darfur: A Short History of a Long War (forthcoming, Zed Press, September 2005).

This paper is an attempt to explain the processes of identity formation that have taken place in Darfur over the last four centuries. The basic story is of four overlapping processes of identity formation, each of them primarily associated with a different period in the region’s history. The four are the ‘Sudanic identities’ associated with the Dar Fur sultanate, Islamic identities, the administrative tribalism associated with the 20th century Sudanese state, and the recent polarization of ‘Arab’ and ‘African’ identities, associated with new forms of external intrusion and internal violence. It is a story that emphasizes the much-neglected east-west axis of Sudanese identity, arguably as important as the north-south axis, and redresses the neglect of Darfur as a separate and important locus for state formation in Sudan, paralleling and competing with the Nile Valley. It focuses on the incapacity of both the modern Sudanese state and international actors to comprehend the singularities of Darfur, accusing much Sudanese historiography of ‘Nilocentrism’, namely the application to Darfur of analytical terms derived from the experience of the Nile Valley.

The term ‘Darfurian’ is awkward. Darfur refers, strictly speaking, to the ‘domain of the Fur’. As I shall argue, ‘Fur’ was historically an ethno-political term, but nonetheless, at any historical point has referred only to a minority of the region’s population, which includes many ethnicities and tribes.1 However, from the Middle Ages to the early 20th century, there was a continuous history of state formation in the region, and Sean O’Fahey remarks that there is a striking acceptance of Darfur as a single entity over this period.2 Certainly, when I lived in Darfur in the 1980s and traveled to most parts of the region, the sense of regional identity was palpable. This does not mean there is agreement over the identity or destiny of Darfur. There are, as I shall argue, different and conflicting ‘moral geographies’. But what binds Darfurians together is as great as what divides them.

Identity formation in Darfur has often been associated with violence and external engagement. One of the themes of this paper is that today’s events have many historic precursors. However, they are also unique in the ideologically-polarized nature of the identities currently in formation, and the nature of external intrusion into Darfur. The paper concludes with a brief discussion of the implications of the U.S. determination that genocide is occurring in Darfur. There is a danger that the language of genocide and ideologically polarized identities will contribute to making the conflict more intractable.

While primarily an exercise in academic social history, this paper has a political purpose also. It is my contention that, for almost a century, Darfurians have been unable to make their history on their own terms, and one reason for that is the absence of a coherent debate on the question, ‘Who are the Darfurians?’ By helping to generate such a debate, I hope it will be possible for the many peoples for whom Darfur is a common home to discover their collective identity.

Sudanic Identities

The first of the processes of identity formation is the ‘Sudanic model’ associated with indigenous state formation. In this respect, it is crucial to note that Dar Fur (the term I will use for the independent sultanate, from c. 1600 to 1916, with a break 1874-98) was a separate centre of state formation from the Nile Valley, which was at times more powerful than its riverain competitors. Indeed, Dar Fur ruled Kordofan from about 1791 to 1821 and at times had dominion over parts of the Nile Valley, and for much of its life the Mahdist state was dominated by Darfurians. Before the 20th century, only once in recorded history did a state based on the Nile rule Darfur, and then only briefly and incompletely (1874-82). This has been grossly neglected in the ‘Nilocentric’ historiography of Sudan. Rather than the ‘two Sudans’ familiar to scholars and politicians, representing North and South, we should consider ‘three Sudans’ and include Dar Fur as well.

The Keira Sultanate followed on from a Tunjur kingdom, with a very similarly-placed core in northern Jebel Marra (and there are many continuities between the two states, notably in the governance of the northern province), and a Daju state, based in the south of the mountain. Under the sultanate, we have an overall model of identity formation with a core Fur-Keira identity, surrounded by an ‘absorbed’ set of identities which can be glossed as Fur-Kunjara, with the Tunjur ethnicity (the historic state-forming predecessor of the Fur-Keira) enjoying similarly privileged status immediately to the north. This is a pattern of ethnic-political absorption familiar to scholars of states including imperial Ethiopia, the Funj, Kanem-Borno and other Sudanic entities. Analysing this allows us to begin to address some of the enduring puzzles of Fur ethnography and linguistics, namely the different political structures of the different Fur clans and the failure to classify the Fur language, which appears to have been creolized as it spread from its core communities. However, the ethnography and history of the Fur remain desperately under-studied and under-documented.

Surrounding this are subjugated groups. In the north are both nomadic Bedouins (important because camel ownership and long-distance trade were crucial to the wealth of the Sultan) and settled groups. Of the latter, the Zaghawa are the most important. In the 18th century, the Zaghawa were closely associated with the state. Zaghawa clans married into the ruling Keira family, and they provided administrators and soldiers to the court. To the south are more independent groups, some of which ‘became Fur’ by becoming absorbed into the Fur polity, and others of which retain a strong impulse for political independence, notably the Baggara Arabs. As in all such states, the king used violence unsparingly to subordinate these peripheral peoples.

To the far south is Dar Fertit, the term ‘Fertit’ signifying the enslaveable peoples of the forest zone. This is where the intrinsically violent nature of the Fur state is apparent. The state reproduced itself through dispatching its armies to the south, obtaining slaves and other plunder, and exporting them northwards to Egypt and the Mediterranean. This nexus of soldiers, slaves and traders is familiar from the historiography of Sudanic states, where ‘wars without end’ were essential to ensure the wealth and power of the rulers.3 O’Fahey describes the slaving party as the state in miniature.4 This in turn arose because of the geo-political position of the Sultanate on the periphery of the Mediterranean world, consumer of slaves, ivory and other plunder-related commodities.5 During the 18th and 19th centuries, the Forty Days Road to Asyut was Egypt’s main source of slaves and other sub-Saharan commodities. When Napoleon Bonaparte occupied Egypt, he exchanged letters and gifts with the Sultan of Dar Fur.

All the major groups in Darfur are patrilineal, with identity inherited through the male line. One implication of this is that identity change can occur through the immigration of powerful males, who were in a position to marry into leading families or displace the indigenous men. Historically, the exception may have been some groups classed as Fertit, which were matrilineal. A combination of defensive identity formation under external onslaught and Islamization appears to have made matrilineality no more than a historical fragment. This, however, only reinforces the point that identity change is a struggle to control women’s bodies. With the exception of privileged women at court, women are almost wholly absent from the historical record. But, knowing the sexual violence that has accompanied recent conflicts, we can surmise that rape and abduction were likely to have been mechanisms for identity change on the southern frontier.

Identity formation in the Sultanate changed over the centuries, from a process tightly focused on the Fur identity (from about 1600 to the later 1700s), to a more secular process in which the state lost its ideologically ethnic character and ruled through an administrative hierarchy (up to 1916). It is also important to note the role of claims to Arab genealogy in the legitimation and the institutions of the state. The founding myth of the Sultanate includes Arab descent, traceable to the Prophet Mohammed. This is again familiar from all Sudanic states (Ethiopia having the variant of the Solomonic myth). Arabic was important because it brought a literate tradition, the possibility of co-opting traders and teachers from the Arab world, and above all because of the role of Islam as the state religion.

The state’s indigenous Arab population was meanwhile ‘Arab’ chiefly in the archaic sense, used by Ibn Khaldun and others, of ‘Bedouin’. This sense is still widely used, and it is interesting that the Libyan government (one of three Bedouin states, the others being Saudi Arabia and Mauritania) has regarded Tuaregs and other Saharan peoples as ‘Arab’.

This model of identity formation can be represented in the ‘moral geography’ of figure 1.

Figure 1
Moral geography of the Dar Fur sultanate as seen from the centre.

One significance of this becomes apparent when we map the categories onto the Turko-Egyptian state in the middle Nile Valley, 1821-74. For this state—which is essentially the direct predecessor of what we have today—the core identity is ‘Arab’, focused on the three tribes Shaigiya, Jaaliyiin and Danagla. (The first and second are particularly dominant in the current regime. The last is ‘Nubian’, illustrating just how conditional the term ‘Arab’ can be.) The other identity pole was originally ‘Sudanese’, the term used for enslaveable black populations from the South in the 19th and early 20th centuries, but which, by a curious process of label migration, came by the 1980s to refer to the ruling elite, the three tribes themselves. Meanwhile, the Southerners had adopted the term ‘African’ to assert their identity, contributing to a vibrant debate among Sudanese intellectuals as to Sudan’s relative positions in the Arab and African worlds.6 From the viewpoint of Southern Sudan (and indeed east Africa), ‘African’ and ‘Arab’ are polar opposites. From the viewpoint of Darfur and its ‘Sudanic’ orientation, ‘Arab’ is merely one subset of ‘African’. Darfurians had no difficulty with multiple identities, and indeed would have defined their African kingdom as encompassing indigenous Arabs, both Bedouins and culturally literate Arabs.

The transfer of the term ‘African’ from Southern Sudan to Darfur, and its use, not to encompass the Fertit groups but to embrace the state-forming Fur and Tunjur, and the similarly historically privileged Zaghawa, Masalit, Daju and Borgu, is therefore an interesting and anthropologically naïve category transfer. ‘African’ should have rather different meanings in Darfur.

Dar Fur’s downfall came in the 1870s because it lost out to its competitor, the Turko-Egyptian regime and its client Khartoum traders, in the struggle for the slaving and raiding monopoly in the southern hinterland. The current boundaries of Sudan are largely defined by the points the Khedive’s agents had reached when their predatory expansion was halted by the Mahdist revolution. Their commerce and raiding inflicted immense violence on the peoples they conquered, subjecting them to famine and, in some cases, complete dissolution and genocide. Historians have managed to reconstruct some of the societies that existed before this onslaught, but some live on in memory only, and others have disappeared without trace.7

Islamic Identities

The second model is the ‘Islamic model’. This substantially overlaps with the ‘Sudanic model’ and complements it, but also has distinctive differences, which came to a head with the Sudanese Mahdiya (1883-98). Let us begin with the overlaps.

Islam was a state cult in Dar Fur from the 17th century. Most likely, Islam came to Dar Fur from the west, because the region was part of the medieval Kanem-Bornu empire, which was formally Islamic from the 11th century if not earlier. Nilocentric historians have tended to assume that Islam reached Dar Fur from the Nile Valley, but there is much evidence to suggest that this is not the case. For example, the dominant Sufi orders in Darfur are west African in origin (notably the Tijaniya), and the script used was the Andalusian-Saharan script, not the classical Arabic script of the Nile Valley.

The majority of Darfur’s Arab tribes migrated into the sultanate in the middle of the 18th century, from the west.8 They trace their genealogy to the Juheiyna group, and ultimately to the Prophet (in common with all ruling lineages, Arab or non-Arab). During the 18th century, they exhibited a general southward and eastward drift. At all times they were cultivators and herders of both camels and cattle, but as they moved east and south, cattle herding came to predominate and they became known collectively as the Baggara. Most of the tribal names they now possess emerged in the 18th, 19th or 20th centuries, in some cases as they merged into new political units. An interesting and important example is the Rizeigat, a vast confederation of clans and sections which migrated east and south, with three powerful sections (Nawaiba, Mahamid and Mahriya) converging to create the Rizeigat of ed Daien in south-eastern Darfur. But they also left substantial sections to the north and west, historic remnants of this migration. These sections have a troubled and uncertain relationship with their larger southern cousins, alternately claiming kinship and independence. Whereas the southern, Baggara, Rizeigat were awarded a territory by the Fur Sultan (who had not subjugated the area where they chose to live), the northern clans continued a primarily nomadic existence on the desert edge, without a specific place they could call home. When sections did settle (and many did), they were subject to the administrative authority of the Sultan’s provincial governor of the northern region, Dar Takanawi or Dar el Rih. For historic reasons, this was an area in which administration was relatively de-tribalised, so the northern Bedouins were integrated into the Sultanate more as subjects than as quasi-autonomous tribal units.

The same process explains why we have a large Beni Halba Baggara group with territorial jurisdiction in southern Darfur and a small Abbala group further to the north, and similarly why the Misiriya have their main territories in south Kordofan but remnant sections in northwest Darfur and Chad. Meanwhile the Zayadiya and Ma’aliya are not Juheiyna at all; they did not migrate in the same manner, and they had different (though not necessarily easier) historic relations with the Sultanate.

The Hausa and Fulani migrations of the 19th and 20th centuries offer important parallels. They too populated substantial territories in Darfur (and also further east), and included remnant and more purely pastoral sections (such as the Um Bororo) that continued the eastward migration well into the late 20th century. An important component of the eastward drift is the influence of the Haj (many see themselves as ‘permanent pilgrims’, seeking to move towards Mecca), and of the Mahdist tradition that emphasizes eastward migration.9 As we shall see, militant Mahdism is itself an import into Sudan from west Africa, brought with these migrants. There are other significant groups with origins to the west, such as the Borgu and Birgid, both of them sedentary Sudanic peoples. We should not see eastward migration as exclusively a phenomenon of politically Islamized groups, pastoralists or Arabs.

The Juheiyna groups brought with them their own distinctive ‘moral geography’, one familiar to pastoral nomadic groups across the central Sudan and Sahelian regions. This sees all land as belonging to Allah, with the right of use and settlement belonging to those who happen upon it. It sees Darfur as a chequerboard of different localities, some belonging to farmers and others to herders, with the two groups in a mutually advantageous exchange relationship. It is also open-ended, especially towards the east. (The extent to which this is co-terminous with the moral geography of a Muslim pilgrim, exemplified by the west African migrants in Sudan, is an interesting question.)

This is represented in figure 2, which was drawn for me in outline by one of the most eminent Abbala Sheikhs, Hilal Musa of the Um Jalul Rizeigat, in 1985.

Figure 2
The ‘moral geography’ of Darfur, from a camel pastoralist viewpoint.

Several legacies of this are evident today. Most of the ‘Arab’ groups involved in militia activities, including land grabbing, are what we might call the Abbala remnants, with weak historic claims to tribally-defined territories and traditions of migration and settlement to the east and south. Meanwhile, the majority of the Baggara Arabs of south Darfur are uninvolved in the current conflict.

Three other elements in the Islamic identity formation process warrant comment. One is Mahdism, which arrived in Darfur from the west, and has clear intellectual and social origins in the jihadist state founded by Osman Dan Fodio in what is now northern Nigeria. Unlike in the Nile Valley, where the Mahdist tradition was weak, in the west African savannas it was strong and well-articulated. Dan Fodio wrote ten treatises on Mahdism plus more than 480 vernacular poems, and insisted that the Mahdi had to bear the name Mohamed Ahmed (which ruled him out).10 The first Mahdist in 19th-century Sudan was Abdullahi al Ta’aishi, grandson of a wandering Tijani Sufi scholar from somewhere in west Africa; he met the Dongolawi holy militant Mohamed Ahmed in 1881 and proclaimed him the Mahdi, in turn becoming his Khalifa. The majority of the Mahdist armies derived from the Baggara of Darfur and Kordofan, and for most of its existence the Mahdist state in Omdurman was ruled by the Khalifa and his Ta’aisha kinsmen. In fulfillment of Mahdist prophecy and to support his power base, the Khalifa ordered the mass forced migration of western peoples to Omdurman. The Mahdiya was, to a significant extent, a Darfurian enterprise. And it involved extreme violence, though of a radically different kind from that on which the Dar Fur sultanate was founded. This was religious, messianic Jihad, including population transfers on a scale not seen before or since.

Such is the stubborn Nilocentrism of Sudanese historiography that the influence of west African and Darfurian forms of Islam on this pivotal episode in Sudanese history is consistently under-estimated. It was the collision between the heterodox Mahdist Jihadism of the west, including the egalitarian ideology of the Tijaniya, and the more orthodox and hierarchical (though also Sufist) Islam of the Nile Valley that created the Mahdiya.

The Mahdist period is remembered even today in the cultural archive as a time of extraordinary turmoil and upheaval. It was a time of war, pillage and mass displacement. In 1984/5, people looked back to the drought of 1913/14 as their historical point of reference. One wonders if the current historic reference point is the famine of 1888/9, known as ‘Sanat Sita’ because it occurred in the year six (1306 in the Islamic calendar), a catastrophe which seems to have surpassed the Darfurians’ otherwise inventive capacity for naming tragedy.
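As a quick check on this dating (my arithmetic, not the author’s), the standard rule of thumb for converting Hijri years to Gregorian ones, which corrects for the lunar year being roughly three per cent shorter than the solar year, is

\[
G \approx \frac{32}{33}\,H + 622 = \frac{32}{33} \times 1306 + 622 \approx 1266 + 622 = 1888,
\]

placing the Islamic year 1306 in 1888/9 (it ran from roughly September 1888 to August 1889).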

Beyond that historic precedent, I do not want to suggest that there are parallels between the Mahdiya and contemporary or recent political Islam in Sudan, which has had its own manifestations of extreme violence and jihadism. On the contrary, I would argue that it is the failure of Sudan’s recent Islamist project that has contributed to the war in Darfur. This arises from the last important theme of Islamic identity, namely Hassan al Turabi’s alliance-building across the east-west axis of Sudanese identities.

Among the many intellectual and practical innovations of Turabi’s Islamism was an opening to African Muslims as individuals and to African Islam as a tradition. The National Islamic Front recognized that Darfur represented a major constituency of devout Muslims that could be mobilized. It made significant openings to Darfur and to the substantial Fellata communities across Sudan.11 It promised that Islam could be a route to enfranchisement as citizens of an Islamic state. In doing so, Turabi and his followers moved away from the political Islamists’ traditional focus on the more orthodox Islam of the Nile Valley, with its close association with the Arab world. It was, unfortunately, a false promise: the Sudanese state is the inheritor of the exclusivist project of the 19th-century Khartoum traders, and it sought only to enlist the Darfurians and Fellata as foot soldiers in this enterprise. For the Fellata it had a quick win: it could grant them citizenship, correcting a longstanding anomaly of nationality policy, and it has gained the loyalty of many Fellata leaders as a result. But for Darfurians, the best it offered was relative neutrality in the emergent conflicts between Darfur’s Arabs and non-Arabs, and increasingly not even that. Darfur was marginal even to the Islamists’ philanthropic projects of the 1990s, which at least provided basic services and food relief to many remote rural communities. Perhaps because the Islamists took the region for granted, and certainly because the ruling groups were focused on the threats from the South, Nuba and Blue Nile, Darfur was neglected in the series of Islamist projects aimed at social transformation.

When the Islamist movement split in 1999, most Darfurian Islamists went into opposition. By an accident of fate, the most powerful Darfurian in the security apparatus was an air force general from the Abbala Rizeigat, and members of those sections were rapidly put in place as leaders of the Popular Defence Force in critical locations, removing men whom the government suspected of sympathies with the Turabi faction. Thus was created a set of militias popularly known as ‘Janjawiid’, a term first used of the Chadian Abbala militias which used western Darfur as a rear base in the mid-1980s, armed some of their Abbala brethren there, and helped instigate major clashes in 1987-90. The Darfur war is, in a significant way, a fight between two groups over the ruins of the Sudanese Islamist movement, both of which seem to have abandoned any faith that the Islamist project will deliver anything other than power.

The third element of significance concerns the position of women. In the Tijaniya, with its far more egalitarian tradition than that of the Sufi orders of the Nile, women can achieve the status of sheikh or teacher. This reflects both the religious traditions of the Sudanic region and the relatively higher socio-economic status of women in savanna societies, where they could own their own fields and engage in trade in their own right. Darfurian ethnographies repeatedly note the economic independence enjoyed by women, among non-Arab and Arab groups alike. The subsequent spread of Islamic orthodoxy, described further below, contributed to a regression in women’s status.

Administrative Tribalism and ‘Becoming Sudanese’

The British conquest of Dar Fur in 1916, and the incorporation of the then-independent sultanate of Dar Masalit in 1922-3, represented a clear break with the past. Darfur was ruled by an external Leviathan which had no economic interest in the region and no ideological ambition other than staving off trouble. Darfur was annexed when the basic determinants of British policies in Sudan had already been established, and the main decisions (e.g., the adoption of Native Administration after 1920, the expulsion of Egyptian civil servants after 1924, the embrace of neo-Mahdism and the Khatmiya, the adoption of the Famine Regulations in the 1930s, the Sudanisation of the civil service, and the moves towards independence) were all taken with scant reference to Darfur.

The key concern in Darfur in the decade after the conquest was security, and specifically the prevention of Mahdist uprisings. An attack on Nyala in 1921 was among the most serious threats the new rulers faced, and the last significant uprising was in 1927. In riverain Sudan, the British faced a more immediate danger, in the form of revolutionary nationalism under the slogan of unity of the Nile Valley, among the educated elite and detribalized elements, especially Sudanese soldiers. To suppress both, and to ensure the utmost economy in rural administration, the British chose a policy of ‘Native Administration’. This was not ‘Indirect Rule’ as practiced in the Nigerian Emirates or Buganda (except in the case of the Sultanate of Dar Masalit, where the British officer was a Resident). Rather, it was the creation of a new hierarchy of tribal administrators, with the significant innovation of the ‘omda, the administrative chief intermediate between the paramount chief (‘nazir’ for Arab tribes) and the village sheikh. The ‘omda was an Egyptian office specially imported for the purpose.12

In a series of ordinances, the British regularized the status of tribal authorities. A particularly important act was to grant judicial powers to chiefs, in addition to their executive powers. This was a means of setting the tribal leaders to police their subjects, keeping an eye on both millenarian preachers and discontented graduates. (It is telling that the leader of the 1924 nationalist revolt, Ali Al Latif, a detribalized Southerner or ‘Sudanese’ in the parlance of the day, with no tribal leader to whom he could be made subject, was kept in jail in Sudan long beyond his prison term, and then exiled to Egypt.) Along with this came the ‘Closed Districts Ordinance’, much criticized for shutting off the South and Nuba Mountains from external influences, but used in Darfur to keep an eye on wandering preachers and west African immigrants.

But the most significant corollary of Native Administration was the tidying-up of the confusion of ethnic identities and tribal allegiances that existed across Sudan. This was an administrative necessity more than an ideological clean-up.

The colonial archives from the 1920s and ’30s are filled with exchanges of letters on how to organize the ethnic chaos encountered in rural Sudan.13 In Darfur, the most significant question was the administration of the Rizeigat, which included shoring up the authority of the pro-British Nazir, Madibbu, regulating the shared pastures on the Bahr el Arab river, also grazed by the Dinka, and deciding the status of the Abbala Rizeigat (initially subject to Nazir Ibrahim Madibbu, then with their own deputy Nazir, finally with their own full Nazir). Other activities included grouping the two sections of the Beni Hussein together, and providing them with land in north-western Darfur (a very rare instance of a wholesale tribal relocation, albeit one done with the consent of the section that needed to be relocated), administratively uniting the two parts of the Beni Halba, finding means of appointing a chief for the Birgid, grouping the miscellaneous sections living in an area called ‘Dar Erenga’ together to form one tribe, etc. A lot of attention was paid to the Fertit groups living on Darfur’s southern frontier, including a brave but futile attempt to move them into Southern Sudan and create a ‘cordon sanitaire’ between Muslims and non-Muslims. But this was an anomaly: the basic approach was ‘live and let live.’

Native Administration was reformed in the 1940s and 1960s (when chiefs were stripped of most of their judicial powers) and formally abolished in 1971, although many people elected to Rural People’s Councils were former native administrators.

Along with the regularizing of tribal administration came the formalizing of boundaries. The British stuck with the four-fold division of the Dar Fur sultanate into provinces, and demarcated tribal territories for the Baggara in south Darfur (following the Sultan’s practice). Elsewhere, the allocation of tribal dars was somewhat haphazard; the creation of Dar Beni Hussein in the western part of north Darfur was anomalous, the general rule being that when a group did not present a problem, it was left to be. However, the de facto recognition of the legality of a tribal dar in south Darfur began to build a legacy.14 Beforehand, the term ‘dar’ had been used in many different senses, ranging from a specific location or administrative unit, to the specific territory of an ethnic group, to the whole Sultanate, to an abstract region such as Dar Fertit. But, by constant usage, twinned with a tribally-based administrative system with judicial powers, the term ‘dar’ came primarily to refer to an ethnic territory in which the dominant group had legal jurisdiction. By the 1970s, Sudan’s leading land law scholar could conclude that tribes had become ‘almost the owners of their homelands.’15 During most of the 20th century, this had no significant political repercussions, as it coincided nicely with the customary practice of a settler adopting the legal code of his hosts. There was sufficient free land, and a strong enough tradition of hospitality to settlers, that by the 1970s all ‘dars’ in south Darfur were ethnically mixed, some of them with very substantial settler populations from the drought-stricken north.

Let us not over-emphasize the implications of tribal administration for identity formation. It undoubtedly slowed and even froze processes of identity formation, but it was lightly implemented. Many district officers in Darfur reveled in the myriad forms of ethnic identity and chieftainship they found, documenting the intermediate identities of the Kaitinga (part Tunjur/Fur, part Zaghawa), the Jebel Si Arabs, the Dar Fongoro Fur, and numerous others, and allowing Darfurian administrators to keep their wonderful array of traditional titles, including Sultan, Mek, Dimangawi, Shartay, Amir, and Nazir. Given that there were no significant economic interests in Darfur, no project for social change or modernization, and no land alienation, we must recognize the limits of imperial social engineering. It had a very light hand, both for good and ill.

Indeed, in the 1960s and ’70s, Darfur became something of a textbook case for identity change. During the preparatory studies for establishing the Jebel Marra Rural Development Project, a number of eminent social anthropologists were employed to study social change in Darfur.16 Among their writings are a number of studies of how sedentary Fur farmers, on acquiring sufficient cattle, would ‘become Baggara’ in stages, to the extent of teaching their children the Arabic language and adopting many socio-cultural traits of the pastoralists they moved with. This was a remarkable reversal of the previous pattern whereby communities ‘became Fur’ for political reasons; now individuals might ‘become Baggara’ for economic ones. There were also studies of the sedentarization of nomads, underlining how extremely blurred the nomad/farmer distinction is. Sadly, there were no comparable studies in northern Darfur.

Most proposals for a settlement of Darfur’s conflict include the revival of Native Administration in some form, both for the resolution of inter-communal conflicts (including settling land disputes) and for local administration.17 Whether or not the important role of chiefs’ courts will be re-established is far less clear. However, the context of the early 21st century is very different from the 1920s. This is clear from a brief examination of the role played by the tribal leaders in the resolution of the 1987-9 conflict and the revived Native Administration Council after 1994.

The first major conflict in Darfur of recent times occurred in 1987-9, and had many elements that prefigure today’s war, not least the fact that the major protagonists were Fur militia and Abbala Arab fighters known as ‘Janjawiid’. Belatedly, a peace conference was called including tribal leaders on both sides, some of whom sought to reestablish their authority over younger militant leaders, and some of whom sought to advance their own positions. Assisted by the fact that the NIF coup occurred while the conference was in session—allowing both sides to make compromises without losing face—an agreement was reached. But it was not implemented; fighting broke out again, and another conference was held in early 1990, which produced similar recommendations, which again were not properly implemented. The key lesson from this is that Native Administration is not a solution in itself, but rather a component of finding and implementing a solution. Control of armed groups, payment of compensation, and measures to deal with the causes of dispute are all necessary.

A form of Native Administration Council was established in 1994, a measure that coincided with the division of Darfur into three states and renewed conflict in western Darfur. There are two ways in which the NAC is implicated in the conflict. First, the government saw the award of chieftaincies (usually called Emirates) as a means of rewarding its followers and undermining the power of the Umma Party, which retained the allegiance of many of the older generation of sheikhs. Second, the positions were awarded with a new, simplified and more administratively powerful view of ethnicity. The very rationale for creating the new entities was to reinforce the power of a central authority (a party as much as, or more than, a state). In a militarized environment, with no services delivered by party or state, the reward for the new chiefs was the power to allocate land and guns within their jurisdiction. It was a recipe for local-level ethnic cleansing, which duly occurred in several places.

During the colonial period—less than four decades for Darfur, scarcely three for Dar Masalit—Darfur was subject to a state in Khartoum which knew little, and cared less, about this faraway region. Little changed with independence. The entire debate over Sudanese independence was conducted in Nilocentric terms: the dual questions were whether Sudan should be united with Egypt, and what should be the status of the South.18 The position of Darfur was almost wholly absent from this discourse, and remained a footnote in ongoing debates on Sudanese national identity. For example, perhaps the most eloquent analyst of the dilemmas of Sudanese identity, writing in a fictional format that allowed him to explore more explicitly the unstated realities of Sudanese racism, treats Darfurian identity wholly within the North-South framework.19

The state that ensued was a clear successor to the Turko-Egyptian colonial state. It was, and remains, a merchant-soldier state, espousing Arabism, using Arabic as the language of instruction in schools and in the media, and promoting Islam as a state ideology. Its political discourse is almost wholly Nilocentric: the key debates leading up to independence concerned whether Sudan would opt for unity with Egypt under the slogan of ‘unity of the Nile Valley’, and subsequent debates on national identity have been framed along the North-South axis of whether Sudan is part of the Arab or African world. There were brave attempts by scholars and activists to assert that Sudan is at once Arab and African, and that the two are fully compatible. These efforts came from all parts of the political spectrum: it is particularly interesting to see the Islamists’ arguments on this score.20 Some of the academic historians who engaged in this debate worked on Sudan’s westward links. They, however, were an academic minority, and their writings found no political reverberations. Whether polarizing or attempting to bridge, the discourse was overwhelmingly North-South. And, within Northern Sudan especially, we see the relentless progress of absorption into the culture of the administrative and merchant elite.

What we see is a process that has been called by many names, of which I prefer ‘Sudanisation’, following Paul Doornbos, who produced a series of superb if under-referenced studies of this phenomenon in Dar Masalit in the early 1980s.21 ‘Arabization’ is not adequate, because Darfur’s indigenous Bedouin Arabs were also subject to the same process, and because it did not produce people who were culturally ‘Arab’. Rather, individuals came to belong to a category of Sudanese who spoke Arabic, wore the jellabiya or thoub, prayed publicly, used paper money, and abandoned tribal dancing and the drinking of millet beer. Doubtless, the newly Sudanised were at a social and financial disadvantage when dealing with the established elites. But they were not expropriated of land or identity, and most of them straddled both the ‘Sudanised’ and local identities, and gained from doing so.

One of the most marked aspects of Sudanisation is a change in the status of women. The Darfurian Sudanised woman is (ideally) circumcised, secluded at home, economically dependent on her husband, meek in her behaviour, and dressed in the thoub. The spread of female circumcision in Darfur in the 1970s and ’80s, at a time when the Sudanese metropolitan elite was moving to abandon the practice, is perhaps the most striking physical manifestation of this process, and yet another illustration of how identity change is marked on women’s bodies. It is also an illustration of the recency of a ‘traditional’ practice.

What is remarkable about these processes of identity change is not that they occurred, or that they were subject to the arbitrary impositions of a state, but that they were almost entirely non-violent (with the significant caveat of genital mutilation). This is an important contrast with the South and the Nuba Mountains.

Incorporation into a Sudanese polity did bring with it a clear element of racism, based on skin colour and facial characteristics. Although neither the Sudanic nor the Islamic processes of identity formation could avoid a racial tinge, it was with Egyptian dominance and the successor Sudanese state that this became dominant. The Egyptian or Mediterranean-littoral ‘moral geography’ of Dar Fur can be charted as early as 1800, when the Arab trader Mohamed al Tunisi lived there: he graded the land and its inhabitants according to skin colour, the beauty of the women, and their sexual mores.22 A broadly similar racist classification became evident in the Egyptian occupation of the Nile Valley in the mid-19th century, and remains essentially unchanged today.

A particularly important difference between Darfur and other parts of Sudan is the significance of land and labour. Under the British and independent governments, very substantial agricultural schemes were established along the Nile and in eastern Sudan, and subsequently in south Kordofan. These involved widespread land alienation and the transformation of a rural peasantry into a wage labour force, much of it seasonally migrant.23 In Darfur there was no land alienation to speak of, and seasonal labour migration is almost entirely within the region, to work on locally-owned smallholdings (some of which are large by Darfur standards, but do not match the huge registered schemes of the eastern savannas). The violent depredation and dispossession inflicted by the Sudanese state in the 1980s and ’90s on the Nuba, the Blue Nile peoples and adjacent areas of Upper Nile, which produced mass internal displacement with the twin economic aims of creating mechanized farms owned by a Khartoum elite and a disadvantaged labour force to work them, has no parallel in Darfur. To a significant degree, Darfur has served as a labour reserve for Gezira and Gedaref, but because of the distances involved, the migration is long-term rather than seasonal.24 And the Darfurian labour reserve has never been of such strategic economic significance that national economic policies have been geared to sustaining it. Male outmigration has left the poorest parts of Darfur with a gender imbalance and a preponderance of female-headed households.25

Labour migration has had implications for the way in which the riverain elite regards westerners. In the 1920s, landowners were reported as saying that just as God (or the British) had taken away their slaves, he (or they) had brought the Fellata. The lowly status of this devout Muslim pilgrim group is closely associated with their low-status labouring occupations, and much the same holds for the Darfurians (of all ethnicities). The term ‘abid’ was often applied to them all, indiscriminately, reflecting both racism and their labouring status.26 It is arguable that racist attitudes followed economic stratification, rather than vice versa. In either case, there is a clear association between status and skin colour.

Incorporation into a Sudanese national state also, simultaneously, represented incorporation into a wider regional identity schema, in which the three attributes of skin colour, economic status and Arab identification all served to categorize populations. Mohamed al Tunisi would feel at home in the contemporary moral geography of Sudan, almost two centuries after his travels.

Militarized and Ideological ‘Arab’ and ‘African’ Identities

The complex history of identity formation in Darfur provides rich material for the creation of new ethnic identities. What has happened is that as Darfur has been further incorporated into national Sudanese processes, wider African and Middle Eastern processes, and political globalization, Darfur’s complex identities have been radically and traumatically simplified, creating a polarized ‘Arab versus African’ dichotomy that is historically bogus, but disturbingly powerful. The ideological construction of these polarized identities has gone hand-in-hand with the militarization of Darfur, first through the spread of small arms, then through the organization of militia, and finally through full-scale war. Fear and violence are a particularly potent combination for forging simplified and polarized identities, and such labels are likely to persist as long as the war continues. The U.S. government’s determination that the atrocities in Darfur amount to ‘genocide’ and the popular use of the terms ‘Arab’ and ‘African’ by journalists, aid agencies and diplomats have further entrenched this polarization, to the degree that community leaders for whom the term ‘African’ would have been alien even a decade ago now readily identify themselves as such when dealing with international interlocutors.

Internally, this polarization began with some of Darfur’s Arabs. Exposed to the Islamist-Arabism of Khartoum, drawing upon the Arab lineage ideology latent in their Juheiyna identities, and often closely involved in Colonel Gaddafi’s ideologically Arabist enterprises in the 1970s and ’80s, these men adopted an Arab supremacist ideology. This seems to have been nurtured by Gaddafi’s dreams of an Arab homeland across the Sahara and Sahel (notwithstanding the Libyan leader’s expansive definition of ‘Arab’ which, true to his own Bedouin roots, includes groups such as the Tuareg), and by competition for posts in Darfur’s regional government in the 1980s. In 1987, a group of Darfurian Arabs wrote a now-famous letter to Prime Minister Sadiq el Mahdi, demanding a better deal for Darfur’s Arabs. They appealed to him as ‘one of their own’. At one level this was simply a legitimate demand for better political representation and better services. But within it lurked an agenda of Arab supremacism. Subsequently, it has become very difficult to separate the ambitious agenda of a Darfurian Arab homeland from wider and more modest goals, and to identify which documents are real and which are not. But there is no doubt that, twinned with similar ambitions among the Chadian Juheiyna Arabs, a political and territorial agenda was emerging. This helps explain why some of the first and fiercest clashes of 1987 were in the highland Jebel Marra area of Gulu, a territory which would be clearly marked as a ‘Fur’ heartland on any moral geography of the region, including that of Sheikh Hilal (reproduced above), whose son Musa has since become infamous as commander of a major PDF brigade. The attacks on Gulu in 1987 and again in 2002 and 2004 represent a symbolic strike at the heart of Fur identity and legitimacy, as well as a tactical assault on a Fur resistance stronghold.

This newly-politicized Arab identity was also militarized. Three overlapping strands of militarization can be seen. One is the Ansar, the core followers of the Mahdi, who are historically a political, religious and military movement. Between 1970 and 1977, the Ansar leadership was in exile in Libya, planning a return to power, which it attempted in 1976 without success. Many returned to Sudan in 1977 as part of the ‘National Reconciliation’ between Sadiq el Mahdi and Nimeiri, but were not, as they had hoped, absorbed into the national army. Instead, they were settled on farming schemes. Disproportionately drawn from the Baggara tribes, former Ansar fighters were instrumental in the creation of the first Baggara militias in the mid-1980s. A second group of Ansar returned in 1985-6, following the fall of Nimeiri.27 While in Libya, the Ansar had been organized, trained and armed alongside Gaddafi’s Islamic Legion, which drew recruits from across the Sahelian countries.28 This is the second contributor to the militarization of the Bedouin. The Islamic Legion was disbanded after its defeat at Ouadi Doum in 1987, but its legacy remained. The third contributor was the formation of Arab militias in Chad, which used Darfur as a rear base for their persistent but unsuccessful attempts to take state power. The different political, tribal and ideological strands of this story have yet to be teased apart. Clearly there are important differences within these groups, including a competition between the Umma leadership and the NIF for the allegiance of the Ansar fighters. Gaddafi was also quite capable of treating with non-Arab groups such as the Zaghawa when it suited him, and was quick to recognize the government of Idris Deby when it took power in late 1990. Although Deby had been a commander of the forces that defeated the Libyan army and Islamic Legion a few years earlier, Gaddafi’s main quarrel was with Hissene Habre.

While there is a definite strain of Arab supremacism, the significance of ‘Arab’ identity must not be overstated. The groups involved in the current conflict are overwhelmingly Juheiyna Abbala (excluding, for example, the Zayadiya), with relatively few Baggara groups (notably including one part of the Beni Halba, many of whom were armed and mobilized in 1991 to counter the SPLA incursion into Darfur). This means that the largest and most influential of Darfur’s Arabs are not involved, including the Baggara Rizeigat, the Habbaniya, the Maaliya and most of the Taaisha. As the conflict continues to spread and escalate, this may change: there are clear attempts by some in government to bring all Arab groups (especially the Rizeigat) in on their side, and attempts by some on the rebel side to provoke them.

The character of Arab supremacism is manifest in a racist vocabulary and in sexual violence. The term ‘zurug’ has long been used in the casual racism of Arabs in Darfur, despite—or perhaps because of—the absence of any discernible differences in skin colour. Attributions of female beauty or its lack are similarly made, again despite (or because of) the absence of noticeable difference. The term ‘abid’, long used by the riverain elites to refer to all Darfurians, has been adopted by some Arab supremacists to refer to non-Arab Darfurians, despite—or because of—its lack of historical precedent. And widespread rape is itself a means of identity destruction or transformation, particularly salient and invasive for Muslim communities. There is ample documentation that rape was used systematically and deliberately for this purpose in the Nuba Mountains counterinsurgency campaigns of the early 1990s.29

The creation of ‘Africanism’ is more recent than the ascent of Arab supremacism. It owes much to the SPLA, whose leader, John Garang, began to speak of an ‘African majority’ in Sudan to counter the Islamist government’s claim that Sudan should be an Islamic state because it had a majority Muslim population. Garang reached out to the Nuba and peoples of southern Blue Nile, for whom ‘African’ was an identity with which they could readily identify. For example, the Nuba clandestine political and cultural organization of the 1970s and early ’80s, known as Komolo, asserted the Nuba’s right to their own cultural heritage, which they identified as distinctively ‘African.’ Under the leadership of Yousif Kuwa, Komolo activist and SPLA governor of the Nuba Mountains, the Nuba witnessed a revival of traditional dancing, music and religion.30

Trapped in a set of identity markers derived from the historical experience of the Nile Valley, a number of educated Darfurian non-Arabs chose ‘African’ as the best ticket to political alliance-building. The veteran Darfurian politician Ahmed Diraige had tried to do this in the 1960s, making alliances with the Nuba and Southerners, but had then switched to trying to bring Darfur’s non-Arabs into the Umma Party, hoping thereby to broaden and secularise that party. Daud Bolad, a Fur and a prominent Islamist student leader, switched from one political extreme to the other and joined the SPLA, leading a poorly-planned and militarily disastrous SPLA expedition into Darfur in 1991. Sharif Harir, a professor of social anthropology and as such inherently distrustful of such labels, was one of the first Darfurian intellectuals to recognize the danger posed by the new Arab Alliance, and has ended up reluctantly donning the ‘African’ label. He is now one of the political leaders of Darfur’s Sudan Liberation Movement.

The influence of the SPLA on the Darfurian opposition should be acknowledged. What was originally a peasant jacquerie was given political ambition with the assistance of the SPLA. Indeed, the Darfur Liberation Front was renamed the SLA under SPLA influence, and it adopted Garang’s philosophy of the ‘New Sudan’, perhaps more seriously than its mentor.

It is a commonplace of ethnographic history that communal violence powerfully helps constitute identity. In times of fear and insecurity, people’s ambit of trust and reciprocity contracts, and identity markers that emphasize the difference between warring groups come to the fore. Where sexual violence is widespread, markers of race and lineage become salient. Much anecdotal evidence indicates that this is happening today, and that the civilian communities most exposed to the conflict are insisting on the ‘African’ label. We can speculate that it serves as a marker of difference from the government and its militia, an expression of hope for solidarity from outside, and—perhaps most significant in the context of forced displacement and threats of further dispossession—a claim to indigeneity and residence rights.

From the point of view of the SLA leadership, including the leadership of the communities most seriously affected by atrocity and forced displacement, the term ‘African’ has served them well. It is scarcely an exaggeration to say that the depiction of ‘Arabs’ killing ‘Africans’ in Darfur conjures up, in the mind of a non-Sudanese (including many people in sub-Saharan Africa), a picture of bands of light-skinned Arabs marauding among villages of peaceable black-skinned people of indeterminate religion. In the current context, in which ‘Arabs’ are identified in the popular western and sub-Saharan African press with the instigators of terrorism, the term readily marks Darfur’s non-Arabs as victims.

From the point of view of the government in Khartoum, the labels are also tactically useful. While insisting that the conflict is tribal and local, it turns the moral loading of the term ‘Arab’ to its advantage, by appealing to fellow members of the Arab League that Darfur represents another attempt by the west (and in particular the U.S.) to demonize the Arab world. In turn this unlocks a regional alliance, for which Darfur stands as proxy for Iraq and Palestine. Looking more widely than Darfur, the term ‘Arab’ implies global victimhood.

The U.S. determination that Darfur counts as ‘genocide’ plays directly into this polarizing scenario. It is easy for self-identified Arab intellectuals in Khartoum (and elsewhere) to see this finding as (yet another) selective and unfair denigration of Arabs. If, in the confrontation with the Israelis and Americans, Arabs are cast as ‘terrorists’, warranting pre-emptive military action and a range of other restrictions on their rights, now, in the context of Africa, they are branded ‘genocidaires’, similarly placed beyond the moral pale and rendered subject to military intervention and criminal tribunals. Arab editorialists are thus driven both to deny genocide and to accuse the U.S. of double standards, asking why killings in (for example) Congo are not similarly labeled.

In fact, the U.S. State Department was reluctant to conclude that Darfur counted as genocide, and the Secretary of State insisted, almost in the same breath in which he announced ‘genocide’, that it would not change U.S. policy. The impetus for the genocide finding did not come from Washington’s neocons, but rather from liberal human rights activists in alliance with the religious right. The origins of this alliance lie in the politics of support for the SPLA (with the Israeli lobby as a discreet marriage broker) and influence trading in Congress, specifically the finding of an issue (slavery in Southern Sudan) that brought together the Black Caucus, the Israeli lobby, the religious right (for whom Sudan is a crusade) and the human rights groups (who began campaigning on this long before the others). Several of these groups were frustrated that the State Department, under the Republicans, had switched from a policy of regime change in Khartoum to the pursuit of a negotiated peace for Southern Sudan. The war in Darfur was a vindication of their thesis that no business could be done with Khartoum’s evildoers. The atrocities were swift and graphic, and coincided with the tenth anniversary of the preventable genocide in Rwanda, which gave the genocide claim remarkable salience. Congress passed a resolution, and the State Department prevaricated by sending an investigative team, confident that, because there was no evident intent at complete extermination of the target groups, its lawyers would find some appropriately indeterminate language to express severe outrage, short of moral excommunication of Khartoum (with which State was still negotiating) and military intervention. What they had not counted on was that the definition of genocide in the 1948 Convention is much wider than the lay definition and customary international usage, and includes actions that fall well short of a credible attempt at the absolute annihilation of an ethnic or racial group. The State Department’s lawyers, faithful to the much-neglected letter of the law, duly found genocide; the Secretary of State, doubtless judging that it would be more damaging to ignore his lawyers’ public advice, duly made the announcement, and then said that it would not affect U.S. policy.

Arrived at more or less by accident, the genocide finding has a number of implications. One is that it divides the U.S. from its allies in Europe and Africa. Given that the Sudan peace process is a rare contemporary example of multilateralism (albeit ad hoc) and a rare example of a success in U.S. foreign policy (albeit incomplete), it is important that this unity is not fully sundered. At present, it appears that the State Department has succeeded in keeping its policy on track, despite being outflanked by the militants in Washington. (Had the Democrats won in November, we might have faced the ironic situation of a more aggressive U.S. policy.) The damage has been minimized, but some has been done.

Second, the broader interpretation of the Genocide Convention, while legally correct, is one that diplomats have been avoiding for decades, precisely because it creates a vast and indeterminate grey area of atrocity in which intervention is licensed. A tacit consensus had developed to set the bar higher; now the U.S. has lowered it, and the Arab critics are correct: if Darfur is genocide, then so are Congo, Burundi, Uganda, Nigeria and a host of others. The neocons do indeed have another weapon in their armoury of unilateral intervention. Arguably, they didn’t need it, already having sufficient grounds to intervene on the basis of the September 2002 U.S. National Security doctrine.

And third, for Darfur, the genocide finding is being internalized into the politics of the region. This is occurring in a context of considerable external dependence on the part of Darfur’s political organizations and communities. The political organizations have centered their strategies on external engagement. The Islamists in the Justice and Equality Movement have a strategy for regime change: using the atrocities in Darfur to delegitimize the Khartoum government internationally, thereby bringing it down and bringing themselves to power. The SLA, representing a broad coalition of communities in arms, has yet to develop a full political programme, and is instead largely reacting to events, especially the escalating atrocities since late 2003. It seeks international intervention as its best option, and international monitoring and guarantees as second best. The communities it represents, many of them either receiving or seeking international assistance, are also orienting their self-representation to the international audience. They have been provided with a simple and powerful language with which to make their case.

The other lenses for analyzing Darfurian identities are too subtle and complex to be of much use to journalists and aid workers. So we are stuck with a polarizing set of ideologically constructed, mutually antagonistic identities. If, as seems likely, these labels become strongly attached, they will hugely complicate the task of reconstructing the social fabric of Darfur or, given the impossibility of returning to the recent past, of constructing a new Darfurian identity that stresses the common history of the region and the interdependence of its peoples.

Conclusion

Let me conclude this essay with two main observations.

First, who are the Darfurians? I argue that Darfur has had a remarkably stable and continuous identity as a locus of state formation over several centuries, and is a recognizable political unit in a way that is relatively uncommon in Africa. But the incorporation of Darfur into Sudan, almost as an afterthought, has led not only to the economic and political marginalization of Darfurians, but also to the near-total neglect of their unique history and identity. Just as damaging for Darfurians as their socio-political marginalization has been the way in which they have been forced to become Sudanese, on terms that are alien to them. To overcome this, we must move towards acknowledging a politics of three Sudans: North, South and West. It is probably a naive hope, but a recognition of the unique contribution of Darfurians and the inclusive nature of African identity in Darfur could provide a way out of Sudan’s national predicament of undecided identity. Short of this ambition, it is important for Darfurians to identify what they have in common, and to undertake the hard intellectual labour of establishing their common identity.

Second, what we see is the gradual but seemingly inexorable simplification, polarization and cementing of identities in a Manichean mould. Within four generations, a set of negotiable identities has become fixed and magnetized. We should not idealize the past: while ethnic assimilation and the administration of the Sultanate may have been relatively benevolent at the centre, at the southern periphery they were extremely and systematically violent. Similarly, while Sufism is generally and correctly regarded as a tolerant and pacific set of faiths, it also gave birth to Mahdism, which inflicted a period of exceptional turmoil and bloodshed on Sudan, including Darfur. Violence has shaped identity formation in Darfur in the past, just as it is doing today. Also, from the days of the Sultanate, external economic and ideological linkages shaped the nature of state power and fed its centralizing and predatory nature. Today, the sources and nature of those external influences are different. A ‘global war on terror’ and its correlates influence the political and ideological landscape in which Darfur’s conflict is located, including the very language used to describe the adversaries and what they are doing to one another and to the unfortunate civilians caught in the line of fire. The humanitarians and human rights activists, as much as the counter-terrorists and diplomats, are part of this process whereby Darfurian identities are traumatically transformed once again. One hopes there will be a counter-process that allows Darfurians to carve out a space in which to reflect on their unique history, identify what they share, and create processes whereby identities are not formed by violence.

Endnotes

  1. The use of the label ‘tribe’ is controversial. But when we are dealing with the subgroups of the Darfurian Arabs, who are ethnically indistinguishable but politically distinct, the term correlates with popular usage and is useful. Hence, ‘tribe’ is used in the sense of a political or administrative ethnically-based unit. See Abdel Ghaffar M. Ahmed, Anthropology in the Sudan: Reflections by a Sudanese Anthropologist, Utrecht, International Books, 2002.
  2. R. S. O’Fahey, State and Society in Dar Fur, London, Hurst, 1980.
  3. Cf. S. P. Reyna, Wars without End: The Political Economy of a Precolonial African State, Hanover, University Press of New England, 1990; Lidwien Kapteijns, Mahdist Faith and Sudanic Tradition: A History of the Dar Masalit Sultanate 1870-1930, Amsterdam, 1985; Janet J. Ewald, Soldiers, Traders and Slaves: State Formation and Economic Transformation in the Greater Nile Valley, 1700-1885, University of Wisconsin Press, 1990. The term ‘wars without end’ was used by the 19th century traveler Gustav Nachtigal with specific reference to the central Sudanic state of Bagirimi.
  4. R. S. O’Fahey, State and Society in Dar Fur, op. cit.
  5. In the late 18th century, Egypt’s trade with Dar Fur was five times larger than with Sinnar.
  6. For the seminal debates on this issue, see Yusuf Fadl Hasan, Sudan in Africa, Khartoum University Press, 1971.
  7. Dennis D. Cordell, ‘The Savanna Belt of North-Central Africa’, in David Birmingham and Phyllis M. Martin (eds.), History of Central Africa, Vol. 1, Longman, 1983; Stefano Santandrea, A Tribal History of the Western Bahr el Ghazal, Bologna, Nigrizia, 1964.
  8. H. A. MacMichael, A History of the Arabs in the Sudan, Cambridge, Cambridge University Press, 1922; Ian Cunnison, Baggara Arabs: Power and the Lineage in a Sudanese Nomad Tribe, Oxford, Clarendon Press, 1966.
  9. C. Bawa Yamba, Permanent Pilgrims: The Role of Pilgrimage in the Lives of West African Muslims in Sudan, Washington D.C., Smithsonian Press, 1995.
  10. Ahmed Mohammed Kani, The Intellectual Origin of Islamic Jihad in Nigeria, London, Al Hoda, 1988.
  11. Awad Al-Sid Al-Karsani, ‘Beyond Sufism: The Case of Millennial Islam in the Sudan’, in Louis Brenner (ed.) Muslim Identity and Social Change in Sub-Saharan Africa, Indiana University Press, 1993.
  12. The Turko-Egyptian regime had also used administrative tribalism, and had created the position of ‘sheikh al mashayikh’ as a paramount chieftaincy of the riverain tribes. In the 1860s, this title was changed to ‘nazir’. The sultans of Dar Fur tried similar mechanisms from the late 18th century, awarding copper drums to appointees.
  13. This discussion derives chiefly from the author’s notes from research in the Sudan National Archives in 1988. For simplicity, specific files are not referenced.
  14. The real drive for the recognition of tribal territories was elsewhere in Sudan, where ethnic territorialization was less complex, and administration denser.
  15. Saeed Mohamed El-Mahdi, Introduction to Land Law of the Sudan, Khartoum, Khartoum University Press, 1979, p. 2. In southern Darfur, there was a strong push by the regional authorities and development projects to recognize tribal dars in the 1980s. See Mechthild Runger, Land Law and Land Use Control in Western Sudan: The Case of Southern Darfur, London, Ithaca, 1987.
  16. Frederik Barth, ‘Economic Spheres in Darfur’, in Raymond Firth (ed.), Themes in Economic Anthropology, London, Tavistock, 1967; Gunnar Haaland, ‘Economic Determinants in Ethnic Processes’, in Frederik Barth (ed.) Ethnic Groups and Boundaries, London, Allen and Unwin, 1969.
  17. James Morton, The Poverty of Nations, London, British Academic Press, 1994.
  18. Cf. Gabriel R. Warburg, Historical Discord in the Nile Valley, Evanston IL, Northwestern University Press, 1993.
  19. Cf. Francis M. Deng, The Cry of the Owl, New York, Lilian Barber Press, 1989. In this novelistic exploration of Sudanese identities, the main protagonist, who is a Southerner, meets a Fur merchant on a train. The encounter reveals that anti-Southern racist feeling exists among Darfurians, while Darfurians themselves are marginalized, exploited and racially discriminated against by the ruling riverain elites.
  20. Muddathir Abd al-Rahim, ‘Arabism, Africanism and Self-Identification in the Sudan’, in Y. F. Hasan, Sudan in Africa, Khartoum University Press, 1971.
  21. Paul Doornbos, ‘On Becoming Sudanese’, in T. Barnett and A. Abdelkarim (eds.), Sudan: State, Capital and Transformation, London, Croom Helm, 1988.
  22. Eve Troutt Powell, A Different Shade of Colonialism: Egypt, Great Britain and the Mastery of the Sudan, Berkeley, University of California Press, 2003. Powell shows convincingly how similar attitudes shaped Egyptian views of Sudan into the 20th century.
  23. Ahmad Alawad Sikainga, Slaves into Workers: Emancipation and Labor in Colonial Sudan, Austin, University of Texas Press, 1996.
  24. Darfurian migrant labour is remarkably under-researched, in comparison with the Nuba and west Africans. In the modest literature, see Dennis Tully, ‘The Decision to Migrate in Sudan’, Cultural Survival Quarterly, 7.4, 1983, 17-18.
  25. See, for example, my discussion of Jebel Si in Famine that Kills: Darfur, Sudan, Oxford University Press, 2004.
  26. Mark Duffield, Maiurno: Capitalism and Rural Life in Sudan, London, Ithaca, 1981; C. Bawa Yamba, Permanent Pilgrims: The Role of Pilgrimage in the Lives of West African Muslims in Sudan, Washington D.C., Smithsonian Press, 1995.
  27. Alex de Waal, ‘Some Comments on Militias in Contemporary Sudan’, in M. Daly and A. A. Sikainga (eds.), Civil War in the Sudan, London, Tauris, 1994.
  28. Gaddafi’s African policy has not been well documented by journalists and scholars.
  29. African Rights, Facing Genocide: The Nuba of Sudan, London, African Rights, 1995.
  30. The Nuba’s ‘African’ identity is well-documented. The best treatment is Yusuf Kuwa’s own memoir, ‘Things Would No Longer Be The Same’, in S. Rahhal (ed.) The Right to be Nuba: The Story of a Sudanese People’s Struggle for Survival, Trenton NJ, Red Sea Press, 2002.

Why ‘We’ Lovehate ‘You’

Paul Smith is professor of cultural studies at George Mason University, chair in media studies at the University of Sussex, and author most recently of Millennial Dreams (Verso).

“The reaction to the events of 11 September–terrible as they were–seems excessive to outsiders, and we have to say this to our American friends, although they have become touchy and ready to break off relations with accusations of hard-heartedness.”

‘We’ and ‘you’

Doris Lessing’s rueful but carefully aimed words, published in a post-9/11 issue of Granta magazine in which a constellation of writers had been asked to address “What We Think of America,” have doubtless done little to inhibit the progress of American excess in the time since the terrorist attacks. The voices of even the most considerable of foreign intellects were hardly alone in being rendered inaudible by the solipsistic noise that immediately took over the American public sphere after 9/11. All kinds of voices and words, from within America and without, immediately lost standing and forfeited the chance to be heard, became marginalised or simply silenced, in deference to the media-led straitening of the possible range of things that could be said. And even after the initial shock of 9/11 had receded, it seems that one’s standing to speak depended largely upon the proximity of one’s sentiments to the bellicose sound-bites of the American president as his administration set sail for retaliatory and pre-emptive violence and promoted a Manichean worldview in which one could be only uncomplicatedly for or uncomplicatedly against America, even as it conducted illegal, immoral, and opportunistic war.

The peculiar American reaction to 9/11 was always latent in the discursive and cultural habits of this society where, as Lessing pointedly insists, “everything is taken to extremes.” Such extremism is perhaps not often enough considered, she suggests, when ‘we’ try to understand or account for the culture (Lessing, p. 54). I’m not sure that that extremism has gone entirely unnoticed; it is, after all, both the motor and the effect of the sheer quotidian brutality of American social relations. But the sudden shock to the American system delivered by the terrorists certainly facilitated a certain kind of extremism, a certain kind of extreme Americanism.

That extremist Americanism is foundational to this culture. America is, as Jean Baudrillard has said, the only remaining primitive society…a utopia that is in the process of “outstripping its own moral, social and ecological rationale” (1988, p. 7). And this is, moreover, a primitivism awash with its own peculiar fundamentalisms–not quite the fundamentalisms that America attacks elsewhere in a kind of narcissistic rage, but fundamentalisms that are every bit as obstinate. This is, after all, a society where public discourse regularly pays obeisance to ancient texts and their authors, to the playbook of personal and collective therapy, to elemental codes of moral equivalency, and so on. And this is to leave aside the various Christian and populist fundamentalisms that are perhaps less respectable but nonetheless have deep influence on the public sphere. But in its perhaps most respectable fundamentalism–always the most important one, but now more than ever in this age of globalisation–the society battens on its own deep devotion to a capitalist fundamentalism. Thus it is a primitive society in a political-economic sense too: a society completely devoted to the upkeep of its means of consumption and means of production, and thus deeply dependent upon the class effects of that system and ideologically dependent upon ancient authorities, which remain tutelary and furnish the ethical life of the culture.

It is to these kinds of fundamentalism that America appealed after 9/11, by way of phrases such as ‘our values,’ ‘who we are,’ ‘the American way of life,’ and so on; or when Mayor Giuliani and others explicitly promoted consumption as a way of showing support for America. None of that was perhaps terribly surprising, however disturbingly crass it might have been, and it was clear how necessary it was for the production of the forthcoming war economy in the USA. But the construction of such extremist platitudes (endlessly mediatised, to be sure) was surprisingly successful in effecting the elision of other kinds of speech in this nation where the idea of freedom of speech is otherwise canonised as a basic reflex ideology.

But (as de Tocqueville was always fond of repeating) this is also a nation where dissidents quickly become pariahs, strangers. The voices, the kinds and forms of speech that were silenced or elided in the aftermath of 9/11 are, of course, the dialectical underbelly to the consolidation of a fundamentalist sense of America, and to the production of an excessive cultural ideology of shared values. They go some way to constituting, for the sake of what I have to say here, a ‘we’–strangers both within the land and beyond it. This is not, of course, a consistent ‘we,’ readily located either beyond or within the borders of the USA, that could be called upon to love or hate or to lovehate some cohesive ‘you’ that until recently sat safely ensconced inside those same borders. It goes without saying that nobody within or without those boundaries can be called upon individually to comply seamlessly, or closely, or for very long, with a discourse of putative national identity. So in the end there is no living ‘you’ or ‘we’ here, but only a vast range of disparate and multifarious individuals, living in history and in their own histories, imperfectly coincident with the discursive structure of “America.”

And yet imaginary relations are powerful. The ‘you’ that senses itself as belonging to, or as owning, that fundamentalist discourse has for the time being asserted or constructed itself qua America; but it is of course unclear who ‘you’ really are. It has never been clear to what extent a ‘you’ could be constructed on the ground by way of ideological and mediatised pressure. It’s certainly unclear how much the mainstream surveys could tell us, conducted as they are through the familiar corporate, university, and media channels. And it would be grossly simplistic to try to ‘read’ the nation’s ideology through its mediatised messages and simply deduce that people believe (in) them.1 So the question of “who are ‘you’?” remains opaque in some way. At the same time, there is a discursive space where the everyday people that American subjects are coincides with the ‘you’ that is now being promulgated as fundamental America.

By the same token, there is also some kind of ‘we’ that derives from the fact that the identities and the everyday lives of so many outside the USA are bound up with the USA, with what the USA does and says, and with what it stands for and fights for. The ways in which ‘our’ identities are thus bound up is different for some than for others, obviously, and ‘we’ are all in any case different from one another. I share nothing significant, I think, with the perpetrators of the attacks on the Trade Towers or on the tourists in Bali. Some of us find ourselves actually inside the boundaries of the USA. That’s where I speak from right now, a British subject but one whose adult life has been shaped by being an alien inside America and thus to some large extent shaped by ‘you.’ And there are many in similar positions, some killed in the WTC attacks, others Muslims, others illegals, and so on. And there are, of course, also the internal ‘dissenters’–those who speak and find ways to be heard outside the channels that promote the construction of a ‘you.’ All of ‘us,’ then, inside and outside the borders of the US, are not ‘you’–a fact that ‘you’ make clear enough on a daily basis.

Dialectics

The ‘we’ is in fact a construct of the very ‘you’ I have just been talking about. This ‘we’ is generated through the power of the long, blank gaze emanating from the American republic that dispassionately, without empathy and certainly without love, refuses to recognise most of the features of the world laid out at its feet; a gaze that can acknowledge only that part of the world which is compliant and willing to act as a reservoir of narcissistic supply to the colossus.

Appropriately (in light of the events of 9/11, certainly, and probably before that) it is to the World Trade Center that Michel de Certeau pointed when he wanted to describe the ideological imposition that such a gaze exerts over the inhabitants of a space. In his famous essay, “Walking in the City” (1984), he begins his disquisition from the 110th floor of the World Trade Center, meditating on the ichnographic gaze that the tower (then) enabled, looking down over a city that becomes for him a “texturology” of extremes, “a gigantic rhetoric of excess in both expenditure and production” (p. 91). That gaze is for him essentially the exercise of a systematic power, or a structure in other words. Its subjects are the masses in the streets, all jerry-building their own relation to that structure as they bustle and move around the spaces of this excessive city.

De Certeau doesn’t say so, but one could suspect that he reads the tower and the view it provides by reference to the mystical eye sitting atop the pyramid on the US dollar bill–another trope in American fundamentalist discourse, the god who oversees ‘your’ beginnings. But at any rate, it’s hard not to be struck in his account by the way the relationship between the ichnographic and systematic gaze and the people below replicates a much more Hegelian dialectic: the master-slave dialectic. De Certeau’s sense of power relations never quite manages to rid itself of that Hegelian, or even Marxist, sense that the grids of power here are structural rather than untidily organic in some more Foucauldian sense. The gaze he interprets, then, is in that sense the colossal gaze of the master, surveying the slaves. It is the gaze of a ‘you’ for whom the real people, foraging below and finding their peculiar ways of living within the ichnographic grids that are established for them, can be seen only as subjects and judged only according to their conformity. And when the structure feels itself threatened by the agitation and even independence of its subjects below (as, in de Certeau’s analysis, the city structure begins to decay and its hold on the city dwellers is mitigated), it tries to gather them in again by way of narratives of catastrophe and panic (p. 96). One boon of the 9/11 attacks for the colossus was of course the opportunity to legitimise such narratives.

I cite de Certeau’s dense essay in part because it has been strangely absent from the many efforts of sociological and cultural studies to ‘re-imagine’ New York after 9/11; one might have expected a text as important as this one to have something to teach about the intersections of power and control in a modern city. But I cite it more for the reminder it offers–beginning from the same place, as it were, as the terrorist attacks themselves–of the way that the spatial structure of the city “serves as a totalizing and almost mythical landmark for socioeconomic and political strategies.” Part of the lesson of this conceit is the knowledge that in the end the city is “impossible to administer” because of the “contradictory movements that counterbalance and combine themselves outside the reach of panoptic power” (p. 95). De Certeau’s New York City and its power grid act as a reasonable metaphor for the way in which ‘our’ identities are variously but considerably construed in relation to ‘you.’ ‘Your’ identity is the master’s identity in which ‘we’ dialectically and necessarily find ‘our’ own image, ‘our’ reflection, and ‘our’ identity. The master’s identity is inflected to the solipsism of self-involvement and entitlement while emanating a haughty indifference to ‘us.’

The situation is familiar, then. In the places, histories, and structures that ‘we’ know about, but of which ‘you’ always contrive to be ignorant, it is a situation that is historically marked by the production of antagonism and ressentiment. What the master cannot see in the slave’s identity and practice is that ressentiment derives not from envy or covetousness but from a sense of injustice, a sense of being ignored, marginalised, disenfranchised, and un-differentiated. That sense of injustice can only be thickened in relation to an America whose extremist view of itself depends upon the very discourse of equality and democracy that the slave necessarily aspires to. Ressentiment is in that sense the ever-growing sense of horror that the master cannot live up to the very ideals he preaches to ‘us.’

It is a kind of ressentiment that Baudrillard, in his idiosyncratic (but nonetheless correct) way, installs at the heart of his short and profound analysis of the events of 9/11. Whatever else can be located in the way of motivation for the attacks, he suggests, they represented an uncomplicated form of ressentiment whose “acting-out is never very far away, the impulse to reject any system growing all the stronger as it approaches perfection or omnipotence” (2002, p. 7). Moreover, Baudrillard is equally clear about the problem with the ‘system’ that was being attacked: “It was the system itself which created the objective conditions for this retaliation. By seizing all the cards for itself, it forced the Other to change the rules” (p. 9). In a more prosaic manner, Noam Chomsky notes something similar in relation to the 9/11 attacks when he says that the attacks marked a form of conflict qualitatively different from what America had seen before, not so much because of the scale of the slaughter, but more simply because America itself was the target: “For the first time the guns have been directed the other way” (2001, pp. 11-12). Even in the craven American media there was a glimmer of understanding about what was happening; the word ‘blowback’ that floated around for a while could be understood as a euphemism for this new stage in a master/slave narrative.

As the climate in America since 9/11 has shown very clearly, such thoughts are considered unhelpful for the construction of a ‘you’ that would support a state of perpetual war, and noxious to the narratives of catastrophe and panic that have been put into play to round up the faithful. The notion, in any case, that ressentiment is not simply reaction, but rather a necessary component of the master’s identity and history, would always be hard to sell to a ‘you’ that narcissistically cleaves to “the impossible desire to be both omnipotent and blameless” (Rajagopal, p. 175). This is a nation, after all, that has been chronically hesitant to face up to ressentiment in its own history, and mostly able to ignore and elide the central antagonisms of class. This is and has been a self-avowed ‘classless’ society, unable therefore to acknowledge its own fundamental structure, its own fundamental(ist) economic process (except as a process whereby some of its subjects fail to emulate the ability of some of the others to take proper advantage of level playing fields and equality of opportunity). For many of ‘us’ it has been hard to comprehend how most Americans manage to remain ignorant about class and ignorant indeed of their own relationship to capital’s circuits of production and consumption. At least it’s hard to understand how such ignorance can survive the empirical realities of America today. The difficulty was by no means eased when it became known that families of 9/11 victims would be paid compensation according to their relatives’ value as labour, and this somehow seemed unexceptionable to ‘you.’ The blindness of the colossal gaze as it looks on America itself is replicated in the gaze outward as it looks on ‘us.’ This is a nation largely unseeing, then, and closed off to the very conditions of its own existence–a nation blindly staring past history itself.

“Events are the real dialectics of history,” Gramsci says, “decisive moments in the painful and bloody development of mankind” (p. 15), and 9/11, the only digitised date in world history, can be considered an event that could even yet be decisive. It would be tempting, of course, to say that once the ‘end of history’ had supposedly abolished all Hegelian dialectics–wherein ‘our’ identities would be bound up with ‘yours’ in an optical chiasmus of history–it was inevitable that history itself should somehow return to haunt such ignorance of historical conditions. Yet, from 9/11 and through the occupation of Iraq, America appears determined to remain ex-historical and seems still unable to recognise itself in the face of the Other–and that has always made, and will again make, magisterial killing all the easier.

Freedom, equality, democracy

If this dialectic of the ‘you’ and the ‘we’ can claim to represent anything about America’s outward constitution, it would necessarily find some dialectical counterpart in the inward constitution of this state. At the core of the fundamental notions of ‘the American way of life’ that ‘you’ rallied around after 9/11 and that allow ‘you’ to kill Iraqis in order to liberate them, there reside the freighted notions of freedom, equality and democracy that, more than a century and a half ago, de Tocqueville deployed as the central motifs of Democracy in America. De Tocqueville’s central project is hardly akin to my project here, but it wouldn’t be far-fetched to say that his work does in fact wage a particular kind of dialectical campaign. That is, Democracy in America plots the interaction of the terms freedom and equality in the context of the new American republic that he thought should be a model for Europe’s emerging democracies. His analysis of how freedom, equality, and democratic institutions interact and, indeed, interfere with one another still remains a touchstone for understanding the peculiar blindnesses that characterise America today. One of its main but largely under-appreciated advantages is that it makes clear that freedom, equality and democracy are by no means equivalent to each other–and one might even say, they are not even preconditions for one another, however much they have become synonyms in ‘your’ vernacular. While de Tocqueville openly admires the way in which America instantiates those concepts, he is endlessly fascinated by exactly the untidiness and uncertainty of their interplay. That interplay entails the brute realities of everyday life in the culture that is marked for him by a unique dialectic of civility and barbarity. In the final analysis de Tocqueville remains deeply ambivalent about the state of that dialectic in America, and thus remains unsure about the nature and future of the civil life of America.

Unsurprisingly, his ambivalence basically devolves into the chronic political problem of the relationship of the individual to the state. One of the effects of freedom and equality, he suggests, is the increasing ambit of state functions and an increasing willingness on the part of subjects to allow that widening of influence. This effect is severe enough to provoke de Tocqueville to rather extreme accounts of it. For example, his explanation of why ordinary citizens seem so fond of building numerous odd monuments to insignificant characters is that this is their response to the feeling that “individuals are very weak; but the state…is very strong” (p. 443). His anxiety about the strength of such feelings is apparent when he discusses the tendency of Americans to elect what he calls “tutelary” government: “They feel the need to be led and the wish to remain free” and they “leave their dependence [on the state] for a moment to indicate their master, and then reenter it” (p. 664).

This tendency derives, he says, from “equality of condition” in social life and it can lead to a dangerous concentration of political power–the only kind of despotism that young America had to fear. It would probably not be too scandalous to suggest that de Tocqueville’s fears had to a great degree been realised by the end of the 20th century. And the current climate, where the “tutelary” government threatens freedom in all kinds of ways in the name of a war that it says is not arguable, could only be chilling to de Tocqueville’s sense of the virtues of democracy. The (re)consolidation of this kind of tutelary power is figured for me in the colossal gaze that I’ve talked about, a gaze that construes a ‘you’ by way of narratives of catastrophe and panic while extending the power of its gaze across the globe by whatever means necessary.

But at the centre of this dialectic of freedom and equality, almost as their motor, de Tocqueville installs the idea that American subjects are finally “confined entirely within the solitude of their own heart,” that they are “apt to imagine that their whole destiny is in their own hands,” and that “the practice of Americans leads their minds to fixing the standards of judgement in themselves alone” (pp. 240-241). It’s true that for de Tocqueville this kind of inflection is not unmitigatedly bad: it is, after all, a condition of freedom itself. But nonetheless the question remains open for him: whether or not the quotidian and self-absorbed interest of the individual could ever be the operating principle for a successful nation. He is essentially asking whether the contractual and civil benefits of freedom can in the end outweigh the solipsistic and individualistic effects of equality. Or, to put the issue differently, he is asking about the consequences of allowing a certain kind of narcissism to outweigh any sense of the larger historical processes of the commonwealth–a foundational question, if ever there was one, in the history of the nation.2

Jean Baudrillard’s America, a kind of ‘updating’ of de Tocqueville at the end of the 20th century, is instructive for the way that it assumes that de Tocqueville’s questions are still alive (or at least, it assumes that Americans themselves have changed very little in almost two hundred years [p. 90]). Baudrillard is in agreement with de Tocqueville that the interplay of freedom and equality, and their relation to democratic institutions, is what lies at the heart of America’s uniqueness. He’s equally clear, however, that the 20th century has seen, not the maintenance of freedom (elsewhere he is critical of the way that tutelary power has led to regulation and not freedom [2002]), but the expansion of the cult of equality. What has happened since de Tocqueville is the “irrepressible development of equality, banality, and in-difference” (p. 89). In the dialectic of freedom and equality, such a cult necessarily diminishes the extent of freedom, and this is clearly a current that the present US regime is content to steer. But Baudrillard, like de Tocqueville before him, remains essentially enthralled by the “overall dynamism” in that process, despite its evident downside; it is, he says, “so exciting” (p. 89). And he identifies the drive to equality rather than freedom as the source of the peculiar energy of America. In a sense, he might well be right: certainly it is this “dynamism” that ‘we’ love, even as ‘we’ might resist and resent the master’s gaze upon which it battens.

Love and contradiction

The “dynamism” of American culture has been sold to ‘us’ as much as to ‘you’–perhaps even more determinedly in some ways. Brand America has been successfully advertised all around the world, in ways and places and to an extent that most Americans are probably largely unaware of. While Americans would probably have some consciousness of the reach of the corporate media, or of Hollywood, and necessarily some idea of the reach of other brands such as McDonald’s, most could not have much understanding of how the very idea of America has been sold and bought abroad. For many of ‘us,’ of course, it is the media and Hollywood that have provided the paradigmatic images and imaginaries of this dynamic America. It is in fact remarkable how many of the writers in the issue of Granta in which Doris Lessing appears mention something about the way those images took hold for them, in a process of induction that ‘we’ can be sure most Americans do not experience reciprocally.

The dynamism of that imaginary America is a multi-faceted thing, imbuing the totality of social relations and cultural and political practices. It begins, maybe, with a conveyed sense of the utter modernity of American life and praxis, a modernity aided and abetted by the vast array of technological means of both production and consumption. The unstinting determination of the culture to be mobile, to be constantly in communicative circuits and to be open day and night, along with the relative ease and efficiency of everyday life and the freedom and continuousness of movement, all combine to produce a sense of a culture that is endemically alive and happening. This is ‘our’ sense of an urban America, at least, with its endless array of choices and the promised excitement and eroticism of opportunity. The lure of that kind of urbanity was always inspissated by the ‘melting pot’ image of the USA, and is further emphasised in these days of multiculturalism and multi-ethnicity. Even beyond the urban centres, of which there are so many, this dynamic life can be taken for granted, and the realm of the consumer and the obsessive cheapness of that realm reflect the concomitant sense of a nation fully endowed with resources–material and human–and with a standard of living enjoyed by most people but achieved by very few outside the USA–even these days, and even in the other post-industrial democracies. ‘We’ can also see this vitality of everyday life readily reflected in the institutional structures of the USA: for instance, other ways in which ‘we’ are sold America include the arts, the sciences, sports, and the educational system, and ‘we’ derive from each of those realms the same sense of a nation on the move. As ‘our’ American friends might say, what’s not to like?

Beyond the realms of culture and everyday life, ‘we’ are also sold the idea of America as a progressive and open political system the like of which the world has never seen before. The notions that concern de Tocqueville so much are part of this, of course: freedom, equality, and democratic institutions are the backbone of ‘our’ political imaginary about the USA. In addition, ‘we’ are to understand America as the home of free speech, freedom of the press and media, and all the other crucial rights that are enshrined in the Constitution and the Bill of Rights. Most importantly, ‘we’ understand those rights to be a matter for perpetual discussion, fine-tuning, and elaboration in the context of an open framework of governance, legislation, and enforcement. Even though those processes are immensely complex, ‘we’ assume their openness and their efficacy. Even the American way of doing bureaucracy seems to ‘us’ relatively smooth, efficient and courteous as it does its best to emulate the customer-seeking practices of the service industries. And all this operates less in the service of freedom and more, as I’ve suggested, in the service of “equality of condition”–and ultimately in the service of a meritocratic way of life that even other democratic nations can’t emulate. And on a more abstract level, I was struck recently by the words of the outgoing Irish ambassador to the US, Sean O’Huiginn, who spoke of what he admired in the American character: the “real steel behind the veneer of a casual liberal society…the strength and dignity [and] good heartedness of the people” and the fact that America had “brought real respect to the rule of law.”3

These features, and I’m sure many others, are what go to constitute the incredibly complex woof and weave of ‘our’ imaginaries of the United States. The reality of each and any of them, and necessarily of the totality, is evidently more problematic. The words of another departing visitor are telling: “The religiosity, the prohibitionist instincts, the strange sense of social order you get in a country that has successfully outlawed jaywalking, the gluttony, the workaholism, the bureaucratic inflexibility, the paranoia and the national weakness for ill-informed solipsism have all seemed very foreign.”4 But still those imaginaries are nonetheless part of ‘our’ relation to America–sufficiently so that in the 9/11 aftermath the question so often asked by Americans, “Why do they hate us?”, seemed to me to miss the point quite badly. That is, insofar as the ‘they’ to whom the question refers is a construct similar to the ‘we’ I’ve been talking about, ‘we’ don’t hate you, but rather lovehate you.

Nor is it a matter, as so much American public discourse insists, of ‘our’ envying or being jealous of America. Indeed, it is another disturbing symptom of the narcissistic colossus to constantly imagine that everyone else is jealous or envious. Rather, ‘we’ are caught in the very contradictions in which the master is caught. For every one of the features that constitute our imaginary of dynamic America, we find its underbelly; or we find the other side of a dialectic–the attenuation of freedom in the indifferentiation of equality, or the great barbarity at the heart of a prized civility, for instance. Equally, accompanying all of the achievements installed in this great imaginary of America, there is a negative side. For instance, while on the one hand there is the dynamic proliferation of technologies of communication and mobility, there is on the other hand the militarism that gave birth to much of the technology, and an imperious thirst for the oil and energy that drive it. And within the movement of that dialectic–one, it should be said, whose pre-eminence in the functioning of America has been confirmed once more since 9/11–lies the characteristic forgetting and ignorance that subvents the imaginary. That is, such technologies come to be seen only as naturalised products of an ex-historical process, and their rootedness in the processes of capital’s exploitation of labour is more or less simply elided. And to go further, for all the communicative ease and freedom of movement there is the extraordinary ecological damage caused by the travel system. And yet this cost is also largely ignored–by government and people alike–even while the tension between capital accumulation and ecological damage comes to seem more and more the central contradiction of American capitalism today.5

One could easily go on: the point is that from every part of the dynamic imaginary of America an easy contradiction flows. Despite, for example, the supposed respect for the rule of law, American citizens experience every day what Baudrillard rightly calls “autistic and reactionary violence” (1988, p. 45); and the ideology of the rule of law does not prevent the US from opposing the World Court, regularly breaking treaties, picking and choosing which UN resolutions need to be enforced, or illegally invading and occupying another sovereign nation. The imaginary of America, then, that ‘we’ are sold–and which I’m sure ‘you’ believe–is caught up in these kinds of contradictions–contradictions that both enable it and produce its progressive realities. These contradictions in the end constitute the very conditions of this capitalism that is fundamentalist in its practice and ideologies.

So, ‘our’ love for America, either for its symbols and concepts or for its realities, cannot amount to some sort of corrosive jealousy or envy. It is considerably more complex and overdetermined than that. It is, to be sure, partly a coerced love, as we stand structurally positioned to feed the narcissism of the master. And it is in part a genuine admiration for what I’m calling, for shorthand, the “dynamism” of America. But it is a love and admiration shot through with ressentiment, and in that sense it is ‘about’ American economic, political, and military power and the blind regard that those things treat ‘us’ to. It is the coincidence of the contradictions within America’s extremist capitalism, the non-seeing gaze of the master, and ‘our’ identification with and ressentiment towards America that I’m trying to get at here. Where those things meet and interfere is the locus of ‘our’ ambivalence towards ‘you,’ to be sure, but also the locus of ‘your’ own confusion and ignorance about ‘us.’ But the ‘yea or nay’ positivist mode of American culture will not often countenance the representation of these complexities; they just become added to the pile of things that cannot be said, especially in times of catastrophe and panic.

What is not allowed to be said

It’s easy enough to list the kinds of things that could not be said or mentioned after 9/11, or enumerate the sorts of speech that were disallowed, submerged, or simply ignored as the narratives of panic and catastrophe set in to re-order ‘you’ and begin the by now lengthy process of attenuating freedom.

What was not allowed to be said or mentioned: President Bush’s disappearance or absence on the morning of the attacks; contradictions in the incoming news reports about not only the terrorist aeroplanes but also any putative defensive ones (it’s still possible to be called a conspiracy theorist for wondering about the deployment of US warplanes that day, as Gore Vidal discovered when he published such questions in a British newspaper);6 the idea that the attacks would never have happened if Bush had not become president; and so on. Questions like those will, one assumes, go unaddressed by the governmental inquiry into 9/11, especially many months later when the complexities of 9/11 have been obliterated by the first stages of the perpetual war that Bush promised. In addition, all kinds of assaults were made on people who had dared say something “off-message”: comedians lost their jobs for saying that the terrorists were not cowards, as Bush had said they were, if they were willing to give up their lives; college presidents and reputable academics were charged with being the weak link in America’s response to the attacks; and many other, varied incidents of the sort, including physical attacks on Muslims simply for being Muslim. And in the many months after the attacks, many questions and issues are still passed over in silence by the media and therefore do not come to figure in the construction of a free dialogue about ‘your’ response to the event.

Many of ‘us’ were simply silenced by the solipsistic “grief” (one might like to have that word reserved for more private and intimate relationships) and the extreme shock of Americans around us. David Harvey talks about how impossible it was to raise a critical voice about the role bond traders and their ilk in the towers might have had in the creation and perpetuation of global social inequality (p. 59). Noam Chomsky was rounded upon by all and sundry for suggesting, in the way of Malcolm X, that the chickens had come home to roost. The last thing that could be suggested was the idea that, to put it bluntly, these attacks were not unprovoked and anybody who thought there could be a logic to them beyond their simple evilness was subjected to the treatment Lessing describes at the head of this piece. The bafflement that so many of ‘you’ expressed at the idea that someone could do this deed, and further that not all of ‘us’ were necessarily so shocked by it, was more than just the emotional reaction of the moment.

This was an entirely predictable inflection of a familiar American extremism, soon hardening into a defiant–and often reactionary–refusal to consider any response other than the ones ‘you’ were being offered by political and civic leaders and the media. Empirical and material, political and economic realities were left aside, ignored, not even argued against, but simply considered irrelevant and even insulting to the needs of a “grief” that suddenly became national–or rather, that suddenly found a cohesive ‘you.’ And that “grief” turned quickly into a kind of sentimentality or what Wallace Stevens might have called a failure of feeling. But much more, it was a failure, in the end, of historical intelligence. A seamless belief that America can do no wrong and a hallowed and defiant ignorance about history constitute no kind of response to an essentially political event. Even when the worst kinds of tragedy strike, an inability to take any kind of responsibility or feel any kind of guilt is no more than a form of narcissistic extremism in and of itself.7

Symbols

On 9/11 there was initially some media talk about how the twin towers might have been chosen for destruction because of their function as symbols of American capitalist power in the age of globalisation. David Harvey suggests that in fact it was only in the non-American media that such an understanding was made available, and that the American media talked instead about the towers simply as symbols of American values, freedom, or the American way of life (p. 57). My memory, though, is that the primary American media, in the first blush of horrified reaction, did indeed talk about the towers as symbols of economic might, and about the Pentagon as a symbol of military power. But like many other things that could not be said, or could no longer be said at that horrible time, these notions were quickly elided. Strangely, the Pentagon attack soon became so un-symbolic as to be almost ignored. The twin towers in New York then became the centre of attention, perhaps because they were easier to parlay into symbols of generalised American values than the dark Pentagon, and because the miserable deaths of all those civilians were more easily identifiable than those of the smaller number of military workers in Washington.

This was a remarkable instance of the way an official line can silently, almost magically, gel in the media. But more importantly, it is exemplary of the kind of ideological movement that I’ve been trying to talk about in this essay: a movement of obfuscation, essentially, whereby even the simplest structural and economic realities of America’s condition are displaced from discourse. As Harvey suggests, the attacks could hardly be mistaken for anything but a direct assault on the circulatory heart of financial capital: “Capital, Marx never tired of emphasizing, is a process of circulation….Cut the circulation process for even a day or two, and severe damage is done…What bin Laden’s strike did so brilliantly was [to hit] hard at the symbolic center of the system and expose its vulnerability” (pp. 64-65).

The twin towers were a remarkable and egregious architectural entity, perfectly capable of bearing all kinds of allegorical reading. But there surely can be no doubt that they were indeed a crucial “symbolic center” of the processes through which global capitalism exercises itself. Such a reading of their symbolism is more telling than Wallerstein’s metaphorical understanding that “they signalled technological achievement; they signalled a beacon to the world” (2001). And it is perhaps also more telling than (though closer to) Baudrillard’s understanding of them: “Allergy to any definitive order, to any definitive power is–happily–universal, and the two towers of the World Trade Center were perfect embodiments, in their very twinness, of that definitive order” (2002, p. 6). It is certainly an understanding that not only trumps, but exposes the very structure of, the narcissistic reading of them as symbols of ‘your’ values and ‘your’ freedom.

That narcissism was, however, already there to be read in these twin towers that stared blankly at each other, catching their own reflections in an endless relay. They were, that is, not only the vulnerable and uneasy nerve-centres of the process of capital circulation and accumulation; they were also massive hubristic tributes to the self-reflecting narcissism they served. Perhaps it was something about their arrogant yet blank, unsympathetic yet entitled solipsism that suggested them as targets. The attacks at the very least suggested that someone out there was fully aware of the way that the narcissist’s identity and the identity of those the narcissist overlooks are historically bound together. It’s harder to discern whether those people would have known, too, that the narcissist is not easy to cure, however often targeted; or whether they predicted or could have predicted, and perhaps even desired, the normative retaliatory rage that their assault would provoke.

What ‘we’ know, however, is that ‘we’ cannot forever be the sufficient suppliers of the love that the narcissist finds so necessary. Indeed, ‘we’ know that it is part of the narcissistic disorder to believe that ‘we’ should be able to. So long as the disorder is rampant, ‘we’ are, in fact, under an ethical obligation not to be such a supplier. In that sense (and contrary to all the post-9/11 squealing about how ‘we’ should not be anti-American), ‘we’ are obliged to remind the narcissist of the need to develop “the moral realism that makes it possible for [you] to come to terms with existential constraints on [your] power and freedom” (Lasch, p. 249).

But Christopher Lasch’s final words in a retrospective look at his famous work, The Culture of Narcissism, are not really quite enough. This would be to leave the matter at the ethical level, hoping for some kind of moral conversion–and this is not an auspicious hope where the narcissistic master is concerned. At the current moment, when we all–‘we’ and ‘you’–have seen the first retaliation of the colossus and face the prospect of extraordinary violence on a world scale, too much discussion and commentary (both from the right and the left) remains at the moral or ethical levels. This catastrophic event and the perpetual war that has followed it have obviously, in that sense, produced an obfuscation of the political and economic history that surrounds them and of which they are part. Such obfuscation serves only the master and does nothing to satisfy the legitimate ressentiment of a world laid out at the master’s feet. At the very least, in the current conjuncture, ‘we all’ need to understand that the fundamentalisms and extremisms that the master promulgates, and to which ‘you’ are in thrall, are not simply moral or ethical, or even in any sense discretely political; they are just as much economic, and it is that aspect of them that is covered over by the narcissistic symptoms of a nation that speaks through and as ‘you.’

References

Baudrillard, J. (1988), America (Verso).

Baudrillard, J. (2002), The Spirit of Terrorism (Verso).

Chomsky, N. (2001), 9-11 (Seven Stories Press).

De Certeau, M. (1984), The Practice of Everyday Life (U. California).

De Tocqueville, A. (2000), Democracy in America (U. Chicago).

Gramsci, A. (1990), Selections from Political Writings, 1921-1926 (U. Minnesota).

Harvey, D. (2002), “Cracks in the Edifice of the Empire State,” in Sorkin and Zukin, eds., After the World Trade Center (Routledge), 57-68.

Lasch, C. (1991), The Culture of Narcissism (Norton).

Lessing, D. (2002), Untitled article, Granta 77 (spring 2002), 53-4.

Rajagopal, A. (2002), “Living in a State of Emergency,” Television and New Media, 3:2, 173 ff.

Wallerstein, I. (2001), “America and the World: The Twin Towers as Metaphor,” http://www.ssrc.org/sept11/essays/wallerstein.htm

Endnotes

  1. This is the error of otherwise worthy work like Sardar, Z. & M.W. Davies (2002), Why Do People Hate America? (Icon Books).
  2. A classic, but largely ignored, statement of American history in these terms is William Appleman Williams (1961), The Contours of American History (World Publishing Company).
  3. “Departing Irishman Mulls ‘Glory of America’,” Washington Post 12 July 2002.
  4. Matthew Engel, “Travels with a trampoline,” The Guardian 3 June 2003.
  5. See Ellen Wood (2002), “Contradictions: Only in Capitalism?” in Socialist Register 2002 (Monthly Review Press).
  6. Gore Vidal, “The Enemy Within,” The Observer 27 October 2002.
  7. A longer version of this article–forthcoming in Ventura, P. (ed.), Circulations: ‘America’ and Globalization and planned to be part of my forthcoming Primitive America (U. Minnesota)–elaborates on the concept of narcissism that I have been deploying here. I distinguish my use from that of Christopher Lasch in The Culture of Narcissism in order to be able to describe a narcissistic (and primitive) structuration of America, rather than imputing narcissistic disorders to individuals or, for that matter, to classes.

Expert Economic Advice and the Subalternity of Knowledge: Reflections on the Recent Argentine Crisis

Ricardo D. Salvatore is professor of modern history at Universidad Torcuato di Tella in Buenos Aires. He is author of Wandering Paysanos: State Order and Subaltern Experience in Buenos Aires During the Rosas Era (1820-1860) and coeditor of Crime and Punishment in Latin America: Law and Society since Late Colonial Times and Close Encounters of Empire: Writing the Cultural History of U.S.-Latin American Relations, all published by Duke University Press.

Act 1. Taking the Master’s Speech as Proper

President Duhalde and Minister Remes Lenicov went to Washington and Monterrey to speak with the key figures in the US Treasury, the IMF and the World Bank. They carried with them a message: that with hyperinflation impending, it was necessary to check the devaluation of the peso; that, to strengthen the reserves of the central bank and avoid the collapse of financial institutions, IMF funding was needed; that, for the moment, provincial and federal deficits were difficult to control. After some preliminary talks, their arguments crashed against a wall of technical reason. All of them were rejected as outmoded views or erroneous reasoning. And they listened to a new and unexpected set of arguments: the Fund and the Treasury would accept no more false promises from Argentina. In the view of their experts, the Argentine representatives presented no credible and “sustainable plan” for macro-economic stability and growth. Instead of promises of financial assistance, they issued a warning: abandoning free-market reforms and altering contracts and commitments would lead Argentina onto an isolationist path that would produce more damage to its people. Half-persuaded by these strong arguments, President Duhalde and Minister Remes Lenicov returned to Argentina and started to speak with the voice of the IMF and the US Treasury. Hence, they began to spread the new gospel: a free-floating exchange rate, inflation targeting, and macro-economic consistency. Back in Buenos Aires, President Duhalde explained to the TV audience: sometimes a father, in order to keep feeding his family, has to “bend his head” and accept, against his best judgement, the truth of the Other (to take its “bitter medicine”). This subaltern gesture, of course, contradicted his prior statements, because the prescribed policies fundamentally questioned the validity of the belief system that united the main partners of the governing alliance (Peronistas and Radicales).

After a long negotiation that seemed to go nowhere, Argentine officials found out that the rhetoric of the knowledgeable empire was harsh. Conservative voices were beginning to argue that neither the IMF nor the US should continue to pour money into a corrupt and unviable economy. Secretary of the Treasury Paul O’Neill spoke of Argentina as a land of waste where the savings of American “plumbers and carpenters” could be easily washed away by wrong policy decisions.1 The suspicion of the American worker was the basis of the harshness of the new US policy towards Unwisely Overindebted Countries (UOCs). In this conservative viewpoint, popular common sense taught government to be wary of international bankers and their advisers, who would willingly waste other people’s money in order to protect their own interests.

Act 2. Two Ideologies In Conflict

A long political and ideological conflict brought about the fall of President De la Rua on December 20, 2001. With him fell the so-called modelo económico implemented first by Minister Domingo Cavallo during the Menem administration (free convertibility between the peso and the dollar, free international capital mobility, government without discretionary monetary policy, privatization of government enterprises, opening of the economy to foreign competition). The defeat of De la Rua-Cavallo was read as the end of the hegemony of a way of understanding economic policy and its effect on economic development (the so-called “Washington Consensus” that in Argentina translated into a “convertibility consensus”). The politico-ideological terrain has long been divided between supporters of economic integration and free-market policies and supporters of regulation, protectionism, state-led development, and re-distributionist policies. De la Rua and Cavallo tried to defend the former model until the end, while their successors (Rodriguez Saa for a week, and Duhalde) promised policies that seemed to satisfy the expectations of the latter camp. Thus, the events of December 19-20 anticipated the re-emergence and potential hegemony of what, for simplicity, I shall call “nationalist-populist reason” at the expense of “neo-liberal reason” and the Washington Consensus.

Rodríguez Saa’s announcement of the Argentine default, his promise of one million new jobs, and his televised embrace with union leaders gave Argentines the impression that a great REVERSAL was under way. The same could be said of President Duhalde. His devaluation in early January, his promises to distribute massive unemployment compensation, his announcement of a new “productivist alliance” against the interests of speculators, foreign banks and privatized utilities, and the interventionist policies to “compensate” for the effects of this devaluation all created the impression that things would turn around. That, at last, the people had defeated the modelo económico and its rationale and that, consequently, it was time for a redistribution of economic gains and for strong government. The new productivist alliance–it seemed–would increase employment, re-industrialize the country, and hold in check the rapacity of foreign companies. If this was so, the winners of the past “neo-liberal age” would have to accept losses for the benefit of the poor, the unemployed, and the economic renaissance of the interior provinces. As it has turned out (so far), this public expectation was disappointed by the sudden “conversion” of the country’s politicians to the refurbished Washington Consensus.

Act 3. New Faces on TV

Momentarily at least, mainstream economic experts (most of them trained in top US universities) have disappeared from the TV screens. Although they continue to be invited to join discussion panels on news-and-commentary TV programs, many of those associated with the words “liberal,” “neo-liberal” or “ortodoxo” refuse to participate in these programs. Their space has been occupied by heterodox economists–some of them neo-Keynesian, others simply expressing the views of the unions or industries they represent, others speaking for opposition and leftist parties, still others carrying the credential of having served in the cabinets of governments of the 1980s or before. Unlike the US-trained experts, these other economists studied in local universities and display a technical expertise and common wisdom that some might find insufficient or old-fashioned. Some of these economists are making their first appearances on TV programs, while others are re-appearing after decades of ostracism and neglect. Although they represent a diversity of perspectives, they are in agreement that the modelo económico of the 1990s is over and that their positions (income redistribution, active or expansionary policies, more regulation, taxes on privatized utilities, price controls, and even nationalization of foreign-owned banks and oil companies) need to be heard. Their greater popularity speaks of a displacement in public discourse towards positions closer to what I have called “nationalist-populist reason.” The economists now appearing on TV screens are putting into debate whether Argentina should listen to the advice of the IMF and the US Treasury; whether it is worth giving up so much (policy autonomy) for so little (fresh loans and tons of compelling advice).

This change in the type of economist now exposed to TV audiences speaks of the crisis of legitimacy of the “Washington Consensus” and its Argentine variant, the “convertibility consensus.” In part the retrenchment of orthodox or neo-liberal economists (which, I repeat, is only temporary) is founded upon solid reasons: the lack of reception in government circles for their advice, the evident failure of the modelo to generate sustained economic growth, and a bit of fear. Some economists (Roberto Aleman) have been assaulted by small groups of demonstrators, others (Eduardo Escasani) have been subjected to public escraches (shaming protests), and still others have been advised to remain uncritical. And, as we all know, Domingo Cavallo is now in prison, arrested on charges of contraband. This Harvard-trained economist may be unfairly paying for felonies that he did not commit, but in the eyes of many of his countrymen he is guilty of the increased unemployment, poverty, and inequality that resulted from the application of policies that were part and parcel of the Washington Consensus.

Act 4. The Washington Consensus Reconsidered

To understand the meaning of this crisis of legitimacy for the US-trained economist, we must look at the “policy community” in the US and at its changing consensus. Since the Asian crisis (summer of 1997), the Washington Consensus and its disseminating agencies (the IMF and the World Bank) have faced severe criticism.2 Leading economists such as S. Fischer, A. Krueger, J. Stiglitz, and R. Barro have begun to openly criticize IMF policies, calling for major reforms if not its abolition. Economists and financial experts have argued that opening commodities and financial markets simultaneously was bound to generate explosive financial crises. That macroeconomic stability was not enough to guarantee the stable and sustained growth of “emerging markets.” That the IMF (with its enhanced credit facilities, high-premium loans, and insistence on fiscal austerity) had pushed countries into debt traps that led to financial crises and severe depressions. Attacks from left and right have led experts to re-examine the role that the IMF must play in the new global economy. Key experts have argued that large IMF loans to unreliable countries only encourage irresponsible private lending. Others have suggested that, in the future, the IMF should play a much more limited role in the world economy, restricted to “crisis prevention” and “crisis management”; that is, to collecting data, giving warnings, and providing policy advice (Barro and Reed 2002).

Though not the only critic, Joseph Stiglitz has perhaps been the most vociferous (North 2000; Stiglitz 2001). Since the 1997-98 Asian crisis, he has been exposing IMF policies as fuel added to the fire, as the main reason for economic disaster and social distress. Instead of stimulating economies with increased social expenditures and an expansion of credit, the IMF had consistently recommended budget cut-backs, monetary restriction, and further de-regulation. These policies plunged economies that had a chance of rapid recovery into deep and prolonged recessions. His book Globalization and Its Discontents (2002) has circulated widely among critics of globalization and of international financial institutions. Translated into Spanish in the same year, it has been enthusiastically received in Argentina by supporters of industrial protection, Keynesian economic policies, and the reconstruction of a “national” and “popular” economy. Radio and TV programs have familiarized the Argentine public with Stiglitz’s criticism of the IMF, emphasizing his authority as a Nobel-prize winner. Readers, of course, take from a text what strengthens their own views. Thus, Stiglitz has been locally portrayed as a detractor of the IMF and as a defender of “active industrial policies;” others have gone further, presenting him as a crusader against free-market ideology. In actuality, his positions have been more conservative: true, he has accused the IMF of “lack of intellectual coherence” and has called for major limitations on its role, but his understanding of the world economy stops well short of the “nationalist-populist” consensus.

Over time, this criticism has eroded the basis of the Washington Consensus. Though many US-trained experts still believe that macroeconomic stability and free-market reforms should not be abandoned, many now see that these two conditions are not sufficient. The consensus has shifted towards the terrain of institutions. Scholars and policy makers now agree that, in addition to free markets and macroeconomic stability, emerging economies need good government, reliable financial systems, and exemplary judiciaries. In particular, given the frequency of external shocks, countries need good bankruptcy laws and (some recommend) some degree of control over short-term speculative capital. Some ambiguities of the earlier consensus (between alternative exchange regimes) have been resolved: now only flexible exchange regimes are considered acceptable. The experiences of Mexico, Asia, and Brazil have given the Fund grounds for arguing that exchange rate devaluations can prove successful. And arguments about the inefficacy of IMF loans (“giving fish to the sharks,” as Stiglitz has put it, or, in the conservative variety, the “moral hazard” argument) have gained widespread support in the policy community.

Act 5. US Economists Weigh In on the Argentine Crisis

Expert economic opinion in the US is divided regarding the current Argentine crisis. There are those who blame Argentine policy-makers and politicians for the economic and financial collapse: among them, Professor Martin Feldstein at Harvard, Professor Gary Becker at Chicago, and Professor Charles Calomiris at Columbia. On the other side are those who fault the IMF for the Argentine crisis: Professors Paul Krugman, Mark Weisbrot, and Arthur MacEwan, among others. Those who blame the IMF focus on its unhelpful or wrong economic advice and its ill-timed financial assistance; the IMF’s sins are limited to not telling Argentina to abandon its fixed exchange system sooner, or to advising orthodox austerity measures in the middle of a depression. Those who place the blame on Argentine policy-makers (and minimize the IMF’s responsibility) point to the transformation of fiscal deficits into mounting public debt and to the failure to complete the structural reforms needed to make the currency board system work. Their interventions tend to distinguish between economic liberalization (not to be blamed) and ill-advised monetary and fiscal policy (guilty as charged). The positions in the debate are two-sided: either the patient did not follow the doctor’s prescription completely, or the doctor, willingly or out of ignorance, gave the patient the wrong medicine.

In either of these two extreme positions, Argentina stands in a subaltern position vis-à-vis expert (economic) knowledge. Its policy-makers are conceived, in both perspectives, as dependent upon the authorized word of the IMF or the “policy analysis” community. Or, put in other terms, Argentina always appears as an “experiment” or an “example” (some use the phrase “textbook case”) of a theory or policy paradigm that is under discussion. Argentina, whether the “poster child” of neo-liberal reforms or the “basket case” of IMF folly, stands always as a passive object of knowledge, providing mostly data to feed a vigorous academic and policy debate that goes on elsewhere. Leading scholars and policy-makers in Argentina can argue against the current, but can hardly avoid the protocols of authorization governing “voice” in the US academic and policy community. Yes, after a persistent effort, Minister Cavallo persuaded many of his peers in the United States that convertibility had been a success and could be sustained over time. But in order to do this, he had to publish his views in the most traditional and respected journal of the profession, the American Economic Review (Cavallo and Cottani 1997).3 Cavallo’s arguments, to the extent that he used the master’s idiom (he spoke of the strength of an internationalized banking system, of the soundness of macro-economic fundamentals, and of the resilience of the national economy to global crises), were considered valid, though not completely persuasive.

Few in the US tribunal of expert economic opinion connect Argentina’s failure with economic knowledge itself, or with the way this knowledge translates into authoritative economic advice. Few of these economic commentators are aware of the impact of their own universities’ teaching upon the policy-makers of developing economies.4 After all, their students go on to occupy key positions of responsibility as ministers of finance or presidents of central banks. Their students are the ones who generally sit on the other side of the table when the IMF experts dispense advice, and they are the ones who communicate to the population the “bitter medicine” prescribed by the money doctors. Profound disagreement within the US academy about the causes of the Argentine crisis stands in sharp contrast to the single-voiced advice dispensed by IMF experts.

After the crisis of November 2001-March 2002, Argentina is back at the center of expert attention as a “leading case.” Why did it fail? What forces triggered the collapse? Has confidence in free-market policies been severely damaged by this event? With the images of supermarket riots (“saqueos“), middle-class citizens banging pots and pans (“cacerolazos“), and youths throwing stones at banks in Buenos Aires, US economists return to the laboratory of Economic Science to re-think and re-assert their preconceptions. The essay that Martin Feldstein wrote for Foreign Affairs immediately after these events is symptomatic. Here we encounter not the doubt of the researcher but the certainty of the judge. The case (the financial collapse of Argentina) calls for an attribution of guilt. Without much supporting evidence, Feldstein rushes to indict two suspects: the overvalued peso and excessive foreign debt (Feldstein 2002). If this were so, then liberalizing policies are not to blame; only the Argentine government is to blame, for it promised something that became impossible to deliver: convertibility at a fixed exchange rate. In the end, the “case” (Argentina) helps to reinforce orthodoxy. Had the government been able to lower real wages (twisting labor’s arms) or to maintain the golden rule of convertible regimes (reducing the money supply and raising interest rates as reserves flowed out) so as to discipline the economy into cost-cutting “competitiveness,” Argentina would have been able to maintain convertibility.
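That “golden rule” admits a stylized formulation (the notation here is mine, not Feldstein’s). Under convertibility, the monetary base had to be fully backed by dollar reserves at the fixed parity:

\[ M \le e\,R, \qquad e = 1 \ \text{peso per dollar}, \]

where \(M\) is the monetary base and \(R\) the Central Bank’s international reserves. When capital flees and \(R\) falls, \(M\) must contract with it; interest rates rise, and domestic prices and wages are squeezed until “competitiveness” returns. It is precisely this deflationary discipline that, in Feldstein’s account, the Argentine government lacked the political strength to enforce.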

What failed, in Feldstein’s opinion, was the political ability and vision of the Argentine government to adapt national reality to the conditions of the world market. Argentine reality proved stubborn. Even with unemployment rates of 15 percent, real wages did not decline. After the Brazilian devaluation, Argentine wages became unrealistically high. But instead of accepting the painful reality (cutting wages to world-market levels), Argentine politicians opted for the easy road: increasing indebtedness. Feldstein draws three “lessons” from the Argentine experience: 1) that a fixed exchange system combined with a currency board is a bad idea; 2) that substantial foreign borrowing is unsustainable in the long run (it is a high-risk development strategy); and 3) that free-market policies continue to be desirable, in spite of this sad experience. If in the 1980s Argentina stood as a textbook example of what could go wrong in the land of “financial repression,” closed economies, and populist policies, in 2002 the country was again an example, now of the failure of currency boards in countries with too much debt and inflexible labor laws.

On the other side of the argument there are also prominent public figures and renowned scholars. Best known among the critics of the IMF is Professor Joseph Stiglitz, but other economists have joined this position, arguing that the “Argentine case” is further proof of the failure of IMF policies. University of Massachusetts Professor Arthur MacEwan is one of them. In his analysis (MacEwan 2002), he presents Argentina as a victim of ill-conceived policies, policies that insisted on increasing debt to sustain an outmoded system (the currency board). Bad economic advice comes not from good economic science but from interest: the IMF, trying to defend powerful US corporations and global banking firms, continued to funnel loans to an already bankrupt economy. To Lance Taylor, a monetary policy expert and economic historian, the fall of Argentina must be viewed as a consequence of wrong policy choices (Taylor 2001). The fixed exchange rate was good for taming inflationary expectations, but disastrous as a development strategy. In the end, the Argentine case demonstrates the foolishness of opening commodity and capital markets at the same time. Economic growth becomes dependent upon the flow of capital. If successful, the inflow of fresh capital creates price inflation, and this generates an overvalued exchange rate that conspires against local industry. If unsuccessful, capital outflows create expectations of devaluation.
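Taylor’s mechanism can be restated in the standard shorthand of the real exchange rate (a gloss of mine, not a formula used in these sources):

\[ \mathit{RER} = \frac{e\,P^{*}}{P}, \]

where \(e\) is the nominal exchange rate, \(P^{*}\) the foreign price level, and \(P\) the domestic price level. With \(e\) fixed and \(P^{*}\) roughly constant, capital inflows that push up \(P\) lower the \(\mathit{RER}\), a real appreciation that prices local industry out of world markets; when the inflows reverse, the fixed \(e\) can be defended only by losing reserves, and expectations of devaluation feed on themselves.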

Curiously, in the end, this liberal view coincides with the orthodox one. There is nothing wrong with economic expertise, only with the institutions that represent powerful interests. Liberal and conservative opinion clash over the desirability of unregulated markets, but they are at one in defending the citadel of knowledge. And, more important, in either perspective Argentina remains a “case,” an experimental site of policy and knowledge.

How does economic “doxa” pass for knowledge? How is economic theory able to sustain its orthodox core under a storm of counter-evidence? What makes an opinion emanating from Harvard a dominant view? Why is Argentina always a subject of study and an object of advice, and never a producer of knowledge? These are questions that this essay cannot attempt to answer. Nevertheless, it is useful to reflect on the unevenness of this situation. Certain locations (the subaltern ones) provide the data for experiments in policy, while other locations (the dominant ones) provide the theory to understand the success or failure of these experiments. Here lies a condition of subalternity that cannot be solved by improving the quality of the national administration with foreign-trained economists, for the last word remains with those who produce the knowledge.

Act 6. Imperial Economics

Informal empires can treat their areas of influence with more or less duress, with more or less affection. It depends upon international political conditions and upon the vision and convictions of a US president (or his administration). Thus, when Paul O’Neill became US Secretary of the Treasury (January 2001), it seemed that the empire would get tougher with those countries that followed “bad policies.” Comply with the economic advice the IMF and the policy community give you, or else suffer isolation: this seemed to be the image the new administration wanted to project (“Tough-Love”). This, combined with repeated statements by high-level IMF officials that Argentine negotiators were unable to produce a plan (meaning that their technical experts could not draw up a consistent macro-economic program), created the impression that the time of “carnal relations” between the US and Argentina was over. From then on, distrust, derision, and distance would characterize the relations between the two countries. This was, to a certain degree, to be expected. Few could have anticipated, nonetheless, that misunderstandings about economic policy and economic goals would lie at the basis of this imperial duress, and, more importantly, that the hegemony of economic discourse would be the source of contention.

Perhaps part of Argentina’s neo-colonial condition lies in the fact that the country has been taken as a site of experiment for economic policy. In the early 1990s, Argentina pioneered neo-liberal reforms in the region, becoming the “poster child” of free markets, privatization, and macro-economic stability. Since the Asian crisis, Argentina has turned into a “basket case” of poor fiscal management, rising country risk, and bad international loans. Curiously, few have examined the location from which the judgments of “success” and “failure” emanate: US universities, think tanks, and multilateral credit institutions, the same places where tons of economic and financial advice are produced daily. Local economists engage in the debates proposed by these centers of knowledge and policy, shifting their opinions about policy and doctrinal trends gradually or suddenly. They are not equal contributors to the world of knowledge and policy: like President Duhalde, they abide by the authorized word of US experts. Otherwise, they are displaced into the territory of the “dysfunctional.”

Is economics an imperial science? We know that the discipline has tried to colonize the other social and human sciences with its maximizing principles and its implicit rationality. But economic science could be dubbed “imperial” in a more fundamental sense. New work on international finance and economics deals with the question of global governance. Larry Summers, the current president of Harvard University, is an expert on this subject. He has been arguing that the US is the first non-imperialist superpower, and that US primacy in the field of economics will assure US leadership in the management of the global economy.

In 1999, Summers celebrated the globalization of US economics. US-trained economists were taking over key positions in the governments and central banks of emerging economies (Summers 1999): Berkeley-trained economists in Indonesia, Chicago alumni in Chile, MIT and Harvard graduates in Mexico and Argentina, and so on. These economists were spreading the knowledge of how to manage national economies in a globalized environment and providing the rationale for the transformation under way. They were called on to assume a central role in completing the globalization process in the terrain where corporations alone could not make progress: the reform of government. Only in this terrain could the imperatives of greater international integration be made compatible with the demands of national communities.

US-trained economists would be the ones facing the challenges of a globalized world: they would have to find innovative solutions to the problems of financial volatility and the cross-country contagion of financial crises. Since the late 1990s, Summers has been arguing for a new “international financial architecture” for the world economy. His vision of a pacified global economy is one in which experts dominate and help the rest of humanity cushion the effects of inevitable “market failures” (Summers 1999). What role does the United States play in this imagined world scenario? To Summers, the US is the “indispensable nation,” the only power that can lead a movement towards international economic integration without causing a major disruption (restructuring) in the nation-state system.5 How could this be accomplished? By the persuasive power of economic knowledge. In the end, only the diffusion of economic rationality (“economists want their fellow citizens to understand what they know about the benefits of free trade”) can produce a compromise between the promises of democratic governance (widespread public goods) and the recurrent constraints imposed by global financial crises.

The United States has changed the rhetoric of empire. For it is the first “outward-looking,” “non-imperialist” superpower with the energy (its own system of government and its economic expertise) to lead the world to cooperative solutions to its problems (Summers 1998). The new “civilizing mission” is to spread to the four winds the rationale of responsible and transparent government. The new promised land is a financial architecture that resists the pressure of periodic financial crises and a regulatory system that neither stifles the forces of capitalist enterprise nor destroys the belief in democratic government. The new ideal is a compromise between government and market, one that can only be imagined and disseminated by economic experts.

Act 7. Post-Devaluation Blues (Universidad Di Tella)

Financial and economic crises take a heavy toll on the university systems of peripheral countries (“emerging economies”) such as Argentina. An abrupt devaluation (one that triples the peso price of the dollar) makes it almost impossible to continue study-abroad programs, forces the library to cut back dramatically on foreign subscriptions and purchases of books in other languages, makes it quite difficult for professors to attend conferences and congresses overseas or to invite foreign colleagues, and leaves demands for new computers or better internet connections in the basket of Utopia. If the change in exchange regimes is accompanied by a dramatic fall in GDP and employment and by rampant inflation (as is now the case in Argentina), the conditions are set for a dispersion of the faculty, attracted by better employment possibilities elsewhere. In short, international crises strike at the very foundation of developing universities. A small but growing university such as Universidad Torcuato Di Tella faces paralysis in human and physical resources, even if it manages to withstand the collapse of the economy.

Curiously, our university has one of the best economics schools in the country. Our professors have been advisers to governments, when not themselves government officials in areas related to economic policy. They were trained at UCLA, Chicago, Yale, MIT, and other leading economics schools in the US. They themselves fell prey to the trap of the “convertibility consensus” and now form part of the “mainstream economists” who are very cautious about speaking out in a context of economic meltdown and high political volatility. Our departments of Economics and Business have been at the center of the university.

Now that old-line economic reasoning is being called into question, it may be time to re-think priorities: perhaps to re-configure the curricula of economics majors to include alternative modes of thinking about “equilibria,” “incentive structures,” or “economic performance.” Maybe it is time for greater exchange between the social sciences and the humanities. Maybe it is time to challenge the master discourse elaborated in US economics schools about what constitutes “sound economic policy,” and to re-think the position of authority (and the scarcity of evidence) from which international economists dispense advice to “emerging markets.” Perhaps, as Paul Krugman has recently suggested, if IMF advice were offered on the market, the price of this service would be very low, owing to insufficient demand.

Conclusion. The Subalternity of Knowledge (and How to Turn It Around)

One of the problems associated with a peripheral location in the world of expertise is not knowing the right answer at the right time. In March 1999, in an address at the Inter-American Development Bank, Summers suggested that the keys to preventing financial crises in Latin America were transparent accounting, corporate governance, and effective bankruptcy regimes. This was the vocabulary of the new science of global governance. Those who did not pay attention to the relationship between information, judicial institutions, and markets were simply out of tune with history. This is what happened to President Duhalde and his Minister Remes in their encounters with the IMF, the US Treasury, and other experts. Had they been listening to the word of the experts in “global finance” and “crisis management,” they would have better understood the non-cooperative stance and duress of US experts. For (guided by an outdated policy agenda) they were enacting policies that were exactly the opposite of those the experts recommended.

The current crisis is first a crisis of governability and public confidence, but it is also a crisis of legitimacy for the policy expert. Bad economic advice and bad government policy have contributed to deepening the economic depression and to dividing society. The discredit of past governments that were unable to fulfill their promises of economic improvement and lesser social inequality drags with it the discredit of economic advice. The gigantic struggle between supporters and detractors of convertibility has now turned into another gigantic struggle, between nationalist-populist and neo-liberal policy solutions. There is a profound disbelief in “expert (economic) reason.” People watch their TV screens in astonishment as representatives of local expertise (home-grown economists) pile criticism upon neo-liberal reforms and orthodox economics. People are beginning to realize that economic predictions are quite frequently in error, that technically sound advice is often politically and socially unviable, and that economists often represent not independent thought but narrow corporate interests.

How costly is economic experimentation in peripheral economies? Will a greater dose of economic advice from the center (or a greater number of US-trained economists) spare us the effects of globalization? Are we teaching our economists to speak with their own voice? Is their knowledge integrated into a broader conception of the world and of the human and social sciences? How should we feed the minds of those who will be managing the global economy from these peripheral outposts? Should they merely produce data for the center and apply policy solutions developed at the center? We need to examine seriously the bi-polarities created by this knowledge structure. Why are arguments about poverty and social inequality unable to penetrate the wall of technical neo-liberal reason? Why is fiscal responsibility anathema to heterodox economists? We need to re-consider the formation and circulation of expert economic advice as a constitutive moment of global governance, and to challenge its foundational precepts. We need to examine the broader implications of universal financial and economic recipes, and their denial of locally situated and socially embedded policy solutions.

Epilogue

In May 2002, President Duhalde appointed a new economy minister, Roberto Lavagna, an expert who had made his career in the European policy community. Against the double talk of Minister Remes Lenicov (who tried to appear tough as a hyper-regulator but accepted the views of the IMF on every policy issue), the new minister distanced himself from the IMF and its policies. He let the Central Bank intervene in the exchange market to stabilize the currency (something the IMF experts opposed), then started to accumulate foreign exchange reserves, delaying the payment of financial obligations to the IMF and the World Bank (a decision that provoked the anger of experts in these institutions). Soon, having obtained some modest achievements in terms of declining inflation and a halt in the fall of real output, the minister began to pursue a new negotiation strategy with the Fund: “we have to be responsible, not obedient.” In Congress, this translated into a series of delaying tactics that avoided “resolving” the problems the Fund wanted resolved (re-structuring of the banking system, an immediate increase in the prices of public services, the abolition of provincial bonds, and some commitment to re-start negotiations over defaulted government bonds), complemented with new bills (postponing bankruptcies and housing evictions) that went against the wishes of the IMF. As the president soon discovered, keeping his distance from the IMF yielded important political gains. So he began to excel in the practice of appearing committed to a successful negotiation with the IMF while, at the same time, boycotting every possibility of success.

To be fair, one has to acknowledge that, from the other side of the window, the IMF leadership (like the Secretary of the Treasury) became increasingly alienated from Argentina as they saw President Duhalde taking the wrong turn towards a “nationalist-populist” agenda. In fact, they started to consider that he was missing altogether the train that led to “capitalist development” and “good government.” In the end, perhaps, President Duhalde did not understand (and could not understand) the rules of the economy. The initial misreading of the reasons of empire (an inexplicable rejection of the reasons of the local policy-maker) turned into alienation and mutual distrust. Perhaps, reasons Duhalde, the IMF does not want to sign an agreement. Perhaps, reasons the IMF leadership, Duhalde is no longer truly committed to reaching an agreement. Once the child has gone back to its rebellious state, the father will step up the threat of punishment. In this, the imperial father speaks through the voice of local and international experts: if Argentina does not negotiate with the IMF and does not fulfill its international commitments, it will “fall out of the world.”

December 2002

References

Barro, Robert and Jay Reed (2002), “If We Can’t Abolish IMF, Let’s at Least Make the Big Changes,” Business Week, April 10.

Becker, Gary (2002), “Deficit Spending Got Argentina into This Mess,” Business Week, February 11.

Broad, Robin and John Cavanagh (1999), “The Death of the Washington Consensus?” World Policy Journal 16:3 (Fall), 79-88.

Cavallo, Domingo F. and Joaquín Cottani (1997), “Argentina’s Convertibility and the IMF,” American Economic Review 87:2 (May), 17-22.

Feldstein, Martin (2002), “Argentina’s Fall: Lessons from the Latest Financial Crisis,” Foreign Affairs (March-April).

Fischer, Stanley (2001a), “Exchange Rate Regimes: Is the Bipolar View Correct?” Finance & Development 38:2 (June), 18-21.

______. (2001b), “The IMF’s Role in Poverty Reduction,” Finance & Development 38:2 (June), S2-S3.

______. (1997), “Applied Economics in Action: IMF Programs,” The American Economic Review 87:2 (May), 23-27.

Keaney, Michael (2001), “Consensus by Diktat: Washington, London, and the ‘Modernization’ of Modernization,” Capitalism, Nature, Socialism 12:3 (September), 44-70.

Levinson, Mark (2000), “The Cracking Washington Consensus,” Dissent 47:4 (Fall), 11-14.

MacEwan, Arthur (2002), “Economic Debacle in Argentina: The IMF Strikes Again,” Dollars & Sense (March-April).

Naim, Moises (2000), “Fads and Fashion in Economic Reforms: Washington Consensus or Washington Confusion?” Third World Quarterly 21:3 (June), 505-528.

North, James (2000), “Sound the Alarm,” Barron’s, April 17.

Stiglitz, Joseph E. (2002), Globalization and Its Discontents (New York: W.W. Norton).

________. (2001), “Failure of the Fund: Rethinking the IMF Response,” Harvard International Review 23:2 (Summer), 14-18.

Summers, Lawrence (1999), “Distinguished Lecture on Economics in Government,” Journal of Economic Perspectives, 13:2 (Spring), 3-18.

_______. (1998), “America: The First Nonimperialist Superpower,” New Perspectives Quarterly, April 1st.

Taylor, Lance (2001), “Argentina: Poster Child for the Failure of Liberalized Policies?” Challenge (November-December).

“Too Little, Too Late? The IMF and Argentina” (2001), The Economist, August 25.

“Tough-Love Ya to Death” (2001), Newsweek, May 28.

“Unraveling the Washington Consensus: An Interview with Joseph Stiglitz,” Multinational Monitor 21:4 (April 2000), 13-17.

Weisbrot, Mark (2001), “Another IMF Crash,” The Nation, December 10.

Weisbrot, Mark and Thomas I. Palley (1999), “How to Say No to the IMF,” The Nation, June 21.

Endnotes

  1. Many articles and commentaries picked up on O’Neill’s phrase. See, for example, The Economist (2001).
  2. See Krueger 1998; Stiglitz 2001; Fischer 2001b, among others. Even a supporter of IMF policies such as Stanley Fischer (1997) had to acknowledge that IMF programs had a limited effect upon the domestic side of developing economies (“few countries seeing significant increases in growth or sharp reductions in inflation”), although the Fund’s policies did produce improvements in the external sector and momentary reductions in fiscal deficits.
  3. In this essay, it is clear that Cavallo was running against the current. He acknowledged that key people in the policy and theory community were skeptical about the currency board. But they were ready to accept price stability and good fiscal figures as “proofs” of success. In fact, as Cavallo conceded, no one in the “policy community” raised the issue of rising unemployment at a time when Argentina seemed to have bravely weathered the aftermath of the Mexican crisis (1996-97).
  4. Mark Weisbrot is perhaps an exception in this regard. He argues that Argentina was mismanaged by US-trained economists and subjected, for too long, to bad advice from the IMF. In the end, a failed experiment (Argentine convertibility), disguised as success, could only bring discredit to the economists who supported it (Weisbrot 2001).
  5. Summers is aware of the unevenness implicit in this conception of the world system. Whereas the US government claims absolute sovereignty over domestic economic policy, other countries must content themselves with limited sovereignty. Subject to the financial surveillance of multilateral institutions, they cannot entertain the dream of ever issuing world-money. Even if they become “fiscally and monetarily responsible,” the Federal Reserve System will never allow other countries to become new members of its board of directors.