Fill This Form To Receive Instant Help

Help in Homework
trustpilot ratings
google ratings


Homework answers / question archive / Giorgio Agamben HOMO SACER Sovereign Power and Bare Life Translated by Daniel Heller-Roazen Stanford University Press Stanford California 1998 Homo Sacer: Sovereign Power and Bare Life was originally published as Homo sacer

Giorgio Agamben HOMO SACER Sovereign Power and Bare Life Translated by Daniel Heller-Roazen Stanford University Press Stanford California 1998 Homo Sacer: Sovereign Power and Bare Life was originally published as Homo sacer

Psychology

Giorgio Agamben HOMO SACER Sovereign Power and Bare Life Translated by Daniel Heller-Roazen Stanford University Press Stanford California 1998 Homo Sacer: Sovereign Power and Bare Life was originally published as Homo sacer. Il potere sovrano e la nuda vita, © 1995 Giulio Einaudi editore s.p.a. Stanford University Press-Stanford, California © 1998 by the Board of Trustees of the Leland Stanford Junior University Printed in the United States of America // |] !r4t3 pdf // 4nT1(o|] YR!6H7 // 2 o 0 7 // by s|] r3ad d3p7 // u5345u8v3r5! \/ 3pur|] o 535 // please excuse the remaining scan glitches INTRODUCTION The Greeks had no single term to express what we mean by the word “life.” They used two terms that, although traceable to a common etymological root, are semantically and morphologically distinct: zo?, which expressed the simple fact of living common to all living beings (animals, men, or gods), and bios, which indicated the form or way of living proper to an individual or a group. When Plato mentions three kinds of life in the Philebus, and when Aristotle distinguishes the contemplative life of the philosopher (bios the?r?tikos) from the life of pleasure (bios apolaustikos) and the political life (bios politikos) in the Nichomachean Ethics, neither philosopher would ever have used the term zo? (which in Greek, significantly enough, lacks a plural). This follows from the simple fact that what was at issue for both thinkers was not at all simple natural life but rather a qualified life, a particular way of life. Concerning God, Aristotle can certainly speak of a zo? arist? kai aidios, a more noble and eternal life (Metaphysics, 1072b, 28), but only insofar as he means to underline the significant truth that even God is a living being (similarly, Aristotle uses the term zo? in the same context – and in a way that is just as meaningful – to define the act of thinking). But to speak of a zo? politik? of the citizens of Athens would have made no sense. Not that the classical world had no familiarity with the idea that natural life, simple zo? as such, could be a good in itself. In a passage of the Politics, after noting that the end of the city is life according to the good, Aristotle expresses his awareness of that idea with the most perfect lucidity: This [life according to the good] is the greatest end both in common for all men and for each man separately. But men also come together and maintain the political community in view of simple living, because there is probably some kind of good in the mere fact of living itself [kata to z?n auto monon]. If there is no great difficulty as to the way of life [kata ton bion], clearly most men will tolerate much suffering and hold on to life [zo?] as if it were a kind of serenity [eu?meria, beautiful day] and a natural sweetness. (1278b, 23-31) In the classical world, however, simple natural life is excluded from the polis in the strict sense, and remains confined – as merely reproductive life – to the sphere of the oikos, “home” (Politics, 1252a, 2635). At the beginning of the Politics, Aristotle takes the greatest care to distinguish the oikonomos (the head of an estate) and the despotes (the head of the family), both of whom are concerned with the reproduction and the subsistence of life, from the politician, and he scorns those who think the difference between the two is one of quantity and not of kind. 
And when Aristotle defined the end of the perfect community in a passage that was to become canonical for the political tradition of the West (1252b, 30), he did so precisely by opposing the simple fact of living (to z?n) to politically qualified life (to eu z?n): ginomen? oun tou z?n heneken, ousa de tou eu z?n, “born with regard to life, but existing essentially with regard to the good life” (in the Latin translation of William of Moerbeke, which both Aquinas and Marsilius of Padua had before them: facta quidem igitur vivendi gratia, existens autem gratia bene vivendi). It is true that in a famous passage of the same work, Aristotle defines man as a politikon z?on (Politics, 1253a, 4). But here (aside from the fact that in Attic Greek the verb bionai is practically never used in the present tense), “political” is not an attribute of the living being as such, but rather a specific difference that determines the genus z?on. (Only a little later, after all, human politics is distinguished from that of other 10 living beings in that it is founded, through a supplement of politicity [policita] tied to language, on a community not simply of the pleasant and the painful but of the good and the evil and of the just and the unjust.) Michel Foucault refers to this very definition when, at the end of the first volume of The History of Sexuality, he summarizes the process by which, at the threshold of the modern era, natural life begins to be included in the mechanisms and calculations of State power, and politics turns into biopolitics. “For millennia,” he writes, “man remained what he was for Aristotle: a living animal with the additional capacity for political existence; modern man is an animal whose politics calls his existence as a living being into question” (La volonté, p. 188). According to Foucault, a society’s “threshold of biological modernity” is situated at the point at which the species and the individual as a simple living body become what is at stake in a society’s political strategies. After 1977, the courses at the Collège de France start to focus on the passage from the “territorial State” to the “State of population” and on the resulting increase in importance of the nation’s health and biological life as a problem of sovereign power, which is then gradually transformed into a “government of men” (Dits et écrits, 3: 719). “What follows is a kind of bestialization of man achieved through the most sophisticated political techniques. For the first time in history, the possibilities of the social sciences are made known, and at once it becomes possible both to protect life and to authorize a holocaust.” In particular, the development and triumph of capitalism would not have been possible, from this perspective, without the disciplinary control achieved by the new bio-power, which, through a series of appropriate technologies, so to speak created the “docile bodies” that it needed. Almost twenty years before The History of Sexuality, Hannah Arendt had already analyzed the process that brings homo laborans – and, with it, biological life as such – gradually to occupy the very center of the political scene of modernity. In The Human Condition, Arendt attributes the transformation and decadence of the political realm in modern societies to this very primacy of natural life over political action. 
That Foucault was able to begin his study of biopolitics with no reference to Arendt’s work (which remains, even today, practically without continuation) bears witness to the difficulties and resistances that thinking had to encounter in this area. And it is most likely these very difficulties that account for the curious fact that Arendt establishes no connection between her research in The Human Condition and the penetrating analyses she had previously devoted to totalitarian power (in which a biopolitical perspective is altogether lacking), and that Foucault, in just as striking a fashion, never dwelt on the exemplary places of modern biopolitics: the concentration camp and the structure of the great totalitarian states of the twentieth century. Foucault’s death kept him from showing how he would have developed the concept and study of biopolitics. In any case, however, the entry of zo? into the sphere of the polis – the politicization of bare life as such – constitutes the decisive event of modernity and signals a radical transformation of the politicalphilosophical categories of classical thought. It is even likely that if politics today seems to be passing through a lasting eclipse, this is because politics has failed to reckon with this foundational event of modernity. The “enigmas” (Furet, L’Allemagne nazi, p.7) that our century has proposed to historical reason and that remain with us (Nazism is only the most disquieting among them) will be solved only on the terrain – biopolitics – on which they were formed. Only within a biopolitical horizon will it be possible to decide whether the categories whose opposition founded modern politics (right/left, private/public, absolutism/democracy, etc.) – and which have been steadily dissolving, to the point of entering today into a real zone of indistinction – will have to be abandoned or will, instead, eventually regain the meaning they lost in that very horizon. And only a reflection that, taking up Foucault’s and Benjamins suggestion, thematically interrogates the link between bare life and politics, a link that secretly governs the modern ideologies seemingly most distant from one another, will be able to bring the political out of its concealment and, at the same time, return thought to its practical calling. One of the most persistent features of Foucault’s work is its decisive abandonment of the traditional approach to the problem of power, which is based on juridico-institutional models (the definition of sovereignty, the theory of the State), in favor of an unprejudiced analysis of the concrete ways in which power penetrates subjects’ very bodies and forms of life. As shown by a seminar held in 1982 at the Introduction 11 University of Vermont, in his final years Foucault seemed to orient this analysis according to two distinct directives for research: on the one hand, the study of the political techniques (such as the science of the police) with which the State assumes and integrates the care of the natural life of individuals into its very center; on the other hand, the examination of the technologies of the self by which processes of subjectivization bring the individual to bind himself to his own identity and consciousness and, at the same time, to an external power. Clearly these two lines (which carry on two tendencies present in Foucault’s work from the very beginning) intersect in many points and refer back to a common center. 
In one of his last writings, Foucault argues that the modem Western state has integrated techniques of subjective individualization with procedures of objective totalization to an unprecedented degree, and he speaks of a real “political ‘double bind,’ constituted by individualization and the simultaneous totalization of structures of modern power” (Dits et écrits, 4: 229-32). Yet the point at which these two faces of power converge remains strangely unclear in Foucault’s work, so much so that it has even been claimed that Foucault would have consistently refused to elaborate a unitary theory of power. If Foucault contests the traditional approach to the problem of power, which is exclusively based on juridical models (“What legitimates power?”) or on institutional models (“What is the State?”), and if he calls for a “liberation from the theoretical privilege of sovereignty” in order to construct an analytic of power that would, not take law as its model and code, then where, in the body of power, is the zone of indistinction (or, at least, the point of intersection) at which techniques of individualization and totalizing procedures converge? And, more generally, is there a unitary center in which the political “double bind” finds its raison d’être? That there is a subjective aspect in the genesis of power was already implicit in the concept of servitude volontaire in Etienne de La Boétie. But what is the point at which the voluntary servitude of individuals comes into contact with objective power? Can one be content, in such a delicate area, with psychological explanations such as the suggestive notion of a parallelism between external and internal neuroses? Confronted with phenomena such as the power of the society of the spectacle that is everywhere transforming the political realm today, is it legitimate or even possible to hold subjective technologies and political techniques apart? Although the existence of such a line of thinking seems to be logically implicit in Foucault’s work, it remains a blind spot to the eye of the researcher, or rather something like a vanishing point that the different perspectival lines of Foucault’s inquiry (and, more generally, of the entire Western reflection on power) converge toward without reaching. The present inquiry concerns precisely this hidden point of intersection between the juridicoinstitutional and the biopolitical models of power. What this work has had to record among its likely conclusions is precisely that the two analyses cannot be separated, and that the inclusion of bare life in the political realm constitutes the original – if concealed – nucleus of sovereign power. It can even be said that the production of a biopolitical body is the original activity of sovereign power. In this sense, biopolitics is at least as old as the sovereign exception. Placing biological life at the center of its calculations, the modern State therefore does nothing other than bring to light the secret tie uniting power and bare life, thereby reaffirming the bond (derived from a tenacious correspondence between the modern and the archaic which one encounters in the most diverse spheres) berween modern power and the most immemorial of the arcana imperii. If this is true, it will be necessary to reconsider the sense of the Aristotelian definition of the polis as the opposition between life (z?n) and good life (eu z?n). The opposition is, in fact, at the same time an implication of the first in the second, of bare life in politically qualified life. 
What remains to be interrogated in the Aristotelian definition is not merely – as has been assumed until now – the sense, the modes, and the possible articulations of the “good life” as the telos of the political. We must instead ask why Western politics first constitutes itself through an exclusion (which is simultaneously an inclusion) of bare life. What is the relation between politics and life, if life presents itself as what is included by means of an exclusion? The structure of the exception delineated in the first part of this book appears from this perspective to be consubstantial with Western politics. In Foucault’s statement according to which man was, for Aristotle, a “living animal with the additional capacity for political existence,” it is therefore precisely the meaning of this “additional capacity” that must be understood as problematic. The peculiar phrase “born § 3 Life That Does Not Deserve to Live 3.1. In 1920, Felix Meiner, one of the most distinguished German publishers of philosophical works, released a blue-gray plaquette bearing the title Authorization for the Annihilation of Life Unworthy of Being Lived (Die Freigabe der Vernichtung lebensunwerten Lebens). The authors were Karl Binding, a highly respected specialist of penal law (an insert attached to the jacket cover at the last minute informed readers that since the doct. iur. et phil K. B. had passed away during the printing of the work, the publication was to be considered as “his last act for the good of humanity”), and Alfred Hoche, a professor of medicine whose interest lay in questions concerning the ethics of his profession. The book warrants our attention for two reasons. The first is that in order to explain the unpunishability of suicide, Binding is led to conceive of suicide as the expression of man’s sovereignty over his own existence. Since suicide, he argues, cannot be understood as a crime (for example, as a violation of a duty toward oneself) yet also cannot be considered as a matter of indifference to the law, “the law has no other option than to consider living man as sovereign over his own existence [als Souverän über sein Dasein]” (Die Freigabe, p. 14). Like the sovereign decision on the state of exception, the sovereignty of the living being over himself takes the form of a threshold of indiscernibility between exteriority and interiority, which the juridical order can therefore neither exclude nor include, neither forbid nor permit: “The juridical order,” Binding writes, “tolerates the act despite the actual consequences that it must itself suffer on account of it. It does not claim to have the power to forbid it” (ibid.). Yet from this particular sovereignty of man over his own existence, Binding derives – and this is the second, and more urgent, reason for our interest in this book – the necessity of authorizing “the annihilation of life unworthy of being lived.” The fact that Binding uses this disquieting expression to designate merely the problem of the lawfulness of euthanasia should not lead one to underestimate the novelty and decisive importance of the concept that here makes its first appearance on the European juridical scene: life that does not deserve to be lived (or to live, as the German expression lebensunwerten Leben also quite literally suggests), along with its implicit and more familiar correlate – life that deserves to be lived (or to live). 
The fundamental biopolitical structure of modernity – the decision on the value (or nonvalue) of life as such – therefore finds its first juridical articulation in a well-intentioned pamphlet in favor of euthanasia. ? ??It is not surprising that Bindings essay aroused the curiosity of Schmitt, who cites it in his Theorie des Partisanen in the context of a critique of the introduction of the concept of value into law. “He who determines a value,” Schmitt writes, “eo ipso always fixes a nonvalue. The sense of this determination of a nonvalue is the annihilation of the nonvalue” (p. 80, n. 49). Schmitt approximates Binding’s theories concerning life that does not deserve to live to Heinrich Rickert’s idea that “negation is the criterion by which to establish whether something belongs to the sphere of value” and that “the true act of evaluation is negation.” Here Schmitt does not seem to notice that the logic of value he is criticizing resembles his own theory of sovereignty, according to which the true life of the rule is the exception. 3.2. For Binding the concept of “life unworthy of being lived” is essential, since it allows him to find an answer to the juridical question he wishes to pose: “Must the unpunishability of the killing of life remain limited to suicide, as it is in contemporary law (with the exception of the state of emergency), or Life That Does Not Deserve to Live 81 must it be extended to the killing of third parties?” According to Binding, the solution depends on the answer to the following question: “Are there human lives that have so lost the quality of legal good that their very existence no longer has any value, either for the person leading such a life or for society?” Binding continues: Whoever poses this question seriously must, with bitterness, notice the irresponsibility with which we usually treat the lives that are most full of value [wertvollsten Leben], as well as with what – often completely useless – care, patience, and energy we attempt, on the other hand, to keep in existence lives that are no longer worthy of being lived, to the point at which nature herself, often with cruel belatedness, takes away any possibility of their continuation. Imagine a battle camp covered with thousands of young bodies without life, or a mine where a catastrophe has killed hundreds of industrious workers, and at the same time picture our institutes for the mentally impaired [Idioteninstitut] and the treatments they lavish on their patients – for then one cannot help being shaken up by this sinister contrast between the sacrifice of the dearest human good and, on the othet hand, the enormous care for existences that not only are devoid of value [wertlosen] but even ought to be valued negatively. (Die Freigabe, pp. 27-29) The concept of “life devoid of value” (or “life unworthy of being lived”) applies first of all to individuals who must be considered as “incurably lost” following an illness or an accident and who, fully conscious of their condition, desire “redemption” (Binding uses the term Erlösung, which belongs to religious language and signifies, among other things, redemption) and have somehow communicated this desire. More problematic is the condition of the second group, comprising “incurable idiots, either those born as such or those – for example, those who suffer from progressive paralysis – who have become such in the last phase of their life.” “These men,” Binding writes, “have neither the will to live nor the will to die. 
On the one hand, there is no ascertainable consent to die; on the other hand, their killing does not infringe upon any will to live that must be overcome. Their life is absolutely without purpose, but they do not find it to be intolerable.” Even in this case, Binding sees no reason, “be it juridical, social, or religious, not to authorize the killing of these men, who are nothing but the frightening reverse image [Gegenbild] of authentic humanity” (ibid., pp. 31-32). As to the problem of who is competent to authorize annihilation, Binding proposes that the request for the initiative be made by the ill person himself (when he is capable of it) or by a doctor or a close relative, and that the final decision fall to a state committee composed of a doctor, a psychiatrist, and a jurist. 3.3. It is not our intention here to take a position on the difficult ethical problem of euthanasia, which still today, in certain countries, occupies a substantial position in medical debates and provokes disagreement. Nor are we concerned with the radicality with which Binding declares himself in favor of the general admissibility of euthanasia. More interesting for our inquiry is the fact that the sovereignty of the living man over his own life has its immediate counterpart in the determination of a threshold beyond which life ceases to have any juridical value and can, therefore, be killed without the commission of a homicide. The new juridical category of “life devoid of value” (or “life unworthy of being lived”) corresponds exactly – even if in an apparently different direction;---to the bare life of homo sacer and can easily be extended beyond the limits imagined by Binding. It is as if every valorization and every “politicization” of life (which, after all, is implicit in the sovereignty of the individual over his own existence) necessarily implies a new decision concerning the threshold beyond which life ceases to be politically relevant, becomes only “sacred life,” and can as such be eliminated without punishment. Every society sets this limit; every society – even the most modern – decides who its “sacred men” will be. It is even possible that this limit, on which the politicization and the exceptio of natural life in the juridical order of the state depends, has done nothing but extend itself in the history of the West and has now – in the new biopolitical horizon of states with national sovereignty – moved inside every human life and every citizen. Bare life is no longer confined to a particular place or a definite category. It now dwells in the biological body of every living being. 82 PART THREE: THE CAMP AS BIOPOLITICAL PARADIGM OF THE MODERN 3.4. During the physicians’ trial at Nuremberg, a witness, Dr. Fritz Mennecke, related that he had heard Drs. Hevelemann, Bahnen, and Brack communicate in a confidential meeting in Berlin in February 1940 that the Reich had just issued measures authorizing “the elimination of life unworthy of being lived” with special reference to the incurable mentally ill. The information was not quite exact, since for various reasons Hitler preferred not to give an explicit legal form to his euthanasia program. Yet it is certain that the reappearance of the formula coined by Binding to give juridical credence to the so-called “mercy killing” or “death by grace” (Gnadentod, according to a euphemism common among the regime’s health officials) coincides with a decisive development in National Socialisms biopolitics. 
There is no reason to doubt that the “humanitarian” considerations that led Hitler and Himmler to elaborate a euthanasia program immediately after their rise to power were in good faith, just as Binding and Hoche, from their own point of view, acted in good faith in proposing the concept of “life unworthy of being lived.” For a variety of reasons, including foreseen opposition from Christian organizations, the program barely went into effect, and only at the start of 1940 did Hitler decide that it could no longer be delayed. The Euthanasia Program for the Incurably ill (Euthanasie-Programm filr unheilbaren Kranke) was therefore put into practice in conditions – including the war economy and the increasing growth of concentration camps for Jews and other undesirables – that favored misuse and mistakes. Nevertheless, the transformation of the program, over the course of the fifteen months it lasted (Hitler ended it in August 1941 because of growing protest on the part of bishops and relatives), from a theoretically humanitarian program into a work of mass extermination did not in any way depend simply on circumstance. The name of Grafeneck, the town in Württemberg that was the home of one of the main centers, has remained sadly linked to this matter, but analogous institutions existed in Hadamer (Hesse), Hartheim (near Linz), and other towns in the Reich. Testimony given by defendants and witnesses at the Nuremberg trials give us sufficiently precise information concerning the organization of the Grafeneck program. Every day, the medical center received about 70 people (from the ages of 6 to 93 years old) who had been chosen from the incurably mentally ill throughout German mental hospitals. Drs. Schumann and Baumhardt, who were responsible for the Grafeneck center, gave the patients a summary examination and then decided if they met the requirements specified by the program. In most cases, the patients were killed, within 24 hours of their arrival at Grafeneck. First they were given a 2-centimeter dose of Morphium-Scopolamine; then they were sent to a gas chamber. In other institutions (for example in Hadamer), the patients were killed with a strong dose of Luminal, Veronal, and Morphium. It is calculated that 60,000 people were killed this way. 3.5. Some have referred to the eugenic principles that guided National Socialist biopolitics to explain the tenacity with which Hitler promoted his euthanasia program in such unfavorable circumstances. From a strictly eugenic point of view, however, euthanasia was not all necessary; not only did the laws on the prevention of hereditary diseases and on the protection of the hereditary health of the German people already provide a sufficient defense against genetic mental illnesses, but the incurably ill subjected to the program – mainly children and the elderly – were, in any case, in no condition to reproduce themselves (from a eugenic point of view, what is important is obviously not the elimination of the phenotype but only the elimination of the genetic set). Moreover, there is absolutely no reason to think that the program was linked to economic considerations. On the contrary, the program constituted a significant organizational burden at a time when the state apparatus was completely occupied with the war effort. Why then did Hitler want the program to be put into effect at all costs, when he was fully conscious of its unpopularity? 
The only explanation left is that the program, in the guise of a solution to a humanitarian problem, was an exercise of the sovereign power to decide on bare life in the horizon of the new biopolitical vocation of the National Socialist state. The concept of “life unworthy of being lived” is clearly not an ethical one, which would involve the expectations and legitimate desires of the individual. It is, rather, a political concept in which what is at issue is the extreme metamorphosis of sacred life – which may be killed but not sacrificed – on which sovereign power is founded. If euthanasia lends itself to this exchange, it is because in euthanasia one man finds himself in the position of having to separate zo? and bios in Life That Does Not Deserve to Live 83 another man, and to isolate in him something like a bare life that may be killed. From the perspective of modern biopolitics, however, euthanasia is situated at the intersection of the sovereign decision on life that may be killed and the assumption of the care of the nations biological body. Euthanasia signals the point at which biopolitics necessarily turns into thanatopolitics. Here it becomes clear how Binding’s attempt to transform euthanasia into a juridico-political concept (“life unworthy of being lived”) touched on a crucial matter. If it is the sovereign who, insofar as he decides on the state of exception, has the power to decide which life may be killed without the commission of homicide, in the age of biopolitics this power becomes emancipated from the state of exception and transformed into the power to decide the point at which life ceases to be politically relevant. When life becomes the supreme political value, not only is the problem of life’s nonvalue thereby posed, as Schmitt suggests but further, it is as if the ultimate ground of sovereign power were at stake in this decision. In modern biopolitics, sovereign is he who decides on the value or the nonvalue of life as such. Life – which, with the declarations of rights, had as such been invested with the principle of sovereignty – now itself becomes the place of a sovereign decision. The Führer represents precisely life itself insofar as it is he who decides on life’s very biopolitical consistency. This is why the Führer’s word, according to a theory dear to Nazi jurists to which we will return, is immediately law. This is why the problem of euthanasia is an absolutely modern problem, which Nazism, as the first radically biopolitical state, could not fail to pose. And this is also why certain apparent confusions and contradictions of the euthanasia program can be explained only in the biopolitical context in which they were situated. The physicians Karl Brand and Viktor Brack, who were sentenced to death at Nuremberg for being responsible for the program, declared after their condemnation that they did not feel guilty, since the problem of euthanasia would appear again. The accuracy of their prediction was undeniable. What is more interesting, however, is how it was possible that there were no protests on the part of medical organizations when the bishops brought the program to the attention of the public. 
Not only did the euthanasia program contradict the passage in the Hippocratic oath that states, “I will not give any man a fatal poison, even if he asks me for it,” but further, since there was no legal measure assuring the impunity of euthanasia, the physicians who participated in the program could have found themselves in a delicate legal situation (this last circumstance did give rise to protests on the part of jurists and. lawyers). The fact is that the National Socialist Reich marks the point at which the integration of medicine and politics, which is one of the essential characteristics of modern biopolitics, began to assume its final form. This implies that the sovereign decision on bare life comes to be displaced from strictly political motivations and areas to a more ambiguous terrain in which the physician and the sovereign seem to exchange roles. § 7 The Camp as the ‘Nomos’ of the Modern 7.1. What happened in the camps so exceeds the juridical concept of crime that the specific juridicopolitical structure in which those events took place is often, simply omitted from consideration. The camp is merely the place in which the most absolute conditio inhumana that has ever existed on earth was realized: this is what counts in the last analysis, for the victims as for those who come after. Here we will deliberately follow an inverse line of inquiry. Instead of deducing the definition of the camp from the events that took place there, we will ask: What is a camp, what is its juridico-political structure, that such events could take place there? This will lead us to regard the camp not as a historical fact and an anomaly belonging to the past (even if still verifiable) but in some way as the hidden matrix and nomos of the political space in which we are still living. Historians debate whether the first camps to appear were the campos de concentraciones created by the Spanish in Cuba in 1896 to suppress the popular insurrection of the colony, or the “concentration camps” 1 into which the English herded the Boers toward the start of the century. What matters here is that in both cases, a state of emergency linked to a colonial war is extended to an entire civil population. The camps are thus born not out of ordinary law (even less, as one might have supposed, from a transformation and development of criminal law) but out of a state of exception and martial law. This is even clearer in the Nazi Lager, concerning whose origin and juridical regime we are well informed. It has been noted that the juridical basis for internment was not common law but Schutzhaft (literally, protective custody), a juridical institution of Prussian origin that the Nazi jurors sometimes classified as a preventative police measure insofar as it allowed individuals to be “taken into custody” independently of any criminal behavior, solely to avoid danger to the security of the state. The origin of Schutzhaft lies in the Prussian law of June 4, 1851, on the state of emergency, which was extended to all of Germany (with the exception of Bavaria) in 1871. An even earlier origin for Schutzhaft can be located in the Prussian laws on the “protection of personal liberty” (Schutz der persönlichen Freiheit) of February 12, 1850, which were widely applied during the First World War and during the disorder in Germany that followed the signing of the peace treaty. 
It is important not to forget that the first concentration camps in Germany were the work not of the Nazi regime but of the Social-Democratic governments, which interned thousands of communist militants in 1923 on the basis of Schutzhaft and also created the Konzentrationslager für Ausländer at Cottbus-Sielow, which housed mainly Eastern European refugees and which may, therefore, be considered the first camp for Jews in this century (even if it was, obviously, not an extermination camp). The juridical foundation for Schutzhaft was the proclamation of the state of siege or of exception and the corresponding suspension of the articles of the German constitution that guaranteed personal liberties. Article 48 of the Weimar constitution read as follows: “The president of the Reich may, in the case of a grave disturbance or threat to public security and order, make the decisions necessary to reestablish public security, if necessary with the aid of the armed forces. To this end he may provisionally suspend [ausser Kraft setzen] the fundamental rights contained in articles 114,115,117,118, 123, 124, and 153.” From 1919 to 1924, the Weimar governments declared the state of exception many times, sometimes prolonging it for up to five months (for example, from September 1923 to February 1924). In this sense, 1 In English in the original. – Trans. 166 96 PART THREE: THE CAMP AS BIOPOLITICAL PARADIGM OF THE MODERN when the Nazis took power and proclaimed the “decree for the protection of the people and State” (Verordnung zum Schutz von Volk und Staat) on February 28, 1933, indefinitely suspending the articles of the constitution concerning personal liberty, the freedom of expression and of assembly, and the inviolability of the home and of postal and telephone privacy, they merely followed a practice consolidated by previous governments. Yet there was an important novelty. No mention at all was made of the expression Ausnahmezustand (“state of exception”) in the text of the decree, which was, from the juridical point of view, implicitly grounded in article 48 of the constitution then in force, and which without a doubt amounted to a declaration of the state of exception (“articles 114, 115,117,118,123,124, and 153 of the constitution of the German Reich,” the first paragraph read, “are suspended until further notice”). The decree remained de facto in force until the end of the Third Reich, which has in this sense been aptly defined as a “Night of St. Bartholomew that lasted twelve years” (Drobisch and Wieland, System, p. 26). The state of exception thus ceases to be referred to as an external and provisional state of factual danger and comes to be confused with juridical rule itself. National Socialist jurists were so aware of the particularity of the situation that they defined it by the paradoxical expression “state of willed exception” (einen gewollten Ausnahmezustand). “Through the suspension of fundamental rights,” writes Werner Spohr, a jurist close to the regime, “the decree brings into being a state of willed exception for the sake of the establishment of the National Socialist State” (quoted ibid., p. 28). 7.2. The importance of this constitutive nexus between the state of exception and the concentration camp cannot be overestimated for a correct understanding of the nature of the camp. The “protection” of freedom that is at issue in Schutzhaft is, ironically, protection against the suspension of law that characterizes the emergency. 
The novelty is that Schutzhaft is now separated from the state of exception on which it had been based and is left in force in the normal situation. The camp is the space that is opened when the state of exception begins to become the rule. In the camp, the state of exception, which was essentially a temporary suspension of the rule of law on the basis of a factual state of danger, is now given a permanent spatial arrangement, which as such nevertheless remains outside the normal order. When Himmler decided to create a “concentration camp for political prisoners” in Dachau at the time of Hitler’s election as chancellor of the Reich in March 1933, the camp was immediately entrusted to the SS and – thanks to Schutzhaft – placed outside the rules of penal and prison law, which then and subsequently had no bearing on it. Despite the multiplication of the often contradictory communiqués, instructions, and telegrams through which the authorities both of the Reich and of the individual Länder took care to keep the workings of Schutzhat as vague as possible after the decree of February 28, the camp’s absolute independence from every judicial control and every reference to the normal juridical order was constantly reaffirmed. According to the new notions of the National Socialist jurists (among whom Carl Schmitt was in the front lines), which located the primary and immediate source of law in the Führer’s command, Schutzhaft had, moreover, no need whatsoever of a juridical foundation in existing institutions and laws, being “an immediate effect of the National Socialist revolution” (Drobisch and Wieland, System, p. 27). Because of this – that is, insofar as the camps were located in such a peculiar space of exception – Diels, the head of the Gestapo, could declare, “Neither an order nor an instruction exists for the origin of the camps: they were not instituted; one day they were there [sie waren nicht gegründet, sie waren eines Tages da] “ (quoted ibid., p. 30). Dachau and the other camps that were immediately added to it (Sachsenhausen, Buchenwald, Lichtenberg) remained almost always in operation – what varied was the size of their population (which in certain periods, in particular between 1935 and 1937, before the Jews began to be deported, diminished to 7,500 people). But in Germany the camp as such had become a permanent reality. 7.3. The paradoxical status of the camp as a space of exception must be considered. The camp is a piece of land placed outside the normal juridical order, but it is nevertheless not simply an external space. What is excluded in the camp is, according to the etymological sense of the term “exception” (ex-capere), taken outside, included through its own exclusion. But what is first of all taken into the juridical order is the state of exception itself. Insofar as the state of exception is “willed,” it inaugurates a new juridicopolitical paradigm in which the norm becomes indistinguishable from the exception. The camp is thus the The Camp as 'Nomos' 97 structure in which the state of exception – the possibility of deciding on which founds sovereign power – is realized normally. The sovereign no longer limits himself, as he did in the spirit of the Weimar constitution, to deciding on the exception on the basis of recognizing a given factual situation (danger to public safety): laying bare the inner structure of the ban that characterizes his power, he now de facto produces the situation as a consequence of his decision on the exception. 
This is why in the camp the quaestio iuris is, if we look carefully, no longer strictly distinguishable from the quaestio facti, and in this sense every question concerning the legality or illegality of what happened there simply makes no sense. The camp is a hybrid of law and fact in which the two terms have become indistinguishable. Hannah Arendt once observed that in the camps, the principle that supports totalitarian rule and that common sense obstinately refuses to admit comes fully to light: this is the principle according to which “everything is possible.” Only because the camps constitute a space of exception in the sense we have examined – in which not only is law completely suspended but fact and law are completely confused – is everything in the camps truly possible. If this particular juridico-political structure of the camps – the task of which is precisely to create a stable exception – is not understood, the incredible things that happened there remain completely unintelligible. Whoever entered the camp moved in a zone of indistinction between outside and inside, exception and rule, licit and illicit, in which the very concepts of subjective right and juridical protection no longer made any sense. What is more, if the person entering the camp was a Jew, he had already been deprived of his rights as a citizen by the Nuremberg laws and was subsequently completely denationalized at the time of the Final Solution. Insofar as its inhabitants were stripped of every political status and wholly reduced to bare life, the camp was also the most absolute biopolitical space ever to have been realized, in which power confronts nothing but pure life, without any mediation. This is why the camp is the very paradigm of political space at the point at which politics becomes biopolitics and homo sacer is virtually confused with the citizen. The correct question to pose concerning the horrors committed in the camps is, therefore, not the hypocritical one of how crimes of such atrocity could be committed against human beings. It would be more honest and, above all, more useful to investigate carefully the juridical procedures and deployments of power by which human beings could be so completely deprived of their rights and prerogatives that no act committed against them could appear any longer as a crime. (At this point, in fact, everything had truly become possible.) 7.4. The bare life into which the camp’s inhabitants were transformed is not, however, an extrapolitical, natural fact that law must limit itself to confirming or recognizing. It is, rather, a threshold in which law constantly passes over into fact and fact into law, and in which the two planes become indistinguishable. It is impossible to grasp the specificity of the National Socialist concept of race – and, with it, the peculiar vagueness and inconsistency that characterize it – if one forgets that the biopolitical body that constitutes the new fundamental political subject is neither a quaestio facti (for example, the identification of a certain biological body) nor a quaestio iuris (the identification of a certain juridical rule to be applied), but rather the site of a sovereign political decision that operates in the absolute indistinction of fact and law. 
No one expressed this peculiar nature of the new fundamental biopolitical categories more clearly than Schmitt, who, in the essay “State, Movement, People,” approximates the concept of race, without which “the National Socialist state could not exist, and without which its juridical life would not be possible,” to the “general and indeterminate clauses” that had penetrated ever more deeply into German and European legislation in the twentieth century. In penetrating invasively into the juridical rule, Schmitt observes, concepts such as “good morals,” “proper initiative,” “important motive,” “public security and order,” “state of danger,” ar “case of necessity,” which refer not to a rule but to a situation, rendered obsolete the illusion of a law that would a priori be able to regulate all cases and all situations and that judges would have to limit themselves simply to applying. In moving certainty and calculability outside the juridical rule, these clauses render all juridical, concepts indeterminate. “In this sense,” Schmitt writes with unwittingly Kafkaesque accents, today there are now only ‘indeterminate’ juridical concepts. . . . The entire application of law thus lies between Scylla and Charybdis. The way forward seems to condemn us to a shoreless Epilogue Why lndia Survives The Sikhs may try to set up a separate regime. I think they probably will and that will be only a start of a general decentralization and break-up of the idea that India is a country, whereas it is a subcontinent as varied as Europe. The Punjabi is as different from a Madrassi as a Scot is from an Italian. The British tried to consolidate it but achieved nothing permanent. No one can make a nation out of a continent of many nations. GENERAL SIR CLAUDE AUCHINLECK, ex Indian army C-in-C, 1948 Unless Russia first collapses, India – Hindustan, if you will – is in grave danger of becoming communist in the not distant future. SIR FRANCIS TUKER, ex Indian army General, 1950 As the years pass, British rule in India comes to seem as remote as the battle of Agincourt. MALCOLM MUGGERIDGE,broadcaster and author, 1964 Few people contemplating Indira Gandhi’s funeral in 1984 would have predicted that ten years later India would remain a unity but the Soviet Union would be a memory. ROBIN JEFFRET, historian, 2000 I IN ITS ISSUE FOR February 1959, that venerable American magazine The Atlantic Monthly carried an unsigned report on the state of Pakistan. General Ayub Khan had recently assumed power via a military coup. What was missing in Pakistan, wrote the correspondent, was ‘the politicians. They have been banished from public life and their very name is anathema. Even politics in the ab- stract has disappeared. People no longer seem interested in debating socialism versus free enterprise or Left versus Right. It is as if these controversies, like the forms of parliamentary democracy, were merely something that was inherited willy-nilly from the West and can now be dispensed with.’ The Atlantic reporter believed that ‘the peasants [in Pakistan] welcome the change in government because they want peace’. He saw law and order returning to the countryside, and smugglers and black-marketeers being putin their place. ‘Already the underdog in Pakistan’ is grateful to the army, he wrote, adding: ‘In a poor country ... the success of any government is judged by the price of wheat and rice’, which, he claimed, had fallen since Ayub took over. 
Foreign correspondents are not known to be bashful of generalizations, even if these be based on a single fleeting visit to a single unrepresentative country. Our man at the Atlantic Monthly was no exception. From what he saw – or thought he saw – in Pakistan he offered this general lesson: ‘Many of the newly independent countries in Asia and Africa have tried to copy the British parliamentary system. The experiment has failed in the Sudan, Pakistan and Burma, while the system is under great stress in India and Ceylon. The Pakistan experiment [with military rule] will be watched in Asia and Africa with keen interest.’ Forty years later the Atlantic Monthly carried another report on the state of Pakistan. Between times the country had passed from dictatorship to democracy and then back again to rule by men in uniform. It had also been divided, with its eastern wing seceding to form the sovereign state of Bangladesh. And it had witnessed three wars, each one initiated by the generals whom the peasants had hoped would bring them peace. This fresh Atlantic report was signed, by Robert D. Kaplan, who is something of a travelling specialist on ethnic warfare and the breakdown of nation-states. Kaplan presented a very negative portrayal of Pakistan, of its lawlessness, its ethnic conflicts (Sunni vs. Shia, Mohajir vs. Sindhi, Balochi vs. Punjabi etc.), its economic disparities, and of the training of jihadis and the cult of Osama bin Laden. Kaplan quoted a Pakistani intellectual who said: ‘We have never defined ourselves in our own right – only in relation to India. That is our tragedy.’ The reporter himself thought that Pakistan ‘could be a Yugoslavia in the making, but with nuclear weapons’. Like Yugoslavia, Pakistan reflected an ‘accumulation of disorder and irrationality that was so striking’. Kaplan’s conclusion was that ‘both military and democratic governments in Pakistan have failed, even as India’s democracy has gone more than half a century without acoup’.1 Kaplan doubtless had not read the very different prognosis of Pakistan offered in his own magazine forsty years previously. What remains striking are the very different assessments of India. In 1959, the Atlantic Monthly pitied India for having a democracy when it might be better off as a military dictatorship. In 1999 the same magazine thought this very democracy had been India’s saving grace. Two years later the Twin Towers in New York fell. As attempts were made by Western powers to foster democracy by force in Afghanistan and Iraq, India’s record in nurturing democracy from within gathered renewed appreciation. When, in April 2004, India held its fourteenth general election the contrast with Pakistan was being highlighted by Pakistanis themselves: ‘India goes to the polls and the world notices,’ wrote the Karachi columnist Ayaz Amir. ‘Pakistan plunges into another exercise in authoritarian management – and the world notices, but through jaundiced eyes. Are we so dumb that the comparison escapes us?’ ‘When will we wakeup?’ continued Amir, ‘When will we learn? When will it dawn on us that it is not India’s size, population, tourism or IT industry [that is] making us look small, but Indian democracy?’2 II In those elections of 2004 some 400 million voters exercised their franchise. The ruling alliance, led by the Bharatiya Janata Party, was widely expected to win by a comfortable margin, prompting fears of a renewal of the ‘Hindutva’ agenda. 
As it happened, the Congress-led United Progressive Alliance defiedt he pollsters and came to power. The outcome was variously interpreted as a victory for secularism, a revolt of the aam admi (common man) against the rich and an affirmation of the continuing hold of the Nehru-Gandhi dynasty over the popular imagination. In the larger context of world history, however, what is important is not why the voters voted as they did but the fact that they voted at all. Ever since the 1952 elections were described as the ‘biggest gamble in history’, obituaries have been written for Indian democracy. It has been said, time and again, that a poor, diverse and divided country cannot sustain the practice of (reasonably) free and fair elections. Yet it has. In that first general election voter turnout was less than 46 per cent. Over the years this has steadily increased; from the late 1960s about three out of five eligible Indians have voted on election day. In assembly elections the voting percentage has tended to be even higher. When these num- bers are disaggregated they reveal a further deepening. In the first two general elections, less than 40 per cent of eligible women voted; by 1998 the figure was in excess of 60 per cent. Besides, as surveys showed, they increasingly exercised their choice independently, that is regardless of their husband’s or father’s views on the matter. Also voting in ever higher numbers were Dalits and tribals, the oppressed and marginalized sections of society. In northern India in particular, Dalits turned out in far greater numbers than high castes. As the political analyst Yogendra Yadav points out, ‘India is perhaps the only large democracy in the world today where the turnout of the lower orders is well above that of the most privileged groups.’3 The Indian love of voting is well illustrated by the case of a cluster of villages on the Andhra/Maharashtra border. Issued voting cards by the administrations of both states, the villagers seized the opportunity to exercise their franchise twice over.4 It is also illustrated by the peasants in Bihar who go to the polls despite threats by Maoist revolutionaries. Dismissing elections as an exercise in bourgeois hypocrisy, the Maoists have been known to blacken the faces of villagers campaigning for political parties, and to warn potential voters that their feet and hands would be chopped off. Yet, as an anthropologist working in central Bihar found, ‘the overall effect of poll-boycott on voter turnout seems to be negligible’. In villages where Maoists had been active for years, ‘in fact, election day was seen as an enjoyable (almost festive) occasion. Women dressed in bright yellows and reds, their hair oiled and adorned with clips, made their way to the polling booth in small groups.’5 Likewise, in parts of the north-east where the writ of the Indian state runs erratically or not at all, insurgents are unable to stop villagers from voting. As the chief election commissioner wryly putit, ‘the Election Commission’s small contribution to the integrity of the country is to make these areas part of the country for just one day, election day’.6 That elections have been successfully indigenized in India is demonstrated by the depth and breadth of their reach – across and into all sections of Indian society – by the passions they evoke, and by the humour that surrounds them. 
There is a very rich archive of electoral cartoons poking fun at promises made by prospective politicians, their desperation to get a party ticket and much else.7 At other times the humour can be gentle rather than mocking. Consider the career of a cloth merchant from Bhopal named Mohan Lal who contested elections against five different prime ministers. Wearing a wooden crown and a garland gifted by himself, he would walk the streets of his constituency, ringing a bell. He unfailingly lost his deposit, thereby justifying his own self-imposed sobriquet of Dhartipakad, or he who lies, humbled, on the ground. His idea in contesting elections, said Mohan Lal, was ‘to make everyone realise that democracy was meant for one and all’.8 That elections allow all Indians to feel part of India is also made clear by the experience of Goa. When it was united – or reunited – with India by force in 1961 there was much adverse commentary in the Western press. But where in 400 years of Portuguese rule the Goans had never been allowed to choose their own leaders, within a couple of years of coming under the rule of New Delhi they were able to do so. The political scientist Benedict Anderson has tellingly compared India’s treatment of Goa with Indonesia’s treatment of East Timor, that other Portuguese colony ‘liberated’ by armed nationalists: Nehru had sent his troops to Goa in 1960 [sic] without a drop of blood being spilt. But he was a humane man and the freely elected leader of a democracy; he gave the Goanese their own autonomous state government, and encouraged their full participation in India’s politics. In every respect, General Suharto was Nehru’s polar opposite.9 Considering the size of the electorate, it is overwhelmingly likely that more people have voted in Indian elections than voters in any other democracy. India’s success in this regard is especially striking when compared with the record of its great Asian neighbour, China. That country is larger, but far less divided on ethnic or religious lines, and far less poor as well. Yet there has never been a single election held there. In other ways too China is much less free than India. The flow of information is highly restricted – when the search engine Google setup shop in China in February 2006 it had to agree to submit to state censorship. The movement of people is regulated as well – the permission of the state is usually required to change one’s place of residence. In India, on the other hand, the press can print more or less what they like, and citizens can say exactly what they feel, live where they wish to and travel to any part of the country. India/China comparisons have long been a staple of scholarly analysis. Now, in a world that becomes more connected by the day, they have become ubiquitous in popular discourse as well. In this comparison China might win on economic grounds but will lose on political ones. Indians like to harp on about their neighbour’s democracy deficit, sometimes directly and at other times by euphemistic allusion. When asked to put on a special show at the World Economic Forum of 2006, the Indian delegation never failed to de- scribe their land, whether in speech or in print or on posters, as the ‘World’s Fastest Growing Democracy’. If one looks at what we might call the ‘hardware’ of democracy, then the self-congratulation is certainly merited. Indians enjoy freedom of expression and of movement, and they have the vote. However, if we examine the ‘software of democracy, then the picture is less cheering. 
Most political parties have become family firms. Most politicians are corrupt, and many come from a criminal background. Other institutions central to the functioning of a democracy have also declined precipitously over the years. The percentage of truly independent-minded civil servants has steadily declined, as has the percentage of completely fair-minded judges. Is India a proper democracy or a sham one? When asked this question, I usually take recourse to an immortal line of the great Hindi comic actor Johnny Walker. In a film where he plays the hero’s sidekick, Walker answers every query with the remark: ‘Boss, phipty-phipty’. When asked what prospect he has of marrying the girl he so deeply loves, or of getting the job he so dearly desires, the sidekick tells the boss that the chances are roughly even: 50 per cent of success, or 50 per cent of failure. Is India a democracy, then? The answer is, well, phipty-phipty. It mostly is when it comes to holding elections and permitting freedom of movement and expression. It mostly is not when it comes to the functioning of politicians and political institutions. However, that India is even a 50 per cent democracy flies in the face of tradition, history and the conventional wisdom. Indeed, by its own experience it is rewriting that history and that wisdom. Thus Sunil Khilnani remarked of the 2004 polls that they represented the largest exercise of democratic election, ever and anywhere, in human history. Clearly, the idea of democracy, brought into being on an Athenian hillside some 2,500 years ago, has travelled far – and today describes a disparate array of political projects and experiences. The peripatetic life of the democratic idea has ensured that the history of Western political ideas can no longer be written coherently from within the terms of the West’s own historical experience.10

III

The history of independent India has amended and modified theories of democracy based on the experience of the West. However, it has confronted even more directly ideas of nationalism emanating from the Western experience. In an essay summarizing a lifetime of thinking on the subject, Isaiah Berlin identifies ‘the infliction of a wound on the collective feelings of a society, or at least of its spiritual leaders’, as a ‘necessary’ condition for the birth of nationalist sentiment. For this sentiment to fructify into a more widespread political movement, however, requires ‘one more condition’, namely that the society in question ‘must, in the minds of at least some of its most sensitive members, carry an image of itself as a nation, at least in embryo, in virtue of some general unifying factor or factors – language, ethnic origin, a common history (real or imaginary)’. Later in the same essay, Berlin comments on the ‘astonishingly Europo-centric’ thought of nineteenth- and early twentieth-century political thinkers, where ‘the people of Asia and Africa are discussed either as wards or as victims of Europeans, but seldom, if ever, in their own right, as peoples with histories and cultures of their own; with a past and present and future which must be understood in terms of their own actual character and circumstances.’11 Behind every successful nationalist movement in the Western world has been a certain unifying factor, a glue holding the members of the nation together, this provided by a shared language, a shared religious faith, a shared territory, a common enemy – and sometimes all of the above.
Thus, the British nation brought together those who huddled together on a cold island, who were mostly Protestant and who detested France. In the case of France, it was language which powerfully combined with religion. For the Americans a shared language and mostly shared faith worked in tandem with animosity towards the colonial power. As for the smaller east European nations – the Poles, the Czechs, the Lithuanians etc. – their populations have been united by a common language, a mostly common faith and a shared and very bitter history of domination by German and Russian oppressors.12 By contrast with these (and other examples) the Indian nation does not privilege a single language or religious faith. Although the majority of its citizens are Hindus, India is not a ‘Hindu’ nation. Its constitution does not discriminate between people on the basis of faith; nor, more crucially, did the nationalist movement that lay behind it. From its inception the Indian National Congress was, as Mukul Kesavan observes, a sort of political Noah’s Ark which sought to keep every species of Indian on board.13 Gandhi’s political programme was built upon harmony and co-operation between India’s two major religious communities, Hindus and Muslims. Although, in the end, his work and example were unsuccessful in stopping the division of India, the failure made his successors even more determined to construct independent India as a secular republic. For Jawaharlal Nehru and his colleagues, if India was anything at all it was not a ‘Hindu Pakistan’. Like Indian democracy, Indian secularism is also a story that combines success with failure. Membership of a minority religion is no bar to advancement in business or the professions. The richest industrialist in India is a Muslim. Some of the most popular film stars are Muslim. At least three presidents and three chief justices have been Muslim. In 2007, the president of India is a Muslim, the prime minister a Sikh, and the leader of the ruling party a Catholic born in Italy. Many of the country’s most prominent lawyers and doctors have been Christians and Parsis. On the other hand, there have been periodic episodes of religious rioting, in the worst of which (as in Delhi in 1984 and Gujarat in 2002) the minorities have suffered grievous losses of life and property. Still, for the most part the minorities appear to retain faith in the democratic and secular ideal. Very few Indian Muslims have joined terrorist or fundamentalist organizations. Even more than their compatriots, Indian Muslims feel that their opinion and vote matter. One recent survey found that while 69 per cent of all Indians approved and endorsed the ideal of democracy, 72 per cent of Muslims did so. And the turnout of Muslims at elections is higher than ever before.14 Building democracy in a poor society was always going to be hard work. Nurturing secularism in a land recently divided was going to be even harder. The creation of an Islamic state on India’s borders was a provocation to those Hindus who themselves wished to merge faith with state. My own view – speaking as a historian rather than as a citizen – is that as long as Pakistan exists there will be Hindu fundamentalists in India. In times of stability, or when the political leadership is firm, they will be marginal or on the defensive. In times of change, or when the political leadership is irresolute, they will be influential and assertive. The pluralism of religion was one cornerstone of the foundation of the Indian republic.
A second was the pluralism of language. Here again, the intention and the effort well pre-dated Independence. In the 1920s Gandhi reconstituted the provincial committees of the Congress on linguistic lines. The party had promised to form linguistic provinces as soon as the country was free. The promise was not redeemed immediately after 1947, because the creation of Pakistan had prompted fears of further Balkanization. However, in the face of popular protest the government yielded to the demand. Linguistic states have been in existence for fifty years now. In that time they have deepened and consolidated Indian unity. Within each state a common language has provided the basis of administrative unity and efficiency. It has also led to an efflorescence of cultural creativity, as expressed in film, theatre, fiction and poetry. However, pride in one’s language has rarely been in conflict with a broader identification with the nation as a whole. The three major secessionist movements in independent India – in Nagaland in the 1950s, in Punjab in the 1980s and in Kashmir in the 1990s – have affirmed a religious and territorial distinctiveness, not a linguistic one. For the rest, it has proved perfectly possible – indeed, desirable – to be Kannadiga and Indian, Malayali and Indian, Andhra and Indian, Tamil and Indian, Bengali and Indian, Oriya and Indian, Maharashtrian and Indian, Gujarati and Indian and, of course, Hindi-speaking and Indian. That, in India, unity and pluralism are inseparable is graphically expressed in the country’s currency notes. On one side is printed a portrait of the ‘father of the nation’, Mahatma Gandhi; on the other side a picture of the Houses of Parliament. The note’s denomination – 5, 10, 50, 100 etc. – is printed in words in Hindi and English (the two official languages), but also, in smaller type, in all the other languages of the Union. In this manner, as many as seventeen different scripts are represented. With each language, and each script, comes a distinct culture and regional ethos, here nesting more or less comfortably with the idea of India as a whole. Some Western observers – usually Americans – believed that this profusion of tongues would be the undoing of India. Based on their own country’s experience, where English had been the glue binding the different waves of immigrants, they thought that a single language – be it Hindi or English – had to be spoken by all Indians. Linguistic states they regarded as a grievous error. Thus, in a book published as late as 1970, and at the end of his stint as the Washington Post’s man in India, Bernard Nossiter wrote despairingly that this was ‘a land of Babel with no common voice’. The creation of linguistic states would ‘further divide the states from each other [and] heighten the impulse toward secession’. From its birth the Indian nation had been ‘plagued by particularist, separatist tendencies’, wrote Nossiter, and ‘the continuing confusion of tongues ... can only further these tendencies and puts in question the future unity of the Indian state’.15 That, to survive, a nation-state had necessarily to privilege one language was a view that the Soviet dictator Joseph Stalin shared with American liberals. Stalin insisted that ‘a national community is inconceivable without a common language’, and that ‘there is no nation which at one and the same time speaks several languages’.16 This belief came to inform the language policy of the Soviet Union, in which the learning of Russian was made obligatory.
The endeavour, as Stalin himself put it, was to ensure that ‘there is one language in which all citizens of the USSR can more or less express themselves – that is Russian’.17 Like Bernard Nossiter, Stalin too might have feared for the future of the Indian nation-state because of its encouragement of linguistic diversity. In fact, exactly the reverse has happened: the sustenance of linguistic pluralism has worked to tame and domesticate secessionist tendencies. A comparison with neighbouring countries might be helpful. In 1956, the year the states of India were reorganized on the basis of language, the Parliament of Sri Lanka (then Ceylon) introduced legislation recognizing Sinhala as the sole official language of the country. The intention was to make Sinhala the medium of instruction in all state schools and colleges, in public examinations and in the courts. Potentially the hardest hit were the Tamil-speaking minority who lived in the north of the island, and whose feelings were eloquently expressed by their representatives in Parliament. ‘When you deny me my language’, said one Tamil MP, ‘you deny me everything.’ ‘You are hoping for a divided Ceylon’, warned another, adding: ‘Do not fear, I assure you [that you] will have a divided Ceylon.’ A left-wing member, himself Sinhala speaking, predicted that if the government did not change its mind and insisted on the act being passed, ‘two torn little bleeding states might yet arise out of one little state’.18 In 1971 two torn medium-sized states arose out of one large-sized one. The country being divided was Pakistan, rather than Sri Lanka, but the cause for the division was, in fact, language. For the founders of Pakistan likewise believed that their state had to be based on a single language as well as a single religion. In his first speech in the capital of East Pakistan, Dacca, Mohammad Ali Jinnah warned his audience that they would have to take to Urdu sooner rather than later. ‘Let me make it very clear to you’, said Jinnah to his Bengali audience, ‘that the State Language of Pakistan is going to be Urdu and no other language. Anyone who tries to mislead you is really the enemy of Pakistan. Without one State language, no nation can remain tied up solidly together and function.’19 In the 1950s bloody riots broke out when the Pakistan government tried to impose Urdu on recalcitrant students. The sentiment of being discriminated against on the grounds of language persisted, and ultimately resulted in the formation of the independent state of Bangladesh. Pakistan was created on the basis of religion, but divided on the basis of language. And for more than two decades now a bloody civil war has raged in Sri Lanka, the disputants divided somewhat by territory and faith but most of all by language. The lesson from these cases might well be: ‘One language, two nations’. Had Hindi been imposed on the whole of India the lesson might well have been: ‘One language, twenty-two nations’. That Indians spoke many languages and followed many faiths made their nation unnatural in the eyes of some Western observers, both lay and academic. In truth, many Indians thought so too. Likewise basing themselves on the European experience, they believed that the only way for independent India to survive and prosper would be to forge a bond, or bonds, that overlay or submerged the diversity that lay below. The glue, as in Europe, could be provided by religion, or language, or both.
Such was the nationalism once promoted by the old Jana Sangh and promoted now, in a more sophisticated form, by the BJP. This reaches deep into the past to invoke a common (albeit mostly mythical) ‘Aryan’ ancestry for the Hindus, a common history of suffering at the hands of (mostly Muslim) invaders, with the suffering tempered here and there by resistance by valiant ‘Hindu’ chieftains such as Rana Pratap and Shivaji. A popular slogan of the original Jana Sangh was ‘Hindi, Hindu, Hindustani’. The attempt was to make Indian nationalism more natural, by making – or persuading – all Indians to speak the same language and worship the same gods. In time, the bid to impose a uniform language was dropped. But the desire to impose the will of the majority religion persisted. This has led, as we have seen in this book, to much conflict, violence, rioting and death. Particularly after the Gujarat riots of 2002, which were condoned and to some extent even approved by the central government, fears were expressed about the survival of a secular and democratic India. Thus, in a lecture delivered in the university town of Aligarh, the writer Arundhati Roy went so far as to characterize the BJP regime as ‘fascist’. In fact, she used the term ‘fascism’ eleven times in a single paragraph while describing the actions of the government in New Delhi.20 Here again, Indian events and experiences were being analysed in terms carelessly borrowed from European history. To call the BJP ‘fascist’ is to diminish the severity and seriousness of the murderous crimes committed by the original fascists in Italy and Germany. Many leaders of the BJP are less than appealing, but to see the party as ‘fascist’ would be both to overestimate its powers and to underestimate the democratic traditions of the Indian people. Notably, the BJP now vigorously promotes linguistic pluralism. No longer are its leaders from the Hindi heartland alone, and it has expanded its influence in the southern states. And it is obliged to pay at least lip service to religious pluralism. One of its general secretaries is a Muslim; even if he is dismissed as a token, the ideology he and his party promote goes by the name of ‘positive secularism’. The qualifier only underlines the larger concession – that even if some BJP leaders privately wish for a theocratic Hindu state, for public consumption they must endorse the secular ideals of the Indian Constitution. Finally, despite its best efforts, the BJP was not able to disturb the democratic edifice of the Indian polity. A month after Arundhati Roy delivered her speech, the BJP alliance lost power in a general election that it had called. Its leaders moved out of office and allowed their victors to move in instead. When was the last time a ‘fascist’ regime permitted such an orderly transfer of power? The holding of the 1977 elections – called by an individual who had proven dictatorial tendencies – and of the 2004 elections – called by a party unreliably committed to democratic procedure – were both testimony to the deep roots that democracy had struck in the soil of India. In this respect, the country was fortunate in the calibre of its founding figures, and in the fact that they lived as long as they did. Few nations have had leaders of such acknowledged intelligence and integrity as Jawaharlal Nehru, Vallabhbhai Patel and B. R. Ambedkar, all living and working at the same time.
Within a few years of Independence Patel had died and Ambedkar had left office; but by then the one had successfully overseen the political integration of the country and the other the forging of a democratic constitution. As Nehru lived on, he was kept company by outstanding leaders in his own party – K. Kamaraj and Morarji Desai, for instance – and in the opposition, in whose ranks were such men as J. B. Kripalani and C. Rajagopalachari. Jawaharlal Nehru served three full terms in office, a privilege denied comparable figures in the countries of South Asia, where, for example, Aung San was murdered on the eve of the British departure from Burma, Jinnah died within a few years of Pakistan’s freedom, Mujib within a few years of Bangladesh’s independence and the Nepali democrat B. P. Koirala was allowed only a year as prime minister before being dismissed (and then jailed) by the monarchy. What might those men have done if they had enjoyed power as long as Nehru, and if they had had the kind of supporting cast that he did?21 Of course, there has been a rapid, even alarming, decline in the quality of the men and women who rule India. In a book published in 2003 the political theorist Pratap Bhanu Mehta wrote feelingly of ‘the corruption, mediocrity, indiscipline, venality and lack of moral imagination of the [Indian] political class’. Within the Indian state, he continued, ‘the lines between legality and illegality, order and disorder, state and criminality, have come to be increasingly porous’.22 That said, the distance – intellectual or moral – between Jawaharlal Nehru and Indira Gandhi, or between B. R. Ambedkar and Mulayam Singh Yadav, is not necessarily greater than between, say, Abraham Lincoln and George W. Bush. It is in the nature of democracies, perhaps, that while visionaries are sometimes necessary to make them, once made they can be managed by mediocrities. In India, the sapling was planted by the nation’s founders, who lived long enough (and worked hard enough) to nurture it to adulthood. Those who came afterwards could disturb and degrade the tree of democracy but, try as they might, could not uproot or destroy it.

IV

Indian nationalism has not been based on a shared language, religion, or ethnic identity. Perhaps one should then invoke the presence of a common enemy, namely European colonialism. The problem here is the methods used to achieve India’s freedom. The historian Michael Howard claims that ‘no Nation, in the true sense of the word ... could be born without war ... no self-conscious community could establish itself as a new and independent actor on the world scene without an armed conflict or the threat of one’.23 Once again, India must count as an exception. Certainly, it was the movement against British rule that first united men and women from different parts of the subcontinent in a common and shared endeavour. However, their (eventually successful) movement for political freedom eschewed violent revolution in favour of non-violent resistance. India emerged as a nation on the world stage without an armed conflict or, indeed, the threat of one. Gandhi and company have been widely praised for preferring peaceful protest to armed struggle. However, they should be equally commended for having the wisdom to retain, after the British left, such aspects of the colonial legacy as might prove useful in the new nation. The colonialists were often chastised by the nationalists for promoting democracy at home while denying it in the colonies.
When the British finally left, it was expected that the Indians would embrace metropolitan traditions such as parliamentary democracy and Cabinet government. More surprising perhaps was their endorsement and retention of a quintessentially colonial tradition – the civil service. The key men in British India were the members of the Indian Civil Service (ICS). In the countryside they kept the peace and collected the taxes, while in the Secretariat they oversaw policy and generally kept the machinery of state well oiled. Although there was the odd rotten egg, these were mostly men of integrity and ability.24 A majority were British, but there were also a fair number of Indians in the ICS. When Independence came, the new government had to decide what to do with the Indian civil servants. Nationalists who had been jailed by them argued that they should be dismissed or at least put in their place. The home minister, Vallabhbhai Patel, however, felt that they should be allowed to retain their pay and perquisites, and in fact be placed in positions of greater authority. In October 1949 a furious debate broke out on the subject in the Constituent Assembly of India. Some members complained that the ICS men still had the ‘mentality [of rulers] lingering in them’. They had apparently ‘not changed their manners’, ‘not reconciled themselves to the new situation’. ‘They do not feel that they are part and parcel of this country’, insisted one nationalist. Vallabhbhai Patel had himself been jailed many times by ICS men, but this experience had only confirmed his admiration for them. He knew that without them the Pax Britannica would simply have been inconceivable. And he understood that the complex machinery of a modern independent nation-state needed such officers even more. As he reminded the members of the assembly, the new constitution could be worked only ‘by a ring of Service which will keep the country intact’. He testified to the ability of the ICS men, but also to their sense of service. As Patel put it, the officers had ‘served very ably, very loyally the then Government and later the present Government’. Patel was clear that ‘these people are the instruments [of national unity]. Remove them and I see nothing but a picture of chaos all over the country.’25 In those first, terribly difficult years of Indian freedom, the ICS men vindicated Vallabhbhai Patel’s trust in them. They helped integrate the princely states, resettle the refugees and plan and oversee the first general election. Other tasks assigned to them were more humdrum but equally consequential – such as maintaining law and order in the districts, working with ministers in the Secretariat and supervising famine relief. In 1947 Patel inaugurated a new cadre modelled on the ICS but with a name untainted by the colonial experience. This was the Indian Administrative Service, or IAS. In 2008 there are some 5,000 IAS officers in the employment of the government of India. The IAS is complemented, as in British days, by other ‘all India’ services, among them the police, forest, revenue and customs services. These serve as an essential link between the centre and the states. Officers are assigned to a particular state; they spend at least half of their service career in that province, the rest in the centre. To the older duties of tax collection and the maintenance of law and order have been added a whole range of new responsibilities. Conducting elections is one; the supervising of development programmes another.
In the course of his career an average IAS officer would acquire at least a passing familiarity with such different and divergent subjects as criminal jurisprudence, irrigation management, soil and water conservation and primary health care. This, like its predecessor, is truly an ‘elite’ cadre. The competition to enter the higher civil services is ferocious. In 1996, 120,712 candidates appeared for the examination, of whom a mere 738 were finally selected. Their intelligence and ability are of a very high order. However, there are complaints of increasing corruption among its members, and of their succumbing too easily to their political masters. Perhaps if the IAS were abolished at one stroke the country would not descend into chaos. But as it stands IAS officers play a vital role in maintaining the country’s unity.26 In times of crisis they tend to rise to the challenge. After the tsunami of 2004, for example, IAS officers in Tamil Nadu were commended for their outstanding work in relief and rehabilitation. It was an ICS man, Sukumar Sen, who laid the groundwork for elections in India, and it has been IAS men who have kept the machinery going. The chief election commissioners in the states are drawn from the service. Junior officers supervise polls in their districts; those in the middle ranks serve as election observers, reporting on violations of procedure. More generally, the civil services serve as a bridge between state and society. In the course of their work, these administrators meet thousands of members of the public, drawn from all walks of life. Living and working in a democracy, they are obliged to pay close attention to what people think and demand. In this respect, their job is probably even harder than that of their predecessors in the ICS. A colonial institution that has played an equally vital role is the Indian army. Its reputation took a battering after the China war of 1962, before it redeemed itself through its performance in successive wars with Pakistan. The blows inflicted by Tamil insurgents in Sri Lanka in 1987–8 dented the army somewhat, but then honour was restored by the successful ousting of the Kargil intruders a decade later. While its reputation as a fighting force has gone up and down, as an agency for maintaining order in peacetime the Indian army has usually commanded the highest respect. In times of communal rioting, the mere appearance of soldiers in uniform is usually enough to make the rioters flee. And in times of natural disaster they bring succour to the suffering. When there is a flood, famine, cyclone or earthquake, it is the army which is often first on the scene, and always the most efficient and reliable actor around. The Indian army is a professional and wholly non-sectarian body. It is also apolitical. Almost from the first moments of Independence, Jawaharlal Nehru made it clear to the army top brass that in matters of state – both large and small – they had to subordinate themselves to the elected politicians. At the time of the transfer of power the army was still headed by a British general, who had ordered that the public be kept away from a flag-hoisting ceremony to be held on the day after Independence. As prime minister, Nehru rescinded the order, and wrote to the general as follows: While I am desirous of paying attention to the views and susceptibilities of our senior officers, British and Indian, it seems to me that there is a grave misunderstanding about the matter.
In any policy that is to be pursued, in the Army or otherwise, the views of the Government of India and the policy they lay down must prevail. If any person is unable to lay down that policy, he has no place in the Indian Army, or in the Indian structure of Government. I think this should be made perfectly clear at this stage.27 A year later it was Vallabhbhai Patel’s turn to put a British general in his place. When the government decided to move against the Nizam, the commander-in-chief, General Roy Bucher, warned that sending troops into Hyderabad might provoke Pakistan to attack Amritsar. Patel told Bucher that if he opposed the Hyderabad action he was free to resign. The general backed down, and sent the troops as ordered.28 Shortly afterwards Bucher retired, to be succeeded by the first Indian C-in-C, General K. M. Cariappa. At the beginning of his tenure Cariappa restricted himself to military matters, but as he grew into the job he began to offer his views on such questions as India’s preferred model of economic development. In October 1952 Nehru wrote advising him to give fewer press conferences, and at any rate to stick to safe subjects. He also enclosed a letter from one of his Cabinet colleagues, which complained that Cariappa was ‘giving so many speeches and holding so many Press Conferences all over the country’, giving the impression that he was ‘playing the role of a political or semi-political leader’.29 The message seems to have gone home, for when Cariappa demitted office in January 1953, in his farewell speech he ‘exhorted soldiers to give a wide berth to politics’. The army’s job, he said, was not ‘to meddle in politics but to give unstinted loyalty to the elected Government’.30 Nehru knew, however, that the general was something of a loose cannon, who could not be completely trusted to follow his own advice. Within three months of his retirement Cariappa was appointed high commissioner to Australia. The general was not entirely pleased, for, as he told the prime minister, ‘by going away from home to the other end of the world for whatever period you want me in Australia, I shall be depriving myself of being in continuous and constant touch with the people’. Nehru consoled the general that as a sportsman himself he was superbly qualified to represent India to a sporting nation. But the real intention, clearly, was to get him as far away from the people as possible.31 As the first Indian to head the army, Cariappa carried a certain cachet, which lost its lustre with every passing month after he had left office. By the time he came back from Australia Cariappa was a forgotten man. Nehru’s foresight was confirmed, however, by the statements the general made from time to time. In 1958 he visited Pakistan, where army officers who had served with him in undivided India had just effected a coup. Cariappa publicly praised them, saying that it was ‘the chaotic internal situation which forced these two patriotic Generals to plan together to impose Martial Law in the country to save their homeland from utter ruination’.32 Ten years later, he sent an article to the Indian Express, in which he argued that the chaotic internal situation in West Bengal demanded that President’s Rule be imposed for a minimum of five years. The recommendation was in violation of both the letter and the spirit of the constitution.
Fortunately, the piece was returned by the editor, who pointed out to the general that ‘it would be embarrassing in the circumstances both to you and to us to publish this article’.33 The pattern set in those early years has persisted into the present. As Lieutenant General J. S. Aurora notes, Nehru ‘laid down some very good norms’, which ensured that ‘politics in the army has been almost absent’. ‘The army is not a political animal in any terms’, remarks Aurora, and the officers especially ‘must be the most apolitical people on earth!’34 It is a striking fact that no army commander has ever fought an election. Aurora himself became a national hero after overseeing the liberation of Bangladesh, but neither he nor other officers have sought to convert glory won on the battlefield into political advantage. If they have taken public office after retirement, it has been at the invitation of the government. Some, like Cariappa, have been sent as ambassadors overseas; others have served as state governors. The army, like the civil services, is a colonial institution that has been successfully indigenized. The same might be said about the English language. In British times the intelligentsia and professional classes communicated with one another in English. So did the nationalist elite. Patel, Bose, Nehru, Gandhi and Ambedkar all spoke and wrote in their native tongue, and also in English. To reach out to regions other than one’s own, its use was indispensable. Thus a pan-Indian, anti-British consciousness was created, in good part by thinkers and activists writing in the English language. After Independence, among the most articulate advocates for English was C. Rajagopalachari. The colonial rulers, he wrote, had ‘for certain accidental reasons, causes and purposes ... left behind [in India] a vast body of the English language’. But now that it had come, there was no need for it to go away. For English ‘is ours. We need not send it back to Britain along with Englishmen.’ He humorously added that, according to Indian tradition, it was a Hindu goddess, Saraswati, who had given birth to all the languages of the world. Thus English ‘belonged to us by origin, the originator being Saraswati, and also by acquisition’.35 On the other hand, there were some very influential nationalists who believed that English must be thrown out of India with the British. In Nehru’s day, fitful attempts were made to replace English with Hindi as the language of inter-provincial communication. But it continued to be in use within and outside government. Visiting India in 1961 the Canadian writer George Woodcock found that, despite India’s strangeness, its ‘immense variety of custom, landscape and physical types’, this was ‘a foreign setting in which one’s language was always understood by someone nearby, and in which to speak with an English accent meant that one was seen as a kind of cousin bred out of the odd, temporary marriage of two peoples into which love and hate entered with equal intensity’.36 After Nehru’s death the efforts to extinguish English were renewed. Despite pleas from the southern states, on 26 January 1965 Hindi became the sole official language of inter-provincial communication. As we have seen, this provoked protests so intense and furious that the order was withdrawn within a fortnight. Thus English continued as the language of the central government, the superior courts and higher education.
Over the years English has confirmed, consolidated and deepened its position as the language of the pan-Indian elite. The language of the colonizers has, in independent India, become the language of power and prestige, the language of individual as well as social advancement. As the historian Sarvepalli Gopal observes, ‘that knowledge of English is the passport for employment at higher levels in all fields, is the unavoidable avenue to status and wealth and is mandatory to all those planning to migrate abroad, has meant a tremendous enthusiasm since independence to study it’. But, as Gopal also writes, English ‘may be described as the only non-regional language in India. It is a link language in a more than administrative sense, in that it counters blinkered provincialism.’37 Those, like Nehru and Rajaji, who sought to retain English, sensed that it might help consolidate national unity and further scientific advance. That it has done, but largely unanticipated has been its role in fuelling economic growth. For behind the spectacular rise of the software industry lies the proficiency of Indian engineers in English.

V

If India is roughly 50 per cent democratic, it is approximately 80 per cent united. Some parts of Kashmir and the north-east are under the control of insurgents seeking political independence. Some forested districts in central India are in the grip of Maoist revolutionaries. However, these areas, large enough in themselves, constitute considerably less than a quarter of the total land mass claimed by the Indian nation. Over four-fifths of India, the elected government enjoys legitimate power and authority. Throughout this territory the citizens of India are free to live, study, take employment and invest in businesses. The economic integration of India is a consequence of its political integration. The two act in a mutually reinforcing loop. The greater the movement of goods and capital and people across India, the greater the sense that this is, after all, one country. In the first decades of Independence it was the public sector that did most to further this sense of unity. In plants such as the great steel mill in Bhilai, Andhras laboured and lived alongside Punjabis and Gujaratis, fostering appreciation of other tongues, customs and cuisine, while underlining the fact that they were all part of the same nation. As the anthropologist Jonathan Parry remarks, in the Nehruvian imagination ‘Bhilai and its steel plant were seen as bearing the torch of history, and as being as much about forging a new kind of society as about forging steel’. The attempt was not unsuccessful; among the children of the first generation of workers, themselves born and raised in Bhilai, provincial loyalties were superseded by a more inclusive patriotism, a ‘more cosmopolitan cultural style’.38 More recently, it has been the private sector which has, if with less intent, furthered the process of national integration. Firms headquartered in Tamil Nadu set up cement plants in Haryana; doctors born and educated in Assam establish clinics in Bombay. Many of the engineers in Hyderabad’s IT industry come from Bihar. The migration is not restricted to the professional classes; there are barbers from Uttar Pradesh working in the city of Bangalore, as well as carpenters from Rajasthan. However, it must be said that the flow is not symmetrical. While the cities and towns that are ‘booming’ become ever more cosmopolitan, economically laggard states sink deeper into provincialism.
VI

Apart from elements of politics and economics, cultural factors have also contributed to national unity. Pre-eminent here is the Hindi film. This is the great popular passion of the Indian people, watched and followed by Indians of all ages, genders, castes, classes, religions and linguistic groups. Each formally recognized state of the Union, says the lyricist Javed Akhtar, ‘has its different culture, tradition and style. In Gujarat, you have one kind of culture, then you go to Punjab, you have another, and the same applies in Rajasthan, Bengal, Orissa or Kerala.’ Then Akhtar adds, ‘There is one more state in this country, and that is Hindi cinema.’39 This is a stunning insight which asks to be developed further. As a separate state of India, Hindi cinema acts as a receptacle for all that (in a cultural sense) is most creative in the other states. Thus its actors, musicians, technicians and directors come from all parts of India. Thus also it draws ecumenically from cultural forms prevalent in different regions. For example, a single song may feature both the Punjabi folk dance called the bhangra and its Tamil classical counterpart, bharatanatyam. Having borrowed elements from here, there and everywhere, the Hindi film then sends the synthesized product out for appreciation to the other states of the Union. The most widely revered Indians are film stars. Yet cinema does not merely provide Indians with a common pantheon of heroes; it also gives them a common language and universe of discourse. Lines from film songs and snatches from film dialogue are ubiquitously used in conversations in schools, colleges, homes and offices – and on the street. Because it is one more state of the Union, Hindi cinema also speaks its own language – one that is understood by all the others. The last sentence is meant literally as well as metaphorically. Hindi cinema provides a stock of social situations and moral conundrums which widely resonate with the citizenry as a whole. But, over time, it has also made the Hindi language more comprehensible to those who previous...
