This book is the fruit of sixteen months of work carried out by the Internation Collective. It aims to respond to two speeches given by António Guterres, Secretary-General of the United Nations – the first on 10 September 2018 at the UN, the second on 24 January 2019 in Davos (Switzerland) – as well as to the appeals made on various occasions by Greta Thunberg.
The COP25 held in Madrid in December 2019 showed the degree to which neither the IPCC, nor António Guterres, nor Greta Thunberg, nor the youth movements she has sparked around the world, is being heard by the political and economic powers – while public opinion, with the exception of the younger generation, seems to have lost its voice in relation to these appeals, despite the rise of the environmental vote, for example in Europe.
The view of the Internation Collective is that, in addition to all the particular conflicts of interest with the general interest that clearly exist on the side of both governments and corporations, thanks to which they fail to live up to their responsibilities – which seems to us to amount, in the current situation, to a moral, political and economic fault – this state of affairs is due primarily to the fact that the implementation of truly decisive and effective measures to combat climate change, and, more generally, the disorders tied to the excesses of the Anthropocene era, depends upon profoundly changing the scientific models that have dominated the industrial economy since the late eighteenth century.
These models all have a fundamentally Newtonian construction, inasmuch as they ignore the question of entropy. Integrating the issues raised by this question (the toxic aspects of development are all expressions of these issues) presupposes modifying the microeconomic and macroeconomic axioms, theorems, methods, instruments and organizations of the global industrial economy – an industrial economy characterized by the fact that, like technology, it integrates scientific formalisms with knowledge and with technical production methods. The need for a change of economic organization, due to the toxicity generated by the current industrial economy, was highlighted during COP23 by the researchers who signed the appeal published on 13 November 2017 in BioScience, in particular in their twelfth point.
Humanity as a whole, which on the largest scale is represented by the UN, faces the challenge of formalizing, and bringing into play at the level of the planetary economy, new theoretical models equal to the real situation – a global threat caused by the global economy in its encounter with the biosphere, which could in the near future turn into a kind of ‘necrosphere’ as a result of the irrational and unreasonable exploitation of what, since Vernadsky, has also been called the technosphere. Is it possible for such a discourse to be listened to any more than the warnings that have been issued constantly since 1992 – warnings that, despite the countless catastrophes that have now unfolded in the biosphere, of which the 2019 fires provide the most dreadful images, have remained without effect?
Such a discourse can become audible, and in the short term, to the extent that it turns this challenge into an opportunity to create new forms of economic activity, industrial as well as artisanal, agricultural and in terms of services, based on the struggle against entropy, more solvent forms that, with a transitional and in-depth approach, progressively redefine (1) investment and work, and (2) employment, by taking advantage of the automation currently underway – not so that technology will become capable of resolving all problems, but so that technology will be able to strengthen the capabilities of individuals and groups in the struggle against entropy, and in so doing, and in a strict sense, to enable them to earn their living [gagner leur vie], to regain their life, both individually and collectively.
From nine different angles, corresponding to nine chapters, this work proposes:
• a diagnosis of the present situation;
• a theoretical formalization of its causes, consequences and possible transformations;
• a method of large-scale social experimentation, based on the rapid transfer of the results of contributory research – fundamental research, applied research and action research – in the form of contributory economic models;
• the sharing of results and experiments by consolidating them on a global scale through a specific organization inspired by the concept of ‘internation’ outlined by Marcel Mauss in 1920.
The nine angles are:
1. epistemology;
2. territorial dynamics;
3. contributory economy;
4. contributory research;
5. the internation as institution;
6. contributory design;
7. ethics in the Anthropocene era;
8. addiction and the dopaminergic system;
9. the global political economy of carbon (fire) and silicon (information).
Composed of scientists, economists, epistemologists, philosophers, sociologists, lawyers, artists, doctors, engineers, designers and citizens actively engaged in these issues, the Internation Collective was formed in order to confront these questions of axioms, theorems, methods, instruments and organizations of the global industrial economy in the context of automation – through a progressive transformation of macroeconomic norms, starting from an experimentally-driven process of transition aimed at setting up an alternative industrial macroeconomy through which all aspects related to the Anthropocene’s encounter with its own limits would be addressed in a functional and systemic way.
The name ‘Internation Collective’ was adopted in November 2019 – the collective having been formed in London on 22 September 2018. ‘Internation’ is a neologism put forward by Marcel Mauss in 1920, during the time of the creation of the institution that would on 10 January 1920 come to be named the League of Nations, at the Palais Wilson in Geneva (then called the Hôtel National).
On 10 January 2020, the work presented in the following chapters will be presented publicly in Geneva at a press conference preceded by a day of work and exchange with two international youth movements, Youth for Climate and Extinction Rebellion. The press conference will be held on behalf of the Internation Collective, but also on behalf of those invited to the event and wanting to be present at the table, whether they have been invited to take part in these discussions on behalf of institutions, associations or informal groups, or are there in their personal capacity.
The work being done with members of Youth for Climate and Extinction Rebellion – two movements, both essentially led by the younger generation, working to drive political and economic powers to take the action required by the extremely critical situation in which the biosphere finds itself – is being carried out within the framework of the Association of Friends of the Thunberg Generation, created from a proposal to transform the Ars Industrialis association, and whose project was presented at the Centre Pompidou on 17 December 2019.
The vocation of the Association of Friends of the Thunberg Generation will be found in an appendix. To put it in one sentence, its goal is to open up an ongoing dialogue with the youth movements struggling to cope with the climate emergency, starting from Greta Thunberg’s demand to ‘listen to the scientists’, in order to formulate well-considered proposals from various standpoints – the notable generational differences among them being a source of enrichment.
The materials contained in the following chapters have been written collectively. They are addressed firstly to the UN and expand on points that were raised in an appendix to a letter addressed to the Secretary-General of the United Nations. They were partially presented and discussed during a symposium held at the Centre Pompidou on 17–18 December 2019, as part of the Entretiens du nouveau monde industriel that the Institut de Recherche et d’Innovation organizes there each December. The letter to António Guterres is appended to this introduction.
The Internation Collective met for the first time on 22 September 2018 at the Serpentine Galleries in London, after its director, Hans Ulrich Obrist, suggested that we organize a debate on the question of work in the twenty-first century – and that we do so in reference to a program of social experimentation and contributory research launched in Seine-Saint-Denis in 2016 under the name of Territoire Apprenant Contributif (Contributory Learning Territory). This debate aimed to explore the question of the future of work, and was conducted within the framework of the Marathon, an initiative of Hans Ulrich Obrist organized each autumn and held at the Serpentine Galleries.
The Collective has set itself the task of submitting proposals to the United Nations in order to rethink work in the twenty-first century on new theoretical and practical bases, in the context of an essential transformation of the industrial economy, which at the end of the Anthropocene era is confronted with its own toxic effects. In other words, it is a question of facing up to the injunctions regularly formulated by the scientific world with regard to the immediate future of humanity and life on Earth.
This meeting was followed by several seminars held in various locations, including a session held in February 2019 based on the symposium Le travail au XXIe siècle, organized by Alain Supiot at the Collège de France as part of the centenary of the ILO, the proceedings of which have now been published. A two-day seminar was also held at Maison Suger in early July 2019, within the framework of the Collège d’études Mondiales of the Fondation Maison des Sciences de l’Homme, with the participation of members of Youth for Climate.
The scientific work analysing the threats to the biosphere posed by the industrial development of human societies emerged within the United Nations context in 1972, with the first Earth Summit held that year in Stockholm, leading to the establishment of the United Nations Environment Program (UNEP). Since then, such work has continued to develop and strengthen, with almost every new assessment confirming and extending the significance of the toxic consequences of the current form of industrial development – up to and including the most recent IPCC reports, to which the Secretary-General of the United Nations has frequently referred, especially since the autumn of 2018, reports that are indeed highly alarming.
In the same year that the Stockholm summit was held, the famous Meadows report, a commission given to MIT by the Club of Rome, was published as The Limits to Growth. A year earlier, Nicholas Georgescu-Roegen’s The Entropy Law and the Economic Process was published by Harvard University Press. In 1976, Arnold Toynbee’s Mankind and Mother Earth appeared, followed in 1979 by René Passet’s L’économique et le vivant.
Long before all these works, an article by Alfred Lotka appeared in a 1945 issue of the journal Human Biology, under the title ‘The Law of Evolution as a Maximal Principle’. This article, and Lotka’s earlier work (in a way synthesized in the 1945 article), are discussed at length in the work presented here. Lotka was a mathematician and biologist who studied entropy in the field of life as early as the 1920s, and it is notable that his reflections came to the attention of Vladimir Vernadsky, who referred to them, together with those of Alfred North Whitehead, in the final chapter of The Biosphere (1926).
As has already been mentioned, the proposals of the Internation Collective presented below are inspired by an ongoing social experiment in the department of Seine-Saint-Denis. This experimental Contributory Learning Territory is devoted to the reinvention of work in the context of a contributory economy. As we will see repeatedly, the future of work, which is more or less the heart of all these analyses, is fundamentally and functionally linked to climate and environmental issues.
In Le travail au XXIe siècle, Alain Supiot writes that
through its work, Homo faber aims in principle to adapt its vital milieu to its needs, or in other words, to create a cosmos from out of chaos, a humanly liveable world from out of the worldless [immonde]. But conversely, its work can, whether voluntarily or not, also destroy or devastate its vital milieu, and make it humanly unliveable. The question of work and the ecological question are thus inextricably linked.
Work (ergon in Greek) is here conceived above all as a production of knowledge, and is thereby strictly distinguished from employment, just as it is distinguished from labour or toil (ponos in Greek).
In 1945, however, Lotka showed that the production of knowledge is the condition of the struggle against entropy for this technical form of life that is human life. If the organogenesis in which the evolution of life in general consists produces endosomatic organs spontaneously ordered by biological constraints, then, in the specifically human form of life, organogenesis is also exosomatic. In what Lotka calls exosomatic evolution, artificial organs are produced by the cooperation of human groups, and this always involves knowledge that intensifies their negentropic capabilities rather than their entropic tendencies.
With respect to cooperation, and with respect to the development of the division of work as the acquisition of constantly renewed knowledge, recent palaeo-anthropology in North America and Australia has shown that such cooperation was the condition of the survival of Homo sapiens, and, before that, the condition of hominization itself. In his recent work, Richard Sennett has brought these questions into the context of the contemporary world.
Exosomatic organs are bivalent: they amount to what Socrates called pharmaka – both poisons and remedies (and this is why, by its work, Homo faber can as easily produce a kosmos as devastate its milieu). The practice of exosomatic organs must therefore be prescribed by theories as well as by the empirical knowledge supplied by experience.
Georgescu-Roegen takes up Lotka’s perspective, arguing that it is the economy that has the function of limiting entropy and increasing negentropy. For Georgescu-Roegen, this means that the economy must no longer be based exclusively on Newtonian physics, but must integrate both thermodynamics, as the question of entropy, and biology, as the question of negentropy.
Here, however, we must reiterate that in Lotka’s view, and beyond a strictly biological question, it is possible for the economy to limit the entropy of exosomatic organs and increase their negentropy only if it valorizes knowledge. Hence it is in order to avoid being trapped in a biological model whose inadequacy was described by Lotka that we refer to anthropy and neganthropy, positing that what produces neganthropy is knowledge in all its forms.
Once the vital function of knowledge has been recognized, it becomes necessary to analyse the consequences of the fact that, from the beginning of the Anthropocene era – assuming that this can be dated from the industrial revolution – work has been transformed into employment, and the knowledge that was implemented by work has been progressively transformed into machinic formalisms. This has resulted in a structural impoverishment of employment, ever more clearly proletarianized – something that already worried Adam Smith, and which would later be at the centre of Marxist theory.
Today, we know that above all, this impoverishment consists in:
• an entropic development of employment, with, as we know, disastrous consequences for the environment;
• a loss of meaning, which lies at the origin of what is now called ‘suffering at work’, but is also the origin, more generally, of demotivation and the crisis of ‘human resources’;
• the replacement of proletarianized employees by automatons (whether robotic or algorithmic, as was highlighted by an MIT report taken up by Oxford), proletarianized jobs tending to disappear, and the activity of pure labour (ponos) without work (ergon) being transferred to automated machines.
The employment variable, however, which is crucial to the development model called the perpetual growth economy, is for this reason systemically oriented to fall, with the result that the overall solvency of the model is necessarily and irreversibly compromised. ‘Irreversibly’ – unless there is a change of macroeconomic model, and of its functions and variables.
It is to propose achievable and experimental pathways to such a change, which must occur as a matter of urgency, that the Internation Collective is advocating a specific experimental approach called ‘contributory research’, which was proposed in 2014 in France by the Conseil National du Numérique, as part of the Jules Ferry 3.0 report (part 5 of which, where this proposal is formulated, is reproduced in an appendix).
It is on the basis of this observation – of a systemically downward tendency of proletarianized employment, and of the consequent need for the productivity gains obtained by automation to be redistributed via work performed and remunerated outside employment – that the program of the Contributory Learning Territory has been developed in Seine-Saint-Denis, where experiments in the development of a contributory economy are thus being conducted.
Work outside employment means a knowledge-activity that is not yet economically and socially valued. We maintain that in the context of the Anthropocene era, we must invest in the development of this kind of work, in order to foster the emergence of new knowledge – of how to live, make and conceive differently – capable of disintoxicating the industrial economy.
The goal of the contributory economy, as a macroeconomic model based on microeconomic and mesoeconomic territorial activities, is thus to re-valorize knowledge of all kinds – from that of the mother who raises her child in the epoch of touchscreens (an issue being worked on by the contributory clinic of the Plaine Commune Contributory Learning Territory) to the most formalized and mathematized forms of knowledge, which are disrupted by ‘black boxes’, and passing through the work-knowledge [savoir-faire] of the manual or intellectual worker in the epoch of automation.
In this conception of the contributory economy, which remunerates work through a contributory income inspired by the French model for intermittent entertainment workers, employment, which becomes intermittent, is functionally deproletarianized. This also means that new ways of organizing work – inspired first by free software, but also by action research methods practised by institutional psychiatry, or those studied by Gregory Bateson (through the Alcoholics Anonymous association) – are implemented through specific systems and institutions. (Starting from the case of Seine-Saint-Denis, management institutes of the contributory economy (IGECs) have been conceived and designed, a description of which will be found in chapter 3.)
Here, the decarbonization of the economy therefore implies the deproletarianization of industry. Of course, this evolution does not concern all jobs. But it centrally concerns all those that tend to decrease the entropic human footprint – the human form of entropy production also being called anthropogenic forcing in the 2014 IPCC report, and referred to more generally, for example in geography, as anthropization.
This is why, in what follows, we will use the term anthropy to qualify the specifically human form of entropy. The increase of anthropy (in thermodynamic, biological and informational forms) is the specific feature of the Anthropocene era. Conceived in this way, anthropy has now developed to such an extent that its own conditions of possibility are inevitably compromised; the issue at stake is therefore to reconstitute neganthropic potentials. What defines knowledge as knowledge, moreover, is precisely its neganthropic character.
Inasmuch as it makes it possible to struggle against this anthropy, knowledge may be empirical, such as the knowledge of the hand as described by Richard Sennett or Matthew Crawford, or, again, in the sense of Winnicott’s ‘good enough mother’, who does work by raising her child, that is, by cultivating a knowledge of her child and thus transmitting knowledge to her child, which is called parental education. Empirical knowledge can be an art (ars) in the sense of the craftsman, but also in the sense of the artist, or even in the sense of the sportsperson.
Conceptual knowledge may be scientific, technical or technological. As for the forms of social knowledge of everyday life – hospitality, companionship, neighbourly relations, festive practices, rules of life constituting mores – they are destroyed and ruined by marketing and user manuals, the reduction of usages to utility coming to replace social practices that still contain specific forms of knowledge amounting to ‘mores’ or ‘morals’ as collective care, and hence as solidarity. Such practices are the basis of what Henri Bergson called obligation, which is the condition of social life, and which, if destroyed, is bound to lead to generalized incivility.
We could continue for a long time delineating everything that (empirical, conceptual, social) knowledge could be: the task is inherently interminable, because knowledge, as inventiveness, creativity or discovery, is infinite in principle and in potential, albeit always coming to completion in actualization – the whole issue of reason being that of knowing how to make the most of this difference between potential and act (in Aristotle’s sense of dunamis and energeia, the root of the latter being ergon).
We should stress here that decarbonization, like deproletarianization, does not just concern work and employment activities in production or services: the issue is also the detoxification of consumers, that is, the deproletarianization of ways of life. Here, an immense educational project opens up, whose terms and stakes are profoundly new, and which cannot wait for the reforms of educational institutions (which are increasingly disastrous), but must on the contrary lead to social dynamics of civil society that nourish and transform educational institutions – which once again raises the question of what was developed in the twentieth century under the banner of popular education and the relationships between democracy and education in John Dewey’s sense.
Here, we posit in principle that all knowledge, of whatever kind – empirical, parental, artistic, sporting, scientific, academic or social, in all the senses that we can give to this last adjective – all knowledge knows something of the world in that it adds something to this world: it knows that this world is unfinished, and that we must continue to make it unfold towards a future [faire advenir, to make it happen]. This adding something, through which the world happens through knowledge, is a neganthropic (and anti-anthropic, this notion being based on that of anti-entropy developed in the first chapter) contribution to human worlds – which would otherwise collapse into anthropy: knowledge, whatever its form, is what, in the spontaneous tendency of the universe as a whole to move towards disorder, maintains or constitutes an order.
Deprived of such knowledge, employment can become toxic and ‘devastate’ its milieu, as Supiot points out. It is precisely in such deprivation, however, that proletarianization consists. And here lies the deepest origin of the Anthropocene era that is now reaching its limits – the IPCC reports precisely describe such limits from the climatological perspective, but the challenge posed by the warming of the biosphere does not, unfortunately, exhaust the subject of the limits of the Anthropocene, which will undoubtedly mark all the most salient features of the remainder of the twenty-first century, including, hopefully, in terms of responses to these limits, and as the overcoming of the Anthropocene era by the Neganthropocene era.
At the origin of thermodynamic anthropization lies the toxic anthropization of human life, itself produced by the anthropization of knowledge. By defining knowledge above all as neganthropic potential (in the wake of Alfred Whitehead and Georges Canguilhem), the elements of a response to António Guterres and Greta Thunberg presented here consist, above all, in reconsidering the very purpose of the economy in general – in particular when the latter, having become industrial, functionally and systemically mobilizes scientific knowledge.
It is this specific relationship of the industrial economy to scientific knowledge that Chapter 1, ‘Anthropocene, exosomatization and negentropy’ – co-authored by Maël Montévil, Bernard Stiegler, Giuseppe Longo, Ana Soto and Carlos Sonnenschein – tries to describe.
On the basis of this chapter, it is shown that, in the context of the fact that the Anthropocene is reaching its limits, the economy must be redefined above all as collective action in the struggle against entropy and against anthropy, given that the various disturbances afflicting the current stage of the Anthropocene all consist in an increase of (1) thermodynamic entropy, as the dissipation of energy, (2) biological entropy, as the reduction of biodiversity, and (3) informational entropy, as the reduction of knowledge to data and computation – and, correspondingly, as the loss of credit, as mistrust, as generalized mimetism and as the domination of what has been called the ‘post-truth era’ at the very moment when, more than ever, what Alfred Whitehead called the function of reason should be brought back to the heart of what amounts to an extreme state of emergency.
If it is obvious that the economy firstly consists in the production, sharing and exchange of value, and if the so-called consumer economy fundamentally consists, since the advent of the industrial economy, in producing various forms of value beyond what value had meant in economies of subsistence (by devaluing traditional values, and by the valuing, by the economy, of scientific discoveries and technical inventions through a process of innovation whose primary function is marketing inasmuch as it ‘creates needs’), then in the current stage of the Anthropocene:
• this value has been devalued, which amounts to a form of extreme disenchantment, in the sense that Max Weber gives to this word – but far beyond what he himself could anticipate;
• the ‘value of all values’ becomes in an ever more overt way that which allows this era to overcome its limits – and to thus enter into a new era.
Overcoming these limits can only mean struggling against entropy, and against its main source: anthropy. Struggling against entropy is what living things do: we have referred to negative entropy in this sense ever since Erwin Schrödinger formulated it as a concept in 1944 in Dublin – during lectures subsequently published as What is Life?.
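Schrödinger’s notion can be given a minimal formal gloss. In What is Life? he starts from Boltzmann’s statistical relation between entropy and disorder, and observes that negative entropy is then simply a logarithmic measure of order (the relation itself is Boltzmann’s; its application to living order is Schrödinger’s heuristic, not a derivation made in this book):

```latex
% Boltzmann's statistical entropy, with D a measure of disorder
S = k \log D
% Schrödinger's 'negative entropy': a logarithmic measure of order
-S = k \log \frac{1}{D}
```

On this reading, a living thing maintains itself by ‘feeding on’ order, that is, by locally decreasing D at the expense of increased disorder in its milieu.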
As we have already indicated, in 1971, thirty-seven years after his encounter with Joseph Schumpeter at Harvard, Nicholas Georgescu-Roegen showed that the industrial economy does not take entropy into account, and is thereby necessarily condemned to destroy its own conditions of possibility. Arnold Toynbee would develop similar arguments by taking up Vladimir Vernadsky’s analysis, in a chapter of Mankind and Mother Earth entitled ‘The Biosphere’.
Negative entropy, which controls the organizational process of living things throughout their evolution, can, however, only ever occur in a temporary and local way. We argue that this is also true of what we call negative anthropy, or neganthropy, and we posit that every society is a neganthropic locality belonging to a larger locality of the same type, and so on up to the largest locality on Earth, which is the biosphere itself as an absolute singularity in the known sidereal universe.
Conversely and consequently, globalization (as a toxic and unsustainable completion of the transformation of the biosphere into a technosphere), when it systematically eliminates local specificities, leads to a massive increase in entropic and anthropic processes. This is why the present initiative, aimed towards the United Nations, also consists, for our collective, in reviving the notion of the ‘internation’ put forward by Marcel Mauss in 1920.
We argue that the notion of the ‘internation’ must be reconsidered by starting from a negentropic standpoint, by producing neganthropic value, and by taking into consideration what, inspired by Francis Bailly, Giuseppe Longo and Maël Montévil’s theorizing of anti-entropy, we will therefore call anti-anthropy. Anti-anthropy is distinguished from neganthropy in that it diachronizes a synchronic neganthropic order. These (neganthropic and anti-anthropic) values are produced by the locality as such, which they characterize and, in so doing, delimit.
The way in which Mauss described nations in 1920 must be re-evaluated according to these notions, which he did not himself have at his disposal: nations, like all other forms of those localities called human societies (from the clan to the negentropic locality that the biosphere itself as a whole constitutes on the scale of the solar system), are cases of organizations that we call neganthropic in order to distinguish them from the negentropy constituted by life in general.
Using such a vocabulary is a way of taking heed of the ‘pharmacological’ issue at stake in exosomatic organs as theorized by Lotka. Any economy worthy of the name must reduce to a minimum the various forms of toxicity produced by these organs, through a form of organization appropriate for both knowledge (and therefore education) and exchange (and therefore economy). Knowledge is itself based on exchanges, of which the editorial economy, in all its forms, is a fundamental condition, along with scientific institutions – and we will see how this was something about which Albert Einstein, like Bergson and Mauss, was concerned within the context of the League of Nations.
In 1920, Mauss posited, in the context of the creation of the League of Nations and of the debate this provoked among socialists (of which he was one), that nations must not be diluted into internationalism, contrary to the reaction of most Marxist supporters of the October Revolution of 1917: for Mauss, it was a matter of enabling the ‘concert’ of nations by the constitution of an internation. We can see this as a prescient warning that any negation of nations is bound to lead to an exacerbation of nationalisms. But we can also see it as wishful thinking, or as pious wishes – especially after the failure of the League of Nations.
If this is true, then this wish and its piety (as a belief in the superiority of the peaceful interest of men) must today be reconsidered from the standpoint of an economy conceived above all as the struggle against entropy, and therefore as the valorization of open locality – an economy and localities that must for this reason be founded on a new epistemology of economics and of the disciplines it involves (especially mathematics, physics, biology and information theory), taking the stakes of entropy fully into account.
Taking the stakes of entropy into account means learning to count otherwise, by translating these stakes into formal terms, in particular in the processes of certification, traceability and accounting that constitute every industrial economy, and by translating them into juridical and institutional terms at the various scales that require reconstitution – not as barriers but as crossing points [points de passage] and negotiations of economies of scale, as required by an economy of negentropy and by extraterritorial monetization. All kinds of possibilities are being raised in the work currently being undertaken in accounting, in particular by economists, jurists and philosophers – for example, in Europe, with the setting up of what are called ‘satellite accounts’ (see below, chapter 3, p. …).
A century after the institution of the League of Nations, a century after Mauss’s reflections, the immediate concern is not to avoid global conflict – even if, over the last decade, worry about this has continued to rise once again, a long way from the ‘optimism’ that dominated the end of the twentieth century. The main concern in terms of conflict has become economic war, which is ruinous for environments – social, moral and mental as well as physical.
It is in this context that the most archaic nationalisms are on the rise throughout the world – and, along with them, processes of remilitarization, and thus new threats of war, the difference with what led to the two world wars of the twentieth century being the spread of the atomic weapon. In other words, the situation is immeasurably more serious than at the time of the League of Nations.
Why, in that case, does it seem that nothing can be done to change this state of affairs? We argue in the first chapter that it is firstly an epistemic and epistemological question: the question ‘quid juris?’, as Kant introduces it at the beginning of Critique of Pure Reason, must be posed anew, and this requires – and in an extreme state of emergency – setting up and supporting appropriate contributory research processes, supported by a scientific institution that must be created for this purpose, and that would constitute the institutional basis of an internation.
The League of Nations became the United Nations in 1945, precisely because of the failure to contain the exacerbated nationalisms of Germany, Italy and Japan – with all of the consequences we know so well, while the world had in the meantime divided into two blocs. Now that
• internationalization is effected by the market,
• the Anthropocene has been defined, the question of the struggle against entropy thus imposing itself at the core of economics,
it is time to rethink this century-long history from the perspective of a critique of the globalized economy that structurally and functionally ignores local diversities and specificities such that, as neganthropy, they generate noodiversity (that is, infinitely varied and precious knowledge) – just as negentropic life generates biodiversity.
Let us note here that initiatives as different as those emerging from the territorialist school instigated in Italy by Alberto Magnaghi, and those of the ‘transition towns’ inspired by Rob Hopkins in the United Kingdom, above all amount to discourses and practices conducted on and through locality – as do, in slightly different ways, the reaffirmations of ‘ancestral knowledge’ in South America (for example, in the Ecuadorian constitution, or in Eduardo Viveiros de Castro’s perspectivism), and of indigenous people in North America (in Canada, see Naomi Klein, No Is Not Enough), reopening the prior question of the status of locality in social, economic and noetic life.
Similarly, it should be recalled that:
• politeia, as it comes from the Greek experience of the polis, and inasmuch as it has always consisted in affirming the prevalence of political decision over economic decision, is always the privilege of a place, whether it is called a city (polis, civitas or republic in the sense of the Renaissance, then of Kant), monarchy, empire, nation or union (federation or confederation as in the United States, India, Brazil and so on);
• the ‘people’ and their ‘independence’ are constituted by their territorial right to self-determination, and this is something that no cosmopolitanism can afford to ignore (starting with Kant’s).
Globalization suddenly spread to the entire planet at the end of the twentieth century by using the technological vector to prescribe standard usage, no longer taking any account of the specificities of what Bertrand Gille and Niklas Luhmann called social systems, thereby ignoring the singular social practices that new exosomatic organs also make possible. Carried out in this way, globalization has eliminated all local scales – from the domestic nano-locality to the national, or even continental (regional in the Anglo-Saxon sense of a geographical unit), macro-locality – thus promoting a standardized and monolithic conception of the market, one that attempts to impose itself as a computational hegemony based on the elimination of everything that is not calculable.
It has in this way ruined biospherical metalocality, which can remain a singularity in the universe (as a living environment) only on the condition of protecting its biodiversity, and, when it tends to become technospherical, its noodiversity: such is the reality of the Anthropocene era reaching its extreme limits. And this is why nationalist extremism is reappearing almost everywhere, even becoming or again becoming the leading political force.
As for the city, not only in the sense of the small locality of Totnes, England, as described by Hopkins, but as the metropolis or megalopolis, constituting what it has become customary to refer to as the global city, after the work of Saskia Sassen, it is also, as she has shown, the site of a complex reinvention of locality and citizenship:
The space constituted by the worldwide grid of global cities […] is perhaps one of the most strategic spaces for the formation of new types of politics, identities, and communities, including transnational ones. This is a space that is place-centered in that it is embedded in particular and strategic sites, and transterritorial in that it connects sites that are not geographically proximate […]. The centrality of place in a context of global processes engenders a transnational economic and political opening…
In this respect, the global city and networks of global cities are not just ‘learning territories’ in the sense Pierre Veltz gave the term in 1994: since then, digital networks have developed at such a speed and on such a scale that urban localities have been profoundly transformed:
The whole issue of context and of its surroundings, as part of locality, is profoundly affected [by digital networks].
As a result, new types of borders are appearing, which are not just national or territorial, while at the same time there is the formation of
a global law […] that must be […] distinguished from both national law and international law
– which is above all a contract law that disintegrates notions of law that emerged from Greco-Roman antiquity, fundamentally tied to the questions ‘quid juris?’ and ‘quid facti?’ as Kant revisits them and inasmuch as they concern both science and law. The fact remains that these local urban economies and organizations, which are reticulated and as such becoming global, are thus far more like ‘Trojan horses’, aiding in the penetration of those criteriologies of value emerging from the global market as it continues to ignore questions of entropy, than the other way around.
With the erasure of localities insofar as they are negentropic and neganthropic, what the global market has destroyed is also commerce – in the sense of the distinction between commerce and the market proposed by Armand Hatchuel and Olivier Favereau. It is important to underline, here, that the notion of the global market is based on an utterly fallacious a priori according to which rational behaviour is a calculation, that is, a ‘ratio’, all economic agents then being defined as making calculations with respect to utterly decontextualized and delocalized particular interests, supporting, after consolidation, a universal rationality that has more to do with what Adorno called rationalization than with what Whitehead called reason. This is what leads to what Supiot has called governance by numbers.
Such a conception of the economy inevitably leads to the negation of politics, as democracy disintegrates into marketing, which generates among the populations of the whole world a feeling of being dispossessed of their future and of submitting to a functionally blind computational becoming – all the more so as this computationalist hegemony, of which ‘platforms’ have become the operators, now in fact controls the reticulation of these global cities, leading to the anticipation of catastrophe, on a timescale so short that it could strike with unprecedented violence at today’s younger generations when they become adults (and we can then see that the prize to be gained by the renunciation of finality in the name of efficiency is absolutely illusory).
On 10 September 2018, ten days before the first meeting of the Internation Collective in London, António Guterres delivered a speech in New York to the UN General Assembly in which he called upon nations to take the urgent measures required by the latest IPCC reports. Four months later, on 24 January 2019, he repeated these kinds of remarks in speaking to the global companies meeting at the World Economic Forum in Davos – where Greta Thunberg was also present, after taking the initiative in August 2018 to speak on behalf of her generation while engaging in a ‘global climate strike’.
The Internation Collective then decided to send to António Guterres, as Secretary-General of the United Nations, the letter that is reproduced after this introduction, announcing the proposals set out in the following chapters. In this letter, we proposed to António Guterres and to the United Nations:
• on the one hand, a diagnosis of what blocks any concerted effort by public and economic authorities to overcome the catastrophes now variously anticipated and described;
• on the other hand, a method for overcoming these blockages – this method taking note, firstly, of the sustainable development objectives adopted by the UN in 2015, secondly, of the imperative need for an integrated way of tackling the immense challenges posed by climate change but also by its consequences for migration, and thirdly, of the upheavals brought by digital technology – as António Guterres pointed out on 24 January in Davos.
Let us reiterate that if neither the member states nor global or transnational companies act in the way required by António Guterres and Greta Thunberg, it is not only because of particular conflicts of interest, faced with the need to give priority to the public good at the level of the biosphere: it is firstly because of a lack, at the scale of nations and corporations, of concepts and methods adequate for facing up to this ‘reversal of all values’ that is the ordeal of the Anthropocene in the post-truth era.
What this implies is that a colossal research effort must be undertaken in order to meet these challenges, even though the IPCC says that action must be taken without delay, and thus without the time for preliminary research through a process in which reflection would precede action. This apparent contradiction, however, is not one for us, as we have already argued: turning this contradiction into a new prospect is both the goal and the method of contributory research.
In addition to the fact that years of research have been undertaken in an attempt to overcome dominant forms of thought that remain profoundly tied to the paradigm that has led to what the IPCC has announced will be, if it does not change course, an inevitable disaster, contributory research consists in the development of laboratory territories bringing together inhabitants, associations, institutions, businesses and administrations, and involving them on a daily basis. For these learning communities, it is a question of dealing in a very practical way with the immediate challenges of the Anthropocene, such as toxic processes of all kinds, while at the same time testing and formalizing new theoretical models, that is, generic and thus transposable models – on the precise condition that they take account of localities.
This is why our proposal to the United Nations via its Secretary-General is for a large-scale launch, in all regions of the world, of laboratory territories carrying out contributory research, by issuing a call for tenders endowed with sufficient means, and calling for applications on the basis of a set of specifications, in relation to which the work we present here is intended as a starting point.
As already indicated, the first thesis consists in positing that the main blockage in current economic development has causes that are firstly epistemological. This is set out in Chapter 1.
The integration of the issues and formalisms linked to entropy requires territorialized approaches, for the reasons explained above. The challenge is thus to find ways of shifting from the microeconomic level to the macroeconomic level by passing through regional mesoeconomic strata and sectors. Territorial and urban dynamics, on the one hand, and the specificities of contributory economies that value work and deproletarianize employment, on the other hand, constitute the issues at stake in Chapters 2 and 3.
The contributory research method, inspired in part by what the German artist Joseph Beuys called ‘social sculpture’, is discussed in Chapter 4. As proposed here, that is, in the framework of an experimental approach implemented on a global scale, this requires the constitution of a scientific institution that should be the starting point for an internation – as explained in Chapter 5.
Such an experimental as well as theoretical and contributory research practice requires instruments of deliberation, cooperation and exchange, for which new practices of computer design and engineering are required. This presupposes a redefinition of those questions we call ethical, by, on the one hand, starting from the notion of ethos – which is also to say, of locality – and by, on the other hand, redefining ethos in the global and now technospherical context. These analyses are discussed in Chapters 6 and 7.
The challenge of climate change is clearly identified, qualified and quantified as the question of carbon metabolism in a society based on thermodynamic technology, first and foremost the steam engine – from the study of which thermodynamic theory emerged. The question of silicon technologies – which, in the form of automated decision-making systems, have today become competitors of proletarianized employees – is just as crucial in the struggle against crossing the threshold limits of the Anthropocene era.
Since the beginning of the twenty-first century, and in the context of the trade war, with smartphones and so-called social networks, these silicon technologies have in addition been socialized in the form of a systemically addictive exploitation of dopaminergic reward circuits. Chapters 8 and 9 discuss these issues, laying out the fundamental basis of a politics of disintoxication based on deproletarianization, forging new relationships with these highly toxic exosomatic systems that carbon and silicon technologies have become, the question being to know how to reorient them towards curative economic practices.
By introducing the issue of the struggle against anthropy, we have emphasized the irreducible character of locality. In the case of the exosomatic form of life, however, locality can itself become toxic: since exosomatic organs are irreducibly bivalent, they can harm individuals and collectives, who then suffer from their entropic effects. Any crisis situation stems directly or indirectly from such a ‘disadjustment’ in which the exosomatic ‘pharmakon’ can thus reverse its sign and become a ‘poison’ rather than a ‘remedy’. Locality then tends to withdraw and to close in upon itself – that is, to decline.
As for the possible toxicity of organs that are in principle beneficial, the early twenty-first century presents itself as a veritable accumulation of such reversals of signs by which the remedy suddenly turns out to be poisonous. In every respect, the Anthropocene appears to be precisely such a reversal, on the scale of the entire planet, and it is now clear to what extent such reversals of values can lead to violence.
This is all the more the case since most of the time, when an exosomatic system or device that has more or less established its positivity reverses its sign, it happens that the victims of this bivalence turn upon another victim, an ‘expiatory’ victim: a ‘pharmakos’, as the ancient Greeks and the Scriptures of monotheism say, that is, a scapegoat. Locality then constitutes itself essentially as a symptomatology of exclusion.
Hence it is often the case that, because locality is nowadays lived in some way by default, claims for it are made in terms of an assertion of identity, one that is closed and sterile – the scapegoat making it possible to conceal the challenges involved in a true revaluation of localities based on the sharing and exchange of new knowledge, inaugurating a new relationship to technologies and, more generally, to the milieu that this forms (an exosomatic milieu that, below, Dan Ross calls an element). Locality then becomes the phantasmatic projection of a given identity, and not the process of a perpetually open identification, one that is still to come and adoptive, that is, metabolizing its alterity.
A locality is not an identity. On the contrary, it is a process of alteration, composed of smaller and multiple localities, and included within larger localities. The fundamental question is that of the metabolism that is locality qua neganthropic process – including at its highest level, as the biosphere as a whole, which has now become a technosphere.
The metabolism through which localities enter into relationships and exchange alterities is the economy, which is not reducible to the exchange of subsistence or consumer goods, and which always constitutes what Paul Valéry called a political economy of spirit value – the most sublimated level of what Freud more generally called the libidinal economy. This economy is conditioned in its forms by the historical configurations of the exosomatization process.
The process of exosomatization is what continuously disorients the exosomatic form of life. First and foremost, locality is the taking-place [avoir lieu] from which emerges an orientation, that is, a meaning – an end, arising from a point of view shared by the community, thus constituting knowledge, or rather, a bundle of knowledge, always already on the way to diffracting towards an open and diverse future.
Such a point of view is a potential for bifurcation, that is, for the emergence of a difference qua place – where a phase shift occurs in the relationship to matter that is metabolization, generating a dimensionality that is both singular and collective. Conceived in this way, locality is the engine of difference itself: it is not constituted by its identity (it does not have one: it arises from the originary default that strikes – and as mystery – exosomatization), but by its potential for differentiation.
This is true of locality in all epochs and everywhere around the world. The fact that the Baruya are organized into tribes that themselves belong to an ethnic group, the tribe itself being composed of clans, means that it is in the differential constituted by these scales of locality that local processes of individuation can arise – these different scales being cosmologically inscribed in localities that exceed ethnicity, this exceeding being the object of what we here call noesis qua noodiversity. Locality, in other words, is always expressed in points of view that are themselves local in relation to the process of unification that the locality forms.
Locality is therefore relational and functions as the place of activation of another dimension in a field – which is itself the product of another differential produced by another locality on another dimension of the field. Difference is primary, that is, primordially tied to another difference, rather than to the existence of a pre-constituted identity.
The revaluation of localities, conceived as sources of neganthropy and anti-anthropy (metastabilized processes in the form of social structures and emergent singularities always capable of calling into question any constituted order), requires rethinking automated calculation and algorithms on a new information theoretical basis – the most general principles of which are outlined in Chapter 6 – and as technodiversity constitutive of cosmotechnics.
The current automatic generation of relations between psychic individuals leads – through ‘user profiling’, ‘echo chambers’ and ‘nudging’ – to the literal annihilation of these psychic localities that are individuals themselves, which find themselves replaced by what Félix Guattari called dividuals, in the sense in which ‘patterns’ are statistically extracted – something Robert Musil already foreshadowed in The Man Without Qualities, while in Italy, Germany and Japan a catastrophe was brewing.
Here it is knowledge as memories (sets of collective retentions and protentions) that are very seriously compromised by ‘user profiling’, ‘echo chambers’ and ‘nudging’: society thus becomes systemically amnesic. It is not, however, a question of advocating the protection of an ‘authentic’ individual or collective memory that would be kept away from and sheltered from calculation: it is a question of the neganthropic and anti-anthropic socialization of artificial retention, which, as exosomatization, constitutes every form of society, like the totem reflected on by Émile Durkheim, or works in the sense of Ignace Meyerson. Today, digital retention must be theorized in a new way in order to put it at the service of the metabolization of localities, and not their purely computational and extractive abstraction.
It is in this sense that IGECs, as management institutes of the contributory economy, are based above all on deliberative platforms that are constituted by starting from the local level, and on the basis of projects forming micro-reticular exchange structures and aiming towards macro-reticular exchange structures.
Faced with the mortal and (in the strict sense) apocalyptic challenges of the end of the Anthropocene era announced by the vast majority of the scientific community, human beings must reconstitute knowledge by rediscovering old knowledge, even ancestral knowledge, and by producing new knowledge in all fields. Inventiveness, creativity and discovery are today, as always, the only guarantees of the future of humanity – and of life in general.
Contributory research posits that everyone can and must take part in such a production of new wealth, and the contributory economy posits that this requires a reasoned, tested and deliberate macroeconomic change, based on taking into account all scientific work, in the service of a new economic rationality that combats anthropy, and opening up an age founded on cooperation and economic peace, rather than on destruction that is no longer in any way ‘creative’: the Anthropocene era is the revelation of the primarily destructive character of the ‘creative destruction’ that according to Joseph Schumpeter describes consumerist capitalism.
If inventiveness, creativity and discovery are always the only guarantees of the future, then what is now changing, and in this respect disorienting, is the fact that a global economy of extraordinary efficiency, which has made it possible to feed, clothe and house billions of people, more or less badly, turns out to have also been extraordinarily toxic – so toxic that it threatens to put an end to what Toynbee called ‘the great human adventure’.
Here, and in order to learn from them, we must reread three quite extraordinary – extra-lucid – little sentences that were published by Henri Bergson in 1932:
Mankind lies groaning, half crushed beneath the weight of its own progress. Men do not sufficiently realize that their future is in their own hands. Theirs is the task of determining first of all whether they want to go on living or not.
The industrial economy took shape between the late eighteenth century and the nineteenth century, initially in Western Europe and then in North America. Besides technical production, it involves technological production – the integration of sciences in order to produce industrial goods – to the strict extent that, as Marx showed, capitalism makes knowledge and its economic valorization its primary element.
Newton’s physics, and the metaphysics that accompanies it, provided the epistemic (in Michel Foucault’s sense) and epistemological (in Gaston Bachelard’s sense) framework of this great transformation. In this transformation, otium (productive leisure time) submits to negotium (worldly affairs, business). All along, mathematics has been applied through ever more powerful and performative calculating machines.
After precursors such as Nicholas Georgescu-Roegen, himself inspired by Alfred Lotka, we maintain that political economy in what is now called the Anthropocene (whose features were delineated by Vladimir Vernadsky in 1926) faces a challenge that requires a fundamental reconsideration of these epistemic and epistemological frameworks. With Darwin, living beings became part of a historical process of becoming. In humans, knowledge is a performative part of this process that shapes and reshapes lifestyles in order to tame the impact of technical novelties.
A brief historical introduction: knowledge and technics
The intellectual context of the industrial revolution is the idea that science and economy, especially trade, would become the new basis of legitimacy, security, justice and peace. For example, Hume argued that the gold standard spontaneously adjusts the balance of payments between states. The underlying scientific paradigm is Newtonian, where deterministic mathematical laws are the ultimate embodiment of knowledge. In this perspective, equilibrium and optimization follow from the relations between the parts of a system. Studies describe spontaneous, optimal equilibria and therefore promote the withdrawal of rational supervision once the intended dynamic takes place: further intervention would only break the balance of these equilibria. Along these lines, scientific and technological developments yield progress through the optimization of processes and the provision of spontaneous balances. However, by construction, such analyses neglect the context of a situation even when this context is the condition of possibility of that very situation. Moreover, following the same rationale, both in science and in industry, complicated situations are reduced to combinations of simple elements that can be known and controlled. Thus, for example, the production of a single craftsman can be decomposed into simple tasks performed by several specialized workers and eventually by machines. This method entails the progressive loss of workers’ knowledge through its transfer to the technological apparatus, a trend first described by Adam Smith and later by Karl Marx, who named it proletarianization. This loss of knowledge is a critical component of a more general process of denoetization, that is, the loss of the ability to think (noesis). Technics has become technology, and like technics, technology is a pharmakon: like a drug, it can lead to both positive and toxic outcomes.
At the same time that these events took place, major new scientific ideas emerged. Darwin’s views on biological evolution provided a historical framework for understanding living beings. Darwin’s framework has been interpreted by some as another instantiation of the Newtonian model of science, while others have emphasized the originality of historical reasoning in natural science. In this Darwinian framework, the living world is no longer a static manifestation of divine order. Instead, current life forms stem from a process of historical becoming. This change of perspective led to questions about the becoming of humankind and the role played by human intelligence in this process; hence eugenics and social Darwinism emerged – against Darwin’s own view, which embraced the singularity of human societies.
Another scientific framework then appeared on the scene. With the industrial revolution, heat engines were developed, raising theoretical questions that gave birth to thermodynamics. Physicists developed the concept of entropy and showed that entropy can only increase in isolated systems. In physics, energy is conserved by principle, but an increase in entropy means that it becomes less usable for performing macroscopic tasks. In a nutshell, the increase of entropy in a physical system is the process of going from less probable to more probable macroscopic states. It follows that the increase of entropy is the disappearance of improbable initial features and their replacement with more probable features – the erasing of the past. This notion departed from the reversibility of classical mechanics, which lacks an objectivized arrow of time, and brought about the cosmological perspective of the heat death of the universe. It goes hand in hand with the discovery of chaotic dynamics by Poincaré and the refutation of Laplace’s view that mathematical determinism entails predictability, thus striking a blow, in principle, at the notion of the mathematical predictability and control of natural phenomena. In particular, Poincaré’s work applies to the solar system, whose stability cannot be ascertained. These scientific developments provide a precarious view of the cosmos.
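The statistical sense of entropy invoked here – the passage from less probable to more probable macroscopic states – can be illustrated with a minimal numerical sketch (a hypothetical toy model of our own, not part of the original argument): for a system of a hundred two-state elements, the ‘mixed’ macrostate is realized by vastly more microstates than the ‘ordered’ one, and is therefore vastly more probable.

```python
from math import comb, log

# Toy model (hypothetical, for illustration): n two-state elements ("coins").
# A macrostate is the number of heads k; the number of microstates that
# realize it is the binomial coefficient C(n, k).
n = 100

ordered = comb(n, 0)      # exactly one microstate: all tails
mixed = comb(n, n // 2)   # microstates with an even heads/tails split

# Boltzmann-style entropy (in units of k_B) is the logarithm of the
# microstate count, so the mixed macrostate is both higher-entropy and
# overwhelmingly more probable than the ordered one.
print("microstates (ordered):", ordered)
print("microstates (mixed):  ", mixed)
print(f"entropy gap (nats): {log(mixed) - log(ordered):.1f}")
```

In this sense, the spontaneous drift towards mixed, high-entropy configurations is simply the disappearance of improbable initial order.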
Nevertheless, in the twentieth century, determinism sensu Laplace found a second wind with mathematical logic and the computer sciences that followed. These developments took place as industrial production shifted to consumer capitalism, a framework driven by mass consumption. Mass media are designed to trigger standard responses from consumers. As a result, the trend of denoetization extends to consumers as such – for example, processed foods have led to a loss of folk cooking knowledge and contributed to the pandemic of non-communicable diseases such as obesity.
In this context, the lax notion of information became central. Shannon coined a precise concept of information in order to understand the transmission of written or audio messages over noisy channels of communication. A very different concept was proposed by Kolmogorov to describe how hard the generation of a given sequence of characters is for a computer program. Specifically, Shannon’s theory states that information means improbability. This idea becomes absurd when used to assess meaning rather than to confront transmission difficulties (noise), which was Shannon’s original motivation. For example, a constant binary sequence has maximal information sensu Shannon, while a random sequence has maximal information sensu Kolmogorov (i.e., elaboration of information), and both limit cases have more information, in their respective senses, than a play by Shakespeare of the same length. Despite the incompatibility of these frameworks and their limits, the received view in current cognitive sciences – which themselves dominate representations in digital capitalism – is that intelligence is information processing, that is to say, a computation. Similarly, information plays a central role in molecular biology despite the failure to characterize it theoretically. Lastly, ignoring early criticism by authors such as Poincaré, the economy has been conceptualized as a process of spontaneous, mathematical optimization by ‘rational’ agents, with possibly biased information processing due to ‘imperfect’ cognitive processing.
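The divergence between these formal senses of information can be made concrete with a small, hedged sketch of our own (not from the original text). Here zlib compression stands in as a rough, computable proxy for Kolmogorov complexity, which is itself uncomputable; and the Shannon-style quantity computed is the empirical per-symbol entropy of each sequence, which depends on symbol frequencies within the message rather than on a probabilistic model of the source (Shannon’s ‘improbability’ proper).

```python
import math
import random
import zlib

def shannon_entropy_per_symbol(s: bytes) -> float:
    """Empirical Shannon entropy (bits per symbol) of a byte sequence."""
    counts = {}
    for b in s:
        counts[b] = counts.get(b, 0) + 1
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
constant = b"\x00" * 10_000                                # fully ordered
noise = bytes(random.getrandbits(8) for _ in range(10_000))  # fully random

for name, seq in [("constant", constant), ("random", noise)]:
    h = shannon_entropy_per_symbol(seq)
    k = len(zlib.compress(seq))  # crude stand-in for Kolmogorov complexity
    print(f"{name:8s}  entropy = {h:.2f} bits/symbol  compressed = {k} bytes")
```

Neither number says anything about what a sequence means: a meaningful text sits between these limit cases, which is precisely why ‘information’ in these technical senses cannot serve as a measure of meaning.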
At the beginning of the twenty-first century, computer use has spread in diverse forms (personal computers, smartphones, tablets). Their connection in networks has deepened and transformed the role of media. Private interests have started competing to catch and retain the attention of users. With these technologies, the services provided to users depend on users’ data, and, at the same time, service providers use these data to capture users’ attention. These transformations have led to a further wave of automation. Algorithms like those used in social networks formalize and automate activities that were foreign to the formal economy. These changes lead to further losses of knowledge and to a denoetization in which attention itself is disrupted. Since the received view in the cognitive sciences is that intelligence is information processing, many scientists regard these algorithms as artificial intelligence and neglect the conditions of possibility of human intelligence, such as attention. At the same time, management, as well as commercial platforms, decompose humans into tables of skills, interests and behaviors that feed algorithms, drive targeted political and commercial marketing, and shape training and recruitment policies.
The same trend occurs in the sciences: knowledge tends to be balkanized into ever more specialized fields of investigation, and scientific investigations tend to be reduced to the deployment of new observation apparatuses and new information processing on the data obtained. By contrast, theorization is a necessary process for science, and it is a synthetic activity that reevaluates the concepts and history of a field, empirical observations, and the insights of other fields. With the emergence of data mining, Chris Anderson advocated the end of theory. This perspective has been accurately criticized; however, the decline of theorization in the sciences seems to come mostly from another path. Following society’s general trend, it comes as the indirect result of institutional restructurings and the increasing weight of scientific marketing, both in publications and funding decisions. It also comes from an insufficient critical assessment of digital technologies and their consequences for scientific activities; it follows that the academic appropriation of these technologies to mitigate their toxic consequences and push forward scientific aims is lacking (except for purely mathematical questions).
Now, at the beginning of the twenty-first century, we are also witnessing a rising awareness of the consequences of human activities on the rest of the planet, leading to the definition of a new era: the Anthropocene. The Anthropocene is characterized by human activities that tend to destroy their own conditions of possibility – including both biological organizations (organisms, ecosystems) and the ability to think (noesis). In this context, the ability to generate knowledge to mitigate the toxicity of technological innovations is deeply weakened, to the extent that the problem of this toxicity is seldom raised as such by governments and societies.
Entropies and the Anthropocene
Energy or mineral resources, such as metals, are conserved quantities from the perspective of physics; however, there is some truth in saying that these resources are becoming scarce. A crucial concept for understanding these situations is that of entropy. Entropy describes configurations and is directly related to our ability to use such resources. For example, ore deposits are at improbably high concentrations – generated by geological and atmospheric far-from-equilibrium processes – and human activities concentrate them further by the use of free energy. For these resources, the critical concepts are the dispersion and, conversely, the concentration of matter; that is, the entropy of their distribution on Earth. However, a straightforward accounting of entropy is not conceptually accurate, and it is necessary to provide a finer-grained discussion of the articulation of entropy and the living, including the special case of human societies.
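The dispersion and concentration of matter invoked here can be made quantitative with the textbook entropy of mixing (a standard thermodynamic illustration we add here, not a formula from the authors): for $n$ moles of a mixture in which species $i$ has mole fraction $x_i$,

```latex
\Delta S_{\mathrm{mix}} = -nR \sum_i x_i \ln x_i ,
\qquad R \approx 8.314\ \mathrm{J\,K^{-1}\,mol^{-1}},
```

which is maximal when a resource is uniformly dispersed (all $x_i$ equal) and lower where matter is concentrated; re-concentrating a metal once dispersed therefore demands a corresponding expenditure of free energy.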
From the perspective of thermodynamics, biological situations are not at maximum entropy and do not tend towards maximum entropy. The low and sometimes decreasing entropy of biological objects seems to “contradict” the second principle of thermodynamics, which states that entropy cannot decrease in an isolated system. However, biological situations, including the biosphere as a whole, are not isolated systems. Biological situations are open; they use flows of energy, matter, and entropy. At the level of the biosphere, the sun is the primary provider of free energy, which is used by photosynthetic organisms. Therefore, biological situations do not contradict the second principle. A consequence is that biological organizations and, by extension, social organizations are necessarily local and depend on their coupling with their surroundings. In organisms, the relationship between the inside and the outside is materialized and organized by semi-permeable membranes.
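This compatibility with the second principle can be stated compactly using the standard decomposition of entropy change for open systems (a textbook formulation added here for illustration):

```latex
\frac{dS}{dt} = \frac{d_iS}{dt} + \frac{d_eS}{dt},
\qquad \frac{d_iS}{dt} \ge 0,
```

where $d_iS$ is the entropy produced internally (always non-negative, per the second principle) and $d_eS$ is the entropy exchanged with the surroundings. An organism can keep its own entropy low, $dS/dt \le 0$, provided the exchange term is sufficiently negative – that is, provided it exports entropy by consuming free energy from its environment.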
How can we move forward in understanding biological situations and their articulation with thermodynamics? Prediction requires theoretically singling out one situation among many others: typically, the state that the changes of the object will bring about. Entropy maximization singles out a macroscopic state: the one that maximizes entropy. Functions performing this role in physics are called potentials. There is a diversity of potentials in the field of equilibrium thermodynamics; they are different variants of free energy, involve entropy, and their relevance depends on the coupling between the system studied and its surroundings. However, in the case of systems far from thermodynamic equilibrium – situations that require flows with the surroundings in order to last, like organisms – there is no consensus on the theoretical existence of such a function or family of functions. For example, Prigogine’s fundamental idea is that the rate of entropy production (i.e., the rate of energy dissipation) could play the theoretical role of a potential; however, this idea is valid only in particular open systems. It follows that the ability to understand general far-from-equilibrium systems by calculus is not theoretically justified. From a less technical perspective, Schrödinger introduced the idea that the problem in biology is not to understand order from disorder, as in many physical situations, but instead to understand order from order. To capture this idea, he proposed to look into negative entropy, an idea which was later elaborated by Brillouin, who named the corresponding negative entropy “negentropy.”
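Two standard examples of such potentials, each tied to a specific coupling with the surroundings (a textbook illustration we add here), are the Helmholtz and Gibbs free energies:

```latex
F = U - TS \quad (\text{minimized at equilibrium at constant } T, V),
\qquad
G = H - TS \quad (\text{minimized at equilibrium at constant } T, P).
```

Each potential singles out the equilibrium state only under its own boundary conditions, which is precisely why no single potential is guaranteed to exist for systems that are maintained far from equilibrium by flows.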
However, negative entropy does not precisely reflect biological organizations. Entropy can be lowered just by decreasing temperatures, while biological organizations remain as such only within a range of temperatures. A major glaciation would decrease entropy, but it would also destroy biological organizations. Moreover, functional parts of biological organizations often involve a local increase of entropy in order to be functional. For example, the diffusion of a compound from its production location to the rest of the cell is a process of physical entropy production. Nevertheless, this process leads the said compound to reach locations where it can play a functional role. It follows that an articulation between entropy and biological organizations requires a careful analysis. In a nutshell, biological organizations maintain themselves far from maximum-entropy configurations thanks to fluxes from their surroundings. At a given time, they actively sustain this situation by the interaction between their parts and fluxes. The necessary coupling between organisms and their surroundings takes place in ecosystems that are themselves embedded in larger levels, up to the biosphere. The viability of living situations stems from the systemic properties of these various levels and, at the same time, from the underlying history that originated organizations in their respective past contexts. More generally, the way biological organizations sustain themselves is fundamentally historical, i.e., they stem from natural history. This historicity implies a particular vulnerability to fast anthropogenic changes that disrupt biological organizations at various levels simultaneously. Examples of such changes are climate change at the level of ecosystems, or endocrine disruptors at the level of organisms. Moreover, life forms continue to change over time by generating new structures and functions.
More than individual species, biologists emphasize the conservation of biodiversity and of the branching process of evolution that we may call biodiversification. This process is itself the object of anthropogenic disruptions. In a nutshell, biological organizations are precarious because the existence and the nature of their parts are fundamentally contingent, and these parts need to be actively sustained. Organizations sustain themselves in ways that stem from past contexts, and they can reorganize given sufficient time; however, both processes are disrupted by anthropogenic changes. This argument is well accepted in state-of-the-art biological knowledge, and at the same time, these matters are insufficiently theorized.
A possible strategy for going further in this analysis is to propose a concept complementary to that of entropy (and to its mathematical opposite, negentropy). Bailly, Longo, and Montévil proposed such a new concept, called anti-entropy, which refers to biological organizations (organs, functions …). In contrast to (digital) information, which is a one-dimensional notion (Shannon’s and Kolmogorov’s alphanumeric strings), the geometry and dimensions of anti-entropy do matter. A living organism produces entropy by transforming energy, sustains its anti-entropy by setting up and continually renewing its organization, and produces anti-entropy by generating organizational novelties.
Anti-entropy aims to accommodate biological organizations in their historicity. Current life forms sustain themselves by the use of functional novelties that appeared in the past (anti-entropy) and by the production of functional novelties (anti-entropy production). These novelties are unpredictable and unprestatable a priori (i.e., their nature cannot be predicted). At the same time, they are not generic random outcomes. They are specific because they contribute to the ability of biological objects to last over time by contributing to their organization in a given context (a context that this organization may impact). Entropy depends on the coupling of a system with its surroundings. Similarly, anti-entropy is relative to an organization, and not all objects are organized. For example, considered alone, a heart has no function; it is only at the level of the organism that it is endowed with a function. As a result, all discussions of anti-entropy are relative to an intended organized object, that is to say, to a specific locality.
As pointed out by Lotka, a specificity of human societies is the importance of inorganic objects in their organizations, such as tools, written texts, or computers. These objects are shaped and maintained by human activities. The constitution of objects theoretically analogous to organs outside organic bodies is called exosomatization by Lotka, and this process underlies how humans’ ways of living evolve.
In order to enable these inorganic objects to have a functional role, and to limit the destabilization they introduce, evolution and developmental and physiological plasticity play a role in the process of exosomatization. For example, reading recruits the plasticity of several brain areas, which depend on the writing system. However, these purely biological responses are insufficient, and noetic activities are required to complete the process of exosomatization. For example, philosophy can be interpreted as a reaction to writing and its use by sophists, with possibly catastrophic consequences for the polis. In contemporary terms, it is far from sufficient for a technics to become desirable merely by finding a market through the use of marketing. It is also required to find variations and uses that mitigate the toxicity of these technics – especially in the perspectives of climate change, the decline of biodiversity, and denoetization. In other words, more work is required to single out exosomatic novelties (i.e., technics and technologies) that would be compatible with a desirable future for humankind. In this perspective, knowledge in all its forms plays a special role. Knowledge prescribes variants and uses for the novelties introduced by exosomatization and is tied to ethics.
Computers participate in this process and can be defined as automatic rewriting systems. With the increase of their speed and inputs (data), computers’ ability to process information and perform categorization increases dramatically. However, the tasks that they can perform are not equivalent to the novelties produced by human work. In the latter, meanings are produced that are neither in the initial data nor in their combinations by algorithmic methods. For example, the principle of inertia describes a very exotic situation on Earth, where no forces are exerted on an object (e.g., no friction and no gravitation): it cannot be derived from data, but was posed by Galileo as an asymptotic principle, a way to “make sense” of all movements at once and to analyze what may affect them, that is, frictions and gravitation. Similarly, equal rights between citizens and gender equality are political principles that trigger a departure from former situations and reshape social organizations; they cannot be deduced from the former situations. These examples are historically significant in their respective domains; however, such processes are, in a sense, ordinary in human activities. They define work by contrast with labor: the former is also the permanent “invention of a new configuration of sense.” The current trend, however, is unfortunately not to develop work in this sense; instead, it is a convergence between algorithms and human activities. This convergence means a sterilization of work by its standardization – its transformation into generic information processing.
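The definition of computers as automatic rewriting systems can be made concrete with a minimal sketch (the rule set and function name here are ours, chosen only for illustration): rules are applied blindly to a string until none matches, producing exactly the kind of combinatorial transformation of pre-given symbols discussed above.

```python
def rewrite(word: str, rules: list[tuple[str, str]], max_steps: int = 10_000) -> str:
    """Apply the first matching rule to the leftmost occurrence,
    repeatedly, until no rule applies (a Markov-style algorithm)."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                break
        else:  # no rule matched: a fixed point is reached
            return word
    raise RuntimeError("no fixed point within max_steps")

# Unary addition: erasing '+' between runs of '1's computes a sum.
print(rewrite("111+11", [("+", "")]))  # → 11111
# The system only recombines pre-given symbols according to pre-given
# rules; it does not invent a new "configuration of sense".
```

However expressive such systems become (they are Turing-complete in general), their outputs remain combinations within a pre-given alphabet and rule set, which is the contrast with work drawn in the passage above.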
The scientific consensus is that the current path of civilization leads to its destruction, in particular through the identification of anti-entropy, extended to social organizations, with information – a one-dimensional flattening. Work invents new tools and uses, and thus constructs new configurations and sense for human and ecosystemic interactions. Thus, it departs from alphanumeric combinatorics in a pre-given set of possibilities (computational data processing), and it is required at all levels of society in order to face the current crisis.
In order to give inhabitants an active role in the making of their environment, in this chapter we examine new urban possibilities that we explore through a concept of open urban localities. In this, we propose a re-appropriation by inhabitants and public administrations of digital technologies and infrastructures in order to create a new right to the city. We argue that 21st-century citizenship must be founded on a new way of living and working in the city. As part of this, we propose what we are calling the Real Smart City, which, by opening deliberation and urban design processes via participatory technologies, democratises the building and dwelling of the city – and its metabolic functions. One example of this is the Plaine Commune programme, the first regional laboratory of this kind, which we outline as a case study for thinking through the problem of the city under conditions of disruptive digital transformation, as well as the urgent necessity of finding alternative and sustainable urban planning and construction models to diminish its energy consumption.
It is well known that the process of urbanization, initiated with the industrial revolution and intensified during the twentieth century with the process of metropolization and the formation of “global cities”, is a crucial challenge in the struggle to limit the effects of climate change. This is why – even if there is a growing number of experiences of “territories in transition” developing in rural, semi-rural, and small or medium urban areas – the question of the locality introduced previously arose first in urban areas.
Moreover, we know how the marketing of ‘smart cities’ has imposed itself widely by advocating for an energy-efficient city. Should we therefore follow this model? We will argue in what follows that ‘smart city’ storytelling is above all a commercial discourse based on an immeasurable worsening of proletarianization (the loss of knowledge in all its forms), to a point which is singularly dangerous in all respects. We must question this model, keeping in mind the recent report from Amnesty International on the enormous threat posed by the current development of digital technologies, monopolized by a few players who have become literally irresponsible.
On the other hand, the enormous transformation underway generated by the Internet of Things (IoT) and ubiquitous computing – as well as all that is generated by the deployment of these technologies in the various fields of the urban – must lead to a real overhaul of urban development and of democracy in general, and in the city environment in particular. As we will see at the end of this chapter, this involves the opening of a formidable project with educational and academic institutions at all levels – and an accelerated training operation, through contributory research (see Chapter 4), of the personnel in charge of education.
We recommend here the development of contributory urban research approaches in order to grasp the deep dynamics of what we consider to constitute the possibility of a new urban engineering (génie urbain), in which the inhabitants would once again become the primary source of territorial intelligence. This should be done in the context of a contributory economy (see Chapter 3) of capacitation in new digital technologies of inhabitants, but also of their elected officials and their administrations, who are today totally deprived of such capacities and very often manipulated by merchants of new services and other totally illusory promises.
In this new urban engineering (génie urbain), based on this new urban research, technology would be reconfigured and redesigned on the basis of the contributory territorial practices themselves.
Urban Metabolism: a broader perspective
Metabolism is a term that has become a plastic category, one that can be molded to serve diverse analytical objectives – among them, the ecological relations that in our view describe the “intelligence” of a city. In these relationships, several “organs” are at stake: in particular, somatic (or biological) organs, technical organs and social organizations; that is why we refer to this as an ‘organology’. This ecosystem must be highly interconnected and integrated into a renewed urban metabolism driven by more adequate paradigms and tools (Carta 2014).
Urban metabolism is often reduced to a calculation of the quantity of consumption and release of energy in the whole system (in other words, energetic and material exchanges) instead of being explored as the relation between these organs. But if we consider it as the coupling of the intertwining perspectives of (1) citizens and users, with their narratives and issues; (2) the technological and structural dimension and operational features of smart cities (infra- and exo-somatizations); and (3) social and normative institutions, with issues linked to policy-making, surveillance, education and wealth, the real smartness of the city could lead to a more democratic ecology (Araya 2015).
Indeed, this last possibility is useful also in order to understand how capitalism troubles these relations. This has generally been described as a “metabolic rift”, referring to the tendency of capitalism to give rise to a general separation of the processes and of the organs, as well as to the tendency to disrupt every condition of life as such. This effect seems to be exacerbated in the so-called Anthropocene, the epoch in which human beings and their productions are considered the most powerful factor responsible for global environmental and climatic change, leading to the erosion of territorial resources and to the anesthetization of vital metabolisms (water, waste, vegetation, mobility, etc.) (Carta 2017, 135).
But in fact, and precisely in this age, this appears as a simplification of the problem, so that we should rather think, with Jason W. Moore (2017), in terms of “metabolic shifts” – that is, think these constitutive relations in their reciprocal variation. Considering relations in terms of variation, and not as pursuing a specific direction, allows us to glimpse the possibility of creating and developing an ecology of ‘localities’, as a specific and complex metabolic effect able to challenge and thwart the entropic effects of technological development.
The locality of the city is what stems from the relations of the multiple social and technical ways of life living within it. It is the genius loci that is always more than the sum of the parts. As Italo Calvino showed in Le Città Invisibili, cities are places of exchange, but “these exchanges are not only exchanges of goods, they are exchanges of words, of desires, of memories”. The city is a social and technical locality where a multiplicity of social and technical localities live. “The imposition of [universal] automated technological system and standardized ways of life deprive the inhabitants of their everyday life” (A. Alombert, 2019), as well as of their knowledges, local technics and local practices that give meaning to social life.
The city is “no longer a crucible of creativity, which is innately unpredictable. Instead it becomes a zone of certainty for the sake of profit” (S. Zuboff, 2019). The mixity and diversity of urban centers have historically made them places of improbable encounters: city centers have always fostered interactions enabling creativity and innovation. The totalitarian synchronicity of algorithmic governmentality and its will to optimization is entropic in the sense that, pretending to seize the objectiveness of the city (J. Kelleher, 2019), it homogenizes and reduces the ‘ways of living’ the city – hence its metabolism. The reductionism operated via ‘optimization’ (Kelleher, 2019) can only take into account what is calculable, that is, what is foreseeable. We are embracing a path in which the possibilities of finding new solutions – thinking outside of the (black) box – are repressed by the complete standardization of models, by the opacity of software, and by the apparent ‘inexplicability’ of the algorithms with which we want to make our cities run. In this way, the complex and open system that is a city starts to close down the possibilities for local flourishing, averaging its unicities towards standards.
‘Smart city’: from the promise of decreasing urban metabolism to the accomplished fact of complete platformization of the city
Smart city and urban metabolism
The smart city is often presented as a more livable, ‘human-friendly’ and democratic space for living – a solution to the ever-growing consumption of energy and resources that has characterised the development of urban areas. Indeed, thanks to the use of sensors, computation, and faster and better-organized communication, city services could improve their performance whilst diminishing their negative environmental impacts.
The implementation of new technologies in the city could indeed enable a more resilient and ecologically sustainable relationship between urban areas and their surrounding environment. But within mainstream economic logics, a drastic decrease in the entropic impact of cities seems to be too difficult, if not impossible. The hope that technology alone could resolve all our problems has been called technological solutionism. This ideology seeks to improve everything through “ready-to-wear” technologies, neglecting any form of local knowledge and treating symptoms without ever trying to understand the complexity of the causes (E. Morozov, 2013, 2015). This limits local institutions, but also political and social creativity, in the finding of solutions. Furthermore, sticking to the neoliberal economic credo, this algorithmic governance (T. Berns and A. Rouvroy, 2013) tends to repeat, without major differences, the ways of ‘functioning’ of the city: its metabolism, understood in the broad sense we sketched out earlier. The implementation of the mainstream ‘smart city’ idea will only accelerate the rate of all the anthropic and entropic processes that have led human activities to become the major factor of geological change in the biosphere.
If designing more liveable and sustainable cities is something no one would oppose, a major concern has grown in the last decade. The mutation of the urban landscape and ways of life we described in the first section has led to the creation of global cities (S. Sassen, 1991), that is, cities with differentiated strategic global functions organizing all kinds of flows at the global scale, and organized around headquarters, centers, parks and economic districts. Global city functions are marked by an extractivist logic, and as was already highlighted by The Economist in 2017, today’s new extractivism is data extractivism. The data economy is “expressed spatially through what marketing refers to as ‘smart cities’, a term that helps to hide the submission of territories to extraterritorial logic bypassing the local political authorities and the practices of the inhabitants” (A. Alombert, 2019B). We argue that in today’s mainstream thinking the word smart is reduced to practices of computation and feedback loops, and the word city is reduced to a platform allowing access to global infrastructures, and to the data-mining and targeted advertising of our everyday life.
In the development of smart city systems, many cities in Europe have started “beginning to partner with private corporations such as IBM, Cisco, GE, […] Siemens”. In North America, major tech companies such as Google and Uber have started to work with municipalities. The example of the Toronto Sidewalk Labs project is the most striking one. These marketing promises – which are never truly kept – are at the base of the business model of big tech companies. Especially in times of austerity, public administrations are often obliged to opt for already-given and standardized solutions, as well as for huge private investments with little – if any – return to the local economy. The imposition of a technocratic and top-down approach by tech giants marginalizes citizens whilst tying public administrations to profitably-sold long-term contracts. Consequently, these firms oblige cities into a complete dependency for the maintenance of services and infrastructures, redefining the political agendas and economic programs that cities will implement in the coming decades.
Hence, despite the promises, the GAFAM as well as BATX smart city model is more and more perceived by local inhabitants as not very ‘livable’, if not as not quite urban – that is, lacking suavity, courteousness, and refinement of manner. This stems mainly from two reasons. The first concerns the design and functioning of those technologies that promise the inclusion and democratization of decision processes. The development of participatory platforms – such as the one for the Grand Débat in France – has given rise to a great deal of criticism, if not complete disappointment, about the way citizens’ participation has been taken into account – we will analyze this matter in more detail in Chapter 10 (Design).
Secondly, because of the universal model put in place by these global technical systems – algorithmic calculation – platforms tend to reduce, flatten and dilute the techno-diversity as well as the noo-diversity of localities, disrupting social norms, customs, and behaviours. The disorientation we are living through at all levels stems directly from the new disadjustment between the technical system of big tech exospheric companies and the social systems of nation states, rooted in their localities. So-called ‘smart cities’ are hence “disrupting civil cities founded on citizenship” (B. Stiegler, 2018), and with this, a functional sovereignty (F. Pasquale, 2015) is imposing a new state of fact over the state of law of nation states’ territorial sovereignty – leaving a legal gap that urgently needs to be filled.
As has been described by Morozov and Bria (2017), the smart city idea represents a continuity with neoliberal politics, fostering the privatization of services and the individualization of costs, eliminating all historical forms of reciprocity within societies. But it can also represent a completely new age of capitalism – an era of surveillance capitalism – because of the privilege of unlimited freedom and the asymmetry of knowledge between these economic and political actors and citizens, as has been well described by S. Zuboff (2019). The whistle-blowers – such as Edward Snowden and Julian Assange – and the Cambridge Analytica scandal should have taught us about the way in which data produced by billions of people on the web are used by both economic and political actors. It is therefore urgent to ask how the opaque infrastructures of these digital service technologies work and who owns them, how and which data are collected by these systems, and what kind of data governance should be put in place.
Platformization of the city: infrastructure, infrasomatization and data
Infrasomatization and the Impairment of Minding practices
As far back as 1981, Steve Jobs of Apple famously called computers “bicycles for the mind”, implying that they augmented the cognitive capacities of the user, making them faster, sharper and more knowledgeable. More recently, writers such as Nicholas Carr started worrying that the same technological tools could also undermine and fragment the possibility for thought. Through ubiquitous computing, digital technologies structure and govern the environment and the society in which we live, limiting thought and undermining the possibility of minding practices. This new ‘mode of functioning’ of what has been called the automatic society (B. Stiegler, 2015) is disrupting not only social relations and political institutions, but also psychological and social individuation in the sense of G. Simondon.
Today we live within a horizon of interpretability determined in large part by the capture of data and its articulation in and through infrastructures built from algorithms and implemented as infrasomatizations. These can be thought of as social-structuring technologies inscribing new forms of the social – or sometimes the anti-social – onto the bodies and minds of humans and their institutions. For the user, these infrasomatizations are experienced through smartphones and tablets, which close the loop from within the brain to the outside environment, such that the aperture of thought is mediated and compressed – and consciousness bypassed and short-circuited by intensive computation. Hence, the capacity of the human brain to perceive that algorithms are organizing its thoughts, or even to perceive that algorithms are at work, is impaired, if not destroyed – human reason is thereby diminished and made susceptible to persuasion and propaganda.
Infrasomatizations can be mobilised to support specific instances of thought, rationality and action – a hegemonic form of calculative reason – in order to create the conditions for anti-democratic thought. The infrasomatization process may suppress local rationalities within particular social spheres – rationalities which once constituted the functional independence of the complementary spheres of social life – and eventually replace them with a single regime of computation. These conditions create a data-intensive economy, which is the economic realisation of the gains and possibilities of the data-intensive scientific milieu. So, if the capacity of citizens to think is impaired, how can a city be intelligent? This process demands the elaboration of a critique and an ethics of data-processing, particularly from the standpoint of an imperative that would be not only negentropic, but anti-entropic, and ultimately anti-anthropic (in the sense in which the IPCC refers to “anthropogenic forcing”, that is, the increase of rates of entropy due to human activity).
Smartness and Infrastructures
The constellations of infrasomatizations can be mobilised into de facto monopolies in specific imbrications. This is a better way of understanding these computational structures than the notion of “platforms”, which tends to be a self-description and therefore hides more than it reveals. As described by R. Mitchell and O. Halpern, the ‘smartness’ of the smart city “is a function of its extensive use of informatics infrastructure” (2018). Data streams are collected through sensors and analysed by learning algorithms, then organised and used for optimising urban interaction via the creation of different predictive models. Barely visible, if not invisible, to the eyes of citizens, these digital infrastructures impose an extraterritorial logic bypassing local political authorities and the local practices of the inhabitants. The truth of this has been widely documented by revelations from industry insiders and by researchers of behavioural nudging and manipulation, and these revelations have served to prompt public calls for more regulation of these systems (see S. Zuboff 2019 and R. McNamee 2019).
In the mainstream ‘smart’ city approach, the elimination of individual rationality in favour of tele-guidance makes city dwellers function as infrastructure for machine learning techniques. Hence, smartness “reconfigures a human population not just as that which uses infrastructure, but as itself an infrastructure” (R. Mitchell and O. Halpern, 2018). And as we have seen in the previous section – because of the business model that currently reigns in the tech world – this has only one aim: collecting, using and selling the maximum amount of data. In her critical study of current attempts to build ‘smart cities’, O. Halpern has also convincingly unveiled their fundamentally speculative nature. In spite of all the good intentions and genuine hopes of their designers, they rest on the same dynamics as the derivatives that have restructured the financial world over the last two decades. They function as promises, designed to raise – largely unreasonable – hopes, soon to be abandoned in favour of more promising prospects. This derivation towards ever-elusive futures drives a fundamentally speculative economy, whose accelerating dynamics does indeed manage to explore a wide range of possibilities, but whose permanently delayed achievements fail to provide a livable basis for any new form of sustainable existence. The ‘smartness mandate’ (O. Halpern, R. Mitchell, and B.D. Geoghegan, 2017) that rules in parallel the design of smart cities and the social logic of financial derivatives rewards forms of governance, orientation and control whose result is to disorient and mislead our collective decisions.
The ongoing process of smartification of the city can be described as a platformization of the urban milieu. We argue that in order to pass from a ‘smart’ data-mine-city to a Real Smart City, inhabitants should be genuinely empowered by the use of technology and not subjected to it. Technological sovereignty is the idea that citizens have the capacity to participate in and have a say over how the technological infrastructures surrounding them operate, as well as to call their purposes into question. There has been a political vacuum around the question of technology. Yet it is up to political institutions at all levels to create the conditions of possibility for technical, social and economic alternatives, and to define a political will for a coherent trajectory towards technological and data sovereignty – that is, the re-appropriation by inhabitants and public administrations of digital technologies, data and infrastructures. This must be paired with the capacitation of inhabitants through the development of knowledge (know-how, know how to live, theoretical knowledge, technical knowledge) at the local level: in order to criticize, one first needs to understand, and then to go beyond what has been criticized. This means that inhabitants should have not only the skills to use ready-made technology, but also the knowledge to create techno-local alternatives proper to their locality. This will enable the reshaping of urban dynamics in order to fight against the production of entropy.
What is needed is a holistic approach taking into account data governance, the ownership of infrastructures, transparency, intelligibility and criteria, as well as the right to explanation (D. Berry, 2018). Changing the legal system would not be enough. We need to go further and seek to understand and challenge the way in which “smart” infrastructures recast certain regulatory or legal limitations into ineffective measures, from which they are able to extract excessive amounts of profit and exhaust the wider economy, creating new forms of structural poverty and inequality. A new data governance needs to be integrated into a “reappropriation strategy” for the infrastructures. However, “without wide-ranging actions on an international scale it will be extremely difficult to reverse a trend that already raises many concerns” (E. Morozov and F. Bria, 2019).
Smart Urbanism or Smart Surveillance?
The ‘smart’ model of urbanism also promises to empower urban governance by centralising the data captured on the territory into urban dashboards (S. Mattern, 2015). In order to give local policy makers a better grasp of the ‘functioning’ of the city, real-time monitoring provides data-driven decision protocols intended to improve decision-making processes.
The underlying idea here is to solve any kind of issue technically, increasingly evacuating the processes of debate and deliberation from the political arena, whether the issue is climate change, security, streamlining bureaucracy or progressive policies. This tendency has been described as technological solutionism (E. Morozov), and we will return to it in the next section. Sadly, the first implementations of urban dashboards and other networked surveillance systems have shown that ‘smart systems’ are not empowering and used by the general public – rather, they are used on it. Rio de Janeiro’s Operations Room (IBM) and New York’s Domain Awareness System (Microsoft) are only epiphenomena of a process that is now becoming global. The massive use of predictive algorithms and the consequent militarization of urban areas – previously exclusive to war zones – is exacerbating social issues in these areas, creating socio-economic silos and invisible frontiers within cities.
Cities have to counter the new segregated geography of barriers that follows speculation (rather than investment). These economic trends, binding surveillance capitalism to predictive behavioural patterns, are creating socio-economic frontiers and silos, marginalizing and displacing local inhabitants, closing down the possibilities of local flourishing, and impeding the historical mixity and creativity through which urban centers invent new ways of life for the majority of the population – and not only for the shrinking privileged social strata benefiting from these kinds of delocalized investments.
Real Smart City: Contributory Economy & Local Platforms.
The new urban and territorial revolution opens new and promising possibilities for fighting against the Anthropocene but, as we have previously shown, it also threatens to become uninhabitable, if not inhuman. As C. O’Neil has shown (2016), the algorithmic governance of Big Data restrains the possibility of bifurcating. If we need to invent new ways of living and producing, we need to think (penser) how to take care (panser) of our psychological, social and environmental ecologies without repeating the errors that led us to the Anthropocene.
To fight against this new kind of algorithmic control (Rouvroy and Berns, 2013; Supiot, 2015) and standardization, it is necessary to seize and redesign these “smart” technologies in order to reconstitute a real urban intelligence. This should be done, firstly, by taking into account the pharmacological dimension of any technique and, secondly, by conceiving and creating neganthropic infrastructures designed to facilitate processes of interpretation and collective decision-making. We posit that the resolution of urban problems cannot be only techno-logical: it must also be social and political – that is to say, urban in the strict sense. A Real Smart City (RSC) is not – as we have tried to show in the case of the ‘smart’ city – an instance for replacing institutional dialogue, social deliberation and political debate with a limited and standardized set of technologies and technocratic policies. The RSC project’s objective is the understanding and the deconstruction (in the sense intended by J. Derrida) of the ‘smart city’ as an expression of the neoliberal economy and of computational capitalism – as well as of the ideology of the efficiency and exactness of data – in order to rethink intelligence, the city and its locality. We claim that it is only through a new understanding of today’s problems and through the experimentation of new economic models that we can invent and create new solutions equal to the issues of our epoch.
We argue that a Real Smart City is a city where inhabitants have a right to the city, being able to decide upon and redefine the city’s dynamics, hence its metabolism. This is why the Institute for Research and Innovation – together with the Digital Studies and Real Smart City project consortium – in concert with local institutions, associations and inhabitants, is experimenting in Plaine Commune (north of Paris). The aim is to bring forth a new urban dynamic that gives inhabitants an active role in the making of their environment, whilst diminishing the territory’s production of entropy. This experimentation is based on two pillars.
Contributory Research. The first pillar is Contributory Research (for more details, see chapter 5). This approach brings together researchers from various academic fields and territorial actors into networks of research and experimentation. Territories would thereby be able to experiment with sustainable, solvent and desirable economic activities and technological tools, with the aim of developing reproducible recommendations through rapid transfer processes to other localities presenting similar physical and cultural conditions.
Contributory Economy. The second pillar is the Contributory Economy (for more details, see chapter 6). A Real Smart City is possible only on the basis of a new form of economy, the contributory economy, which valorizes the process of capacitation and the collective practice of knowledge (know-how, know how to live and theoretical knowledge) through which inhabitants participate in the making of their localities and develop a collective urban intelligence. This ‘intelligence’ cannot be reduced to the smart ideology of efficiency: it has to include sociability, or philia. To become a neganthropic locality – or an open urban locality – a learning and contributory territory needs to experiment with and bring forth new local knowledge and new ways of living in the physical and digital milieu. The empowerment of local actors is the condition of possibility for the conception of a new urban economy and the implementation of new forms of urban management that are truly contributory – and not merely ‘collaborative’, as understood by Uber and the actors of the so-called sharing economy. This consists in redefining (a) the rules of living together in the urban environment, in terms both of services – public, associative or private – and of urban commons, (b) the rules of local decision-making, and (c) the forms, scales and instances of deliberation – all of which implies reconsidering the social relations of the whole when setting up proximity platforms and original forms of crosslinking.
De-automation of algorithmic teleguidance: towards self-governance through contributive and deliberative technologies. For a Real Smart City, the question is how to empower citizens’ capabilities through the practice of digital technology itself. The efficiency of automation must allow energies and time to be released in the service of urban deliberation, in a spirit of cooperation made possible by contributive technologies such as deliberative social networks for debating and managing controversies, and annotation and categorization tools for the creation of knowledge. Through the practice of such contributive technologies, inhabitants can take collective and reflexive decisions and introduce unpredictable de-automation and bifurcation into their algorithmic and automatic milieu. The Real Smart City gives inhabitants and local public administrations an active role in the making of their environment via the re-appropriation, re-design and normativation of digital technologies and infrastructures. One central point here is to stop abandoning the web to a mere market logic (The Web Foundation, 2019) that is leading hyper-industrial societies towards a new acceleration of consumption, and hence towards an increase of entropy.
In order to create a new right to the city, twenty-first-century citizenship must be founded on a new way of living and working, opening up deliberation and urban design processes via participatory technologies, and democratising the building and dwelling of the city – and its metabolic functions. The Plaine Commune programme, the first regional laboratory of this kind, is intended as a case study of the problems of urban areas under conditions of disruptive transformation – and of the urgent necessity of finding alternative, sustainable urban planning and construction models in order to diminish the production of entropy generated by human activities. This is why Plaine Commune can be defined as a locality fighting against the anthropic production of entropy – that is, a negentropic and neganthropic locality.
This could lead to what the Italian Territorialist School called local self-sustainable development. With this concept, this group of scholars emphasized the balance between directing development towards fundamental human requirements (which cannot be reduced to material needs alone – social sustainability) and enhancing the environmental quality of the territory. This approach to territorial planning is not localism, but a form of bottom-up globalization: in this sense, the concept of open locality proposed here rejoins the Territorialist School’s concept of inter-locality solidarity, that is, flexible and non-hierarchical connections between the sustainable ways of living and lifestyles present in different localities. Jose Ramos and Michel Bauwens (P2P Foundation) call this cosmo-localization, that is, the creation of a cosmo-local production system in which “‘what is light’ [like knowledge and information] is shared globally, in open design commons, and ‘what is heavy is produced locally’, by generative economic entities”. This will lead to a “meaningful (virtualized) knowledge commons of high quality, open source, circular and community owned designs”, with local production creating the power of virtual organizations “to produce high quality goods”.
This means that, in a RSC, empowerment paths and practices for developing citizens’ capabilities are put in place, supported by social policies capable of sustaining and valorizing commons-oriented work and knowledge both in and out of employment. Furthermore, the creativity and synergic dynamics of the urban milieu must be freed rather than repressed in order to invent sustainable ways of living in the city, pairing the ‘cultivation’ of new local practices and knowledge with the critical use of technologies.
Sobriety and energy equality: sustainable and resilient cities for our uncertain future. If silicon and cobalt are at the heart of digital infrastructures, the main materials used in the building industry are concrete, glass and steel. These materials, ubiquitous in increasingly standardized global cities, are suitable for construction because of their transitions from the liquid to the solid state. Hence they are very easy to mould, and they require less human labour – and less of its savoir faire. With today’s business-as-usual techniques, these materials are extracted and transported, if not found on the territory, and then used at the construction site: all of this depends on a high concentration of energy. These globalized techniques do not answer to any local energy sobriety, and represent one of the most entropic factors of human life on earth. As the OECD states in the Global Material Resources Outlook to 2060, “[t]he economic activities that drive materials use have a range of environmental consequences. These stem from obtaining the materials (e.g. greenhouse gas emissions from extracting and processing primary materials), from using them (e.g. air pollution caused by burning fossil fuels), and from disposing of them (e.g. pollution of air, land and water from landfilling waste).”
The contemporary construction industry accounts for around 10% of total greenhouse gas emissions – rising to 30% if we also take into account the operating energy of buildings. The ‘smart’ model of urbanism is an ambiguously defined project. Still largely underestimated because it is drowned in smart strategic marketing, this transformation is already modifying the entire urban morphogenesis, with new architectural and engineering design, visualization and management technologies such as BIM (Building Information Modeling/Management). We argue that the model proposed for the automation of construction, as it stands today, will only aggravate the environmental problem. Firstly, it is still based on a Newtonian framework that ignores the laws of thermodynamics; and secondly, it increasingly standardizes urban landscapes, averaging the singularities of cities into an ephemeral and expensive (economically and environmentally speaking) ‘fashionable design’. Apart from the fact that they are entropic, these technics – themselves standardized as well – do not allow any kind of individuation and transindividuation of negentropic knowledge. It is essential to research, develop and experiment with new technics for all the different local contexts.
A Real Smart City needs to break open social and technical lock-ins, making way for the experimentation of new technical and social models, for inventing new ways of dwelling in (habitare), and new ways of living (habitus) within, its socio-technical habitat. In redefining local technics, the right to make mistakes is essential to the pursuit of bifurcations. The Real Smart City will stem from a deep understanding of the negentropic techniques of yesteryear joined to our current technical capacities. We propose to define new construction techniques coupling 3D printing, cobotics and BIM (Building Information Modeling/Management) with pre-industrial construction techniques that re-evaluate raw materials such as earth, stone and wood. In this sense, the design of cities will be in direct relation to locally available materials, rather than to resources that generate anthropogenic forcings at the other end of the world (over-consumption of sand, iron ore, etc.) – leading to what has already been described as local self-sustainable development.
BIM/CIM (City Information Modeling) and Contributory Design
Social science scholars have long highlighted how important the involvement of citizens in the co-production and co-design of services can be. New urban design tools, such as BIM and now CIM (City Information Modeling), have great potential to facilitate and support the contribution of citizens to urban and territorial planning projects. Thanks to these technologies, increasingly present in the building industry, the contributions made by associations, professionals and inhabitants of the territory can be integrated at (almost) any time. BIM and CIM technologies are multiscalar, and hence can provide a holistic understanding of the urban whilst creating the conditions for real bottom-up participation in – and, more importantly, contribution to – the future of the territory.
Contributive design opens onto co-creation with and for citizens. BIM and CIM, if used in the proper way – like all technologies – can be seen as transindividuation infrastructures: spaces for collective experimentation and decision-making that stem from individual intuitions. These design tools would be used not only in the context of a new building industry, but also in the agriculture, energy and transport industries. The enhancement of negentropic processes will be the goal of these contributory platforms, which will open perspectives for action, such as the creation of new techniques for the transformation of local matter. In addition, citizen creation processes can be scaled up in order to create networked territories, in the sense of P. Veltz (1996). In this way, we will be able to see the emergence of the bottom-up globalization mentioned above, that is, multiscale territorial networks pursuing negentropic objectives. This will increase the resilience of cities, as well as the investment made in profound changes in ways of life. Like an emergency toolkit for the Anthropocene, these techniques for accumulating new knowledge will allow us to respond urgently to future extreme climatic hazards.
A practical example: Urban Modeling Project in Plaine Commune (in collaboration with Créteil’s Rectorate and the CO3 EU project, H2020).
This project aims to identify and develop the new potentialities opened up by new technologies, through a method of contributory research aimed at the appropriation of urban digital technologies by territories and their inhabitants. Such appropriation presupposes the production of new urban knowledge and skills. To this end, IRI will develop, together with associations and professionals from the Plaine Commune territory, the Rectorate of Créteil and the European project CO3, transdisciplinary educational activities related to Urban Studies in middle and high schools of the territory, in order to increase awareness of the disruption now underway in urban modeling and planning technologies. The aim is to create capacitation workshops as well as theoretical courses that would bring students the knowledge and skills necessary for the development of a critical urban culture engaging the question of the right to the city, and for the use of Building Information Modeling/Management (BIM) technologies – starting from the use of games (e.g. Minecraft) and progressively moving towards professional digital tools (e.g. SketchUp and BIM technologies). This will be coupled with the use of the CO3 consortium’s technologies – such as blockchain, augmented reality and a geolocated social network – for experimenting with a knowledge-centered commons and peer-to-peer economy.
The contributory economy, based on contributory research, aims to systematically value capacitation and the acquisition of knowledge during work activities that occur outside employment. In such an economy, this acquisition of knowledge and capabilities must be supported by a conditional contributory income inspired by the French scheme supporting casual workers in the performing arts [intermittents du spectacle]. This economy is both: a macro-economy, based on the use of contributory income as a collective investment in individuals and their capacity to cultivate knowledge; and a micro-economy, linked to local collective enterprises at various scales. The task here is to redefine the accounting rules used by both companies and local public services, so that nations (and ultimately an “internation” yet to be invented) can enable transitional arrangements to form between the various economies: the social and solidarity economy, the economy of the commons (including urban commons), the (non-profit) associative economy, the market economy and the public economy.
Two major issues of our time may likely lead to economic reorganization of our society:
(1) the transformation of production by the acceleration of automation caused by digital technologies and global reticulation (world digital networks);
(2) the Anthropocene and the ecological unsustainability that could lead, within a short time, to the disappearance of humanity.
These two issues may appear contradictory, and in many ways they are. But if we attempt to articulate them together, it becomes possible to cause our society to bifurcate towards a “société du soin”, a society of care, limiting entropy. In addition, worldwide globalization, accelerated by these network technologies, leads to new cultural confrontations and new power relations. It is thus necessary to take the recognition of the world’s singularities into account. These are the objectives of the “Plaine Commune Territoire Apprenant Contributif” research programme, namely, understanding and experimenting with a model of contributory economy attuned to the territorial singularities of Plaine Commune.
The contributory economy is a proposal developed by the association Ars Industrialis, which aims to answer these two main issues:
a) The macro-economic challenges of widespread automation: economic insolvency
Reticulated digital automation suppresses automatable jobs and is currently leading to the proletarianization of new categories of the population. As a result, it challenges the Ford-Keynesian model based on the redistribution of productivity gains in the form of wages: the automation of a growing number of jobs leads to a downward trend in employment and thus, at the macro-economic level, to a decline in the purchasing power obtained through wages – gradually making consumption insufficient to keep the system solvent.
a. The function of the contributory economy is to respond to the problems raised by the automation of a growing number of jobs (and thus by their gradual disappearance). It proposes a new model of the redistribution of productivity gains, not in the form of wages, but in the form of time. The time gained through automation would be redistributed to citizens by means of a “contributory income”. This income received outside employment must allow them to engage in “work-capability” activities: during this time, they can develop their abilities by cultivating, practising and transforming knowledge (know-how, know how to live, conceptual and spiritual knowledge). According to the thesis that supports this proposal, if many jobs are automatable and more and more automated, it is because they are based on adaptation to a task and the implementation of skills (routine tasks and standardized skills); work activities, on the contrary, involve the practice of knowledge (transmitted, shared and transformed), and are therefore not automatable. A contradiction of automation is that it tends to eliminate the very knowledge that made it possible and that is necessary for its processes to evolve. The point, then, is not to refuse automation, but to propose a regime that develops this knowledge.
b. In the context of the Anthropocene, understood as an entropization of all levels, this valorization of work activities (and hence of the practice and production of knowledge) seems necessary.
Indeed, according to the thesis that supports this proposal:
· Employment activities based on the implementation of skills, and therefore on the repetition of standardized routines, produce entropy at the psycho-social level (homogenization and inertia of behaviours, pathologies);
· Work activities based on the practice of knowledge, and thus on its transformation and renewal, produce anti-entropy at the psycho-social level (production of organization through the sharing of knowledge, and production of diversification, bifurcation, singularity and novelty through its transformation).
The contributory economy therefore implies an investment in the creation of knowledge.
To promote this production and practice of knowledge, the proposed scheme (inspired by the system of intermittent performers) is as follows:
· a contributory income remunerates the commitment to capability processes during periods outside employment devoted to the practice of knowledge (this contributory income constitutes a national investment, the financing of which will be the subject of a multiscalar social negotiation for which the Territoire Apprenant Contributif experiment aims to set the framework);
· individuals must reload their right to contributory income through a certain number of hours of intermittent employment, during which they provide society with the skills and knowledge thus developed, increasing the share of work in employment;
· these jobs are paid in wages by employer structures labelled as “contributory”, which may be public or private, and for-profit or not.
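The three points above describe a rule system, and it can be summarized as a toy model. Everything in the sketch is an illustrative assumption: the figures (12 months of initial income rights, 35 hours of intermittent employment per month of reloaded rights) are invented for the example, since the actual parameters are precisely what the multiscalar social negotiation is meant to settle.

```python
# Toy model of the contributory-income mechanism (illustrative figures only;
# the real parameters are to be set by multiscalar social negotiation).

class Contributor:
    def __init__(self, income_rights_months=12):
        # Months of contributory income the person is currently entitled to.
        self.income_rights_months = income_rights_months
        self.knowledge = []

    def capability_period(self, months, practised_knowledge):
        """Out-of-employment period: contributory income is drawn while
        knowledge (know-how, know how to live, theoretical) is cultivated."""
        self.income_rights_months -= months
        self.knowledge.append(practised_knowledge)

    def intermittent_employment(self, hours, hours_per_month_credit=35):
        """Waged employment in a structure labelled 'contributory' (public or
        private) reloads the right to contributory income."""
        self.income_rights_months += hours // hours_per_month_credit

c = Contributor()
c.capability_period(6, "urban BIM modelling")  # cultivates knowledge, draws income
c.intermittent_employment(140)                 # 140 h reloads 4 months of rights
print(c.income_rights_months)                  # 12 - 6 + 4 = 10
```

The sketch makes the alternation explicit: capability periods consume income rights while accumulating knowledge, and intermittent employment both returns that knowledge to society and replenishes the rights.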
Contributory research can be considered as a form of social (self-)sculpture, if we view culture, arts and knowledge as transindividual processes through which groups sculpt themselves by sharing common practices. We could also speak of a form of gardening: culture understood here as a form of permaculture. Research methods in the sciences (including the human sciences) and the arts, insofar as they are based on the constitution of open research communities extending well beyond the world of academic research, need to be re-examined with regard to education. The event of the Anthropocene demands a new ecology of educational subjectivity, one which takes into account its technical conditions.
Between the nineteenth and twenty-first centuries, these technical conditions led to the generalization of proletarianization (the loss of knowledge through its exteriorization into artifacts), of which the infrasomatization discussed in chapter 3 is the most advanced stage. A key question to be addressed by contributory research is thus proletarianization, which is tied to the question of a new relationship to technics based on the goal of de-proletarianization – that is, a relationship in which people are not mere users, but understand, practise and transform technologies.
In a disruptive period, all types of knowledge and art have to be thought again from scratch. We need to provide therapeutic prescriptions for the disruptive technologies which first appear as toxic. The perspective opened up by contributory research aims to revisit the notion of ‘social sculpture’ within the contemporary technological context and the framework of digital studies.
1. Context: disruption and algorithmic governmentality, an “anti-social sculpture”?
Whilst the term ‘social sculpture’ (used by Joseph Beuys) may be contentious and dated within an art-historical context – mostly because it inherits the separation of an active master-sculptor from a passive sculpted matter – there is a need to revisit what is meant by ‘social sculpture’ (or rather by terms such as ‘social self-sculpting’ and ‘social plastic’) through the contemporary modes of the technical and technological mediation of the world.
Beuys coined the concept of “social plasticity”, in which modeling and transformation become a total action: the idea is that each person, considered as an artist, as a “creative power”, can participate in the “social sculpture”, in the shaping of the forms of the world in which they live and are involved – each one participating in the production of symbols, and in production in general.
In a philosophical context, the use of the word sculpture can be traced back through Heidegger to Aristotle and is related to the term technē, where to sculpt means to give form, to shape matter. If we include social behaviours as a form of ‘material’, we can better understand that the concept of social sculpture is close to the notions of culture and education, as the forming and shaping (cultivation/gardening) of behaviours in society through the sculpting of retentions (habits or memories) and protentions (expectations or desires). At the same time, we must take into account that students and individuals are not objects but subjects. While the process of education may be able to re-orient their desires towards less pernicious forms of consumption, it cannot entirely form their behaviour: otherwise, it would obliterate their agency and thus their educational capacity.
For centuries, the retentions and protentions of individuals have been sculpted by social organizations (rituals, and political, religious, philosophical, academic and educational institutions), through the practice of knowledge (know-how, theoretical knowledge, and know how to live) or the arts (technical arts, arts of living, creative and performing arts). Such knowledge and arts are neganthropic practices through which individuals take care of their collective milieu and learn to live together by sharing common retentions and protentions – through the memory of a singular past and the projection of an unpredictable future.
In the disruptive period, social organizations through which individuals transmit, practice and transform their knowledge and arts seem to be outpaced by radical and permanent innovations. Such practices become obsolete and are replaced by marketing injunctions, implemented into algorithmic technologies operating in real time, at the speed of light.
Indeed, the current functioning of the digital technical system in the service of the consumerist data economy leads to the capture and control not only of the attention, but also of the retentions and protentions, of the users of digital devices, connected objects and “social” networks, through the collection of their ‘personal’ traces or data, and through the automatic generation of their profiles. The algorithmic environments suggest to them programmable and standardized behaviors and steer their drives towards mass-market commodities: the constitution of mimetic and consumerist crowds and the exhaustion of libidinal energy thus lead to the production of psycho-social entropy.
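The notion of psycho-social entropy as a loss of behavioral diversity can be loosely illustrated in quantitative terms (this sketch is not from the source; the function and the numbers are purely illustrative): if we count how a population distributes itself over distinct practices, the Shannon diversity index falls as behaviors are standardized onto a few mass-market options.

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H = -sum(p_i * ln p_i) over a
    categorical distribution of observed 'behaviors' (illustrative)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A population spread evenly across ten distinct practices...
diverse = [10] * 10
# ...versus one herded toward two standardized mass-market behaviors.
standardized = [90, 10]

print(shannon_diversity(diverse))       # ln(10) ≈ 2.303
print(shannon_diversity(standardized))  # ≈ 0.325
```

Note that ‘entropy’ in the authors’ sense names the dissipation of diversity, so it corresponds here to a low, not a high, index: what the index measures is the noodiversity that consumerist standardization exhausts.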
The development of new forms of supports for knowledge (mechanical, analogue, digital) and of modes of production (consumerist economy, attention economy and data economy) has transformed traditional practices of knowledge and social organizations. The cultural industries (described by T. Adorno and M. Horkheimer) and mass media based on analogue technologies have modeled consumerist behaviors. Data industries and social media based on digital technologies have resulted in ‘algorithmic governmentality’ (described by T. Berns and A. Rouvroy). This transformation has led to a new epoch of proletarianization (loss of know-how, of knowing how to live and of theoretical knowledge) and, ultimately, to a new form of ‘anti-social sculpture’.
However, multiple forms of (technical) communication are possible that are not at all related to ‘algo power’ (for instance, decentralized peer-to-peer architectures). It is only with the recent rise of Facebook, Google, Uber, Amazon and so on that this ‘algo’ problem has appeared. Therefore, algorithmic governance, however dominant at the moment, should not be accepted as our inescapable fate.
It seems necessary to create processes of collective individuation which enable individuals and communities to socialize or sublimate their drives and to renew their libidinal energy, by inventing new ways of living or by making singular works of art (scientific discoveries, technical inventions, artistic performances, social projects). Such processes lead to the production of psycho-social neganthropy, i.e. psycho-social diversity, novelty, transformation and bifurcation. The urgent question now is how social (self-)sculpture as a subversive-creative force relates to the hegemony of the large corporate platforms.
2. Therapeutic propositions: digital studies, contributory research, practice-based research and socially engaged art, towards a new social self-sculpture in the digital milieu?
The question is the following: how can such capacities of transformation and bifurcation be developed in a context of digital automation and generalized proletarianization? In order to do so, we have to change paradigms, passing from the technologies of control and algorithmic injunction at the basis of the data economy to technologies of spirit and capacitation, based on the contributory economy (defined in chapter 6).
In the digital development of artifacts, an alternative would be to “found a new algorithmic sovereignty” based on “a new right of work”, where work itself is distinguished from labor. Work is thus defined as the capacity to de-automatize and to bifurcate. This is what Beuys described as “the creative power of man” and as “the capacity of man invested into work”.
In order to actualize social sculpture in a digital society, we have to establish the capacity to model and shape the digital systems that compose our contemporary traces, memories and relationships, as well as the media of our knowledge. To shape and design society through individual and collective contributions is a way toward a “design of existences”, or a social design, which opens up the possibility of modeling and shaping the future.
The digital and algorithmic environments must become contributory supports of knowledge (knowing how to make, knowing how to live, theoretical knowledge) and of art, and the participants must become social sculptors of their digital and algorithmic environments, collectively practicing digital technologies through new knowledge and arts. It thus seems necessary to develop new social organizations capable of giving knowledge and art their therapeutic role in society, which is to help human individuals to adopt their new technical milieu. This is the aim of digital studies, contributory research, practice-based research and socially engaged art.
The aim of digital studies is to understand how digital technologies impact (in both negative and positive ways) the construction of knowledge (disciplinary epistemologies) and aesthetics (cultural production). Both art, be it applied design or fine arts, and knowledge always require technical supports in order to be conserved, transmitted, shared and transformed, and the transformation of these supports always affects these arts or forms of knowledge. It thus seems necessary to conceive, produce and experiment with contributory digital devices and platforms specially shaped for the transmission and sharing of knowledge – that is, which enable “learners” to participate actively in the collective production of knowledge or art, rather than simply receiving them from an exterior source and passively consuming or contemplating them. The development of such contributory devices and platforms (annotation or categorization tools, qualitative algorithms, deliberative social networks) requires us to redesign network architectures and data formats, and to introduce digital hermeneutic functions into current web formats and digital tools, enabling “contributors” to express, deliberate upon, confront and discuss their points of view and practices.
Contributory research is based on the articulation between the action-research method and contributory technologies. In this case, users are not merely responding in the form of comments and sending big or small interactive signals such as likes, pictures or videos. Contributions are substantial pieces of work that are fully integrated into the collaborative hermeneutic effort and thus differ from comments in the margins. Researchers from different disciplines work in close cooperation with the inhabitants of their territory (territorial collectivities, educational institutions, businesses, the charity sector, elected representatives, citizens, etc.). Contributory digital platforms make such exchanges possible because they facilitate the progressive publication of hypotheses during the research process, along with their public discussion and critique: all stakeholders can take an active part in the research and become researchers. Academics, activists, designers and coders learn from the users and inhabitants, just as the latter learn from the academics, through a process of collective capacitation. The aim is to identify the fundamental (political, juridical, health-related, psychical, economic) questions raised by disruptive technologies or digital infrastructures, to address such questions scientifically, and, on this basis, to produce and experiment with “therapeutic” hypotheses for resolving the concrete problems of the territory, which thus becomes a “learning territory”.
Practice-based research is a method in which the research activity is based upon practice, and artistic practice is understood as a form of knowledge production. Such a mode of research or knowledge construction cuts across the division of knowledge between ‘know-how’ and ‘know-what’. The projects construct their methodology as part of the very process of the research; the question is not resolved through the work itself: the focus is on the research question being asked rather than on the work itself, and the work consists in the negotiation and collective (re)formulation of the research questions. The projects all accomplish a mode of disclosure of the research: these modes of practice/disclosure are forms of gestures which are therapeutic and ultimately neganthropic.
Socially engaged art should not be understood as a singular autonomous practice within the commodified art world, but rather as a strategic element in social movements, alongside critical research, public activism and networked communications. The notion of ‘event work’ (Brian Holmes) entails the transgression of disciplinary enclosure inside the university. The aim of an event work is to articulate artistic strategies with other strategies – critical research, communication, activism, intellectualism, the political – in order to face contemporary challenges. The notion of event work thus names the relationship between event and work, insisting on the social transformation and collective individuation implied in working activities, and on the bifurcations that can be produced by such activities. In such a context, the role of the artist is not to make ‘objective’ works of art that spectators can contemplate, but to create new situations in which the public can engage. There is a need to open up new ways of doing, living and thinking. The artist has to be understood as a relational actor in the world, producing situations and opening improbable bifurcations, rather than as an autonomous actor in the world producing objects (cf. the concept of the artist as proposer).
Such experimentations, hypotheses and models, which are already taking place in different territories, should be exchanged, shared and discussed between these different places, thus creating an international network of research composed of diverse localities (defined as an “internation” in the introduction). The aim of the proposed network is to organize collective reflection and deliberation on the economic, epistemic, political and social consequences of the contemporary industrial transformation, and to experiment with new economic and social models, based on a rational appropriation of technological innovation by local populations, and oriented towards the production of neganthropy.
The project of the Internation is an attempt to put in place a contributory economy of localities, giving value to the practices of singular knowledge through which individuals and groups shape new technological supports and new modes of collaboration – for the social sculpture of the future.
3. Examples of contributory research projects, practice-based research projects and socially engaged art practices
. Projects of contributory clinic and contributory urbanity in Plaine Commune
The project of the contributory clinic brings together academic researchers, doctors, childcare workers and parents in order to invent new therapies to fight against children’s overexposure to screens. The aim of this project is to connect the academic knowledge concerning the constitutive role of technical supports in psychological and cognitive faculties, the professional knowledge concerning the harmful effects of screens on children’s development, the practical or educational knowledge of parents, and the technical knowledge of designers, in order to conceive and develop new educational practices and new digital tools capable of supporting children’s development and attentional capacities. The contributory clinic should become a place where other parents, childcare workers and education workers can come to capacitate themselves in order to address this new problem of public health.
The aim of the project of contributory urbanity is to launch a program of contributory research with teachers, students, architects, designers, urbanists, etc. in schools of Seine-Saint-Denis (north of Paris), in order to develop in this territory a culture of BIM technologies and a critical/pharmacological approach to smart cities. The goal is to develop new practices of architecture, urbanism and construction with the younger generations thanks to the adoption of BIM technologies through the practice of Minecraft. If BIM technologies can be “absorbed” (understood and practised) by the inhabitants (and especially the younger generation) in a contributory way, this could be an opportunity for the inhabitants to take an active part in the digital transformation of their cities and territories, and to imagine new forms of sustainable urban development, whereas the current models of “smart cities” tend to exclude the inhabitants from the processes of conception, management and construction of the city. Video games and BIM technologies could thus open new possibilities for a contributory architecture or urbanism, associating inhabitants (the younger generations) with professional architects, urbanists, urban managers, etc. in territorial development, in order to renew urban jobs and “urban engineering” (transformed by digital technologies) and to create a shared urban knowledge in the territory. This project brings together academic knowledge concerning the history and economy of cities, professional knowledge concerning the digital technologies of urban construction and management, and the educational knowledge of teachers, in order to give students the opportunity to understand and practice the digital technologies of urban construction and town planning, and to contribute concretely to the digital transformation of the city.
. Real Smart Cities project in Ecuador and Galapagos

The Real Smart Cities project (this project has received funding from the MSCA-RISE programme under grant agreement No. 777707) develops a critique of the development and implementation of technologies within the urban landscape, or so-called smart cities. The project is divided into three work packages: the first concentrates on the transdisciplinary study of the digital episteme, the second on the data city (big data, open data), and the third on citizen participation and territorial experimentation. Alongside the territorial experimentation in Plaine Commune mentioned above, the Real Smart Cities project is carrying out contributory research in the form of experimentation in the city of Guayaquil in Ecuador and on the Galapagos Islands. The experimentation in Guayaquil consists of building on the socially engaged practices previously in place with the staff and students of the Universidad de las Artes (UArtes), supplementing this activity with researchers from other members of the project. These projects include forms of digital capacitation in the women’s prison, local radio projects in Nigeria, and questions of capacitation with the local population on the Galapagos Islands. The processes of social engagement underway in Guayaquil were grouped into a symposium-exhibition, ‘Guayaquil Archipelago’, which took place in the city of Guayaquil in 2019. The theme of the symposium-exhibition was the archipelago: the archipelago as a morphology of the city itself, since Guayaquil is an archipelago of interconnected islands, but also an archipelago of islands of social exclusion, poverty and digital exclusion. The symposium-exhibition acted as a mode of disclosure of archipelagic thinking, in which islands become relational rather than insular.
The challenge became how to rethink the network through the archipelago: an archipelago of relational possibilities, a condition of possibility of openness as a form of locality. The second phase of the experimentation was the engagement in public encounters with the arts on the Galapagos Islands: Real Smart Cities, together with UArtes, hosted a two-day event on the island of San Cristobal, where the problematics of locality and the Anthropocene were posed as a series of interventions with the local population. The context for the interventions was, on the one hand, the tension surrounding the imposition of global imperatives on the locality of the Galapagos itself and, on the other, the need for clear modes of local capacitation. Participatory methodologies of mapping were used to represent the local population’s relation to digital technologies; San Cristobal poses the intriguing question of digital exclusion through the limitation of access to internet technologies. Both projects, in Guayaquil and the Galapagos, are ongoing.
The main objective of our program in the field of economics is to reduce the anthropic effects of human activities – that is, “anthropogenic forcings” of all kinds, which give impetus to the increase of entropy. This objective requires us to reevaluate the diversity of knowledge and the locality of economies. We call this diversity of knowledge noodiversity, that is, negative entropy, insofar as it results from an exosomatic transformation and constitutes an extension of biodiversity. As was explained in the first chapter, fighting against the increase of entropy, and generating what Erwin Schrödinger defined as negative entropy, has a necessarily local character. Therefore, we argue that it is indispensable to reactivate the notion of the internation, outlined by Marcel Mauss in 1920. In calling upon the internation, Mauss tried to respond to the creation of the League of Nations and to the internationalist critique of the Marxist socialists, who believed they could overcome what they considered to be the formal and idealist cosmopolitanism described by Immanuel Kant (1991, 51).
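Schrödinger’s notion of negative entropy, invoked here, can be recalled in compressed form (a reminder from What Is Life?, not a formula appearing in this chapter): writing entropy with Boltzmann’s constant k and a measure of disorder D,

```latex
% Schrödinger, "What Is Life?" (1944): entropy in Boltzmann's form
S = k \log D
% hence "negative entropy", the corresponding measure of order:
-S = k \log \frac{1}{D}
```

so that negative entropy measures the order, 1/D, which an open system – a living organism or, by extension here, a locality producing noodiversity – maintains by drawing on its milieu.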
Today, more or less everywhere in the world, nationalisms and populisms seem to dominate the public sphere and political stakes. These reactionary movements primarily stem from the “anthropogenic” development of the global economy, which has functionally generated the increase of entropy and destroyed localities. Facing this reality, we must carefully rethink, at the institutional level, the relation between the local and the global, and between the national and the international, in order to confront the political crisis of our times and to show that, quite contrary to nationalism and its poisonous xenophobia, nations, and more generally localities, do have a future. In a new geopolitical context, which is being potently reshaped by means of technology, a different account of the nation and of other forms of locality is urgently needed in order to provide the fundamentals for a new macro-economy. This chapter strives to respond to this need.
The main objective of this chapter is to lay the theoretical groundwork for the internation and for the organization of the institutions constituting the internation on the basis of thermodynamic constraints. A thermodynamic approach to the internation makes it possible to redefine the concept of the nation, and more generally of territorial public powers, in relation to the concept of locality and, as a result, to draw these concepts away from nationalisms and “localisms”. To put it briefly, nations and territorial public powers are localities insofar as they function as open systems governed by institutions capable of instituting and institutionalizing negentropic processes in order to sustain public power. However, in dealing with the question of localities and institutions from within the era of disruption, it is necessary to discuss them in strict relation with technologies, in order to respond to the geopolitical power shifts that create economic conflicts and rising military tensions. It is only through the articulation of these complex issues that globalization, responsible for the increase of entropy, can be changed into a reworlding [remondialisation] which fosters anti-anthropic modes of organization.
The period we are living in is one of transition. The internation has a key role to play in turning this transition into a viable future in the Anthropocene-becoming-Neganthropocene (Stiegler 2018). What we call upon as the internation is therefore a common and open public sphere in which nations and localities can meet and negotiate in order to optimize their coordinated choices at the level of the global economic struggle against entropy and anthropy. This is the only way to overcome the Anthropocene epoch as Entropocene and to elaborate the new rationality required for such an economy.
[This chapter is currently being updated]
[This chapter is currently being translated]
This chapter seeks to elaborate a general approach to ethics. In this approach, ethical life is seen in tandem with technics, which we will define as technologies of organizing noetic life in the technosphere. We posit that the technosphere should be seen as a locality for the hyper-industrial societies of the 21st century. The organization of noetic life is a technological issue and needs to be discussed in ethical terms, for it requires us to critically assess and reflect upon what we all experience in our daily lives: human behaviors, duties and characters are now being shaped by artificial systems. What is happening, in the most recent stage of exosomatic evolution, is that these artificial systems have become planetarily organized and organizing organisms that we are living in, through and by, both as individuals and as political communities. This new organization of life, an unheard-of articulation of organic and inorganic matter, makes it necessary to thoroughly reconceptualize what we mean by ethical living in the technosphere.
Speaking of ethical life is not an easy task nowadays. Indeed, the sense of ethical living has been distorted by the green-washing strategies of global companies. Shamelessly – that is, unethically, since what the Greeks defined as aidos (shame), along with dike (justice), constitutes the very condition of ethical life – these companies have made use of growing environmental concerns in societies in order to keep their unsustainable business model unchanged. Therefore, when addressing what ethical living can and should actually mean in this condition, it seems necessary to shift the question of ethics from the sphere of personal choices to the sphere of the hyper-industrial organization of artificial systems. If these systems, as we argue, not only condition personal choices but also precede them by means of the algorithmic infrastructure of a new data economy that evades social and political control, then the fundamental ethical issue has to relate to the hyper-industrial conditions within which an ethical life is possible. And this means that addressing the question of ethics from within these hyper-industrial conditions is possible only through a firm rejection of the current macro-economic industrial model and of the usages of technologies it imposes.
We must be clear: ethical living can be nothing but an illusion within the current macro-economic industrial model based on the tenet of irrational economic growth and GDP as a monetary measure of market value: in hyper-industrial societies living in the burning biosphere the question of ethics is first of all the question of the organization of the economic process, although ethics is clearly not limited to the economy. However, a different organization of this process is necessary in order to give ethics a new lease on life. We nonetheless firmly believe that living well in hyper-industrial societies of the 21st century is possible, provided that we carefully rethink (Stiegler 2018), that is take more seriously, the question of the relation between ethics, the economy and technology. This is what we aim to do in this chapter by reinterpreting the concept of ethos.
The chapter consists of six parts. The first two parts are introductory. In Part I, four conditions of possibility of ethics in the 21st century are determined. We argue that these conditions are technological, technospheric, hyper-industrial and exosomatic. In Part II, through a discussion of the problem of abstraction in the field of ethics, we show why our general approach to ethics should be seen as a critical extension of a normative/applied approach to ethics. Parts III and IV are devoted to the notion of ethos, reinterpreted in the context of algorithmization and automatization. In Part V, we discuss the vital link between ethos and locality in the context of technodiversity, arguing that the latter, as the very condition of technological sustainability, has, like biodiversity, been placed in jeopardy. Finally, Part VI is devoted to the ethical organization of life on Earth with respect to food production and the relation between the animal and the human in the Anthropocene.
Four Conditions of Possibility of Ethics in the 21st century
The first condition of possibility of ethics in the 21st century is technological. However, in arguing that the question of ethics should be approached as a technological issue, it is first necessary to make a distinction between the term technics, denoted by technique in French and Technik in German, and what is commonly referred to as technologies. If the term ‘technology’ refers to technological equipment, ‘technics’ encompasses human actions based on knowledge. “All human action has something to do with tekhnē,” which means that “delimiting the field of technics” is difficult (Stiegler 1998, 94). Unlike French, German, Italian, Finnish, Polish and other Slavic languages, the English language makes a distinction between technology, technique and technics (Lindberg 2010, 27). If the term ‘technology’ probably exists in all non-English languages, it commonly denotes technological equipment, rather than what Simondon described as a meta-theory of technics, which would pave the way to “the integration of technical reality into universal culture.”
(Simondon 2017, 159) Consider that what Michel Foucault described as techniques de soi in his language is translated as technologies of the self in English (1988, 16-49). However, using the term ‘technology’ may lead to some confusion. Since the development of cybernetics from the 1950s on, this term has also referred to high-technology systems, that is, organized and organizing inorganic matter (Stiegler 1998, 17; Hui 2019, 28). High-technology systems have nothing to do with what Foucault meant by technologies of the self, practices aimed at caring for the self. We prefer the term technics not only in order to avoid terminological confusion, but also in order to show that the very possibility of caring should be carefully rethought in relation to high technology. Since high-technology systems heavily (dis)organize the possibility of ethical life, and do so with dazzling speed, leaving societies in a state of disorientation, the distinction between technics and technology makes it possible to argue (1) that technics should be approached as local and localized forms of knowledge of how to do [savoir-faire], live [savoir-vivre], conceptualize and theorize – and, in this respect, technics go beyond what is commonly described as techniques; and (2) that technics as forms of knowledge need to be reinvented from within technological systems, these new artificial organisms which both deform and transform technics as human modes of making “life worth living.” (Stiegler 2013)
The second condition of possibility of ethics in the 21st century is technospheric. The technosphere is not only a digital milieu which is real, rather than virtual, as one might have thought two or three decades ago (Hui 2016, 47-48). It is also a new system which needs humans in order to function, but which itself works autonomously and is therefore outside of human control (Haff 2014, 127) – it can then appear inhuman. But the technosphere is first of all, and still, a space of human activity, characterized by the intervention of technological systems and technoscience into nature. In the wake of Bonneuil and Fressoz (2016), Latour (2017), Bińczyk (2018, 2019), and many others, it is possible to define the technosphere as that which is responsible for anthropogenic changes in the biosphere (Vernadsky 1945, 1997) and for the reorganization of all life structures – social and biological alike. In this respect, any ethical action should be taken or judged from within “technonature” (Lindberg 2020a), that is, a new space of living on Earth, where, on the one hand, the modern distinction between nature and technics appears porous and, on the other hand, the metabolic system of the biosphere, which makes life on Earth biologically possible, is about to fail due to our use of industrial technologies. Here, the ethical task is to determine a new technospheric metabolism in which life-sustaining processes can still flourish in the technical organization of organic and inorganic matter/energy.
The third condition of possibility of ethics in the 21st century is hyper-industrial. Despite what sociologists such as Alain Touraine (1971) and Daniel Bell (1973) told us, the so-called post-industrial society has never come to pass, because industry cannot be reduced to the presence of factory chimneys, coal mines and steel production. We are living in hyper-industrial – rather than post-industrial – societies in which everything “has become subject to modelling and industrial activity—distribution, health, leisure, education, and so on.” (Stiegler 2015, 228) For want of a politics of technics in relation to high technologies, this technology-based hyper-industry has finally led us to an age of ever more unsettling surveillance, in which the subject of modelling is “human experience as free raw material for translation into behavioral data.” (Zuboff 2019, 8) In this new situation, every discourse on ethics has to take this hyper-industrial fact into account: our data, as new raw material for old, outdated and unsustainable economic models that remain structurally unchanged, are extracted from our daily activities. In general, capitalism in the industrial era is based on the exploitation of energy resources. However, unlike the nineteenth-century capitalist model, which depended on the extraction of fossil fuels and which has certainly not disappeared in the 21st century, the digital platform-based economic model exploits rather the reserves of libidinal energy from which the extracted data come. It is necessary to acknowledge to what extent these two variants of systemically organized extraction have the same destructive impact on forms of life: the human milieu – which can be developed through noetic activities and the exchange of new forms of knowledge, rather than through the algorithmized and controlled exchange of information – is destroyed as much as the environment. This hyper-industrial condition requires us to develop a more comprehensive account of the technosphere.
Just as the biosphere appears within the solar system as a locality for all living organisms, the technosphere should be understood as a locality for hyper-industrial societies and approached as no less critical an object of care. The future of the biosphere is techno-logical. Therefore, the technosphere has to be preserved as our techno-biological condition of life on Earth. This means that our relation to technology has to change as much as the current economic model based on the unlimited exploitation of mineral and noetic resources.
The fourth condition of possibility of ethics in the 21st century is exorganological and relates to what Alfred Lotka described as exosomatic evolution, that is, an “increased adaptation [of the human species] […] achieved by the incomparably more rapid development of ‘artificial’ aids to our native receptor-effector apparatus.” (1945, 188) These artificial aids are exosomatic organs – from knives, arrows and wheels to carts, cars and self-driving cars; from the abacus to calculators, computers and clusters – which are developed outside of the body and have a greater and greater impact on the organization of life on Earth. To put it simply, exosomatic evolution is an extension of biological evolution, and the economic process is a continuation of exosomatic evolution. Drawing on Lotka’s observation, Nicholas Georgescu-Roegen argued that “with the exosomatic evolution, the human species became addicted to the comfort provided by detachable limbs, which, in turn, compelled man to become a geological agent who continuously speeds up the entropic degradation of the finite stock of mineral resources.” (1976, xiv) From the exorganological perspective, ethics may be defined as a multiplicity of new technics of composing life with artificial organs – organs on which we have come to depend in the process of biological-exosomatic-economic evolution, but which can be destroyed by current modes of production (and their uses of these organs) that will make life on Earth biologically unsustainable. No one can make ethical claims without taking into account these four conditions, which are systemically interconnected.
The Uses, Misuses and Abuses of Abstraction
Under these conditions, a general approach to ethics may be seen as a critical extension of the normative/applied approach to ethics used by experts on ethics and by committees whose role is to determine whether a new technological product is good or bad for individual users, society, freedom, democracy, and so on. In 1991, observing how genetic engineering was being used to increase the power of the economic model, the French social philosopher André Gorz pointed out that to make “ethics” the specialty of experts amounts to abstracting it from lived experience and everyday culture, that is, to acknowledging its extinction (1991, 109). However, considering the rapidly increasing impact of AI-based technologies (brain-machine interfaces, hands-free control of computers and, more generally, the use of algorithms) on hyper-industrial societies, we do need experts on ethics, provided they have the courage to firmly oppose the interests of global companies, paving the way to a new understanding of public policy in the technosphere. Recommendations from experts on ethics and from ethics committees cannot go hand in hand with business as usual, nor can they end up as pious dreams that avoid naming the problem and remain silent about how to implement these recommendations in order to turn them into effective ethical principles. This is what courage means—the courage to dare to know (sapere aude) that Kant established as the spiritual principle of the Enlightenment. We have to learn how to demand knowledge from the experts in order to become capable of rebuilding and reinventing public power as a milieu of ethical actions in the era of algorithms.
However, at the same time, we also need, in the wake of Gorz, to carefully rethink the dynamics of abstraction in relation to ethics, in order to see why establishing general ethical rules abstracts them from the local circumstances in which authentic ethical actions, both collective and individual, take place. Jacques Derrida identified such “forces of abstraction” as “deracination, delocalization, disincarnation, formalization, universalizing schematization, objectification, telecommunication etc.” If these forces can also be defined as the forces of evil or illness [le mal d’abstraction], it is because abstraction, when it forgets the local circumstances of its origination, becomes ineffective.
There is actually nothing abstract about the fact that Derrida defines the machine and technics as “the sites of abstraction.” (Derrida 2002, 43) On the contrary, what he tries to tell us through his specific use of the term abstraction has a very concrete meaning and relates to what we are experiencing in absolute terms: (1) technology tends to shape a universal and homogeneous system; (2) the system tends toward totalization—which means that the more the system is developed, the more it is abstracted from the local realities it transforms according to its own disruptive logic, and the more it exceeds the control of these concrete and heterogeneous localities. Moreover, and counterintuitively, this universalizing tendency of the technical system makes the latter more and more specialized (Hui 2019, 21). Wherever we live according to the customs of our singular plural places, we are organized by the same totalizing system and are condemned to let a small group of specialists design this system and decide how it works. This inner logic of the technical system, which is inherent in the very ‘nature’ of technics, has been subsumed by equally totalizing economic processes since the second half of the 20th century, while economics, originally a social science, has been left in the hands of an equally small group of specialists. “This is all wrong,” Greta Thunberg said at the Climate Action Summit in New York in September 2019.
A substantial part of this hyper-industrial evil—as an old-new evil spirit [genius malignus] devoting all his efforts [omni sua industria] to deceiving us, to put it in classical Cartesian terms [Descartes 2008, 16]1—is due to the use which is made of abstraction in economic and technological processes. In this respect, the irrevocably critical task of ethics is to take account of the dynamics of abstraction. If abstraction is still what ethics needs in order to have the power to define the ethical rules of the technosphere and of the technological macro-systems of which it consists, abstraction is also what can make us blind to the micro- and meso-levels from which an ethically organized technosphere—as a complex ensemble of reticulated, open ethical systems, more or less local but always localized in their histories—remains to be thought. A systemic change must be made in our common approach to ethics, as well as in its use and abuse by experts in ethics and by ethics committees working in hyper-industrial conditions shaped by technoscience in tandem with capital. Determining whether a new technological product or service is good or bad for societies from within a technoeconomic/technoscientific global system which is itself moving in the wrong direction remains far too abstract to be effective as far as local lives are concerned. Therefore, this irrevocably critical task of ethics is also a meta-task, in the sense that it is aimed at determining the right place(s) for ethics and upsetting the balance between the universal and the local, the totalized and the singular, the specialized and the common—in other words, at rethinking ethics in the technosphere from the accustomed places that the Greeks called ethe (the plural form of ethos).
In this respect, a formal distinction between ethics and morality should be reintroduced in order to go beyond the restrictions of moral philosophy. In a nutshell, ethical life, insofar as it means acting well, is always conflictual. Acting well does not necessarily amount to following what moral philosophy defines as acceptable, right or just. Morality is an abstraction as long as it is detached from mores, which means “customs,” that is, ethos or Sittlichkeit. To redefine ethics is to observe how we all can and must speak of ethics through the multiplicity of European philosophies and languages2 (Cassin 2010, 691-699), but also to discover anew how this multiplicity can align with non-European ethe and work on a planetary level through multilinguistic techno-logical organs. It is no accident that the reduction of technology to mere functionality goes hand-in-hand with the advent of a monolinguistic culture which affects local ethe to the same extent that food monoculture affects biodiversity. To give ethics a new lease on life means to recognize that linguistic diversity is much more than a thing to be preserved within the logic of cultural exception, and should rather be approached as the material condition of what Adam Smith described as the wealth of nations. And to rethink this wealth from a general ethical perspective, beyond abstracted/abstracting notions of growth, requires a deeper understanding of how ethical living is exorganically related to “technical inventions as behaviors of the living” (Canguilhem 2008, 95) and of how this universal exorganological relation is differentiated across local levels.
Ethos and Algorithms. Ancient and Modern Meanings of Ethos3
Ethics as moral philosophy derives from the ancient Greek word ethos, which originally meant accustomed place and only derivatively custom and habit. Ethos is a particular character that belongs both to the community (customs) and to the individual (moral character), who displays her character in her actions and discourses.
The ethos as the particular character of a community manifests itself in its customs, habits and traditions: it is a non-conscious and non-rational articulation of how things should be, on the grounds that they have “always” been so. In ordinary life, the ethical character of individuals reflects the ethos of their community: people grow into the customs and beliefs of their community. As Martin Heidegger has shown in his Humanismusbrief, ethos as the destined place of dwelling appears as a historical destination that orients people’s action immemorially. The force of ethos can be so great that a person defending it is ready to stand against the public law, like Antigone, the heroine of Sophocles’s tragedy of the same name, who is caught in a contradiction between the ethos as tradition, which commands her to act in a certain way, and the public law, which forbids it. As G.W.F. Hegel shows in his interpretation of the ethical world in the Phenomenology of Spirit, Antigone’s singular way of dealing with this contradiction makes her an ethical individual, displayed by the dramatic character (Hegel 1807, GW 241-277). The force of the ethos does not mean that any inherited ethos would be just, but only that it imposes itself as if in the name of divine justice. This is why ethical life does not necessarily consist in simply following local traditions. On the contrary, sometimes ethical duty calls us to contest narrow-minded habits, repressive traditions and unjust laws, even though this can lead to heartbreaking tragedies. Ethos is inseparable from conflict, guilt and crime, and this is why ethical reflection often takes place in tragedy. In the contemporary world, people’s place of dwelling is determined in a radically different way. Of course, custom and habit still play a role. But the ethos of the contemporary world is increasingly marked by “algorithmic life” (Sadin 2015) and by an “algorithmic governmentality” (Rouvroy & Berns 2010) that conditions ethical life itself.
This does not mean that algorithms would dictate ethical rules, but that the social space in which ethical action can be undertaken in the first place is increasingly managed by algorithms. They are the means of what Rouvroy and Berns call a “statistical governance”: it does not control what is real, but it structures what appears as possible and at the same time tends to suppress alternative virtualities (Rouvroy & Berns 2009, Neyrat 2010). Some areas of statistical governance are designed by public powers (Alston 2019), but much larger domains, as well as its essential technological architecture, are created by big global companies that ultimately serve the needs of capitalism only (Zuboff 2015, Zuboff 2019). The world is not run by a huge self-conscious mega-AI, as in science-fiction dystopias, but it is managed by innumerable algorithmic systems that innervate the social bodies impersonally and automatically. Why would algorithmic governance affect and even overdetermine ethos in the more general sense of the word? Firstly, like the classical ethos, social algorithms little by little configure the social space, because they are given the role of framing what can be done by whom. The ancient ethos attributed different tasks to different types of people: men, for example, were expected to go to war, women to lament the dead. Contemporary social algorithms likewise indicate what different people can do, and in principle they can do so in a more sophisticated way than the ancient ethos, because they can attribute tasks to people according to their personal competences rather than according to crude particular features such as sex or race. For example, they can be used to select the persons who can obtain a bank loan, better health care, a job or a place in an institution of higher education. However, it has been shown that instead of diminishing discrimination they can actually increase it, because they tend to keep people in the same places they previously occupied (O’Neil 2016).
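The mechanism by which such a system "keeps people in the same places they previously occupied" can be made concrete with a deliberately minimal sketch. The data and names below are entirely hypothetical, and real selection systems are vastly more complex; the point is only to show how a rule learned from past decisions re-enacts those decisions.

```python
# Toy illustration (hypothetical data): a selection rule "trained" on past
# loan decisions simply learns the majority outcome per neighbourhood, and
# so reproduces the historical pattern regardless of individual merit.

from collections import defaultdict

# Hypothetical historical decisions: (neighbourhood, approved)
history = [
    ("north", True), ("north", True), ("north", True), ("north", False),
    ("south", False), ("south", False), ("south", False), ("south", True),
]

def train(records):
    """Learn the majority past decision for each neighbourhood."""
    counts = defaultdict(lambda: [0, 0])  # neighbourhood -> [approved, denied]
    for place, approved in records:
        counts[place][0 if approved else 1] += 1
    return {place: approved >= denied for place, (approved, denied) in counts.items()}

model = train(history)

# The learned rule re-enacts the past: applicants from "north" are approved,
# applicants from "south" are denied, whatever their individual situation.
print(model["north"])  # True
print(model["south"])  # False
```

Nothing in this sketch is malicious; the discrimination emerges simply because the only thing the rule can consult is the record of past placements.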
Secondly, like the classical ethos, social algorithms govern social reality in a nonconscious yet incontestable way. The ancient ethos was not questioned by people but simply obeyed, because it was “ordered by gods.” A social algorithm is obviously not dictated by gods but programmed by engineers following the commands of their clients. But the people who are subjected to it cannot know or question it. From their point of view the algorithms are impenetrable “black boxes” (indeed, with advanced machine learning they develop into black boxes for the engineers, too). If ethos commands life with the force of the unconscious, social algorithms govern it with the force of the unthought, as N. Katherine Hayles (2017) puts it. For example, when algorithms of selection to higher education pick out some candidates and leave others out, the candidates cannot know why they were or were not selected (either because the algorithm is a business secret or because its functioning is based on untraceable data-mining processes)—even while it is argued that justice demands that an individual should know and be able to challenge the data sets and processes used in the evaluation of her case (Villani 2018, 113-130, Madiega 2019).
Still, social algorithms are not exactly the same thing as ethos, because they have different temporalities. Firstly, they have different relations to the past. The ethos has no definite origin: it is simply a habit that has “always” been there and that stays valid as long as people repeat it. The algorithm has an origin, because a society or a company has set its aims and a team of programmers has built it, after which it simply realizes its program. The ethos is different from the social algorithm insofar as it is open to reinterpretations, reforms and rebellions: it is valid only as long as people accept it. The social algorithm, on the contrary, can be updated by its programmers but not by the people who are governed by it. One cannot say no to an algorithm. Secondly, and in consequence, ethos and social algorithms have different relations to the future. A machine’s temporality is fundamentally different from human existential time. By necessity, technical systems function within a causal logic where past events determine future ones. In a traditional computer program this means realizing the same program. Modern machine-learning technologies are different, since they rewrite their rules as a function of the regularities found in available data, so that they are based on recursivity rather than repetition (Hui 2019). However, AI still functions on the basis of past possibilities (program or data), and in this sense it is fundamentally different from existential time, which develops by encountering impossibilities. Existential time is capable of an openness to chance that is the very basis of human liberty, and it is this that the social algorithm cannot see.
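The contrast between repetition and recursivity drawn here can be sketched in a few lines. The example below is a toy illustration under hypothetical thresholds, not a claim about any real system: a fixed rule realizes the same test on every run, while a "learning" rule rewrites its own threshold from the data it has seen. Both, however, remain bound to past inputs.

```python
# Toy contrast (hypothetical thresholds): repetition vs. recursivity.

def fixed_rule(x):
    """Repetition: the same test is realized identically on every run."""
    return x > 10

class LearningRule:
    """Recursivity: the decision threshold is recomputed from past data."""
    def __init__(self, default_threshold=10):
        self.default = default_threshold
        self.seen = []

    def observe(self, x):
        # The rule is rewritten as a function of regularities in past data.
        self.seen.append(x)

    def decide(self, x):
        threshold = sum(self.seen) / len(self.seen) if self.seen else self.default
        return x > threshold

rule = LearningRule()
rule.observe(30)
rule.observe(50)  # the learned threshold is now the mean of past data: 40

print(fixed_rule(35))   # True: the fixed program never changes
print(rule.decide(35))  # False: the rule has been rewritten by its own past
```

Note that even the "recursive" rule can only ever consult what has already happened; neither variant is open to what the text calls genuine chance.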
A social algorithm does not admit of critique and is not open to genuine chance. In this sense it leaves no place for existential choices, including demands of justice, tragic action, and finally ethical action itself. This is why it would be dangerous to let social space be overly saturated and overdetermined by social algorithms.
Ethos and Automatization. Ethos as a “Mode of Relating to Contemporary Reality”
In 1984, Michel Foucault approached the classical notion of ethos in a contemporary setting. Commenting on Kant’s famous text, Answering the Question: What Is Enlightenment?, published 200 years earlier at the threshold of what we commonly call modernity, Foucault argued that modernity should be defined as an attitude rather than as a period of history (1984, 39). Ethos as accustomed place underwent a transformation with the advent of modernity, which keeps updating its classical sense. An ethical attitude then becomes “a mode of relating to contemporary reality [actualité]”—to what is happening now. Foucault traces this modern ethical attitude back to the ancient ethos, because the former continuously appears as a task, even though this task is different from that of Antigone, for example. Arguing that ethos is always a voluntary choice that has to be made in order to exist as a task, Foucault describes a modern ethical attitude as “a way of thinking and feeling; a way, too, of acting and behaving that at one and the same time marks a relation of belonging and presents itself as a task.” (39)
This is how an ethical transformation works: the immemorial, always given as a law, has to change in order to persevere. However, we who transform this law, if not transgress it, in a continuous task of reinterpreting its principles [archai], are obliged to respect its immemorial, archaic character. This ob-ligation (binding toward) is the irreducible condition of ethical life. Indeed, Bergson shows that it is not only one of the two sources of morality but also a social link: it “binds us to the other members of society, is a link of the same nature as that which unites the ants in the ant-hill or the cells of an organism.” (1977, 83) But this obligation also requires us to rethink what ethos, as “a mode of relating to contemporary reality,” actually is in the second decade of the 21st century, given the fact that our reality is no longer that of Foucault.
In the age of the great acceleration, time runs faster than ever. Although only 35 years have passed since the publication of Foucault’s text on ethos, the fundamental difference between his contemporary reality and ours is technological. Thus, in order to redefine what an ethical attitude can or should be, we have to bear in mind that our “mode of relating to contemporary reality” is conditioned techno-logically: on the one hand, it is governed by social algorithms from which it is impossible to escape, since they already reshape the very possibility of a social link; on the other hand, an ethical attitude has to recognize its own vulnerability to an automatization that makes an ethical act impossible, as long as ethical life is about making choices (the classical sense of ethos) and thereby maintaining an attitude that results from making choices and as such remains voluntary (the modern sense of ethos). It then becomes necessary to rethink the ethos from within “the automatic society” (Stiegler 2016) in order to transform the immemorial (the archaic) and give ethics new possibilities after automatization. Therefore, what Foucault described as “a philosophical ethos,” consisting in a “critique of what we are saying, thinking and doing through a historical ontology of ourselves” (1984, 50), will undergo a new transformation, becoming now a technological ethos. Foucault’s approach to critique is still valid in the first half of the 21st century, but technology comes before ontology. An ethos as a philosophical life—which Foucault theorizes along with Pierre Hadot, and which he describes as a mode de vie in relation to ancient philosophical lives—should now be thought of as an experiment in technological life.
One’s actions can be performed and judged as ethical as long as the one who acts is responsible for what they do and capable of anticipating the consequences of their actions. Therefore an ethical act cannot be automatized, because it is always singular and calls for a decision which always already disautomatizes the individual who makes it. Derrida often insisted on the impossible nature of this decision, which emerges from the very sense of ethics or “the ethicity of ethics.” (Derrida 2003, 40) However, when we take the ethos for an attitude or a mode—of being, of acting, of relating to the self and to others—ethical life becomes a question of technics and should be approached through “the evolution of the technologies of the self.” (Foucault 2001, 1605)
Foucault identifies these technologies as modes of acting which make us capable of taking care of ourselves and, as a result, capable of interacting with others, as if this interaction were a sine qua non condition of how to take care of the self in order to live an ethical life. Drawing on Greco-Roman philosophy and the monastic principles typical of the Christian spirituality of the late Roman empire, Foucault distinguishes three techniques of the self: (1) writing letters to friends, in which the writer discloses herself or himself; (2) examining the self and conscience, and reviewing what was done; (3) remembering [askesis]. Let us consider a simple example: imagine that you are a monk living in the fourth century. Your monastery is situated somewhere in the Roman Empire. You speak Latin, which was then the lingua franca, as English is today, and Rome meant—urbi et orbi—the whole world, just as what we call globalization has meant since the 20th century. You write a letter to a friend and you disclose yourself. Reviewing what you have done, you bring it back to mind in an act of remembrance, which requires an effort on your part. What is the purpose of the effort you put into writing your letter? Why do you need a friend in order to know and take care of yourself through this self-knowledge that you acquire techno-logically? Relying on the Stoic letters he reinterpreted during a seminar given at the University of Vermont in 1982, Foucault would reply that by means of this set of techniques you practice ethical life: not only does technics let you acquire the truth of yourself, but it can also “transform truth into a permanent principle of action.” (1984a, 35) Thus the self-disclosure you have made becomes the ethos for which you were prepared [paraskeuazo]. Now you live your monk’s life as an authentically philosophical life. The overwhelming majority of us are not monks, however.
An attempt to transform a philosophical ethos into a technological ethos also means making it vulgar, that is, possible to be practiced—rather than simply used—by people in their ordinary lives and through the organizations and institutions they create. If it is true, as Foucault says, that the technologies of the self which help us live an ethical life are subject to evolution, the same must be true of ethical life, so long as the perseverance of the ethos requires of us the careful transformation of immemorial ethical principles. The question, then, is how to live an ethical life while practicing new technologies, rather than just using them?4 Which of these new technologies—platforms, devices, apparatuses—might be useful for taking care of oneself, and which ones make it utterly impossible? And, more importantly, how can we reinvent ethical life when so many actions we used to perform on our own have become increasingly automatized over the last two decades? How can we still be monks, and be prepared by this automatization so as to make of it a blessing rather than a curse?
No serious answer to these questions is possible without interrogating the hyper-industrial organization of these new technologies, an organization based on a shameful extraction and trading of personal data that is spinning out of control. So long as this extraction continues for want of a politics of technology—which is not to be confused with the “policy” of tech giants, but is rather a call for national governments and international institutions to take action—dealing with ethics will appear to be an occupation for academic philosophers. However, through his reinterpretation of the Stoic letters, Foucault shows that ethos does not have to appear as an old-fashioned ancient term, provided that we remain ethically capable of transforming it and making it persevere: to practice an ethical life through remembering means “the progressive consideration of self or mastery over oneself.” (1984a, 35) No one can hold mastery over themselves without having control over the data they share in their everyday lives and over the shameful (ab)uses of these data made by Facebook, Google and the other big-tech giants. To take back control over our data, whose exploitation now disorganizes social life, appears as a vital necessity and as an opening onto the technological ethos.
Ethos, Locality and Technodiversity
“It is one of my targets to show people that a lot of things that are a part of their landscape—that people think are universal—are the result of some very precise historical changes,” Foucault said when discussing the technologies of the self and recalling what he did in his earlier works (1984a, 11). But how should we consider this statement in 2019, given the fact that (1) our landscape is shaped by a quasi-universal technological system, and (2) this landscape consists of more and more standardized “things” under the totalizing and unifying tendency of new technologies? How can we organize an ethical technospheric landscape, and why is the relation between this landscape and a technological ethos of fundamental importance for this organization?
The unifying and totalizing character of technologies requires us to rethink the irreducibly local character of ethos and to reconsider the relation between the universal and the particular. The universal and the particular need to be approached in terms of composition rather than opposition, and together they constitute what has to be defined as an ethical ecosystem. An ethos is local because it is a character and is localized in an accustomed place. It would be erroneous, however, to reduce the local appearance of ethos to a small and cozy place. Locality appears on different scales and continuously varies from one specific locality to another. What is universal is precisely this variation, which binds localities and keeps them working together as distinct local singularities, always localized in something bigger than themselves. Israeli kibbutzim, Polish cooperativism, the Plaine Commune in Saint-Denis, indigenous communities in Ecuador, Berlin, Scotland, Hong Kong, the Amazon tribes, nations, civilizations, the biosphere: these are all different, variable and incredibly singular-plural scales of the same locality. This locality can be defined as universal, provided that we are smart enough to understand that the one consists in the many which keep the one alive. What is ethical locally is universalisable so long as it appears in multiple variants in different places and upholds an ethical ecosystem composed of localities which are physically open because they are delimited by porous borders. In this perspective, locality is the linchpin of ethical life as the practice of local and localized ethe on a universal scale: an ethos can be conserved only in a local open system.
Therefore there is no such thing as a universal or global ethos, which clearly implies that there are no universal values, since values always belong to a particular local landscape and are proclaimed as universal by a privileged group of organized people—be it “white middle-class males, Western states, or international corporations” (Lindberg et al. 2014, 1)—who have a political interest in these deceptive claims to universalism. So-called universal values are too often a specious pretext for silencing, despising or even extinguishing those who are supposed not to uphold them. In this respect, calling upon values that are deemed timeless and that appeal to all of us is always a suspicious move, no matter who—and in the name of whom—makes this call. Ethics is not only about values. It is first of all about principles, and about the organization within which these principles can be upheld in order to make people capable of forming and transforming values on their own through daily ethical practices on variable scales of locality.
The global, unifying and totalizing technical system is a deadly threat to ethical life precisely because it systematically tends to destroy localities by standardizing them, and keeps degrading the complexity of the immemorial ties that are the conditions of possibility of ethical life within and between localities. However, a deeper insight into the very nature of technology and its evolution—from technical objects to the complex techno-logical mega-organism we live in—shows that this unifying and totalizing tendency stems from the very logic of technological development: technology exists as a unifying and totalizing system long before the people who think they just use technical devices can see that they are actually governed by what they use. This gap between what we think of and through technologies and how technical systems effectively work stems from the fact that the more technologies are developed, the more they become specialized and diversified (Ellul 1980, 200, Hui 2019, 21).
Precisely because the technical system we live in is global, we need global ethical regulations for designing, producing and commercializing the elements of the technical system, most notably AI-based technologies. However, as long as these regulations derive from universal ethical values, the ethics of technology will defend the interests of tech companies and of a technology reduced to mere functionality, and will only be at the service of an economic model that is ethically unsustainable, since it systemically destroys localities as places for practicing ethos.
It is obvious that technologies have functions that make economic life possible. However, a different perspective on technology, one that goes beyond its functionality, is needed in order to escape the global techno-economic trap of unification and totalization, and to see what a technological ethos, as a mode of relating to technological reality, might look like. In its current tendency toward unification and totalization on the one hand and specialization and diversification on the other, technology appears as a quasi-universality: human animals are specifically human because they use technics in an incomparably more developed way than non-human animals. Anthropology is technology, and from this perspective the technological condition of human existence can be defined as universal. However, this technological/anthropological fact appears within “particular cosmologies” (Hui 2019, 265) and the multiple localities of which these cosmologies are composed. The task here is to reconsider technics through the category of diversity that we so often invoke when defending culture (cultural diversity vs. the homogenization of cultures through globalization) and the environment (biodiversity vs. monoculture).
Ethical life is unsustainable without technodiversity. This is why technodiversity has to be protected: in order to save localities by making their inhabitants capable of transforming their places technologically and, as a result, of keeping their ethos alive. We do not need to fight against the totalizing and unifying power of technologies, for they are what makes us human, all too human—if not inhuman. What we need is to think carefully about this inhuman aspect of ourselves and of our technologies, in order to fight with them for a more diversified distribution of technological power, one that is respectful of local open systems as places for practicing ethical life.
How to Live Together with Animals When Ten Billion People Have to Be Fed by 2050?
The development of human technologies under industrial capitalism, notably since the beginning of the age of the great acceleration, has led to the incredible suffering of animals. Even though findings in ethology and zoology offer more and more proof of near human-like levels of consciousness in vertebrates and of the cognitive processes of animals within their habitats, the animal condition has never been as tragic as it appears in our times. This is a fact that is more and more difficult to bear from an ethical perspective: whether we like it or not, the process of meat production in intensive animal farming might inspire what Primo Levi described as “a shame at being human” (quoted after Deleuze 1995, 172). In the context of the farmed animal industry, the inhuman power of human technologies—these human industrial machines exploiting animals in the name of cost minimization as a basic production rule, one which excludes the very possibility of animal ethics—strikes us here more than anywhere else.
“The relations between humans and animals must change,” Derrida said, arguing that this change has to be both ethical and ontological (Derrida and Roudinesco 2004, 64). However, human-animal relations have to change not only with regard to the moral consideration of animals, even though considering non-human species must transform our ethe, that is, our modes of life, standardized by what we eat (Pelluchon 2018). Our relationships with animals have to change also because the industrial degradation of animal lives heavily affects the human condition in the first two decades of the 21st century and constitutes an imminent threat to humankind. It is common knowledge (yet still denied by many) that human consumption, and eating meat in particular, has a great impact on climate change, the sixth mass extinction, and the health of people living in industrial societies. The current system of food production and distribution is largely anthropic, for it constitutes one of the major factors in the growth of thermodynamic entropy (atmospheric greenhouse gas concentrations) and of biological entropy (food monoculture, which destroys biodiversity: forests, animal habitats and local modes of life). The question is whether industrial food production and distribution can be anti-anthropic. Can industry in general and the food industry in particular lead to anti-anthropic eating habits and behaviors, given the fact that the latter changed radically under industrial capitalism, notably in the era of the great acceleration?5
According to recent United Nations projections6, the world’s population will approach the 10 billion mark by 2050. Feeding 10 billion people, 70 percent of whom will be living in urban spaces7, will be possible only if we can invent new industrial methods of producing food. The urbanization of the planet, responsible for the shrinking of rural regions8 and the emergence of megacities, notably in developing regions, is one of the most striking features of the Anthropocene. A systemic transformation of food production and distribution now emerges as the very condition of sustainable urbanization. The historically unprecedented context of planetary urbanization requires us to address food issues beyond dietary preferences, which must remain individual choices. However, in order to mitigate the environmental impact of intensive animal farming, a concerted political effort must be made to make effective adjustments between population health, local economies, environmental protection, and producers’ interests.
In an effort to anticipate the future consequences of the planet’s urbanization, we argue that the transformation of cultural urban food practices should be approached as an opportunity and a potent tool for making a change in the Anthropocene, rather than as a sad renunciation or acceptance of displeasure. Plant-based cuisine has a great urban potential to transform immemorial local food practices and thereby keep them alive, while contributing to the development of new forms of knowledge concerning how to live and how to act. Food is not just food. Food must be just, and therefore the food industry has to become the object of an ethical, social, and politico-economic critique. With the planetary expansion of a capitalist free-market economy, food giants have not only swallowed small farms and destroyed rural ways of life. Through their marketing strategies and the programming of consumer choice, they have also radically changed dietary habits in Western societies and worked to generalize this new Western lifestyle in countries where meat was not the basic element of traditional cuisines (Burgat 2017, 9). In this respect, the food issue in hyper-industrial societies needs to be addressed not only in relation to global meat production and consumption, a threat to organisms, including humans, living in the Biosphere. In addition, a sustainable system of food production and consumption has to be reinvented from the perspective of localities and their cuisines, in order to contest an insipid monoculture and re-open the vital link between food and diversity on an urbanized planet.
Facing the Anthropocene also means asking how to live together with animals and why their well-being should no longer be treated as a secondary ethical problem. Global meat consumption is projected to rise in proportion to the world’s population growth over the coming decades. This all but irrevocable trend makes industrial livestock production unsustainable from a planetary perspective. Hyper-industrial societies capable of considering the interests of animals are in fact those which will be capable of taking care of themselves.
By the middle of the second half of the 21st century, the question of ethics should be addressed as the question of the organization of life in hyper-industrial societies in which ethical practices are possible, rather than as a question of values. In the new state of things, a planetary disorientation that challenges national and international legal orders, ethical thinking has first to be critical thinking. What is at stake in the era of algorithms and generalized automatization is the conditions in which ethical acts can be performed, rather than the question of justice. In this context, the notion of ethos is crucially important and needs to be rethought in tandem with the notion of locality.
‘We as a civilization are too much like someone addicted to a drug that will kill if continued and kill if suddenly withdrawn.’ (James Lovelock)1
To the surprise of commentators worldwide, in his State of the Union Address of 2006, George W. Bush began a call for investment in climate change solutions with the assertion that ‘America is addicted to oil’; that ‘the best way to break this addiction’, moreover, ‘is through technology.’2 The claim was met with displeasure by critics who, insisting on the need to differentiate between economic necessity and the euphoria of uncontrolled consumption, saw the 43rd president legitimating hyperbole habitually identified with the political left. Others have sought to demonstrate that the metaphor is not metaphorical: ‘Just as the consequences of alcohol abuse, from DUIs to cirrhosis, are symptoms, global warming is a symptom of oil addiction’.3 The tension between the two positions can be resolved by loosening the etiological criteria of addiction in line with contemporary research. In this context, it makes more sense to speak of our increasingly pathological attachment to the world of technological pharmaka made possible by oil, rather than of an addiction to the black stuff itself. Our focus is therefore less on a narrow definition that encompasses only the stereotype of far-gone, destitute, and seemingly irrecuperable abusers of a small range of traditionally recognised addictogens, like alcohol and heroin, and more on what the social psychologist Bruce Alexander terms ‘addiction3’: a category that admits the prospect of consuming more or less anything to the extent of consequential ‘overwhelming involvement’ (shopping, eating, video games, pornography…), and which for the most part sustains the appearance of normality by denying the potential ‘fatal consequences’ of our actions.4
If, as Alexander claims, addiction is a ‘substantial and growing danger in the twenty-first century’, that is not, contrary to myth, because we have been seized by uncontrollable hedonism. (We have known for close to thirty years that the neural mechanisms for craving are bound up with, but not identical to, those for pleasure.)5 It is because of the short-term therapeutic role addiction plays in facilitating our adaptation to the stressful environments of contemporary living, however detrimental it proves to be beyond that.
Foreshadowing Bush, and with a greater emphasis on the simultaneous curativity and toxicity of technology than we find in his eulogisation of ecotech, Nicholas Georgescu-Roegen wrote, back in 1977, of ‘mankind’s addiction’ to the ‘comfort offered by the exosomatic organs’. ‘This addiction’, he continued,
is completely analogous to that of the first fishes which evolved into air-breathing reptiles and thus became irrevocably addicted to air, now constitutes a predicament because the production of exosomatic organs became from a certain moment on dependent on the use of available energy and available matter stored in the bowels of the earth.6
The analogy risks being unhelpfully simplistic if read as diluting the concept of addiction to the point where even air is addictogenic. But Georgescu-Roegen’s point can be made more sophisticated by linking it back to his work on the use of technology to stave off entropy and what the psychologist Mihaly Csikszentmihalyi terms ‘psychic entropy, a disorganization of the self that impairs its effectiveness . . . to the point that it is no longer able to invest attention and pursue its goals’.7
Through the technological organisation of our local milieus, we construct our own little ecological niches, or microworlds, and create our own interiority in the process. This is what Bernard Stiegler calls the anti-entropy of ‘work’, in strict opposition to the entropic, or ‘anthropic’, exhausting forces of ‘labour’.8 This sense of work is fundamentally related to what Csikszentmihalyi famously calls the vitalising, ‘transcendent’ happiness of ‘flow’, or immersion in a self-contained and autotelic world of one’s own making, oblivious to the distractions of competing external stimuli. He called this ‘being in the zone’. Csikszentmihalyi also saw, however, that flow experiences, from watching television to performing surgery, can be powerfully addictive, providing zones of calm focus in the midst of bewildering transformation.9 Subsequent research, most notably by the addiction anthropologist Natasha Dow Schüll, has shown that the gap between therapeutic work and toxic addiction may be imperceptibly narrow. Technologies from gambling machines to smartphones, often designed explicitly with addictogenesis in mind, serve as substitutes for world- and self-creation, a means of restructuring the turmoil-afflicted mind with goals and direction, alleviating stress and anxiety (in other words, psychic entropy) for those otherwise unable to achieve flow states.10 Proponents of the ‘entropic brain theory’ in neuroscience similarly posit that stability-reinforcing patterns of activity associated with addiction (as well as OCD and depression) ‘could be functional in . . . working to resist a more catastrophic collapse [in]to’ the regression they identify with ‘primary’, or elevated-entropy, states of consciousness.11
Yet whatever the relative health, or capacity for withstanding environmental perturbation, afforded by the respite of these zones, a potentially explosive problem results from the way that local sites of anti-entropy tip out entropy into their surrounding environments, be they individual bodies or the societies that house them. Mental stability comes at a price, and one that becomes all the costlier when the stress produced by the labours of our ever-expanding technosphere goes hand-in-hand with exosomatosis, or the spiralling doses of technology needed to prop up our ailing biology and planet. This ailing has become all the more acute of late, on account not just of climate change, but moreover because of a mooted ‘evolutionary mismatch’ between the anthropic forcings and pressures exerted on us by our technologically organised milieus, and the ability of our evolved (‘endosomatic’) physiology to accommodate them, most notably an overburdened and increasingly dysregulated dopamine system.12 The central contention of this chapter is that the two phenomena are indissociable: we cannot hope to combat the collapse of our planetary ecosystems if we do not first address the ‘functional uncoupling’13 of Homo sapiens from the delocalised global spaces to which we are pushed to ever greater lengths to adapt. At the very heart of ecological catastrophe is a chronic-systemic crisis of our psychological and social habitats, caused by populations who consume to dangerous excess as the only available strategy for coping with the pressures of exploitation to which contemporary society exposes us.
1. The Entropocene as Limbic Capitalocene
It is a fundamental premise of this book that ‘Technology’, in the words of John Stewart, paraphrasing Stiegler, ‘is Anthropologically Constitutive’.14 We cannot grasp what it means to be human without reference to the technical prostheses that regulate our experience of time, desire and attention, not to mention our ability to participate in the expected norms of society. Our tools are as vital to social life and the life of the mind as oxygen is to our physiological existence. For our evolved physiology to be continually reinvented by our technics, however, there needs to be a biological correlate that explains our plasticity; one that allows for who we are to be transformed by what we use to navigate the world. The latest suggestion attributes the enlarged cerebral cortex of members of the genus Homo to a ‘dopamine dominated striatum’, which differentiates us from earlier hominid ancestors by accounting for enhanced sensitivity to social and environmental cues, as well as diminishing aggression in favour of sociality and cooperation.15 The dopamine system thus constitutes the neurobiological interface through which the human organism learns from and adapts to its surroundings, governing our responsiveness to external stimuli.
While it has long been understood that certain pharmaceutical substances, like alcohol and nicotine, exert a decisive modulatory effect on dopaminergic activity, and correspondingly on our behaviours, it is now becoming increasingly apparent that our experience of the world is continually rewritten, via the brain, by the technical objects that organise our lifeworlds. There is no hard and fast distinction, in other words, between pharmaceuticals and the simultaneously toxic and curative pharmaka that are technologies. From ritual drinking and smoking, to the ever-bigger and more energy-demanding cars needed by commuters (Georgescu-Roegen’s example), to the takeaway coffees and smartphones that now serve as unavoidable entry-points into the contemporary world of work, whose very necessity disinclines us to acknowledge the extent to which we are automated to accommodate their presence, our social and mental lives are habitually structured around the legitimation of certain modes of addictive, up-dosed, technology consumption. But just as breathing oxygen is a principal cause of carcinogenesis, the life created, or sustained, by our exosomatic organs is also inseparable from what, following Stiegler, we are here calling anthropy and the deterioration of our artifactual environments. In a vicious circle of consumption, leading to environmental destabilisation, leading to further, more pathological, consumption, the greater the stress placed on us by those environments, the more we become reliant on the therapy provided by the very technologies that do so much to cause that stress in the first place, and at ever greater cost to the planet.
The impact of some of this dependence is already well established and, indeed, being tackled, for instance, in the commitment of the United Nations Sustainable Development Goals to reducing deaths from ‘non-communicable’ diseases by one third. The WHO’s report on Health in 2015: From Millennium Development Goals to Sustainable Development Goals devotes two whole chapters to NCDs: one focusing on mental health, dementia and substance abuse; and another on maladies stemming from poor lifestyle and preventable, anthropically-caused, environmental conditions including cancer, chronic respiratory problems, and so-called ‘diseases of poverty’ and ‘despair’ like cardiac illness and diabetes.16 The report remains conspicuously silent, however, on what can be identified as the underlying dopaminergic and ecological, above all economic, causes that link the two chapters, and has just as little to say about newer forms of environmental illness tied to the excessive consumption of more recent technologies. The effects of excessive screen-time on childhood development, and of social media on the health of our democracies,17 are only now becoming the object of emerging scientific knowledge. The recourse to digital tablets by exhausted parents, who use them for respite, to pacify small children, has led to diagnoses of attentional deficiency and linguistic and emotional under-development often confused with autism.18 In another indication that the fallout of technology intoxication calls for an understanding of addiction that takes us beyond conventional ideas about the scale and social impact of problematic consumption, connections have been made between election-hacking in the UK and United States, and digital media consumers’ craving for a ‘buzz value’ that trumps the veracity of online content.19
Luca Pani is the progenitor of the aforementioned theory of an ‘evolutionary mismatch’ between ‘current environmental conditions in industrialized countries’ and the ‘completely different’ ones ‘in which the human central nervous system evolved.’ One ‘remarkable example’ of this uncoupling of the human organism from its habitat, he argues, is the development of ever more powerful delivery mechanisms of drugs into the brain, the cumulative effect of which is to
interfere with the global adaptation of an individual to its environment, producing not only an impairment in his/her hedonic capacities, but also a more disruptive effect on the cognitive and emotional abilities that are necessary for an effective interaction with the external world.20
The claim is made specifically in relation to hypodermic needles, crack pipes and the organic solvents often sniffed by addicts. But it also lends itself readily to a reading of the increasing potency of everyday technologies across the whole history of capitalism, which should no longer be separated into distinct producer- and consumer-led phases. What began with the trade in spices and sugar proceeds through tobacco, opium and caffeine on the way to pornography, pop music, modified corn starch and carfentanyl. The portable screen as a route of administration for the intoxications of ubiquitous gambling and fake news is just the latest stage in this history, and needs to be understood ecologically, in relation to the environmental stresses that push people in their direction, most notably the proletarianisation of world-building, which the industrial production of craving, if not pleasure, seeks to offset. The passage to commodity-harvesting of comparatively mild psychoactives previously used only for medicine coincides with the early-modern onset of what the environmental historian Jason W. Moore calls ‘Cheap Nature’, referring to the un(der)paid toil extracted by merchants who would credit themselves for the industry of slaves, not to mention that of plant matter and the progressively depleted soil of the plantations. This concept of Cheap Nature, encompassing ‘Cheap Food’, ‘Cheap Energy’, ‘Cheap Raw Materials’, and ‘Cheap Labour’, all priced in a way that ignores the long-term consequences of systemic overwork, takes us to the heart of what Moore reclassifies as the ‘Capitalocene’.21 But there is also another, vital, ‘cheap’ at stake here: one that cuts across the binary of nature and culture, forcing us to see the collapse of planetary ecosystems in terms of the degradation of our social-technological environments, and the undue stress that this places on our biological functioning.
Let us call it Cheap Desire, or Cheap Attention, in reference to a will that the climate-change-disavowing mentality of business-as-usual needs to be infinite. The exhaustion of this will, both individually and collectively, is bound up with increased reliance on the manufacture of habitual and frequently addictive consumption as a coping strategy.
The long-standing but increasingly explicit elicitation of dopamine release in the human limbic system functions as the under-acknowledged engine of contemporary economics, not least because it constitutes the flipside of our enforced adaptation to the disadjusted environments in which we consume. Biologists have been warning for years of the risk posed to our health and intelligence by endocrine-disrupting pollutants,22 but the reciprocal reinvention of humans and the technosphere is yet more profound than even this warning implies. The Anthropos of the Anthropocene is one whose biochemistry is undergoing constant modulation by extractive technologies that engineer consumptive habits to maintain the waning levels of demand around which global order is organised. In the words of Bruce Alexander, addiction has been ‘globalised’ through the exploitation of the very nervous system via which we interact with and learn from our surroundings. This ‘dopamining’ is, in turn, inextricable from capitalism’s production of ‘psychosocial dislocation’23 and our corresponding attempts to withdraw from what David Graeber has called the ‘dead zones’ of our traumatised working habitats.24
When Jason W. Moore speaks of the ‘Capitalocene’, he does so to avoid holding the planet’s various populations equally responsible for an ecological catastrophe caused vastly disproportionately by the ‘developed’ capitalist economies of the prevailing world system.25 In so doing, he runs the risk of unduly divorcing us from complicity, as if capital is somehow distinct from the people who continually remake and enact it; hence our (Stieglerian) stated preference for Anthropocene, with its echo of Entropocene.26 A more nuanced assignation of responsibility comes from reframing the problem of causality in relation to habit-creation and the manipulation of the pleasure circuitry of the brain. The American historian David T. Courtwright has coined the expression ‘limbic capitalism’ to describe the coupling of the entrepreneurial exploitation of the ‘evolved drives’ of our neural infrastructure of reward, with the provision of goods and services designed ‘to cope with the damage’ inflicted by free markets on the psychosocial structures that enable us to absorb the shock of change.27 Limbic capitalism has been brought to the fore by the combination of relentless labouring under conditions of mounting precarity and deficient social support systems, which places the burden of coping firmly on the side of individuals whose only survival mechanisms become the panoply of cures-for-sale offered up for consumption by the market. Recent research into the social psychology and neurobiology of addiction suggests that this process should no longer be framed around the idea of the brain being irreversibly ‘hijacked’ by substances that destroy its natural chemical composition.28 But there is a legitimate question of our complicity in the surrender of an agency that is only ever fragile. 
We willingly, albeit passively, submit to bombardment by ever more refined forms of stimulation to distract us from the perturbations of a market system that, be it via workplace deregulation or through the imposition of structural adjustment programmes on developing countries, systematically dissolves communities’ capacity to employ collective niche construction in the service of vitality, that is, to participate actively in the formation and modification of their living environments.
Bringing together Moore and Courtwright enables us to see that the ‘Entropocene’ is also a ‘limbic Capitalocene’: an epochal disaster that encompasses not just the planet and human civilisation, but is moreover rooted in a retreat into oblivion that Alexander describes as a ‘rational’, ‘adaptive’ response to the entropic climate in which we labour.29 Ecological catastrophe is less about a surfeit of human ecosystem-engineering than its absence: the surrender of agency to an automation of the nervous system by technologies that think and feel in our place. The result is a vicious cycle of excess, where climate change is biochemically intertwined with the overworking of the dopamine system, produced by the ever more efficacious doses of intoxicants we consume to anaesthetise ourselves against the impact of social breakdown. And this means that attempts to deal with climate change will only, and futilely, be treating its symptoms, unless they also engage with the addictogenic, ‘hyperdopaminergic society’ that lies at its origin. By the same measure, the solution will not reside in implementing consumption abstention, ‘dopamine fasting’, or a global ‘Twelve Steps’ programme either. We cannot do without our pharmaka, and nor can we simply eliminate their constitutive toxicity through some fantasised process of purification that preserves only their curativity. But we can aim for an organisation of society that curbs their toxic potential, lessening the stresses that leave us so in need of their intoxicating power to salve that we consume them to pathological, destructive, excess, by generating alternative forms of therapy. Understood in these terms, the project of planetary detox intersects felicitously with the philosophico-political aims of the Internation: to cultivate locality and a restoration of depleted social bonds as means to recapacitate the agency that we have surrendered to automations of the nervous system.
2. Dopaminergic Animals in a Hyperdopaminergic Society
The crux of what looks like our collective pathology revolves around the relationship between culture and the neurotransmitter dopamine, whose functions include social bonding, the facilitation of experiential learning, habituation and anticipation. The principal role of dopamine concerns its involvement in the seeking out of novel information and the encoding of repeat behaviours that prove initially rewarding, or ‘salient’. To put this in the recent language of Yuk Hui,30 it works to absorb contingency into a routine, by bringing us to crave the stability of habitual repetition. The process begins at birth: one currently dominant idea builds on the attachment theory of John Bowlby to argue that the limbic system is responsible for the formation of social bonds between mother, child and the extended family.31 Bowlby observed that young children starved of maternal attention quickly adapt to their environmental instability by becoming withdrawn and emotionally detached, reacting more to the novelty of new toys than to the unfamiliar adults who bestow them.32 These changes are now understood in relation to neuroplasticity, meaning the ability of the neuronal organisation of the brain to be dopaminergically moulded by the stimuli of its surroundings. Rat studies have shown that contact between mother and child not only influences the development of dopaminergic circuits in the newborn’s brain, but also conditions the parents’ emotional and physical attentiveness, by bringing them to suffer the absence of their offspring through cravings more familiarly recognised as love. Pups reared in prolonged isolation demonstrate ‘elevated baseline dopamine levels and increased dopamine release in response to acute stress in adulthood’.33
The dopamine system, in other words, compensates for the lack of a familial anchor point by facilitating the creation of stabilising habits in the face of stress. Through it, we reach out and latch on to anything able to create an emotional impact, with our neuronal circuitry reorganising to become more responsive to the source of reward, pruning away, in the process, synaptic relations linked to the decreasingly necessary wider orbit of attention. This mechanism for coping with the absence of a social bond proves highly adaptive, equipping us to live through anxiogenic periods of instability. But it is also linked to ‘enhanced sensitivity to psychostimulants such as cocaine’ and ‘may lead to increased vulnerability to addiction’.34 Addiction thus ‘shares a common neurobiology’ with attachment,35 in an identity that explains the scientific recognition that love bears all the neurological and psychological hallmarks of substance dependence. It should also, therefore, be seen as a kind of substitute for social investment: a way, we might say, of fabricating (ontological) ground there where its absence becomes most apparent. The effect cuts both ways, with addiction characterised by a retreat from the social relations for which it substitutes. Looking at the tightly knit networks of companionship that often exist among street users, we can also see how it functions as a complicated attempt to create social attachments there where they are found wanting.36
The biology of attachment is one way of making sense of Bernard Stiegler’s claim that addictions are not solely pathological, but simultaneously toxic and curative.37 It likewise sheds light on an established, but debatably successful, therapeutic tradition of seeking to replace toxic addictions (heroin, smoking, alcohol) with ‘better’ ones (to God, methadone, vaping, AA meetings and running, for instance).38 Catherine Malabou is another recent exemplar of this tradition, arguing that ‘addictive processes have in large part caused the Anthropocene, and only new addictions will be able to partly counter them.’39 We need to be careful, however, not to conflate ‘better’ with toxicity-free, or with next-generation technological quick fixes intended to facilitate sustained consumption. The looming future of geo-engineered skies, seeded with a shield of aerosol sulphates to protect the planet from the solar heat building up behind it, has already been compared to enabling alcoholism, as the ‘dialysis that allows the patient to continue drinking’.40
The release of the dopamine neurotransmitter is at the heart of our capacity to adjust to environmental change, and its relation to managing uncertainty, in particular, explains why it has arguably played a vital role in both making and now unmaking the modern, globalised, world. Writing in The Dopaminergic Mind in Human Evolution and History, the psychologist Fred Previc argues that the story of human ecological history is one of the increasing dominance of dopamine in the brain, which he links, in turn, to the rise of ‘abstract intelligence, exploratory drive, urge to control and conquer’, as well as acquisitiveness, goal- and future-directedness, long-term planning and the pursuit of religious and scientific truth.41 The emergence of the dopaminergic mind is developmental rather than evolutionary, a product of ecological shifts inducing neurochemical, but not genetic, change. It begins with prehistorical alterations in diet before intensifying around 6,000 years ago, alongside the growing need to compete for resources and ensuing calculations of settled societies. Previc’s argument, here, resonates with major evolutionary-anthropological claims about the inability of our cognitive architecture comfortably to manage large numbers of social relationships, and the breakdown of our sense of communal belonging and motivation to participate in the life of the collective, once a certain scale threshold is passed.42 We can identify neolithic sedentarisation and, in particular, the ensuing rise of cities as a significant source of this growth of competition, because they removed people from the familiar, small-scale networks of extended family life and transplanted them into ‘depersonalised’43 urban settlements where they had to ‘suppress suspicion of others’, negotiate cultural politics and ‘adapt to densely crowded neighbourhoods’ of complete strangers: ‘unfamiliarity became the measure of human relations’.44
The result of this heightened stress, Previc contends, was neurochemical imbalance, triggered by the depletion of serotonin and norepinephrine relative to dopamine. The next stage of his argument corroborates Peter Sloterdijk’s identification of early-modern European expansionism with the rise of a ‘risk-taking’, ‘disinhibited’ subjectivity.45 Previc posits that the reorganisation of society around dopamine was a decisive factor in colonialism, the growth of capitalism and the Enlightenment and has become even more pronounced since the second, ‘hyperdopaminergic’, half of the twentieth century.
‘Hyperdopaminergic society’ describes the neoliberal era of enforced adaptation to the demands of free markets; the ideology of ‘disruption’; and the proliferating use of dopamining techniques to colonise what has elsewhere been termed the ‘available brain time’ of consumers. ‘A highly dopaminergic society is fast-paced and even manic, given that dopamine is known to increase activity levels, speed up our internal clocks and create a preference for novel over unchanging environments.’46 Previc reels off a list of ‘hyperdopaminergic disorders’, including depression, obsession-compulsion, autism, schizophrenia, Tourette’s, Alzheimer’s and Parkinson’s. We can also add ADHD to this list, though it is also, ironically, linked to traits that can thrive in hyperdopaminergic conditions. The D4 dopamine receptor is believed to have evolved around forty thousand years ago, at a time when the enhanced susceptibility to stimulation it confers would have proved adaptively advantageous for ancestors who took risks to explore new territories in search of food. Nowadays, the allele is thought prevalent in sufferers of attention deficit disorders, who end up being pathologised by the absence of unexplored palaeolithic landscapes from the cramped and understimulating conditions of contemporary urban living.47 Homogenised, metric-heavy and greenspace-poor classrooms would be foremost examples of environments to which holders of the gene now risk being maladapted.48
The attempt to diminish this maladaptation, by increasing our margins of tolerance for the ‘inconstancies of the environment’ (to borrow a phrase from Georges Canguilhem),49 is a major cause of addiction, which should be recognised as another hyperdopaminergic disorder; perhaps even the most prevalent one. Its inclusion within this category of stress-related illness need not presuppose the classical and, it is now argued, outdated ‘disease model’, which treats dependence as a neurobiological disorder of the dopamine system, routinely said to be triggered when the brain is ‘hijacked’ by a limited range of corruptive intoxicants. If this model offers an all-too-easy mechanism for separating out problem drinkers, junkies and pornography users from mere model consumers, contemporary research is moving in the opposite direction, disentangling addiction from threshold-surpassing quantities of specific substances to focus more on the hyperdopaminergic settings that occasion an increasingly universal culture of obsessive consumption. Addiction is now increasingly located at the intersection of the neuroplastic brain with the instability of what the clinician Jean-Pierre Couteron, a former president of France’s Fédération Addiction, has baptised ‘addictogenic society’.50 Pathology no longer resides solely in the addict, but is learned, stemming from the viciously circular moulding of synaptic circuits around manufactured intensities that substitute for the social bonds we are losing the luxury of forming. As our surrounding environments become more hyper-competitive and antisocial, ever higher doses of supply-maximised stimulus respond to both rising baseline levels of dopamine and the desensitisation that follows from the brain’s adjustment to habituation.
3. Adaptation and Encapsulation
Addiction as a strategy for managing ecological stress is what we saw with the ‘Gin Craze’ of anomic, industrialising London, and in the gambling and opium dens through which the dislocated peoples of dopaminergic society absorbed the disadjustments of the eighteenth and nineteenth centuries. Phenomena like ‘white morbidity’51 and the current American opioid crisis combine with the ‘soaring’, non-medical recourse to Tramadol in parts of Africa and Asia,52 to say nothing of the ubiquity of staring vacantly at the screens of our digital devices, as instances of what Bruce Alexander describes as capitalism’s contemporary ‘globalisation of addiction’. There is nonetheless a difference between earlier, historical, epidemics and those that mark our hyperdopaminergic present. Patterns of abuse appear alongside periods of rapid technological change – the evolution of distillation techniques, or of instruments of stimulation delivery – as new sources of stimulus overwhelm the social norms organised around older forms of technology. But there is reason to think the organisation of culture can prove highly effective in regulating consumption. One post-millennial rereading of China’s Opium Wars emphasises the success of traditional Chinese smoking rituals in absorbing the massive increase in supply and facilitating the management of functional habits. Frank Dikötter attributes much of the received wisdom regarding opiate-addled Chinese people to colonialist-biological stereotypes of evolutionarily weak-willed orientals, which resurgent nationalism also exploited.
Far more destructive, in terms of eliminating the social shock absorbers of ‘backward’ imperial culture, were the nationalists’ politics of prohibition and the emergence of the disease model of addiction, which rewrote history to cast opium as wholly and singularly toxic: a destroyer of agriculture, work ethic and national character.53 Fredric Jameson has written that ‘the postmodern, or late capitalism, has at least brought the epistemological benefit of revealing the ultimate structure of the commodity to be that of addiction itself.’54 But this was perhaps already apparent from the time of the Opium Wars, with opium’s change in status coinciding with its commodification.
In any case, as has been argued elsewhere, historical addiction epidemics have tended to fade out as affected societies readjust their educational norms and social organisation to accommodate hitherto disruptive technologies.55 Bernard Stiegler has argued that, in our present age of the economics of ‘disruption’, the historical pattern of innovation leading to a ‘readjustment’ of society around new technologies breaks down.56 It comes as no surprise that global consumption has skyrocketed over the last thirty years, during the very period when knowledge of climate change suggests that we should have been taking action to curb it. Since the conservative revolution of the 1980s, relentless waves of technological change have combined with labour market reforms intended to reduce the welfare safety net and spur us on to adapt to a more aggressively competitive, Darwinian, way of living, dressed up as creative destruction. There is no time for systems of social support and integration to catch up with the disintegrations created by waves of technological-stimulatory overload. Coupled with the built-in obsolescence of technological devices designed for short lifespans, these changes make the chase to keep up with the rest of society our default mode of existence. It is in this context that the contemporary pattern of consumption is for multiple, overlapping, addictions, overlain on a metanarrative of unending adaptation, which leaves us struggling to equate curativity with intermittent, ‘hormetic’, doses.57 A constant state of excitation has become the ideological rule, irrespective of the longer-term damage this inflicts on our capacity for life-building.
Previc also suggests that, while posing potentially ‘the greatest threat to mental health’ in the industrialised and post-industrial world, the prevalence of dopaminergic disorders is ‘much rarer or at least less severely manifested in non-industrial societies’.58 Emerging research on the serious under-diagnosis of mental illness in the developing world raises questions about the latter part of this claim.59 So, too, should the continuity of ecological factors behind the rise of addictogenic societies. We can read the adaptationist economics underpinning the manufacture of dependence in the West as a direct continuation of the policies of dependence-inculcation trialled and imposed on Africa and Latin America, first through colonialism and then through the ‘structural readjustment’ programmes of the IMF and the World Bank. The effect of both has been sustained disadjustment, where consumption comes to substitute for community-led vitality and social support systems.
Between the seventeenth and mid-nineteenth centuries, the British East India Company imposed organisational reforms on Indian agriculture that, in addition to causing massive starvation and catalysing the Opium Wars, also set the tone for the whole of the limbic Capitalocene. Prior to colonisation, subsistence farming on communal land had been the norm. A traditional system of grain storage and reciprocal, mutual support enabled farmers to stave off the worst of climatic instability. But the British enclosed the commons and compelled the sale of grain reserves to drive up agricultural productivity, forcing the replacement of subsistence crops with those, including opium, specifically cultivated for export. The same opium was dumped on China to create habits and a demand that would be financed by the sale, hitherto refused, of Chinese tea to a British public newly enraptured by caffeine.60 Similar stories of enforced adaptation come from Latin America, where the carefully managed diversity of indigenous agriculture gave way, under duress, to the dominance of calorie- and dopamine-boosting sugar for export, which, in turn, freed up European labour to focus on urban industrialisation.61 Later, postcolonial efforts to overturn industrial under-development and the dependence of the developing world on Western industrial technologies were battered into submission by Western loans, distributed in the manner of a dealer looking to snare new clients, and which merely reinforced relations of patronage.
The conditions of loan receipt, and eventually also of their forgiveness and restructuring, went further in necessitating the destruction of techniques of social readjustment ruled to constrain the free functioning of the market.62 ‘Resilience’ came to denote the very opposite of how the philosopher of medicine, Georges Canguilhem, understands health: not the capacity to reinvent one’s milieu in the face of environmental perturbation, but relentless adaptation to the demand to open up domestic economies to international competition. The proletarianising effect of dependence on licensed Western technologies is redoubled by the active inhibition of local forms of community vitality.
Dopamine is linked to globalisation, on the one hand, by its contribution to abstract spatialisation, exploration, conquest and the pursuit of stimulation; on the other, by its links to the destruction of locality to which we are now bearing witness. If the history of dopaminergic society is coextensive with that of the stresses and seductions of the city, with the latter now collapsing from the centre outwards, the two may yet also prove coterminous. Much recognisably modern state-building was also born of the pressures of urban intoxication. Well into the nineteenth century, cities were plagued by cholera-infested water, the pernicious effects of which were diminished by the ‘antidiarrheal properties of narcotics and the antimicrobial properties of alcohol’.63 According to Courtwright, the building of waterworks and public fountains provided both hygiene and alternative sources of much-needed stimulation. Public parks and spaces worked to similar counter-stimulatory effect. Their ongoing disappearance is already recognised as a contributory factor in the rise of the ADHD64 that has been described as the other ‘side of the same mental coin’ as addiction,65 due to the way in which both conditions habitually entail a compulsive switching of focus away from socially preferred objects of attention, and towards more potent, distracting, sources of stimulus like screens and video games. One of the great problems of the digital stage of dopamining, on this note, is that the conventional organisation of our lives and analogue living spaces routinely provides little in the way of sufficiently attractive alternatives to coax those who have withdrawn from society back into it. If the city just about survives as a commercial entity, that is surely in large part because its high streets have been colonised by outlets furnishing the very objects of addiction and heightened stimulus, like mobile phones, electronic cigarettes, coffee and alcohol, that push us away from it. 
As a site of ritual coming together and localised point-of-retreat, it is ceding its place to the delocalised, virtual microspheres of Amazon, Netflix and social media.
Hence, more broadly, the irony of our unfurling planetary crisis: it corresponds to the fracturing of the world, understood in the Heideggerian sense of an ecology of possibility. Bruno Latour has recently analysed the politics of climate change disavowal around the idea of ‘the absence of a shared common world’.66 Faced with the choice between sacrificing their way of living, or maintaining business-as-usual at the price of condemning vast swathes of the globe to devastation, Latour argues, governing elites have retreated from the aspiration to rule in the interests of the many, and simply seek to sequester themselves away in privatised niches, from whence they can ride out the Apocalypse. His argument works equally as a description of a much greater spectrum of limbic Capitalocenic behaviour, insofar as disavowal – a classic symptom of addiction – has become the default mode of experience; insofar as we are all seemingly engaged in a process of withdrawal from the universal public spaces formerly characterised by joint attention, collective projects and what Jacques Rancière would call a ‘common aisthesis’, or unifying experience of what amounts to the same place.67 In dopamined, addictogenic society, the shared world succumbs to fragmentation into the hermetically self-contained bubbles of private islands, gated communities and internet echo chambers in which one can escape the feelings of stressed-out hopelessness. The reference to bubbles, here, recalls not only the filter bubbles of Web 2.0 evoked by Eli Pariser,68 but also the social and psychological structures of immunity, the ‘capsule architectures’ and ‘foam’ of Peter Sloterdijk: ‘In foam worlds, the individual bubbles are not absorbed into a single, integrative hyper-orb’,69 but remain separate. The limbic Capitalocene reveals itself as just the latest stage of the foaming of the world into self-contained capsules.
According to the psychiatrist Daniel Casriel, this search for insularity and ‘safe spaces’ is exactly what is at stake in addiction. Casriel understood ‘encapsulation’ as a third way out for those maladapted for ‘fight or flight’ to ‘anaesthetise’ the feeling of being unable to cope.70 And his generation of drug therapists sought to counter the tendency towards withdrawal by recreating a bridge between the zones of retreat of the addict and the sphere of the public by reabsorbing individual bubbles of foam into a communal milieu. Their project of detox through a reintegration of addicts into the public sphere was derailed by a combination of rehab consumerism – that is, of treatment models that reinforce the very proletarianising tendencies they are supposed to combat – and the shift of policy-making towards the War on Drugs. But proponents of this resynthesis of the public also ran up against the complexities of seeking to replace toxic dependences with others deemed beneficial. Even during the 1970s, therapeutic communities of the kind pioneered by Phoenix House were accused of functioning as ‘encapsulated addict worlds’,71 where addicts were allowed to live without thought for their reinsertion into the shared space from which they had withdrawn through addiction.
4. Towards a Psychosocial Ecology of Detoxification
If climate change is a problem of the limbic Capitalocene, which is to say, a phenomenon of addictive consumption induced by generalised proletarianisation, then what would that mean for how we treat it? Interestingly, there is a parallel argument present both in the dominant paradigm of addiction treatments and partly also in discourses of climate change. Its main logic consists in emphasising the necessity of a radical break with existing patterns of consumption. We owe to Daniel Ross the observation that, in an image much exploited by the industries of climate denial, the public imagination is dominated by visions of carbon cuts leading to enforced cold turkey: abrupt withdrawal from a way of life organised around technology-led consumerism, followed by the misery of endless abstinence, planet-wide ‘counting the days’, and slip-ups where we indulge in fracking ‘just one more time’. In the best-case version of this scenario, we might manage to get by as ‘functional’ addicts, carefully allowing ourselves a few minutes of internet, oil and shopping for plastic per day, but only in the strictly controlled doses already (ineffectually) advised in the small print of greenwashed society. Anything to avoid the intolerability of withdrawal symptoms that would be experienced both individually and collectively: perhaps not the vomiting and diarrhoea induced by discontinuing opioids, but certainly anxiety, irritability and fatigue. And who knows how these would scale up at the level of politics and society? Distaste for such symptoms, not to mention conviction in their absolute unviability, has already been circularly deployed to proscribe the diagnosis of addiction in relation to pathological digital media consumption.72 Environmental writers have been equally quick to insist that abstention from technology consumption is simply not an option.
Saving the world, it is routinely argued, will require more and greater technological modernity, not a reversion to ‘collective sacrifice’.73
The dubious advantage of framing ecological collapse in terms of the intolerable price of cold turkey, and more broadly of clinging to a disease model of addiction that allows us to distance climate change from a pathological consumption, is that it exonerates us from taking preventative climate action until it is effectively too late – until, that is, we hit the mythical, iceberg-free, point of ‘rock bottom’. Most famously spelled out in the first of AA’s Twelve Steps, ‘rock bottom’ is the moment where we supposedly, finally, take the crucial measure of admitting ‘hopelessness’ and ‘complete defeat’ in the face of a ‘mental obsession so subtly powerful that no amount of human willpower could break it’. It dawns on us once the object of addiction becomes so all-consuming as to exclude everything else we hold dear from the increasingly narrowed orbit of attention. According to this logic, the typical alcoholic is so selfish and lacking in care that they will only be roused to action when it becomes a matter of literal life or death. Only having lost their job, their money, their family, their health and perhaps even their home and now their planet, will they recognise the need to replace their own defective willpower with the motivation provided by AA, through God.
We should note a degree of wiggle-room in the original formulation of the Twelve Steps. AA co-founder Bill W refers to some early success in recruiting ‘young people who were scarcely more than potential alcoholics’, and even states one aim of the nascent society as being to spare them hell by ‘rais[ing] the bottom the rest of us had hit to the point where it would hit them’ sooner. That ambition was ultimately abandoned and he concedes that ‘few people will sincerely try to practice the AA programme unless they have hit bottom’.74 The doctrine of rock-bottom has since hardened into a cornerstone of the rehab industry, in spite of doubts over its basis in evidentiary science. According to the addiction writer Maia Szalavitz,75 the consecration of hitting bottom is due in part to a judicial system that legitimates the counterproductively punitive treatment of addicts, not least by dressing up retribution as tough love. Writing in The Sober Truth: Debunking the Bad Science behind 12-Step Programs and the Rehab Industry, the Harvard clinician and psychiatrist Lance Dodes is similarly critical. For Dodes, it is a myth that constitutes the ultimate form of marketing for a defective cure we are encouraged to consume all the more fervently when it emphatically doesn’t work.
The continued success of the commercialised rehab industry cannot be divorced from the way that its failures are routinely explained away through reference to clients who, having yet to hit the nadir required to spur them towards committed sobriety, just don’t want ‘it’ badly enough. The typically neoliberal emphasis on deficient personal responsibility conveniently covers over more compelling accounts of rehab’s production of relapsing recidivists: namely, its replication of the paralysis and enforced adaptation to stressful circumstances that pushes people towards addiction in the first place.
Fortunately, however, abstention is no longer the shibboleth it once was in therapeutic circles. The majority of the ‘addiction treatment industry is based on a defective model that has been unchanged since the 1930s’, namely one built on the reification of the Twelve Steps into the kind of doctrinally rigid and proletarianising, mass-produced consumerist model never envisaged by the Kropotkinian forefathers of Alcoholics Anonymous. Ideological dogmatism and the marketing of abstractly universal and ultimately branded modes of therapy have rendered many therapeutic institutions incapable of the self-transformation they preach, unwilling to share and create knowledge with ‘rival’ providers, and unwilling to devolve decision-making autonomy to patients who are frequently there by coercion, court-mandated to undergo rehab as an alternative to prison, and with no option but to comply with inflexible regimes imposed on them from above.76 But ‘there is also significant evidence’ of ‘empathetic and empowering approaches that let patients set their own goals’ yielding greater success than those that ultimately reproduce the environmental dislocation underpinning recourse to addiction.77 An emerging panoply of alternatives to the dominant one-size-fits-all programs of rehab includes elements of a return to the roots of AA in the anarchist theory of ‘mutual aid’.78 Other chapters in this volume sketch out how localities might be revitalised around the use of digital platforms to cultivate participatory, citizen-led, research, as per the territorial experimentations being undertaken in the Plaine Commune Contributive Learning Territory, Greater Paris. 
In a clear nod to the ethos of self-organising, spontaneously emerging community support, their potential is to provide groups and networks of groups with their own means of generating self-knowledge, which enables them, in turn, to transform and revitalise through their own efforts the toxic environments that push them to misconsume in the first place.
One pioneering experiment of this nature is Plaine Commune’s ‘Clinique Contributive’,79 which, under the auspices of the child psychiatrist Marie-Claude Bossière, brings together researchers, healthcare professionals and parents of young children diagnosed as suffering from the effects of overexposure to the distractions of digital technologies. Its aim is to combat screen addiction by creating a locality in which parents can learn from one another, in a non-judgmental setting, and generate shared knowledge about the developmental impact on their children of both parties’ excessive consumption. The clinic also provides the basis for recreating the extended networks of care whose erosion has made often isolated, tired and stressed-out adults cling to the comfort of their smartphones in the absence of better psychosocial integration. In exploring the connections between tiredness and the sustained use of sleep-impoverishing devices, they discover alternative forms of invigoration to digital overstimulation.
Contributive therapy thus becomes a technique for inventing forms of emotional and social connection that transcend commercialised individualism, new forms of philia tied to the pursuit of the common good, echoing the kind evoked by Aristotle in books VIII and IX of the Nicomachean Ethics. We can also see it as a form of work (not labour!) and as a process of ‘capabilisation’ in which the contribution to knowledge allows people to become what they are ‘able to do and be’.80 The therapeutic potential of contributory research can also be understood through the insights into human development of the Russian psychologist Lev Vygotsky. Vygotsky’s conviction was that human action is a transformational process where individuals, Homo sapiens as a species, and tools, exist in a network of mutual co-creation. In an essay on ‘The Collective as a Factor in the Development of the Abnormal Child’, Vygotsky characterised the social dimension of development as a ‘function of collective behaviour, as a form of cooperation or cooperative activity’.81 He borrowed from city-planning the concept of a ‘zone of proximal development’ to articulate how, with the help of peers or another more competent individual, the child becomes able to do things that she was not previously capable of doing.82 Vygotsky saw this phenomenon occurring especially in playful situations, where the ‘child always behaves beyond his average age, above his daily behaviour; in play it is as though he were a head taller than himself’.83 Development, he argued, emerges from a social, relational context in which the individual and the group grow into something different, by creating new norms in their relationship with the environment.
One of the few attempts to transfer this perspective into practice took place in the East Side Institute in New York, where, in the 1970s, the therapists Fred Newman and Lois Holzman combined Vygotskian conceptualisations of development with Wittgenstein’s work on language to create the psychotherapeutic method of ‘Social Therapy’.84 Social therapy starts from the premise that individuals ‘are forced to adapt to conditions which increasingly and more and more obviously are against not only their own interests but those of the human species as a whole’,85 with drugs and homelessness being part of a wide range of failed attempts at adaptation. And it understands the group as a ‘unit of transformation/change/growth/learning’ through which individuals can be transformed without a specific focus on ‘fixing the problems’ of its members.86 The group becomes both a method and a result, its activities serving as an emotional zone of proximal development. This volume’s chapter on ‘social sculpture’ outlines a similar ‘transindividuation’ of individuals within a collective through knowledge-sharing. Despite criticising the rigidity of Alcoholics Anonymous, Dodes similarly reinforces the value of this kind of localised therapeutic community, suggesting that ‘groups would be a highly valuable component’ in the treatment of addiction ‘if they were designed to help patients . . . to experiment with new ways of relating.’87
From this perspective, the function of addiction treatment is to facilitate the co-creation of forms of life hitherto impossible to imagine. And the relationship between what constitutes the possibility and impossibility of future development should be considered one of the most important steps in a therapeutic endeavour. In his book The Psychology of Experiencing: The Resolution of Life’s Critical Situations (1991), another Russian, Fyodor Vasilyuk sought to investigate ‘just what a person does when there is nothing to be done, when he or she is in a situation that renders impossible the realisation of his or her needs, attitudes, values, etc.’88 These moments were ‘critical situations’ in which the individual is unable to ‘cope with the existing external and internal conditions of life.’89 The same encounter with a metaphorical brick wall is addressed in DeYoung and Krueger’s understanding of psychopathology as a ‘persistent failure . . . to generate effective new goals, interpretations, or strategies when existing ones prove unsuccessful.’90
The Anthropocene presents us with the mother of all critical situations, one that threatens the very habitability of the planet, over and above exposing as ineffective the existing norms around which our lives are organised. But it also thus offers an opportunity for the abandonment of old norms that are making us ill, and an overdue end to hyperdopaminergic society. Hence its paradoxical promise of renewed vitality.
What is sometimes called the ‘hyperindustrial’ economy – defined by digital technology and disruptive marketing – is based on those two atomic elements (in the chemical sense of the term) that are carbon and silicon. They are also the two elements of hyperindustrial society in another sense: in the sense involved when we refer to water as the element of the fish – and where today these elements seem to have become toxic for those who try to live within this elemental milieu.
Carbon becomes the centre of the industrial economy with the invention of ‘heat engines’ (as Sadi Carnot called them), while silicon becomes the element of control technologies and the exploitation of individual and collective memory. To what geo-economico-politics of hypercontrol does this lead, especially in the light of China’s rise to the status of hyperpower? And in what way do these questions necessitate the perspective of a general theory of thermodynamic, biological and informational entropy that aims to rethink the conditions of decision-making in the Anthropocene?
We are confronted in the twenty-first century with an array of serious problems but among them two immense challenges stand out: on the one hand, those problems presented by carbon technologies, and, on the other hand, those posed by silicon technologies. While it may seem that nothing can trump the planetary threat of climate change, in fact both of these challenges involve existential threats and dangers amounting to what is sometimes called ‘extinction risk’, not least because these two challenges are absolutely inextricable. We believe that there is a widespread intellectual and political blind-spot about the economic, political, psychological and sociological significance of the vast technological transformation that has unfurled across the past quarter of a century. More specifically, it is today crucial to understand the complex and fundamental ways in which seemingly irreconcilable questions of economic and ecological sustainability relate to and are compounded by the transformation of computation, information, network and algorithmic technologies.
Hominins acquired the ability to create and use fire as early as the Lower Palaeolithic and the controlled use of carbon combustion became common in the Middle Palaeolithic. From that moment, the beings that would become ourselves found themselves within a fiery element defined by the capacity for artificial, controllable energy production and consumption founded on the flammability of organic materials. From the moment cooking was invented, this capacity was a matter of the potential to produce and consume energy in order to do work, thereby opening the possibility of ‘ways of life’, or what Marx called a ‘mode of consumption’: ‘the hunger that is satisfied by cooked meat eaten with knife and fork differs from hunger that devours raw meat’1. Both dangerous and beneficial, controllable within the risks of being extinguished or turning wild, this first symbol of technics was also the first object of care, long before the Neolithic Revolution.
In addition to warmth and cooking, the development of the controlled use of carbon combustion gave rise to other technologies, such as smelting, forging and gunpowder. But the modern history of carbon technologies obviously begins with the invention of heat engines powered by hydrocarbons derived from fossilized organic matter. More specifically, it begins with the external combustion engine, and more specifically still with the industrial (or thermodynamic) revolution that was set off by the steam engine envisaged by James Watt, which he patented in 1781 and which was to transform manufacturing and rail and maritime transport throughout the nineteenth century. In the twentieth century, fossil fuel power plants linked to electricity grids would further vastly transform both production and consumption, and automobiles equipped with internal combustion engines would transform road transport and make possible the rise of global aviation. The combustion of hydrocarbons, however, inevitably releases a significant level of ‘metabolic products’: while for the ten thousand years prior to the industrial revolution the global atmospheric CO2 concentration was 280 parts per million, by 2018 it had reached 410 ppm, with annual emissions and concentrations continuing to increase2.
The first integrated circuit was produced in 1958, the first microprocessor in 1971, the Apple II and Commodore PET home computers entered the market in 1977, the Microsoft Windows ‘operating environment’ was first released in 1985, the World Wide Web was opened to the general public in April 1993, Amazon was founded in July 1994, the domain name google.com was registered in September 1997, the Tencent and Alibaba conglomerates were founded in 1998 and 1999 respectively, the Facebook social network was made universally open in September 2006 (with active users rising from 100 million in August 2008 to two billion in June 2017), the capacitive multi-touch smartphone known as the iPhone was launched in June 2007 and Uber’s mobile app and transport services were officially launched in July 2009. It is notable that this timeline of significant dates increasingly focuses on consumer-based silicon technologies, reflecting the vast entrance of these transformational technologies into the consumer market over the past forty years. It is also notable that we have chosen to end it in 2009, reflecting that the last decade has seen a period of consolidation and monopolization of the silicon economy in the hands of a small number of super-giant corporate players.
Today, it has become transparently clear to everyone that silicon technologies have transformed every aspect of production and consumption, along with scientific and technological research of every kind, and this is especially so in the quarter of a century that has elapsed since the internet became public and global. All of this amounts to a vast ‘disruption’ of the technical system, along with every other psychosocial and institutional system.
Retentional technologies and the industrial capitalism of production
If the elemental function of carbon technologies fundamentally consists in the production of chemical energy in order that it can then be transformed into mechanical or electrical energy and consumed as work, then the elemental function of silicon technologies fundamentally consists in the production of an artificial memory that, too, can be put to work in manifold ways. Silicon technologies are retentional technologies (to borrow a term from Husserlian phenomenology). In Stiegler’s work, this very long history of retentional technologies (or what he prefers to call hypomnesic technologies) has been explored in detail and with respect to a wide variety of dimensions. If we here prefer to refer to silicon technologies – while keeping in mind the mnemotechnical history that extends through cave painting, the invention of writing systems (including alphabetization, which remains an almost unchanged standard from the Roman Empire to the Digital Leviathan), the printing press, the phonograph, the radio, cinema, analogue television and the becoming-digital of everything that we now see unfolding with silicon technologies strictly speaking – if we refer to silicon technologies, therefore, it is only because the proliferation of uses, services and functions associated with this latest stage of memory technology seems so greatly to exceed the mere ability to ‘record the past’. And yet, this is precisely the basis of all of them.
The industrial revolution whose possibility we previously ascribed to Watt’s steam engine could never have occurred without retentional technologies of a kind we have hitherto failed to mention: those technologies by which the continuous gestures of workers of all kinds were broken down analytically into their discrete elements, in order to be then programmed back into machines powered by the heat engines of Watt and his successors: the paradigmatic case of such a machine is Jacquard’s loom, but a thousand examples could no doubt be cited. The basis of this analytical process is what Stiegler refers to as ‘grammatization’, the process of turning something temporal (like speech) into something spatial (like writing), by turning the continuous into the discrete, on the basis of which it can be analysed and reproduced.
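The core operation of grammatization described above, turning something continuous and temporal into a finite sequence of discrete, spatialized symbols that can then be stored, analysed and mechanically reproduced, can be illustrated with a minimal computational sketch. The function name, the sample count and the number of symbol levels below are hypothetical choices made purely for the example; the point is only the two-step structure of sampling followed by quantization, of which the punched cards of Jacquard’s loom are the paradigmatic industrial instance.

```python
# Illustrative sketch only: "grammatization" rendered computationally as
# sampling (temporal -> spatial) plus quantization (continuous -> discrete).
import math

def grammatize(signal, num_samples, levels):
    """Sample a continuous function on [0, 1] at num_samples points and
    quantize each sample into one of `levels` discrete symbols (integers)."""
    samples = [signal(i / (num_samples - 1)) for i in range(num_samples)]
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    # Map each continuous value onto a discrete symbol in 0..levels-1.
    return [min(int((s - lo) / span * levels), levels - 1) for s in samples]

# A continuous "gesture": one period of a sine wave.
gesture = lambda t: math.sin(2 * math.pi * t)
symbols = grammatize(gesture, num_samples=8, levels=4)
# Once rendered discrete, the gesture can be recorded, analysed and
# reproduced by a machine that never possessed the original know-how.
```

The continuous gesture, once reduced to a finite alphabet of symbols, no longer requires the weaver’s body or memory in order to be repeated: this is the formal condition of the dispossession of knowledge discussed below.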
Grammatization can have noetic, political and economic consequences that support new forms of knowledge, but it can also lead to what Stiegler calls ‘proletarianization’ (drawing on Gilbert Simondon’s reading of the Grundrisse’s ‘fragment on machines’). If proletarianization has in traditional Marxist discourse been understood to refer to the systematic separation of workers from the means of production, Stiegler’s use of the term draws attention to the way in which those means firstly consist in the knowledge possessed by workers and transmitted inter-generationally. It is this knowledge that is literally removed from the minds of weavers and programmed into Jacquard’s loom. In addition to the ownership of the energy-production capacities of the heat engine, what in fact made the rapid acceleration of the industrial revolution possible was thus the ability of the capitalist to dispossess the worker of the knowledge of how to make things, knowledge that was then turned into information and recorded and exploited in the retentional technologies of machines: it is here that the history of industrial automation and artificial intelligence truly begins.
Industrial capitalism based on production thus arises from the concentration of carbon technologies in the hands of capital, but equally from the capitalist acquisition of retentional technologies through which workers, systematically dispossessed of knowledge, become labourers, that is, servants of the machine. From this vast process is born that great division between capital and proletarianized labour on the basis of which Marx and Engels would construct a revolutionary politics. In fact, of course, this founding moment of the industrial revolution was only the first step of a history that would continue through many chapters, including ones that Marx could never have anticipated: one key way in which to understand this set of chapters is as the unfolding of the epochs of grammatization.
Protentional technologies and the hyper-industrial capitalism of consumption
This is precisely the realization that came to capitalist producers at the beginning of the twentieth century. For Marx, the spread of machines (powered by carbon technologies and programmed by retentional technologies of grammatization) amongst the capitalist class was bound to make it increasingly difficult for any one capitalist to maintain an edge over others, leading to his diagnosis of a tendency of the rate of profit to fall. Economists ever since have disparaged this analysis, above all on the grounds that it is not what is observed in the economic history that has unfolded since it was described by Marx, ‘natural’ boom-and-bust ‘cycles’ notwithstanding. Indeed, this history does not seem to confirm Marx’s analysis. But this may be the result less of analytical error than of a fundamental transformation of capitalism undertaken in response to this very tendency, a ‘solution’ that itself amounts only to its postponement.
The essence of this ‘solution’ was the realization that it is possible to create new markets, not just by geographical expansion, but through the possibility of manipulating consumer desire and therefore consumer behaviour. If capitalism is a perpetual economic competition giving rise to perpetual technoscientific innovation, this is not just a matter of R&D and production: it is also a matter of the socialization of that innovation – all those processes through which new products are taken up by consumers, by which they are adopted. The shift to a hyper-industrial capitalism of consumption was in part a matter of the new organization of consumption, but the large-scale investment required to achieve the productivity gains to be realised from mass production was feasible only if consumer behaviour could be more or less reliably predicted, which is to say, produced: for this new consumer market in transport vehicles powered by internal combustion engines to succeed, it was necessary to invent public relations, or in other words, marketing.
As Stiegler has shown on many occasions, this invention was made possible not just by the discovery of this ‘idea’, but by the development of new forms of grammatization, and specifically the ‘grammatization of the sensible’ inaugurated with audiovisual technologies such as radio, cinema and television. It is not technological change as such that Marx could not anticipate, but the significance of the new analytical and programming possibilities opened up by these new retentional technologies. With these powerful new tools that could be used to access and influence the minds of potential consumers on an industrial scale, it became possible to completely transform the basis of profit-making in industrial capitalism, by constantly manufacturing the market for the new products that could then be constantly introduced and updated.
By accessing consciousness and targeting the unconscious, marketing and its associated technologies and techniques have progressively learned how to make consumer behaviour controllable, by interpolating (in the literary sense) tertiary retentions into the stream of consciousness. The basis on which it can do so, however, depends on reducing desire as much as possible to a calculable phenomenon, which is to say, grammatizing the relationship to the future. In other words, this amounts to a grammatization of protention, Husserl’s term for my immediate expectation, but expanded here to include every form of motive, reason, expectation, dream and desire.
This in turn involves a detachment of desire from everything incalculable, incomparable and long-term (including every form of education and inter-generational transmission), inducing a regressive tendency that aims instead only at the finite and short-term goals of the consumer behaviour required by the market. But this ultimately risks being self-destructive for the consumerist model itself, setting up a tendency for the libidinal economy (on which the macroeconomic ‘perpetual growth model’ fundamentally depends) to collapse, as libidinal energy is depleted: the ability to stimulate the perpetual increase in consumption required by the consumerist economy is thereby threatened. It is ultimately for this reason (along with the aporia of sustainability) that consumerist capitalism can be but a postponement of Marx’s diagnosis with regard to the rate of profit.
Silicon technologies and the ultra-industrial capitalism of algorithmic platforms
The protentional grammatization technologies of the twentieth century had only limited means of accessing the information and data that is necessary in order to calculate and predict the relationship between, firstly, grammatized content (for example, a television commercial that, in Husserl’s terms, amounts to a kind of industrial temporal object), secondly, protentional conditioning, and thirdly, consumer behaviour: the clearest indicator was ultimately the success or otherwise of a marketing campaign. But with the introduction of the silicon technologies that now dominate the twenty-first century, this question is fundamentally transformed, because the consumers of such grammatized content are ceaselessly and immediately sending data back to producers. On the basis of such data, producers can ever more finely calculate the relationship between particular content and particular responses from particular ‘kinds’ of users. The extreme speeds at which these processes occur in contemporary algorithmic silicon systems mean that it is also possible for these producers to adjust content in a very rapid and targeted way that was simply impossible in the twentieth century. This speed exceeds that of noetic processes themselves, and this rapid exchange and algorithmic control of vast amounts of user data gives rise to a kind of informational and protentional shock wave, analogous at the noetic level to the ‘sonic boom’ produced at flight speeds above Mach 1.
Every major consumer platform today utilizes powerful algorithmic techniques of this kind in order to maximize its ability to performatively influence consumer behaviour. Furthermore, global ‘platforms’ such as Alphabet and Facebook are now among the largest corporations on the planet and have become so through the new market they have created for the vast amounts of data generated by their users. If the capitalism of analogue audiovisual technologies was already hyper-industrial and performative (in Austin’s sense), then the new market of platform capitalism based on silicon technologies, user profiling and social networking is highly performative and can thus be considered an ultra-industrial capitalism of algorithmic platforms3. But this only intensifies the deleterious effects of such processes on the libidinal economy of consumers. And this in turn is bound to intensify the self-destructive tendencies of the consumerist macroeconomy, since it ruins the very basis of its ‘success’: the control of desire.
The anti-politics of ultra-industrial populism in the Entropocene
Behind such a paradoxical intention to produce consumers lies the even more paradoxical belief that this mass of consumers can continuously drive the engine of the global economy like a perpetual motion machine, and drive it to new heights. But perpetual motion is a myth based on the notion of an abstract machine that is thermodynamically impossible, and the ‘heights’ to be reached are in this case transparently at odds with the unambiguous imperatives declared by the IPCC. But in addition to that, the billions upon billions of bytes of data gathered from consumers by producers and platforms, fed into increasingly powerful and increasingly intelligent automated algorithms designed to calculate and control behaviour according to the imperatives not of the IPCC but solely of the market, have an extremely ruinous effect on the psyches of the individual consumers of whom this market is composed (who are today targeted almost from birth, if not from before birth), giving rise as they do to an infernal spiral of consumerist addiction.
Evidence abounds throughout the industrialized democracies of the political consequences towards which this ruination tends. And these consequences are intensified by the fact that all these performative techniques are applied also in the political realm. If, as Stiegler suggests, this entails the replacement of the adoptive performativity of ‘democracy’ with the adaptive performativity of ‘telecracy’4, where the demos no longer finds itself in possession of any kratos, then the algorithmicization of this telecracy via the silicon technologies of platform capitalism is already exposing the utter vulnerability of ‘representational’ political systems to a thoroughgoing disintegration at the hands of the ‘owners’ of this data and the manipulators of these algorithms.
This can be described as an ultra-industrial political regression (a new form of what is often called ‘populism’) to which ultra-industrial capitalism tends to give rise. But regardless of the degree to which the leading industrial populists imagine they can cynically keep hold of the reins of power as they exploit the fears and irrationality of the crowd, the enormous risk that they are precipitating is of hubristically engendering processes that will run completely out of control. All of this was first set in motion by the shift from an industrial capitalism of production to a hyper-industrial capitalism of consumption a century ago, for which the immensely destructive wars of the twentieth century stand as testament, and it is all this that remains at stake in an ultra-industrial capitalism of algorithmic platforms.
Reinventing economics as the science of counter-entropic struggle in exosomatization
Carbon technologies are thermodynamic: their function is to contribute to the struggle of noetic, technical (or exosomatic) life against its irreducible entropic conditions. But in utilizing these technologies to pursue counter-entropic ends, and given that all negentropic systems are localized systems that are bound to remain entropic in an overall sense, we inevitably produce entropic consequences elsewhere. And when those systems have extended across the entire biosphere, cinesphere, technosphere and exosphere, then this ‘elsewhere’ remains precisely here, and the toxicity they produce is unavoidably self-poisoning, ruining its biospheric element just as does the infusorian in Freud’s petri dish.
Silicon technologies are informational: their function, too, is to contribute to the struggle of exosomatic life against its irreducible entropic conditions. But in this case, the toxicity they produce pollutes not the biosphere but the noetic element of the knowing, technical beings who must nevertheless find the noetic resources to address all of these self-poisonings, whether carbonic or noetic, and to do so by making good collective decisions. It is this division between two kinds of entropic toxicity, and the necessity of recognizing the gravity of informational entropy, that we here seek to highlight.
Most economic theory (like most philosophy) has, to its detriment, remained rooted in a mechanistic physical conception that predates the discovery of the second law of thermodynamics, at least if we believe the economic historian Philip Mirowski5. This means that economic systems are not truly viewed as dynamic processes in perpetual struggle against entropic tendencies but are instead understood as involving one or another kind of static or cyclical equilibrium making possible the fantasy of perpetual growth. From the work of the physicist Erwin Schrödinger, the mathematical biologist Alfred J. Lotka and the economist Nicholas Georgescu-Roegen, however, it becomes possible to see biological (endosomatic) evolution as precisely involving manifold processes amounting to so many struggles against entropy, where these struggles are always localized – at the scale of the cell, the organism, the species, the ecosystem or the biosphere. And it also becomes possible to see that economic processes are what replace these evolutionary tendencies when life becomes technical (exosomatic), still always localized – at the scale of the tribe, ethnic group, society, nation or global economic system.
Mirowski shows that there is a contradiction in neoliberal economics between an absolutized, ‘universal’ conception of ‘the Market’ and a localized (but still informational and computational) conception of specific but highly artificial markets, where the assertion of this universality in fact ends up authorizing the elimination of the wealth of actual knowledge embodied in institutions of exchange of all kinds. His work makes clear that the dangerous turn of recent macroeconomic history – characterized by neoliberalism, financial crisis and proletarianization (in Stiegler’s sense) – has everything to do with the failure of economic theory to incorporate an understanding of entropic and counter-entropic processes, at both the thermodynamic and informational levels. The implicit question it raises is how to reinvent economic theory and practice by incorporating such an understanding from its founding premises.
For a general theory of entropy
This in turn raises the question of the necessity of a theory of general entropy. Such a theory would on the one hand seek to juxtapose and articulate the thermodynamic notion of entropy with the informational notion, and to exceed the limitations especially of the latter. And it would on the other hand, and in this way, be an account of the relationship between every kind of counter-entropic system, which is to say every kind of localization and de-localization process that works against the tendency towards the elimination of improbabilities, which is to say the elimination of the past (as what, for any noetic system, opens the possibility of a future). But as Smithson’s association of entropy with both waste and luxury already suggests, this also bears upon Georges Bataille’s ‘notion of expenditure’ and ‘general economy’ (not forgetting that for Bataille, expenditure beyond subsistence is not a question merely of waste but of an irreducible necessity of life).
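The formal kinship between the two notions to be juxtaposed can be stated compactly. In their standard textbook formulations (recalled here only as a sketch, not as a result specific to this argument), the Gibbs expression of thermodynamic entropy and Shannon’s expression of informational entropy share the same mathematical form, differing only in the physical constant and the base of the logarithm:

```latex
% Gibbs formulation of thermodynamic entropy,
% over the probabilities p_i of the microstates of a system:
S = -k_B \sum_{i} p_i \ln p_i
% Shannon's informational entropy, over the probabilities p_i
% of the possible messages of a source (measured in bits):
H = -\sum_{i} p_i \log_2 p_i
```

It is this structural identity that makes the juxtaposition possible in the first place, and it is the disanalogies between the physical and noetic situations to which these formulas are applied that make a general theory, rather than a mere analogy, necessary.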
What this implies, ultimately, is that any such theory is compelled to integrate difficult mathematical, scientific, economic, anthropological and technological questions with others that exceed these divisions between fields of knowledge, in the first place because what is at stake is the counter-entropic function of knowledge itself. Stiegler has indeed begun a project to open up this question of entropy in terms of its thermodynamic, biological, informational and noetic dimensions (in all of its ‘exorganological’ dimensions, in Stiegler’s recent terms), drawing on the work of Vernadsky, Georgescu-Roegen and Lotka, among others, and in discussion with scientists such as Giuseppe Longo, but this requires large-scale transdisciplinary contributory research projects to be established involving scholars across a wide variety of fields. Despite this daunting complexity, such a theory of general entropy has today become a necessity.
In a context in which the globalized systems of consumerist capitalism are reaching their limits, and in the process dragging many other systems past their limits, including geophysical systems such as the climate system, and also including the noetic systems through which alone good collective decisions can ever hope to be made – in such a context, where a cascade of catastrophic system failures seems entirely possible if not highly probable, it is solely on the basis of such a theory that counter-entropic investment prospects with the potential to bifurcate away from such a globally dangerous and monstrous situation can be identified, imagined, invented and realised. Such a bifurcation, and the general theory on which it can be established, will presuppose a reconsideration of the very basis and division of fields of knowledge, but it will also require a complete reorganization of silicon technologies at least as profound as the elimination of carbon technologies called for by the IPCC, and on a comparably short timespan.