Project 2050: Radical Futures
What does it mean to be human in the 21st century?
What will it mean to be human fifty years from now?
Project led by Ollivier Dyens and Damian Arteca
Technology promises extraordinary new objects, structures, and possibilities, but it also carries within itself threatening transformations that burrow deep within our psyche. AI, Big Data, organs-on-a-chip, quantum computing, 3D printing, CRISPR, virtual and augmented realities: today’s technologies force us to re-evaluate our understanding of the world as deeply as our ancestors did 50,000 years ago, when cave painting emerged. The shade of this world, William Gibson suggests, is that of a precarious balance between terror and ecstasy.
‘This is not a race against the machines,’ Kevin Kelly once said. ‘If we race against them, we lose. This is a race with the machines.’ But what does it mean for humanity to race alongside machines? How will we change? What will change?
Project 2050 will examine these questions. It will attempt to identify and analyze long-term patterns, and to perceive not only the forest beyond the trees but the shades of the entire continent. The future is an unknown space, an unknown continent of possibilities. Project 2050 is a gathering of explorers for whom the shades of the future offer clues to what we are, to who we are, and to how and why we are changing.
Project 2050 will explore and analyze the deep transformations of our society and offer clues to the patterns forming far away on the horizon. Project 2050’s aim is not only to analyze long-term patterns but also to make the future. Project 2050 starts from the premise that the future is quantum-like, in the sense that all possibilities exist until they collapse into reality through observation, action, and involvement. To that end, Project 2050 will use traditional scholarly methods and complement them with creative, far-reaching, and original thought and knowledge experiments, so that new insights can be discovered and made real.
Project 2050 will share its insights, questions, and experiments as frequently and as widely as possible, so that society as a whole can gain from its insights, failures, and successes.
Radical Futures: Events
Radical Futures presents a series of monthly public conversations that tackle the future in a radically new way. The series is part of a broader project exploring these crucial questions through interdisciplinary and intergenerational dialogue.
1-Are algorithms the new public intellectuals?
It is hardly controversial to note the decline of the public intellectual’s role in our culture. The mid-to-late twentieth century felled more public intellectuals than it produced; television, radio, and the press have all distanced themselves from the long-form interview and from academic thinkers, even juggernauts such as Chomsky and E. O. Wilson. The public intellectual’s role of observing, synthesizing, and re-presenting culture to itself has arguably been taken over by a modern successor: the Algorithm. With the rise of public-scale usage of private data, one’s experience of the informatic world is increasingly curated to and for one’s interests; our intellectual pursuits are now mediated by ever more efficient algorithmic systems. Where before we might peruse the bibliographies of our favourite intellectuals’ magna opera, we now receive automated recommendations from the likes of Google Scholar and Academia.edu. Where does the role of the public intellectual lie in tomorrow’s digital culture? What is gained, and what is lost, as we transition into new forms of curating and disseminating knowledge?
On October 2nd, the evening’s attendees included undergraduates, graduate students, alumni, and visiting professors. A “public intellectual” was understood to mean any individual who regularly shares their thoughts on a wide array of topics with a broad audience, catering particularly to matters of public interest and prototypically, though not necessarily, coming from an academic background. The question was posed: are algorithms fulfilling this role? Xin Wei (see his ‘Prototyping Social Forms’ on our website) raised an initial but crucial point: algorithms are processes, not entities, and one should avoid the problem of reification when discussing them alongside human interlocutors. This was problematized by a recent graduate in anthropology, who pointed out that algorithmic processes can functionally replace individual persons by taking on their utility: mediating, organizing, and disseminating information. Furthermore, one can imagine an algorithmic intellectual: one who produces results by following a set of strict rules. A non-algorithmic intellectual then tantalizes the imagination; the attendees proposed Deleuze as a possible candidate.
Inevitably the conversation turned to politics. The interests of the (mostly corporate) designers of algorithms were put into question, but then, in the spirit of the evening, the question was radically re-imagined: much as professional chess players now engage in so-called Centaur Matches, in which each human team cooperates with an AI, we might imagine a future in which human shepherds act as rudders to algorithms functioning as intellectuals, teachers, presidents... This is hardly science fiction: surveillance, the military, the economy... all are already, to some degree, “Centaur Matches”. The question of literacy was raised: as more and more of our societal infrastructure becomes computational, Java, Python, and other programming languages become a sort of cybernetic lingua franca. There is no doubt that code is becoming a language of power: we might imagine a time in which code is taught in public schools, a society in which code is an official language, perhaps taught in immersion programs. The evening ranged over a broad swath of topics and marked an expansive and exciting inauguration of the “Radical Futures” series.
2-What desirable futures of surveillance can we imagine?
In an age of WikiLeaks, Edward Snowden, and VPNs as standard fare, it has become a cliché to declare our society Orwellian or to invoke the panopticon in describing our state of sometimes intrusive global connectedness. As more and more of our behaviour becomes data ripe for corporate and state interests, and as technology affords ever greater windows of surveillance, there is no doubt that we fear losing our privacy. Simultaneously, our culture has become preoccupied with the image and with being seen: Instagram and Snapchat have overtaken text-based social media, we purchase Fitbits to monitor our health, and we indulge the court of public opinion to engage in moral performance for a digital audience. We are seemingly afraid of others’ privacy, and of being made private, of going unsurveilled. As we advance into a technological future, how will our simultaneous desire for and fear of surveillance unfold? What desirable futures of surveillance can we imagine?
Letters from the Future
There was a time when scholars wrote and thought. Today, in this world of infinite subroutines and ever deeper machine learning, scholarship has become algorithmic and genetic. At this moment in our academic history, iterations of thoughts, ideas, and structures appear mysteriously, triggered by scholars but created by the sentience of non-sentient machines.
The future is not something that happens to us. We create the future.