Silicon Valley rarely talks politics – except, perhaps, to discuss the quickest ways of disrupting it. On the rare occasions that its leaders do speak out, it is usually to disparage the homeless, celebrate colonialism or complain about the hapless city regulators who are out to strangle the fragile artisans who gave us Uber and Airbnb.
Thus it is puzzling that America’s tech elites have become the world’s loudest proponents of basic income – an old but radical idea that has been embraced, for very different reasons and in very different forms, by both left and right. From Marc Andreessen to Tim O’Reilly, Silicon Valley’s royalty seems intrigued by the prospect of handing out cash to ordinary citizens, regardless of whether they work or not.
Y Combinator, one of Silicon Valley’s premier startup incubators, has announced it wants to provide funding to a group of volunteers and hire a researcher – for five years, no less – to study the issue.
Albert Wenger, a partner at Union Square Ventures, a prominent venture capital firm, is so taken with the idea that he is currently at work on a book. So, why all the fuss – and in Silicon Valley, of all places?
First, there is the traditional libertarian argument against the intrusiveness and inefficiency of the welfare state – a problem that basic income, once combined with the full-blown dismantling of public institutions, might solve. Second, the coming age of automation might result in even more people losing their jobs – and the prospect of a guaranteed and unconditional basic income might reduce the odds of another Luddite uprising. Better to have everyone learning how to code, receiving basic income and hoping to meet an honest venture capitalist.
Third, the precarious nature of employment in the gig economy no longer looks as terrifying if you receive basic income of some kind. Driving for Uber, after all, could be just a hobby that occasionally yields some material benefits. Think fishing, but a bit more social. And who doesn’t like fishing?
Basic income, therefore, is often seen as the Trojan horse that would allow tech companies to position themselves as progressive, even caring – the good cop to Wall Street’s bad cop – while eliminating the hurdles that stand in the way of further expansion.
Goodbye to all those cumbersome institutions of the welfare state, employment regulations that guarantee workers’ rights or subversive attempts to question the status quo with regard to the ownership of data or the infrastructure that produces it.
And yet there is something else to Silicon Valley’s advocacy: the sudden realisation that, should it fail to define the horizons of the basic income debate, the public might eventually realise that the main obstacle in the way of this radical idea is none other than Silicon Valley itself.
To understand why, it is best to examine the most theoretically and technologically sophisticated version of the basic income argument.
This is the work of radical Italian economists – Carlo Vercellone, Andrea Fumagalli and Stefano Lucarelli – who for decades have been penning pungent critiques of “cognitive capitalism” – that is, the current stage of capitalism, characterised by the growing importance of cognitive labour and the declining importance of material production.
Unlike other defenders of basic income who argue that it is necessary on moral or social grounds, these economists argue that it makes sound economic sense during our transition to cognitive capitalism. It is a way to avoid structural instability – generated, among other things, by the increasing precariousness of work and growing income polarisation – and to improve the circulation of ideas (as well as their innovative potential) in the economy.
How so? First, it is a way to compensate workers for the work they do while not technically working – which, as we enter cognitive capitalism, often produces far more value than paid work. Think of Uber drivers who, in between trips, generate useful data that helps Uber make resource-allocation decisions.
Second, because much of our labour today is collective – do you know by how much your individual search improves Google’s search index? Or how much a line of code you contribute to a free software project enhances the overall product? – it is often impossible to determine the share of individual contribution in the final product. Basic income simply acknowledges that much of modern cognitive labour is social in character.
Finally, it is a way to ensure that some of the productivity gains associated with the introduction of new techniques for rationalising the work process – which used to be passed on to workers through mechanisms such as wage indexing – can still be passed on, even as collective bargaining and other forms of employment rights are weakened. This, in turn, could lead to higher investments and higher profits, creating a virtuous circle.
The cognitive capitalism argument for basic income is more complex than this crude summary suggests, but it rests on two further conditions.
First, that the welfare state, in a somewhat reformed form, must survive and flourish – it is a key social institution that, with its generous investments in health and education, gives us the freedom to be creative.
Second, that there must be a fundamental reform of the tax system to fund it – with taxes not just on financial transactions, but also on the use of instruments such as patents, trademarks and, increasingly, various rights claims over data that prevent the optimal utilisation of knowledge.
This more radical interpretation of the basic income agenda suggests that Silicon Valley, far from being its greatest champion, is its main enemy. It actively avoids paying taxes; it keeps finding new ways to estrange data from the users who produce it; it wants to destroy the welfare state, either by eliminating it or by replacing it with its own, highly privatised and highly individualised alternatives (think preventive health-tracking with Fitbit versus guaranteed free healthcare).
Besides, it colonises, usurps and commodifies whatever new avenues of genuine social cooperation – the much-maligned sharing economy – are opened up by the latest advances in communication.
In short, you can either have a radical agenda of basic income, where people are free to collaborate as they wish because they no longer have to work, or the kind of platform capitalism that seeks to turn everyone into a precarious entrepreneur. But you can’t have both.
In fact, Silicon Valley could easily take the first step towards the introduction of basic income: why not make us, the users, the owners of our own data? At the very minimum, it could help us to find alternative, non-commercial users of this data. At its most ambitious, one can imagine a mechanism whereby cities, municipalities and eventually nation states, starved of the data that now accrues almost exclusively to the big tech firms, would compensate citizens for their data with some kind of basic income, which might be either direct (cash) or indirect (free services such as transportation).
This, however, is not going to happen because data is the very asset that makes Silicon Valley impossible to disrupt – and it knows it. What we get instead is Silicon Valley’s loud, but empty, advocacy of an agenda it is aggressively working to suppress.
Somehow our tech elites want us to believe that governments will scrape enough cash together to make it happen. Who will pay for it, though? Clearly, it won’t be the radical moguls of Silicon Valley: they prefer to park their cash offshore.
Suddenly, it feels like 2000 again. Back then, surveillance programs like [Carnivore](https://en.wikipedia.org/wiki/Carnivore_%28software%29), Echelon, and Total Information Awareness helped spark a surge in electronic privacy awareness. Now, a decade later, the recent discovery of programs like [PRISM](https://en.wikipedia.org/wiki/PRISM_%28surveillance_program%29), Boundless Informant, and FISA orders is catalyzing renewed concern.
The programs of the past can be characterized as “proximate” surveillance, in which the government attempted to use technology to monitor communications directly. The programs of this decade mark the transition to “oblique” surveillance, in which the government more often just goes to the places where information has been accumulating on its own, such as email providers, search engines, social networks, and telecoms.
Both then and now, privacy advocates have typically run up against the same persistent objection: many individuals don’t understand why they should be concerned about surveillance if they have nothing to hide. The concern is even harder to articulate in the world of “oblique” surveillance, given that apologists will always frame our use of information-gathering services like a mobile phone plan or Gmail as a choice.
We’re All One Big Criminal Conspiracy
As James Duane, a professor at Regent Law School and former defense attorney, notes in his excellent lecture on why it is never a good idea to talk to the police:
Estimates of the current size of the body of federal criminal law vary. It has been reported that the Congressional Research Service cannot even count the current number of federal crimes. These laws are scattered in over 50 titles of the United States Code, encompassing roughly 27,000 pages. Worse yet, the statutory code sections often incorporate, by reference, the provisions and sanctions of administrative regulations promulgated by various regulatory agencies under congressional authorization. Estimates of how many such regulations exist are even less well settled, but the ABA thinks there are “[n]early 10,000.”
If the federal government can’t even count how many laws there are, what chance does an individual have of being certain that they are not acting in violation of one of them?
As Supreme Court Justice Breyer elaborates:
The complexity of modern federal criminal law, codified in several thousand sections of the United States Code and the virtually infinite variety of factual circumstances that might trigger an investigation into a possible violation of the law, make it difficult for anyone to know, in advance, just when a particular set of statements might later appear (to a prosecutor) to be relevant to some such investigation.
For instance, did you know that it is a federal crime to be in possession of a lobster under a certain size? It doesn’t matter if you bought it at a grocery store, if someone else gave it to you, if it’s dead or alive, if you found it after it died of natural causes, or even if you killed it while acting in self defense. You can go to jail because of a lobster.
If the federal government had access to every email you’ve ever written and every phone call you’ve ever made, it’s almost certain that they could find something you’ve done which violates a provision in the 27,000 pages of federal statutes or 10,000 administrative regulations. You probably do have something to hide, you just don’t know it yet.
We Should Have Something To Hide
Over the past year, there have been a number of headline-grabbing legal changes in the US, such as the legalization of marijuana in CO and WA, as well as the legalization of same-sex marriage in a growing number of US states.
As a majority of people in these states apparently favor these changes, advocates for the US democratic process cite these legal victories as examples of how the system can provide real freedoms to those who engage with it through lawful means. And it’s true, the bills did pass.
What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.
The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.
Imagine if there were an alternate dystopian reality where law enforcement was 100% effective, such that any potential law offenders knew they would be immediately identified, apprehended, and jailed. If perfect law enforcement had been a reality in MN, CO, and WA since their founding in the 1850s, it seems quite unlikely that these recent changes would have ever come to pass. How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same sex marriage should be permitted, if nobody had ever seen or participated in a same sex relationship?
The cornerstone of liberal democracy is the notion that free speech allows us to create a marketplace of ideas, from which we can use the political process to collectively choose the society we want. Most critiques of this system tend to focus on the ways in which this marketplace of ideas isn’t totally free, such as the ways in which some actors have substantially more influence over what information is distributed than others.
The more fundamental problem, however, is that living in an existing social structure creates a specific set of desires and motivations in a way that merely talking about other social structures never can. The world we live in influences not just what we think, but how we think, in a way that a discourse about other ideas isn’t able to. Any teenager can tell you that life’s most meaningful experiences aren’t the ones you necessarily desired, but the ones that actually transformed your very sense of what you desire.
We can only desire based on what we know. It is our present experience of what we are and are not able to do that largely determines our sense for what is possible. This is why same sex relationships, in violation of sodomy laws, were a necessary precondition for the legalization of same sex marriage. This is also why those maintaining positions of power will always encourage the freedom to talk about ideas, but never to act.
Technology And Law Enforcement
Law enforcement used to be harder. If a law enforcement agency wanted to track someone, it required physically assigning a law enforcement agent to follow that person around. Tracking everybody would be inconceivable, because it would require having as many law enforcement agents as people.
Today things are very different. Almost everyone carries a tracking device (their mobile phone) at all times, which reports their location to a handful of telecoms, which are required by law to provide that information to the government. Tracking everyone is no longer inconceivable, and is in fact happening all the time. We know that Sprint alone responded to 8 million law enforcement requests for real time customer location just in 2008. They got so many requests that they built an automated system to handle them.
Combined with ballooning law enforcement budgets, this trend towards automation, which includes things like license plate scanners and domestically deployed drones, represents a significant shift in the way that law enforcement operates.
Police already abuse the immense power they have, but if everyone’s every action were being monitored, and everyone technically violates some obscure law at some time, then punishment becomes purely selective. Those in power will essentially have what they need to punish anyone they’d like, whenever they choose, as if there were no rules at all.
Even ignoring this obvious potential for new abuse, it’s also substantially closer to that dystopian reality of a world where law enforcement is 100% effective, eliminating the possibility to experience alternative ideas that might better suit us.
Compromise
Some will say that it’s necessary to balance privacy against security, and that it’s important to find the right compromise between the two. Even if you believe that, a good negotiator doesn’t begin a conversation with someone whose position is at the exact opposite extreme by leading with concessions.
And that’s exactly what we’re dealing with. Not a balance of forces which are looking for the perfect compromise between security and privacy, but an enormous steam roller built out of careers and billions in revenue from surveillance contracts and technology. To negotiate with that, we can’t lead with concessions, but rather with all the opposition we can muster.
All The Opposition We Can Muster
Even if you believe that voting is more than a selection of meaningless choices designed to mask the true lack of agency we have, there is a tremendous amount of money and power and influence on the other side of this equation. So don’t just vote or petition.
To the extent that we’re “from the internet,” we have a certain amount of power of our own that we can leverage within this domain. It is possible to develop user-friendly technical solutions that would stymie this type of surveillance. I help work on Open Source security and privacy apps at Open Whisper Systems, but we all have a long way to go. If you’re concerned, please consider finding some way to directly oppose this burgeoning worldwide surveillance industry (we could use help at Open Whisper Systems!). It’s going to take all of us.
One day in the early 1980s, I was flipping through the TV channels, when I stopped at a news report. The announcer was grey-haired. His tone was urgent. His pronouncement was dire: between the war in the Middle East, famine in Africa, AIDS in the cities, and communists in Afghanistan, it was clear that the Four Horsemen of the Apocalypse were upon us. The end had come.
We were Methodists and I’d never heard this sort of prediction. But to my grade-school mind, the evidence seemed ironclad, the case closed. I looked out the window and could hear the drumming of hoof beats.
Life went on, however, and those particular horsemen went out to pasture. In time, others broke loose, only to slow their stride as well. Sometimes the end seemed near; at other times it would recede. But over the years, I began to see it wasn’t the end that was close. It was our dread of it. The apocalypse wasn’t coming: it was always with us. It arrived in a stampede of our fears, be they nuclear or biological, religious or technological.
In the years since, I watched this drama play out again and again, both in closed communities such as Waco and Heaven’s Gate, and in the larger world with our panics over SARS, swine flu, and Y2K. In the past, these fears made for some of our most popular fiction. The alien invasions in H G Wells’s War of the Worlds (1898); the nuclear winter in Nevil Shute’s On the Beach (1957); God’s wrath in the Left Behind series of books, films and games. In most versions, the world ended because of us, but these were horrors that could be stopped, problems that could be solved.
But today something is different. Something has changed. Judging from its modern incarnation in fiction, a new kind of apocalypse is upon us, one that is both more compelling and more terrifying. Today our fears are broader, deeper, woven more tightly into our daily lives, which makes it feel like the seeds of our destruction are all around us. We are more afraid, but less able to point to a single source for our fear. At the root is the realisation that we are part of something beyond our control.
I noticed this change recently when I found myself reading almost nothing but post-apocalyptic fiction, of which there has been an unprecedented outpouring. I couldn’t seem to get enough. I tore through one after another, from the tenderness and brutality of Peter Heller’s The Dog Stars (2013) to the lonely wandering of Emily St John Mandel’s survivors in Station Eleven (2014), to the magical horrors of Benjamin Percy’s The Dead Lands (2015). There were the remnants of humanity trapped in the giant silos of Hugh Howey’s Wool (2013), the bizarre biotech of Paolo Bacigalupi’s The Windup Girl (2009), the desolate realism of Cormac McCarthy’s The Road (2006).
I read these books like my life depended on it. It was impossible to look away from the ruins of our civilisation. There seemed to be no end to them, with nearly every possible depth being plumbed, from Tom Perrotta’s The Leftovers (2011), about the people not taken by the rapture, to Nick Holdstock’s The Casualties (2015), about people’s lives just before the apocalypse. The young-adult aisle was filled with similar books, such as Veronica Roth’s Divergent (2011), James Dashner’s The Maze Runner (2009) and Michael Perry’s The Scavengers (2014). Movie screens were alight with the apocalyptic visions of Snowpiercer (2013), Mad Max: Fury Road (2015), The Hunger Games (2012-15), Z for Zachariah (2015). There was also apocalyptic poetry in Sara Eliza Johnson’s Bone Map (2014), apocalyptic essays in Joni Tevis’s The World Is on Fire (2015), apocalyptic non-fiction in Alan Weisman’s The World Without Us (2007). Even the academy was on board with dense parsings, such as Eugene Thacker’s In the Dust of This Planet: Horror of Philosophy (2010) and Samuel Scheffler’s Death and the Afterlife (2013).
In the annals of eschatology, we are living in a golden age. The end of the world is on everyone’s mind. Why now? In the recent past we were arguably much closer to the end – just a few nuclear buttons had to be pushed.
The current wave of anxiety might be obvious on the surface, but it runs much deeper. It’s a feeling I’ve had for a long time, and one that has been building over the years. The first time I remember it was when I lived in a house next to a 12-lane freeway. Sometimes I would stand on the overpass and watch the cars flow underneath. They never slowed. They never stopped. There was never a time when the road was empty, when there were no cars driving on it.
When I tried to wrap my mind around this endlessness, it filled me with a kind of panic. It felt like something was careening out of control. But they were just cars. I drove one myself every day. It made no sense. It was like there was something my mind was trying to reach around, but couldn’t.
This same ungraspable feeling has hit me at odd times since: on a train across Hong Kong’s New Territories through the endless apartment towers. In an airplane rising over the Midwest, watching the millions of small houses and yards merge into a city, a state, a country. Seeing dumpster after dumpster being carried off from a construction site to a pile growing somewhere.
When I began my apocalyptic binge, I could channel that same feeling and let it run all the way through. It echoed through those stories, through the dead landscapes. And now after reading so many, I believe I am starting to understand the nature of this new fear.
Humans have always been an organised species. We have always functioned as a group, as something larger than ourselves. But in the recent past, the scale of that organisation has grown so much, the pace of that growth is so fast, the connective tissue between us so dense, that there has been a shift of some kind. Namely, we have become so powerful that some scientists argue we have entered a new era, the Anthropocene, in which humans are a geological force. That feeling, that panic, comes from those moments when this fact is unavoidable. It comes from being unable to not see what we’ve become – a planet-changing superorganism. It is from the realisation that I am part of it.
Most days, I don’t feel like I’m part of anything with much power to create and destroy. Day to day, my own life feels chaotic and hard, trying to collect money I’m owed, or get my car fixed, or pay for health insurance, or feed my kids. Most of the time, making it through the day feels like a victory, not like I’m playing any part in a larger drama, or that the errands I’m running, the things I’m buying, the electricity I’m wasting could be bringing about our doom.
Yet apocalyptic fictions of the current wave feed off precisely this fear: the feeling that we are part of something over which we have no control, of which we have no real choice but to keep being part. The bigger it grows, the more we rely on it, the deeper the anxiety becomes. It is the curse of being a self-aware piece of a larger puzzle, of an emergent consciousness in a larger emergent system. It is as hard to fathom as the colony is to the ant.
Emergence, or the way that complex systems come from simple parts, is a well-known phenomenon in science and nature. It is how everything from slime moulds to cities to our sense of self arises. It is how bees become a hive, how cells become an organism, how a brain becomes a mind. And it is how humans become humanity.
But what’s less well-known is that there are two ways of interpreting emergence. The first is known as weak emergence, which is more intuitive. In this view you should be able to trace the lines of causation from the bottom of a system all the way to the top. In looking at an ant, you should detect the making of the colony. In examining brain cells, you should find the self. In this view, the neuroscientist Michael Gazzaniga explains in his book Who’s In Charge? Free Will and the Science of the Brain (2011), ‘the emergent property is reducible to its individual components’.
But another view is called strong emergence, in which the new system takes on qualities the parts don’t have. In strong emergence, the new system undergoes what Gazzaniga called a ‘phase shift’, not unlike the way that water changes to ice. They are made of the same stuff, but behave according to different rules. The emergent system might not be more than the sum of its parts, but it is different from them.
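For a concrete, minimal picture of how complex behaviour arises from simple parts, consider a sketch of Conway’s Game of Life (my illustration, not drawn from the texts above): each cell follows one trivial local rule, yet a ‘glider’ that travels across the grid exists only at the level of the whole system – the colony, not the ant.

```python
# A minimal sketch of emergence: Conway's Game of Life.
# Each cell obeys one trivial local rule, yet the travelling "glider"
# exists only at the level of the whole grid.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells that are alive."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):    # after four generations the same shape reappears, shifted diagonally
    cells = step(cells)
print(sorted(cells))  # the glider, translated by one cell in each direction
```

Whether that glider is fully ‘reducible to its individual components’ or has undergone something like a phase shift is exactly what the two interpretations dispute.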
Strong emergence has been grudgingly accepted in physics, a field where quantum mechanics and general relativity have never been reconciled, and where they might never be, because of a phase shift: physics at our level emerges from below, but changes once it does. Quantum mechanics and general relativity operate at different levels of organisation. They work according to different rules.
Whenever I think about reconciling my own life with that of my species, I have a similar feeling. My life depends on technologies I don’t understand, signals I can’t see, systems I can’t perceive. I don’t understand how any of it works, how I could change it, or how it can last. It feels like peering across some chasm, like I am part of something I cannot quite grasp, like there has been a phase shift from humans struggling to survive to humanity struggling to survive our success.
The problems we face will not be fixed at the level of the individual life. We all know this because none of us have changed our own lives anywhere near enough to make a difference. Where would we start? With our commute? With candles? Life is already hard. Solutions will need to be implemented at a higher level of organisation. We fear this. We know it, but we have no idea what those solutions might look like. Hence the creeping sense of doom.
‘It doesn’t make sense,’ says a character in Mandel’s Station Eleven. ‘Are we supposed to believe that civilisation has just come to an end?’
Another responds: ‘Well, it was always a little fragile, wouldn’t you say?’
This is what our fiction is telling us. This is what makes it so mesmerising, so satisfying. In our stories of the post-apocalypse, the dilemma is resolved, the fragility laid bare. In these, humans are both villain and hero, disease and cure. Our doom is our salvation. In our books at least, humanity’s destruction is also its redemption.
Standing on that bridge now, I see even more cars than ever before. Beneath the roar of engines is the sound of hoof beats. As they draw closer, the old feeling rises up, but I know now that it comes not just from a fear of the end itself, but from the fear of knowing that the rider is me.
In 2007, right before the first iPhone launched, I asked Steve Jobs the obvious question: The design of the iPhone was based on discarding every physical interface element except for a touchscreen. Would users be willing to give up the then-dominant physical keypads for a soft keyboard?
His answer was brusque: “They’ll learn.”
Steve turned out to be right. Today, touchscreens are ubiquitous and seem normal, and other interfaces are emerging. An entire generation is now coming of age with a completely different tactile relationship to information, validating all over again Marshall McLuhan’s observation that “the medium is the message”.
A great deal of product development is based on the assumption that products must adapt to unchanging human needs or risk being rejected. Yet, time and again, people adapt in unpredictable ways to get the most out of new tech. Creative people tinker to figure out the most interesting applications, others build on those, and entire industries are reshaped.
People change, then forget that they changed, and act as though they always behaved a certain way and could never change again. Because of this, unexpected changes in human behavior are often dismissed as regressive rather than as potentially intelligent adaptations.
But change happens anyway. “Software is eating the world” is the most recent historic transformation of this sort.
In 2014, a few of us invited Venkatesh Rao to spend the year at Andreessen Horowitz as a consultant to explore the nature of such historic tech transformations. In particular, we set out to answer the question: Between both the breathless and despairing extremes of viewing the future, could an intellectually rigorous case be made for pragmatic optimism?
As this set of essays argues — many of them inspired by a series of intensive conversations Venkat and I had — there is indeed such a case, and it follows naturally from the basic premise that people can and do change. To “break smart” is to adapt intelligently to new technological possibilities.
With his technological background, satirical eye, and gift for deep and different takes (as anyone who follows his Ribbonfarm blog knows!), there is perhaps nobody better suited than Venkat for telling a story of the future as it breaks smart from the past.
Whether you’re a high school kid figuring out a career or a CEO attempting to navigate the new economy, Breaking Smart should be on your go-to list of resources for thinking about the future, even as you are busy trying to shape it.
Something momentous happened around the year 2000: a major new soft technology came of age. After written language and money, software is only the third major soft technology to appear in human civilization. Fifteen years into the age of software, we are still struggling to understand exactly what has happened. Marc Andreessen’s now-familiar line, software is eating the world, hints at the significance, but we are only just beginning to figure out how to think about the world in which we find ourselves.
Only a handful of general-purpose technologies1 – electricity, steam power, precision clocks, written language, token currencies, iron metallurgy and agriculture among them – have impacted our world in the sort of deeply transformative way that deserves the description eating. And only two of these, written language and money, were soft technologies: seemingly ephemeral, but capable of being embodied in a variety of specific physical forms. Software has the same relationship to any specific sort of computing hardware as money does to coins or credit cards or writing to clay tablets and paper books.
But only since about 2000 has software acquired the sort of unbridled power, independent of hardware specifics, that it possesses today. For the first half century of modern computing after World War II, hardware was the driving force. The industrial world mostly consumed software to meet existing needs, such as tracking inventory and payroll, rather than being consumed by it. Serious technologists largely focused on solving the clear and present problems of the industrial age rather than exploring the possibilities of computing, proper.
Sometime around the dot com crash of 2000, though, the nature of software, and its relationship with hardware, underwent a shift. It was a shift marked by accelerating growth in the software economy and a peaking in the relative prominence of hardware.2 The shift happened within the information technology industry first, and then began to spread across the rest of the economy.
But the economic numbers only hint at3 the profundity of the resulting societal impact. As a simple example, a 14-year-old teenager today (too young to show up in labor statistics) can learn programming, contribute significantly to open-source projects, and become a talented professional-grade programmer before age 18. This is breaking smart: an economic actor using early mastery of emerging technological leverage — in this case a young individual using software leverage — to wield disproportionate influence on the emerging future.
Only a tiny fraction of this enormously valuable activity — the cost of a laptop and an Internet connection — would show up in standard economic metrics. Based on visible economic impact alone, the effects of such activity might even show up as a negative, in the form of technology-driven deflation. But the hidden economic significance of such an invisible story is at least comparable to that of an 18-year-old paying $100,000 over four years to acquire a traditional college degree. In the most dramatic cases, it can be as high as the value of an entire industry. The music industry is an example: a product created by a teenager, Shawn Fanning’s Napster, triggered a cascade of innovation whose primary visible impact has been the vertiginous decline of big record labels, but whose hidden impact includes an explosion in independent music production and rapid growth in the live-music sector.4
Software eating the world is a story of the seen and the unseen: small, measurable effects that seem underwhelming or even negative, and large invisible and positive effects that are easy to miss, unless you know where to look.5
Today, the significance of the unseen story is beginning to be widely appreciated. But as recently as fifteen years ago, when the main act was getting underway, even veteran technologists were being blindsided by the subtlety of the transition to software-first computing.
Perhaps the subtlest element had to do with Moore’s Law, the famous 1965 observation by Intel co-founder Gordon Moore that the density with which transistors can be packed into a silicon chip doubles every 18 months. By 2000, even as semiconductor manufacturing firms began running into the fundamental limits of Moore’s Law, chip designers and device manufacturers began to figure out how to use Moore’s Law to drive down the cost and power consumption of processors rather than driving up raw performance. The results were dramatic: low-cost, low-power mobile devices, such as smartphones, began to proliferate, vastly expanding the range of what we think of as computers. Coupled with reliable and cheap cloud computing infrastructure and mobile broadband, the result was a radical increase in technological potential. Computing could, and did, become vastly more accessible, to many more people in every country on the planet, at radically lower cost and expertise levels.
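To make the arithmetic of that shift concrete, here is a minimal sketch (assuming the 18-month doubling period cited above and an arbitrary 2000 baseline): the same exponential budget can be spent on more raw performance at constant cost, or on a fixed workload at steadily falling cost and power.

```python
# Illustrative sketch only: assumes the 18-month doubling period quoted above
# and a 2000 baseline chosen arbitrarily for the example.
BASELINE_YEAR = 2000
DOUBLING_PERIOD_YEARS = 1.5

def density_multiple(year):
    """Transistor-density multiple relative to the baseline year."""
    return 2 ** ((year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (2000, 2005, 2010, 2015):
    m = density_multiple(year)
    # The same transistor budget can be read two ways: more raw performance at
    # constant cost, or the same workload at roughly 1/m of the cost and power.
    print(f"{year}: ~{m:,.0f}x transistors per chip -> fixed workload at ~1/{m:,.0f} cost/power")
```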
One result of this increased potential was that technologists began to grope towards a collective vision commonly called the Internet of Things. It is a vision based on the prospect of processors becoming so cheap, miniaturized and low-powered that they can be embedded, along with power sources, sensors and actuators, in just about anything, from cars and light bulbs to clothing and pills. Estimates of the economic potential of the Internet of Things – of putting a chip and software into every physical item on Earth – vary from $2.7 trillion to over $14 trillion: comparable to the entire GDP of the United States today.6
By 2010, it had become clear that given connectivity to nearly limitless cloud computing power and advances in battery technologies, programming was no longer something only a trained engineer could do to a box connected to a screen and a keyboard. It was something even a teenager could do, to almost anything.
The rise of ridesharing illustrates the process particularly well.
Only a few years ago services like Uber and Lyft seemed like minor enhancements to the process of procuring and paying for cab rides. Slowly, it became obvious that ridesharing was eliminating the role of human dispatchers and lowering the level of expertise required of drivers. As data accumulated through GPS tracking and ratings mechanisms, it further became clear that trust and safety could increasingly be underwritten by data instead of brand promises and regulation. This made it possible to dramatically expand driver supply, and lower ride costs by using underutilized vehicles already on the roads.
As the ridesharing sector took root and grew in city after city, second-order effects began to kick in. The increased convenience enables many more urban dwellers to adopt carless lifestyles. Increasing supply lowers costs, and increases accessibility for people previously limited to inconvenient public transportation. And as the idea of the carless lifestyle began to spread, urban planners began to realize that century-old trends like suburbanization, driven in part by car ownership, could no longer be taken for granted.
The ridesharing future we are seeing emerge now is even more dramatic: the higher utilization of cars leads to lower demand for cars, and frees up resources for other kinds of consumption. Individual lifestyle costs are being lowered and insurance models are being reimagined. The future of road networks must now be reconsidered in light of greener and more efficient use of both vehicles and roads.
Meanwhile, the emerging software infrastructure created by ridesharing is starting to have a cascading impact on businesses, such as delivery services, that rely on urban transportation and logistics systems. And finally, by proving many key component technologies, the rideshare industry is paving the way for the next major development: driverless cars.
These developments herald a major change in our relationship to cars.
To traditionalists, particularly in the United States, the car is a motif for an entire way of life, and the smartphone just an accessory. To early adopters who have integrated ridesharing deeply into their lives, the smartphone is the lifestyle motif, and the car is the accessory. To generations of Americans, owning a car represented freedom. To the next generation, not owning a car will represent freedom.
And this dramatic reversal in our relationships to two important technologies – cars and smartphones – is being catalyzed by what was initially dismissed as “yet another trivial app.”
Similar impact patterns are unfolding in sector after sector. Prominent early examples include the publishing, education, cable television, aviation, postal mail and hotel sectors. The impact is more than economic. Every aspect of the global industrial social order is being transformed by the impact of software.
This has happened before of course: money and written language both transformed the world in similarly profound ways. Software, however, is more flexible and powerful than either.
Writing is very flexible: we can write with a finger on sand or with an electron beam on a pinhead. Money is even more flexible: anything from cigarettes in a prison to pepper and salt in the ancient world to modern fiat currencies can work. But software can increasingly go wherever writing and money can go, and beyond. Software can also eat both, and take them to places they cannot go on their own.
Partly as a consequence of how rarely soft, world-eating technologies erupt into human life, we have been systematically underestimating the magnitude of the forces being unleashed by software. While it might seem like software is constantly in the news, what we have already seen is dwarfed by what still remains unseen.
The effects of this widespread underestimation are dramatic. The opportunities presented by software are expanding, and the risks of being caught on the wrong side of the transformation are dramatically increasing. Those who have correctly calibrated the impact of software are winning. Those who have miscalibrated it are losing.
And the winners are not winning by small margins or temporarily either. Software-fueled victories in the past decade have tended to be overwhelming and irreversible faits accompli. And this appears to be true at all levels from individuals to businesses to nations. Even totalitarian dictatorships seem unable to resist the transformation indefinitely.
So to understand how software is eating the world, we have to ask why we have been systematically underestimating its impact, and how we can recalibrate our expectations for the future.
There are four major reasons we underestimate the increasing power of software. Three of these reasons drove similar patterns of miscalibration in previous technological revolutions, but one is unique to software.
First, as futurist Roy Amara noted, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Technological change unfolds exponentially, like compound interest, and we humans seem wired to think about exponential phenomena in flawed ways.1 In the case of software, we expected too much too soon from 1995 to 2000, leading to the crash. Now in 2015, many apparently silly ideas from 2000, such as home-delivery of groceries ordered on the Internet, have become a mundane part of everyday life in many cities. But the element of surprise has dissipated, so we tend to expect too little, too far out, and are blindsided by revolutionary change in sector after sector. Change that often looks trivial or banal on the surface, but turns out to have been profound once the dust settles.
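A toy calculation, with purely hypothetical numbers, shows why both errors come naturally: a forecaster who spreads a ten-year exponential gain evenly across the decade overshoots in the first couple of years and undershoots by an order of magnitude by year twenty.

```python
# Toy numbers only (hypothetical 40% annual growth of some capability index),
# illustrating Amara's law: a straight-line forecast anchored to a ten-year
# outcome over-estimates the short run and under-estimates the long run.
GROWTH = 1.4
HORIZON = 10

def actual(years):
    return GROWTH ** years

def straight_line_guess(years):
    # Spread the ten-year exponential gain evenly across the decade.
    return 1 + (actual(HORIZON) - 1) * (years / HORIZON)

for t in (1, 2, 5, 10, 20):
    print(f"year {t:>2}: actual {actual(t):7.1f}x vs straight-line guess {straight_line_guess(t):7.1f}x")
```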
Second, we have shifted gears from what economic historian Carlota Perez calls the installation phase of the software revolution, focused on basic infrastructure such as operating systems and networking protocols, to a deployment phase focused on consumer applications such as social networks, ridesharing and ebooks. In her landmark study of the history of technology,2 Perez demonstrates that the shift from installation to deployment phase for every major technology is marked by a chaotic transitional phase of wars, financial scandals and deep anxieties about civilizational collapse. One consequence of the chaos is that attention is absorbed by transient crises in economic, political and military affairs, and the apocalyptic fears and utopian dreams they provoke. As a result, momentous but quiet change passes unnoticed.
Third, a great deal of the impact of software today appears in a disguised form. The genomics and nanotechnology sectors appear to be rooted in biology and materials science. The “maker” movement around 3D printing and drones appears to be about manufacturing and hardware. Dig a little deeper though, and you invariably find that the action is being driven by possibilities opened up by software more than fundamental new discoveries in those physical fields. The crashing cost of genome-sequencing is primarily due to computing, with innovations in wet chemistry playing a secondary role. Financial innovations leading to cheaper insurance and credit are software innovations in disguise. The Nest thermostat achieves energy savings not by exploiting new discoveries in thermodynamics, but by using machine learning algorithms in a creative way. The potential of this software-driven model is what prompted Google, a software company, to pay $3B to acquire Nest: a company that on the surface appeared to have merely invented a slightly better mousetrap.
These three reasons for under-estimating the power of software had counterparts in previous technology revolutions. The railroad revolution of the nineteenth century also saw a transitional period marked by systematically flawed expectations, a bloody civil war in the United States, and extensive patterns of disguised change — such as the rise of urban living, grocery store chains, and meat consumption — whose root cause was cheap rail transport.
The fourth reason we underestimate software, however, is a unique one: it is a revolution that is being led, in large measure, by brash young kids rather than sober adults.3
This is perhaps the single most important thing to understand about the revolution that we have labeled software eating the world: it is being led by young people, and proceeding largely without adult supervision (though with many adults participating). This has unexpected consequences.
As in most periods in history, older generations today run or control all key institutions worldwide. They are better organized and politically more powerful. In the United States for example, the AARP is perhaps the single most influential organization in politics. Within the current structure of the global economy, older generations can, and do, borrow unconditionally from the future at the expense of the young and the yet-to-be-born.
But unlike most periods in history, young people today do not have to either “wait their turn” or directly confront a social order that is systematically stacked against them. Operating in the margins by a hacker ethos — a problem solving sensibility based on rapid trial-and-error and creative improvisation — they are able to use software leverage and loose digital forms of organization to create new economic, social and political wealth. In the process, young people are indirectly disrupting politics and economics and creating a new parallel social order. Instead of vying for control of venerable institutions that have already weathered several generational wars, young people are creating new institutions based on the new software and new wealth. These improvised but highly effective institutions repeatedly emerge out of nowhere, and begin accumulating political and economic power. Most importantly, they are relatively invisible. Compared to the visible power of youth counterculture in the 1960s for instance, today’s youth culture, built around messaging apps and photo-sharing, does not seem like a political force to reckon with. This culture also has a decidedly commercial rather than ideological character, as a New York Times writer (rather wistfully) noted in a 2011 piece appropriately titled Generation Sell.4 Yet, today’s youth culture is arguably more powerful as a result, representing as it does what Jane Jacobs called the “commerce syndrome” of values, rooted in pluralistic economic pragmatism, rather than the opposed “guardian syndrome” of values, rooted in exclusionary and authoritarian political ideologies.
Chris Dixon captured this guerrilla pattern of the ongoing shift in political power with a succinct observation: what the smartest people do on the weekend is what everyone else will do during the week in ten years.
The result is strange: what in past eras would have been a classic situation of generational conflict based on political confrontation, is instead playing out as an economy-wide technological disruption involving surprisingly little direct political confrontation. Movements such as #Occupy pale in comparison to their 1960s counterparts, and more importantly, in comparison to contemporary youth-driven economic activity.
This does not mean, of course, that there are no political consequences. Software-driven transformations directly disrupt the middle-class life script, upon which the entire industrial social order is based. In its typical aspirational form, the traditional script is based on 12 years of regimented industrial schooling, an additional 4 years devoted to economic specialization, lifetime employment with predictable seniority-based promotions, and middle-class lifestyles. Though this script began to unravel as early as the 1970s, even for the minority (white, male, straight, abled, native-born) who actually enjoyed it, the social order of our world is still based on it. Instead of software, the traditional script runs on what we might call paperware: bureaucratic processes constructed from the older soft technologies of writing and money. Instead of the hacker ethos of flexible and creative improvisation, it is based on the credentialist ethos of degrees, certifications, licenses and regulations. Instead of being based on achieving financial autonomy early, it is based on taking on significant debt (for college and home ownership) early.
It is important to note though, that this social order based on credentialism and paperware worked reasonably well for almost a century between approximately 1870 and 1970, and created a great deal of new wealth and prosperity. Despite its stifling effects on individualism, creativity and risk-taking, it offered its members a broader range of opportunities and more security than the narrow agrarian provincialism it supplanted. For all its shortcomings, lifetime employment in a large corporation like General Motors, with significantly higher standards of living, was a great improvement over pre-industrial rural life.
But by the 1970s, industrialization had succeeded so wildly, it had undermined its own fundamental premises of interchangeability in products, parts and humans. As economists Jeremy Greenwood and Mehmet Yorukoglu5 argue in a provocative paper titled 1974, that year arguably marked the end of the industrial age and the beginning of the information age. Computer-aided industrial automation was making ever-greater scale possible at ever-lower costs. At the same time, variety and uniqueness in products and services were becoming increasingly valuable to consumers in the developed world. Global competition, especially from Japan and Germany, began to directly threaten American industrial leadership. This began to drive product differentiation, a challenge that demanded originality rather than conformity from workers. Industry structures that had taken shape in the era of mass-produced products, such as Ford’s famous black Model T, were redefined to serve the demand for increasing variety. The result was arguably a peaking in all aspects of the industrial social order based on mass production and interchangeable workers roughly around 1974, a phenomenon Balaji Srinivasan has dubbed peak centralization.6
One way to understand the shift from credentialist to hacker modes of social organization, via young people acquiring technological leverage, is through the mythological tale of Prometheus stealing fire from the heavens for human use.
The legend of Prometheus has been used as a metaphor for technological progress at least since Mary Shelley’s Frankenstein; or, The Modern Prometheus. Technologies capable of eating the world typically have a Promethean character: they emerge within a mature social order (a metaphoric “heaven” that is the preserve of older elites), but their true potential is unleashed by an emerging one (a metaphoric “earth” comprising creative marginal cultures, in particular youth cultures), which gains relative power as a result. Software as a Promethean technology emerged in the heart of the industrial social order, at companies such as AT&T, IBM and Xerox, universities such as MIT and Stanford, and government agencies such as DARPA and CERN. But its Promethean character was unleashed, starting with the early hacker movement, on the open Internet and through Silicon Valley-style startups.
As a result of a Promethean technology being unleashed, younger and older face a similar dilemma: should I abandon some of my investments in the industrial social order and join the dynamic new social order, or hold on to the status quo as long as possible?
The decision is obviously easier if you are younger, with much less to lose. But many who are young still choose the apparent safety of the credentialist scripts of their parents. These are what David Brooks called Organization Kids (after William Whyte’s 1956 classic, The Organization Man7): those who bet (or allow their “Tiger” parents8 to bet on their behalf) on the industrial social order. If you are an adult over 30, especially one encumbered with significant family obligations or debt, the decision is harder.
Those with a Promethean mindset and an aggressive approach to pursuing a new path can break out of the credentialist life script at any age. Those who are unwilling or unable to do so are holding on to it more tenaciously than ever.
Young or old, those who are unable to adopt the Promethean mindset end up defaulting to what we call a pastoral mindset: one marked by yearning for lost or unattained utopias. Today many still yearn for an updated version of romanticized9 1950s American middle-class life for instance, featuring flying cars and jetpacks.
How and why you should choose the Promethean option, despite its disorienting uncertainties and challenges, is the overarching theme of Season 1. It is a choice we call breaking smart, and it is available to almost everybody in the developed world, and a rapidly growing number of people in the newly-connected developing world.
These individual choices matter.
As scholars such as Daron Acemoglu and James Robinson10 and Joseph Tainter11 have argued, it is the nature of human problem-solving institutions, rather than the nature of the problems themselves, that determines whether societies fail or succeed. Breaking smart at the level of individuals is what leads to organizations and nations breaking smart, which in turn leads to societies succeeding or failing.
Today, the future depends on increasing numbers of people choosing the Promethean option. Fortunately, that is precisely what is happening.
In this season of Breaking Smart, I will not attempt to predict the what and when of the future. In fact, a core element of the hacker ethos is the belief that being open to possibilities and embracing uncertainty is necessary for the actual future to unfold in positive ways. Or as computing pioneer Alan Kay put it, inventing the future is easier than predicting it.
And this is precisely what tens of thousands of small teams — small enough to be fed by no more than two pizzas, by a rule of thumb made famous by Amazon founder Jeff Bezos — are doing across the world today.
Prediction as a foundation for facing the future involves risks that go beyond simply getting it wrong. The bigger risk is getting attached to a particular what and when, a specific vision of a paradise to be sought, preserved or reclaimed. This is often a serious philosophical error — to which pastoralist mindsets are particularly prone — that seeks to limit the future.
But while I will avoid dwelling too much on the what and when, I will unabashedly advocate for a particular answer to how. Thanks to virtuous cycles already gaining in power, I believe almost all effective responses to the problems and opportunities of the coming decades will emerge out of the hacker ethos, despite its apparent peripheral role today. The credentialist ethos of extensive planning and scripting towards deterministic futures will play a minor supporting role at best. Those who adopt a Promethean mindset and break smart will play an expanding role in shaping the future. Those who adopt a pastoral mindset and retreat towards tradition will play a diminishing role, in the shrinking number of economic sectors where credentialism is still the more appropriate model.
The nature of problem-solving in the hacker mode, based on trial-and-error, iterative improvement, testing and adaptation (both automated and human-driven), allows us to identify four characteristics of how the future will emerge.
First, despite current pessimism about the continued global leadership of the United States, the US remains the single largest culture that embodies the pragmatic hacker ethos, nowhere more so than in Silicon Valley. The United States in general, and Silicon Valley in particular, will therefore continue to serve as the global exemplar of Promethean technology-driven change. And as virtual collaboration technologies improve, the Silicon Valley economic culture will increasingly become the global economic culture.
Second, the future will unfold through very small groups having very large impacts. One piece of wisdom in Silicon Valley today is that the core of the best software is nearly always written by teams of fewer than a dozen people, not by huge committee-driven development teams. This means increasing well-being for all will be achieved through small two-pizza teams beating large ones. Scale will increasingly be achieved via loosely governed ecosystems of additional participants creating wealth in ways that are hard to track using traditional economic measures. Instead of armies of Organization Men and Women employed within large corporations, and Organization Kids marching in at one end and retirees marching out at the other, the world of work will be far more diverse.
Third, the future will unfold through a gradual and continuous improvement of well-being and quality of life across the world, not through sudden emergence of a utopian software-enabled world (or sudden collapse into a dystopian world). The process will be one of fits and starts, toys and experiments, bugginess and brokenness. But the overall trend will be upwards, towards increasing prosperity for all.
Fourth, the future will unfold through rapid declines in the costs of solutions to problems, including in heavily regulated sectors historically resistant to cost-saving innovations, such as healthcare and higher education. In improvements wrought by software, poor and expensive solutions have generally been replaced by superior and cheaper (often free) solutions, and these substitution effects will accelerate.
Putting these four characteristics together, we get a picture of messy, emergent progress that economist Bradford DeLong calls “slouching towards utopia”: a condition of gradually increasing quality of life available, at gradually declining cost, to a gradually expanding portion of the global population.
A big implication is immediately clear: the asymptotic condition represents a consumer utopia. As consumers, we will enjoy far more for far less. This means that the biggest unknown today is our future as producers, which brings us to what many view as the central question today: the future of work.
The gist of a robust answer, which we will explore in Understanding Elite Discontent, was anticipated by John Maynard Keynes as far back as 1930,1 though he did not like the implications: the majority of the population will be engaged in creating and satisfying each other’s new needs in ways that even the most prescient of today’s visionaries will fail to anticipate.
While we cannot predict precisely what workers of the future will be doing — what future wants and needs workers will be satisfying — we can predict some things about how they will be doing it. Work will take on an experimental, trial-and-error character, and will take place in an environment of rich feedback, self-correction, adaptation, ongoing improvement, and continuous learning. The social order surrounding work will be a much more fluid descendant of today’s secure but stifling paycheck world on the one hand, and the liberating but precarious world of free agency and contingent labor on the other.
In other words, the hacker ethos will go global and the workforce at large will break smart. As the hacker ethos spreads, we will witness what economist Edmund Phelps calls a mass flourishing2 — a state of the economy where work will be challenging and therefore fulfilling. Unchallenging, predictable work will become the preserve of machines.
Previous historical periods of mass flourishing, such as the early industrial revolution, were short-lived, and gave way, after a few decades, to societies based on a new middle class majority built around predictable patterns of work and life. This time around, the state of mass flourishing will be a sustained one: a slouching towards a consumer and producer utopia.
If this vision seems overly dramatic, consider once again the comparison to other soft technologies: software is perhaps the most imagination-expanding technology humans have invented since writing and money, and possibly more powerful than either. To operate on the assumption that it will transform the world at least as dramatically, far from being wild-eyed optimism, is sober realism.
At the heart of the historical development of computing is the age-old philosophical impasse between purist and pragmatist approaches to technology, which is particularly pronounced in software due to its seeming near-Platonic ineffability. One way to understand the distinction is through a dinnerware analogy.
Purist approaches, which rely on alluring visions, are like precious “good” china: mostly for display, and reserved exclusively for narrow uses like formal dinners. Damage through careless use can drastically lower the value of a piece. Broken or missing pieces must be replaced for the collection to retain its full display value. To purists, mixing and matching, either with humbler everyday tableware, or with different china patterns, is a kind of sacrilege.
The pragmatic approach on the other hand, is like unrestricted and frequent use of hardier everyday dinnerware. Damage through careless play does not affect value as much. Broken pieces may still be useful, and missing pieces need not be replaced if they are not actually needed. For pragmatists, mixing and matching available resources, far from being sacrilege, is something to be encouraged, especially for collaborations such as neighborhood potlucks.
In software, the difference between the two approaches is clearly illustrated by the history of the web browser.
On January 23, 1993, Marc Andreessen sent out a short email announcing the release of Mosaic, the first widely used graphical web browser:
07:21:17-0800 by marca@ncsa.uiuc.edu:
By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released:
file://ftp.ncsa.uiuc.edu/Web/xmosaic/xmosaic-0.5.tar.Z
Along with Eric Bina, he had quickly hacked the prototype together after becoming enthralled by his first view of the World Wide Web, which Tim Berners-Lee had unleashed from CERN in Geneva in 1991. Over the next year, several other colleagues joined the project, equally excited by the possibilities of the web. All were eager to share the excitement they had experienced, and to open up the future of the web to more people, especially non-technologists.
Over the course of the next few years, the graphical browser escaped the confines of the government-funded lab (the National Center for Supercomputing Applications at the University of Illinois) where it was born. As it matured at Netscape and later at Microsoft, Mozilla and Google, it steered the web in unexpected (and to some, undesirable) directions. The rapid evolution triggered both the legendary browser wars and passionate debates about the future of the Internet. Those late-nineties conflicts shaped the Internet of today.
To some visionary pioneers, such as Ted Nelson, who had been developing a purist hypertext paradigm called Xanadu for decades, the browser represented an undesirably messy direction for the evolution of the Internet. To pragmatists, the browser represented important software evolving as it should: in a pluralistic way, embodying many contending ideas, through what the Internet Engineering Task Force (IETF) calls “rough consensus and running code.”
While every major software project has drawn inspiration from both purists and pragmatists, the browser, like other pieces of software that became a mission critical part of the Internet, was primarily derived from the work and ideas of pragmatists: pioneers like Jon Postel, David Clark, Bob Kahn and Vint Cerf, who were instrumental in shaping the early development of the Internet through highly inclusive institutions like the IETF.
Today, the then-minority tradition of pragmatic hacking has matured into agile development, the dominant modern approach for making software. But the significance of this bit of history goes beyond the Internet. Increasingly, the pragmatic, agile approach to building things has spread to other kinds of engineering and beyond, to business and politics.
The nature of software has come to matter far beyond software. Agile philosophies are eating all kinds of building philosophies. To understand the nature of the world today, whether or not you are a technologist, it is crucial to understand agility and its roots in the conflict between pragmatic and purist approaches to computing.
The story of the browser was not an isolated case; it marked a broader shift in how software gets built. Until the early 1990s, almost all important software began life as purist architectural visions rather than pragmatic hands-on tinkering.
This was because early programming with punch-card mainframes was a highly constrained process. Iterative refinement was slow and expensive. Agility was a distant dream: programmers often had to wait weeks between runs. If your program didn’t work the first time, you might not have gotten another chance. Purist architectures, worked out on paper, helped minimize risk and maximize results under these conditions.
As a result, early programming was led by creative architects (often mathematicians and, with rare exceptions like Klari von Neumann and Grace Hopper, usually male) who worked out the structure of complex programs upfront, as completely as possible. The actual coding onto punch cards was done by large teams of hands-on programmers (mostly women1) with much lower autonomy, responsible for working out implementation details.
In short, purist architecture led the way and pragmatic hands-on hacking was effectively impossible. Trial-and-error was simply too risky and slow, which meant significant hands-on creativity had to be given up in favor of productivity.
With the development of smaller computers capable of interactive input, hands-on hacking became possible. At early hacker hubs, like MIT through the sixties, a high-autonomy culture of hands-on programming began to take root. Though the shift would not be widely recognized until after 2000, the creative part of programming was migrating from visioning to hands-on coding. Already by 1970, important and high-quality software, such as the Unix operating system, had emerged from the hacker culture growing at the minicomputer margins of industrial programming.
Through the seventies, a tenuous balance of power prevailed between purist architects and pragmatic hackers. With the introduction of networked personal computing in the eighties, however, hands-on hacking became the defining activity in programming. The culture of early hacker hubs like MIT and Bell Labs began to diffuse broadly through the programming world. The archetypal programmer had evolved: from interchangeable member of a large team, to the uniquely creative hacker, tinkering away at a personal computer, interacting with peers over networks. Instead of dutifully obeying an architect, the best programmers were devoting increasing amounts of creative energy to scratching personal itches.
The balance shifted decisively in favor of pragmatists with the founding of the IETF in 1986. In January of that year, a group of 21 researchers met in San Diego and planted the seeds of what would become the modern “government” of the Internet.
Despite its deceptively bureaucratic-sounding name, the IETF is like no standards organization in history, starting with the fact that it has no membership requirements and is open to all who want to participate. Its core philosophy can be found in an obscure document, The Tao of the IETF, little known outside the world of technology. It is a document that combines the informality and self-awareness of good blogs, the gravitas of a declaration of independence, and the aphoristic wisdom of Zen koans. This oft-quoted section illustrates its basic spirit:
In many ways, the IETF runs on the beliefs of its members. One of the “founding beliefs” is embodied in an early quote about the IETF from David Clark: “We reject kings, presidents and voting. We believe in rough consensus and running code”. Another early quote that has become a commonly-held belief in the IETF comes from Jon Postel: “Be conservative in what you send and liberal in what you accept”.
Though the IETF began as a gathering of government-funded researchers, it also represented a shift in the center of programming gravity from government labs to the commercial and open-source worlds. Over the nearly three decades since, it has evolved into the primary steward2 of the inclusive, pluralistic and egalitarian spirit of the Internet. In invisible ways, the IETF has shaped the broader economic and political dimensions of software eating the world.
The difference between purist and pragmatic approaches becomes clear when we compare the evolution of programming in the United States and Japan since the early eighties. Around 1982, Japan chose the purist path over the pragmatic path, by embarking on the ambitious “fifth-generation computing” effort. The highly corporatist government-led program, which caused much anxiety in America at the time, proved to be largely a dead-end. The American tradition on the other hand, outgrew its government-funded roots and gave rise to the modern Internet. Japan’s contemporary contributions to software, such as the hugely popular Ruby language designed by Yukihiro Matsumoto, belong squarely within the pragmatic hacker tradition.
I will argue that this pattern of development is not limited to computer science. Every field eaten by software experiences a migration of the creative part from visioning activities to hands-on activities, disrupting the social structure of all professions. Classical engineering fields like mechanical, civil and electrical engineering had already largely succumbed to hands-on pragmatic hacking by the nineties. Non-engineering fields like marketing are beginning to convert.
So the significance of pragmatic approaches prevailing over purist ones cannot be overstated: in the world of technology, it was the equivalent of the fall of the Berlin Wall.
While pragmatic hacking was on the rise, purist approaches entered a period of slow, painful and costly decline. Even as they grew in ambition, software projects based on purist architecture and teams of interchangeable programmers grew increasingly unmanageable. They began to exhibit the predictable failure patterns of industrial age models: massive cost-overruns, extended delays, failed launches, damaging unintended consequences, and broken, unusable systems.
These failure patterns are characteristic of what political scientist James Scott1 called authoritarian high modernism: a purist architectural aesthetic driven by authoritarian priorities. To authoritarian high-modernists, elements of the environment that do not conform to their purist design visions appear “illegible” and anxiety-provoking. As a result, they attempt to make the environment legible by forcibly removing illegible elements. Failures follow because important elements, critical to the functioning of the environment, get removed in the process.
Geometrically laid-out suburbs, for example, are legible and conform to platonic architectural visions, even if they are unlivable and economically stagnant. Slums on the other hand, appear illegible and are anxiety-provoking to planners, even when they are full of thriving economic life. As a result, authoritarian planners level slums and relocate residents into low-cost planned housing. In the process they often destroy economic and cultural vitality.
In software, what authoritarian architects find illegible and anxiety-provoking is the messy, unplanned tinkering hackers use to figure out creative solutions. When the pragmatic model first emerged in the sixties, authoritarian architects reacted like urban planners: by attempting to clear “code slums.” These attempts took the form of increasingly rigid documentation and control processes inherited from manufacturing. In the process, they often lost the hacker knowledge keeping the project alive.
In short, authoritarian high modernism is a kind of tunnel vision. Architects are prone to it in environments that are richer than one mind can comprehend. The urge to dictate and organize is destructive, because it leads architects to destroy the apparent chaos that is vital for success.
The flaws of authoritarian high modernism first became problematic in fields like forestry, urban planning and civil engineering. Failures of authoritarian development in these fields resulted in forests ravaged by disease, unlivable “planned” cities, crony capitalism and endemic corruption. By the 1960s, in the West, pioneering critics of authoritarian models, such as the urbanist Jane Jacobs and the environmentalist Rachel Carson, had begun to correctly diagnose the problem.
By the seventies, liberal democracies had begun to adopt the familiar, more democratic consultation processes of today. These processes were adopted in computing as well, just as the early mainframe era was giving way to the minicomputer era.
Unfortunately, while democratic processes did mitigate the problems, the result was often lowered development speed, increased cost and more invisible corruption. New stakeholders brought competing utopian visions and authoritarian tendencies to the party. The problem now became reconciling conflicting authoritarian visions. Worse, any remaining illegible realities, which were anxiety-provoking to all stakeholders, were now even more vulnerable to prejudice and elimination. As a result, complex technology projects often slowed to a costly, gridlocked crawl. Tyranny of the majority — expressed through autocratic representatives of particular powerful constituencies — drove whatever progress did occur. The biggest casualty was innovation, which by definition is driven by ideas that are illegible to all but a few: what Peter Thiel calls secrets — things entrepreneurs believe that nobody else does, which lead them to unpredictable breakthroughs.
The process was most clearly evident in fields like defense. In major liberal democracies, different branches of the military competed to influence the design of new weaponry, and politicians competed to create jobs in their constituencies. As a result, major projects spiraled out of control and failed in predictable ways: delayed, too expensive and technologically compromised. In the non-liberal-democratic world, the consequences were even worse. Authoritarian high modernism continued (and continues today in countries like Russia and North Korea), unchecked, wreaking avoidable havoc.
Software is no exception to this pathology. As high-profile failures like the launch of healthcare.gov2 show, “democratic” processes meant to mitigate risks tend to create stalled or gridlocked processes, compounding the problem.
Both in traditional engineering fields and in software, authoritarian high modernism leads to a Catch-22 situation: you either get a runaway train wreck due to too much unchecked authoritarianism, or a train that barely moves due to a gridlock of checks and balances.
Fortunately, agile software development manages to combine both decisive authority and pluralistic visions, and mitigate risks without slowing things to a crawl. The basic principles of agile development, articulated by a group of 17 programmers in 2001, in a document known as the Agile Manifesto, represented an evolution of the pragmatic philosophy first explicitly adopted by the IETF.
The cost of this agility is a seemingly anarchic pattern of progress. Agile development models catalyze illegible, collective patterns of creativity, weaken illusions of control, and resist being yoked to driving utopian visions. Adopting agile models leads individuals and organizations to gradually increase their tolerance for anxiety in the face of apparent chaos. As a result, agile models can get more agile over time.
Not only are agile models driving reform in software, they are also spreading to traditional domains where authoritarian high-modernism first emerged. Software is beginning to eat domains like forestry, urban planning and environment protection. Open Geographic Information Systems (GIS) in forestry, open data initiatives in urban governance, and monitoring technologies in agriculture, all increase information availability while eliminating cumbersome paperware processes. As we will see in upcoming essays, enhanced information availability and lowered friction can make any field hacker-friendly. Once a field becomes hacker-friendly, software begins to eat it. Development gathers momentum: the train can begin moving faster, without leading to train wrecks, resolving the Catch-22.
Today the shift from purist to pragmatist has progressed far enough that it is also reflected at the level of the economics of software development. In past decades, economic purists argued variously for idealized open-source, market-driven or government-led development of important projects. But all found themselves faced with an emerging reality that was too complex for any one economic ideology to govern. As a result, rough consensus and running economic mechanisms have prevailed over specific economic ideologies and gridlocked debates. Today, every available economic mechanism — market-based, governmental, nonprofit and even criminal — has been deployed at the software frontier. And the same economic pragmatism is spreading to software-eaten fields.
This is a natural consequence of the dramatic increase in both participation levels and ambitions in the software world. In 1943, only a small handful of people working on classified military projects had access to the earliest computers. Even in 1974, the year of Peak Centralization, only a small and privileged group had access to the early hacker-friendly minicomputers like the DEC PDP series. But by 1993, the PC revolution had nearly delivered on Bill Gates’ vision of a computer at every desk, at least in the developed world. And by 2000, laptops and Blackberries were already foreshadowing the world of today, with near-universal access to smartphones, and an exploding number of computers per person.
The IETF slogan of rough consensus and running code (RCRC) has emerged as the only workable doctrine for both technological development and associated economic models under these conditions.
As a result of pragmatism prevailing, a nearly ungovernable Promethean fire has been unleashed. Hundreds of thousands of software entrepreneurs are unleashing innovations on an unsuspecting world by the power vested in them by “nobody in particular,” and by any economic means necessary.
It is in the context of the anxiety-inducing chaos and complexity of a mass flourishing that we then ask: what exactly is software?
Software possesses an extremely strange property: it is possible to create high-value software products with effectively zero capital outlay. As Mozilla engineer Sam Penrose put it, software programming is labor that creates capital.
This characteristic makes software radically different from engineering materials like steel, and much closer to artistic media such as paint.1 As a consequence, engineer and engineering are somewhat inappropriate terms. It is something of a stretch to even think of software as a kind of engineering “material.” Though all computing requires a physical substrate, the trillions of tiny electrical charges within computer circuits, the physical embodiment of a running program, barely seem like matter.
The closest relative to this strange new medium is paper. But even paper is not as cheap or evanescent. Though we can appreciate the spirit of creative abundance with which industrial age poets tossed crumpled-up false starts into trash cans, a part of us registers the wastefulness. Paper almost qualifies as a medium for true creative abundance, but falls just short.
Software though, is a medium that not only can, but must be approached with an abundance mindset. Without a level of extensive trial-and-error and apparent waste that would bankrupt both traditional engineering and art, good software does not take shape. From the earliest days of interactive computing, when programmers chose to build games while more “serious” problems waited for computer time, to modern complaints about “trivial” apps (which often turn out to be revolutionary), scarcity-oriented thinkers have remained unable to grasp the essential nature of software for fifty years.
The difference has a simple cause: unsullied purist visions have value beyond anxiety-alleviation and planning. They are also a critical authoritarian marketing and signaling tool — like formal dinners featuring expensive china — for attracting and concentrating scarce resources in fields such as architecture. In an environment of abundance, there is much less need for visions to serve such a marketing purpose. They only need to provide a roughly correct sense of direction to those laboring at software development to create capital using whatever tools and ideas they bring to the party — like potluck participants improvising whatever resources are necessary to make dinner happen.
Translated to technical terms, the dinnerware analogy is at the heart of software engineering. Purist visions tend to arise when authoritarian architects attempt to concentrate and use scarce resources optimally, in ways they often sincerely believe are best for all. By contrast, tinkering is focused on steady progress rather than optimal end-states that realize a totalizing vision. It is usually driven by individual interests and not obsessively concerned with grand and paternalistic “best for all” objectives. The result is that purist visions seem more comforting and aesthetically appealing on the outside, while pragmatic hacking looks messy and unfocused. At the same time, purist visions are much less open to new possibilities and bricolage, while pragmatic hacking is highly open to both.
Within the world of computing, the importance of abundance-oriented approaches was already recognized by the 1960s. With Moore’s Law kicking in, pioneering computer scientist Alan Kay codified the idea of abundance orientation with the observation that programmers ought to “waste transistors” in order to truly unleash the power of computing.
But even for young engineers starting out today, used to routinely renting cloudy container-loads2 of computers by the minute, the principle remains difficult to follow. Devoting skills and resources to playful tinkering still seems “wrong,” when there are obvious and serious problems desperately waiting for skilled attention. Like the protagonist in the movie Brewster’s Millions, who struggles to spend $30 million within thirty days in order to inherit $300 million, software engineers must unlearn habits born of scarcity before they can be productive in their medium.
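To make the abundance mindset concrete, consider a small illustrative sketch (my example, not Kay's): with computing cycles abundant, it is often wiser to brute-force a small problem in an obviously correct way than to spend scarce human attention on a clever, resource-frugal solution.

    # A deliberately "wasteful" O(n^2) check that trades cheap compute for
    # obvious correctness and readability; the function name and inputs are
    # invented for illustration.
    def has_pair_summing_to(numbers, target):
        return any(a + b == target
                   for i, a in enumerate(numbers)
                   for b in numbers[i + 1:])

    print(has_pair_summing_to([3, 9, 14, 20], 23))   # True: 3 + 20

A scarcity-era programmer would reach instinctively for a cleverer data structure; under abundance, the simpler version is usually the wiser first move, precisely because it leaves attention free for the problems that actually matter.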
The principle of rough consensus and running code is perhaps the essence of the abundance mindset in software.
If you are used to the collaboration processes of authoritarian organizations, the idea of rough consensus might conjure up the image of a somewhat informal committee meeting, but the similarity is superficial. Consensus in traditional organizations tends to be brokered by harmony-seeking individuals attuned to the needs of others, sensitive to constraints, and good at creating “alignment” among competing autocrats. This is a natural mode of operation when consensus is sought in order to deal with scarcity. Allocating limited resources is the typical purpose of such industrial-age consensus seeking. Under such conditions, compromise represents a spirit of sharing and civility. Unfortunately, it is also a recipe for gridlock when compromise is hard and creative breakthroughs become necessary.
By contrast, software development favors individuals with an autocratic streak, driven by an uncompromising sense of how things ought to be designed and built, which at first blush appears to contradict the idea of consensus.
Paradoxically, the IETF philosophy of eschewing “kings, presidents and voting” means that rough consensus evolves through strong-minded individuals either truly coming to an agreement, or splitting off to pursue their own dissenting ideas. Conflicts are not sorted out through compromises that leave everybody unhappy. Instead they are sorted out through the principle Stanford professor Bob Sutton popularized as critical for navigating uncertainty: strong opinions, weakly held.
Pragmatists, unlike the authoritarian high-modernist architects studied by James Scott, hold strong views on the basis of having contributed running code rather than abstract visions. But they also recognize others as autonomous peers, rather than as unquestioning subordinates or rivals. Faced with conflict, they are willing to work hard to persuade others, be persuaded themselves, or leave.
Rough consensus favors people who, in traditional organizations, would be considered disruptive and stubborn: these are exactly the people prone to “breaking smart.” In its most powerful form, rough consensus is about finding the most fertile directions in which to proceed rather than uncovering constraints. Constraints in software tend to be relatively few and obvious. Possibilities, however, tend to be intimidatingly vast. Resisting limiting visions, finding the most fertile direction, and allying with the right people become the primary challenges.
In a process reminiscent of the “rule of agreement” in improv theater, ideas that unleash the strongest flood of follow-on builds tend to be recognized as the most fertile and adopted as the consensus. Collaborators who spark the most intense creative chemistry tend to be recognized as the right ones. The consensus is rough because it is left as a sense of direction, instead of being worked out into a detailed, purist vision.
This general principle of fertility-seeking has been repeatedly rediscovered and articulated in a bewildering variety of specific forms. The statements have names such as the principle of least commitment (planning software), the end-to-end principle (network design), the procrastination principle (architecture), optionality (investing), paving the cowpaths (interface design), lazy evaluation (language design) and late binding (code execution). While the details, assumptions and scope of applicability of these different statements vary, they all amount to leaving the future as free and unconstrained as possible, by making as few commitments as possible in any given local context.
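For the technically inclined, a minimal Python sketch (my illustration, not drawn from any of these fields) shows the common thread, using lazy evaluation as the example: nothing is computed, and therefore nothing is committed to, until a consumer actually asks for it.

    from itertools import islice

    # A generator commits to nothing upfront: it merely promises to produce
    # options on demand. The names here are hypothetical placeholders.
    def candidate_designs():
        n = 0
        while True:
            yield f"design-{n}"   # computed only when requested
            n += 1

    # Only the options actually explored are ever materialized; every other
    # decision is deferred to whoever comes along later.
    print(list(islice(candidate_designs(), 3)))   # ['design-0', 'design-1', 'design-2']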
The principle is in fact an expression of laissez-faire engineering ethics. Donald Knuth, another software pioneer, captured the ethical dimension with his version: premature optimization is the root of all evil. The principle is the deeper reason autonomy and creativity can migrate downstream to hands-on decision-making. Leaving more decisions for the future also leads to devolving authority to those who come later.
Such principles might seem dangerously playful and short-sighted, but under conditions of increasing abundance, with falling costs of failure, they turn out to be wise. It is generally smarter to assume that problems that seem difficult and important today might become trivial or be rendered moot in the future. Behaviors that would be short-sighted in the context of scarcity become far-sighted in the context of abundance.
The original design of the Mosaic browser, for instance, reflected the optimistic assumption that everybody would have high-bandwidth access to the Internet in the future, a statement that was not true at the time, but is now largely true in the developed world. Today, many financial technology entrepreneurs are building products based on the assumption that cryptocurrencies will be widely adopted and accepted. Underlying all such optimism about technology is an optimism about humans: a belief that those who come after us will be better informed and have more capabilities, and therefore able to make more creative decisions.
The consequences of this optimistic approach are radical. Traditional processes of consensus-seeking drive towards clarity in long-term visions but are usually fuzzy on immediate next steps. By contrast, rough consensus in software deliberately seeks ambiguity in long-term outcomes and extreme clarity in immediate next steps. It is a heuristic that helps correct the cognitive bias behind Amara’s Law. Clarity in next steps counteracts the tendency to overestimate what is possible in the short term, while comfort with ambiguity in visions counteracts the tendency to underestimate what is possible in the long term. At an ethical level, rough consensus is deeply anti-authoritarian, since it avoids constraining the freedoms of future stakeholders simply to allay present anxieties. The rejection of “voting” in the IETF model is a rejection of a false sense of egalitarianism, rather than a rejection of democratic principles.
In other words, true north in software is often the direction that combines ambiguity and evidence of fertility in the most alluring way: the direction of maximal interestingness.
The decade after the dot-com crash of 2000 demonstrated the value of this principle clearly. Startups derided for prioritizing “growth in eyeballs” (an “interestingness” direction) rather than clear models of steady-state profitability (a self-limiting purist vision of an idealized business) were eventually proven right. Iconic “eyeball” based businesses, such as Google and Facebook, turned out to be highly profitable. Businesses which prematurely optimized their business model in response to revenue anxieties limited their own potential and choked off their own growth.
The great practical advantage of this heuristic is that the direction of maximal interestingness can be very rapidly updated to reflect new information, by evolving the rough consensus. The term pivot, introduced by Eric Ries as part of the Lean Startup framework, has recently gained popularity for such reorientation. A pivot allows the direction of development to change rapidly, without a detailed long-term plan. It is enough to figure out experimental next steps. This ability to reorient and adopt new mental models quickly (what military strategists call a fast transient4) is at the heart of agility.
The response to new information is exactly the reverse in authoritarian development models. Because such models are based on detailed purist visions that grow more complex over time, it becomes increasingly harder to incorporate new data. As a result, the typical response to new information is to label it as an irrelevant distraction, reaffirm commitment to the original vision, and keep going. This is the runaway-train-wreck scenario. On the other hand, if the new information helps ideological opposition cohere within a democratic process, a competing purist vision can emerge. This leads to the stalled-train scenario.
The reason rough consensus avoids both these outcomes is that it is much easier to agree roughly on the most interesting direction than to either update a complex, detailed vision or bring two or more conflicting complex visions into harmony.
For this to work, an equally pragmatic implementation philosophy is necessary: one that is very different from the authoritarian high-modernist way, known in software engineering as the waterfall model (named for the way high-level purist plans flow unidirectionally towards low-level implementation work).
Not only does such a pragmatic implementation philosophy exist, it works so well that running code actually tends to outrun even the most uninhibited rough consensus process without turning into a train wreck. One illustration of this dynamic is that successful software tends to get both used and extended in ways that the original creators never anticipated – and are often pleasantly surprised by, and sometimes alarmed by. This is of course the well-known agile model. We will not get into the engineering details,5 but what matters are the consequences of using it.
The biggest consequence is this: in the waterfall model, execution usually lags vision, leading to a deficit-driven process. By contrast, in working agile processes, running code races ahead, leaving vision to catch up, creating a surplus-driven process.
Both kinds of gaps contain lurking unknowns, but of very different sorts. The surplus in the case of working agile processes is the source of many pleasant surprises: serendipity. The deficit in the case of waterfall models is the source of what William Boyd called zemblanity: “unpleasant unsurprises.”
In software, waterfall processes fail in predictable ways, like classic Greek tragedies. Agile processes on the other hand, can lead to snowballing serendipity, getting luckier and luckier, and succeeding in unexpected ways. The reason is simple: waterfall plans constrain the freedom of future participants, leading them to resent and rebel against the grand plan in predictable ways. By contrast, agile models empower future participants in a project, catalyzing creativity and unpredictable new value.
The engineering term for the serendipitous, empowering gap between running code and governing vision has now made it into popular culture in the form of a much-misunderstood idea: perpetual beta.
When Google’s Gmail service finally exited beta status in July 2009, five years after it was launched, it already had over 30 million users. By then, it was the third largest free email provider after Yahoo and Hotmail, and was growing much faster than either.1 For most of its users, it had already become their primary personal email service.
The beta label on the logo, indicating experimental prototype status, had become such a running joke that when it was finally removed, the project team included a whimsical “back to beta” feature, which allowed users to revert to the old logo. That feature itself was part of a new section of the product called Gmail Labs: a collection of settings that allowed users to turn on experimental features. The idea of perpetual beta had morphed into permanent infrastructure within Gmail for continuous experimentation.
Today, this is standard practice: all modern web-based software includes scaffolding for extensive ongoing experimentation within the deployed production site or smartphone app backend (and beyond, through developer APIs2). Some of it is even visible to users. In addition to experimental features that allow users to stay ahead of the curve, many services also offer “classic” settings that allow them to stay behind the curve — for a while. The best products use perpetual beta as a way to lead their users towards richer, more empowered behaviors, instead of following them through customer-driven processes. Backward compatibility is limited to situations of pragmatic need, rather than being treated as a religious imperative.
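To make the idea of such scaffolding concrete, here is a minimal feature-flag sketch in Python. It is a hypothetical illustration, not Gmail's (or anyone's) actual implementation; the feature name and rollout percentage are invented.

    import hashlib

    ROLLOUT_PERCENT = {"experimental_compose": 10}   # expose to roughly 10% of users

    def is_enabled(feature: str, user_id: str) -> bool:
        # Hash the (feature, user) pair so each user lands in a stable bucket
        # and sees a consistent experience across visits.
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < ROLLOUT_PERCENT.get(feature, 0)

    def render_compose(user_id: str) -> str:
        if is_enabled("experimental_compose", user_id):
            return "experimental compose window"   # the 'Labs'-style path
        return "classic compose window"            # the stable default

    print(render_compose("user-42"))

Behind one such flag, an experiment can be widened, narrowed or abandoned without shipping a new version of the product at all.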
The Gmail story contains an answer to the obvious question about agile models you might ask if you have only experienced waterfall models: How does anything ambitious get finished by groups of stubborn individuals heading in the foggiest possible direction of “maximal interestingness” with neither purist visions nor “customer needs” guiding them?
The answer is that it doesn’t get finished. But unlike in waterfall models, this does not necessarily mean the product is incomplete. It means the vision is perpetually incomplete and growing in unbounded ways, due to ongoing evolutionary experiments. When this process works well, what engineers call technical debt can get transformed into what we might call technical surplus.3 The parts of the product that lack satisfying design justifications represent the areas of rapid innovation. The gaps in the vision are sources of serendipitous good luck. (If you are a Gmail user, browsing the “Labs” section might lead you to some serendipitous discoveries: features you did not know you wanted might already exist unofficially).
The deeper significance of perpetual beta culture in technology often goes unnoticed: in the industrial age, engineering labs were impressive, enduring buildings inside which experimental products were created. In the digital age, engineering labs are experimental sections inside impressive, enduring products. Those who bemoan the gradual decline of famous engineering labs like AT&T Bell Labs and Xerox PARC often miss the rise of even more impressive labs inside major modern products and their developer ecosystems.
Perpetual beta is now such an entrenched idea that users expect good products to evolve rapidly and serendipitously, continuously challenging their capacity to learn and adapt. They accept occasional non-critical failures as a price worth paying. Just as the ubiquitous under construction signs on the early static websites of the 1990s gave way to dynamic websites that were effectively always “under construction,” software products too have acquired an open-ended evolutionary character.
Just as rough consensus drives ideation towards “maximal interestingness”, agile processes drive evolution towards the regimes of greatest operational uncertainty, where failures are most likely to occur. In well-run modern software processes, not only is the resulting chaos tolerated, it is actively invited. Changes are often deliberately made at seemingly the worst possible times. Intuit, a maker of tax software, has a history of making large numbers of changes and updates at the height of tax season.
Conditions that cause failure, instead of being cordoned off for avoidance in the future, are deliberately and systematically recreated and explored further. There are even automated systems designed to deliberately cause failures in production systems, such as Chaos Monkey, a system developed by Netflix to randomly take production servers offline, forcing the system to heal itself or die trying.
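The core of the idea fits in a few lines. The following is a toy sketch in Python, in the spirit of such tools rather than Netflix's actual implementation: pick one worker at random from a pool and terminate it, forcing the rest of the system to prove it can heal itself.

    import os
    import random
    import signal

    def chaos_step(worker_pids):
        # Deliberately induce a failure: terminate one randomly chosen worker.
        # A healthy system's supervisor is expected to notice and restart it.
        victim = random.choice(worker_pids)
        os.kill(victim, signal.SIGTERM)
        return victim

Running such a step continuously, against real production traffic, is what turns failure from a feared event into routine data.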
The glimpses of perpetual beta that users can see are dwarfed by unseen backstage experimentation.
This is neither perverse nor masochistic: it is necessary to uncover hidden risks in experimental ideas early, and to quickly resolve gridlocks with data.
The origins of this curious philosophy lie in what is known as the release early, release often (RERO) principle, usually attributed to Linus Torvalds, the primary architect of the Linux operating system. The idea is exactly what it sounds like: releasing code as early as possible, and as frequently as possible while it is actively evolving.
What makes this possible in software is that most software failures do not have life-threatening consequences.4 As a result, it is usually faster and cheaper to learn from failure than to attempt to anticipate and accommodate it via detailed planning (which is why the RERO principle is often restated in terms of failure as fail fast).
So crucial is the RERO mindset today that many companies, such as Facebook and Etsy, insist on new hires contributing and deploying a minor change to mission-critical systems on their very first day. Companies that rely on waterfall processes by contrast, often put new engineers through years of rotating assignments before trusting them with significant autonomy.
To appreciate just how counterintuitive the RERO principle is, and why it makes traditional engineers nervous, imagine a car manufacturer rushing to put every prototype into “experimental” mass production, with the intention of discovering issues through live car crashes. Or supervisors in a manufacturing plant randomly unplugging or even breaking machinery during peak demand periods. Even lean management models in manufacturing do not go this far. Due to their roots in scarcity, lean models at best mitigate the problems caused by waterfall thinking. Truly agile models on the other hand, do more: they catalyze abundance.
Perhaps the most counter-intuitive consequence of the RERO principle is this: where engineers in other disciplines attempt to minimize the number of releases, software engineers today strive to maximize the frequency of releases. The industrial-age analogy here is the stuff of comedy science fiction: an intern launching a space mission just to ferry a single paper-clip to the crew of a space station.
This tendency makes no sense within waterfall models, but is a necessary feature of agile models. The only way for execution to track the changing direction of the rough consensus as it pivots is to increase the frequency of releases. Failed experiments can be abandoned earlier, with lower sunk costs. Successful ones can migrate into the product as fast as hidden risks can be squeezed out. As a result, a lightweight sense of direction — rough consensus — is enough. There is no need to navigate by an increasingly unattainable utopian vision.
Which raises an interesting question: what happens when there are irreconcilable differences of opinion that break the rough consensus?
If creating great software takes very little capital, copying great software takes even less. This means dissent can be resolved in an interesting way that is impossible in the world of atoms. Under appropriately liberal intellectual property regimes, individuals can simply take a copy of the software and continue developing it independently. In software, this is called forking. Efforts can also combine forces, a process known as merging. Unlike the superficially similar process of spin-offs and mergers in business, forking and merging in software can be non-zero sum.
Where democratic processes would lead to gridlock and stalled development, conflicts under rough consensus and running code and release early, release often processes leads to competing, divergent paths of development that explore many possible worlds in parallel.
This approach to conflict resolution is so radically unfamiliar1 that it took nearly three decades even for pragmatic hackers to recognize forking as something to be encouraged. Twenty-five years passed between the first use of the term “fork” in this sense (by Unix hacker Eric Allman in 1980) and the development of a tool that encouraged rather than discouraged it: git, developed by Linus Torvalds in 2005. Git is now the most widely used code management system in the world, and the basis for GitHub, the leading online code repository.
In software development, the model works so well that a nearly two-century old industrial model of work is being abandoned for one built around highly open collaboration, promiscuous forking and opt-in staffing of projects.
The dynamics of the model are most clearly visible in certain modern programming contests, such as the regular MATLAB programming contests conducted by MathWorks.
Such events often allow contestants to frequently check their under-development code into a shared repository. In the early stages, such sharing allows for the rapid dissemination of the best design ideas through the contestant pool. Individuals effectively vote for the most promising ideas by appropriating them for their own designs, in effect forming temporary collaborations. Hoarding ideas or code tends to be counterproductive due to the likelihood that another contestant will stumble on the same idea, improve upon it in unexpected ways, or detect a flaw that allows it to “fail fast.” But in the later stages, the process creates tricky competitive conditions, where speed of execution beats quality of ideas. Not surprisingly, the winner is often a contestant who makes a minor, last-minute tweak to the best submitted solution, with seconds to spare.
Such contests — which exhibit in simplified forms the dynamics of the open-source community as well as practices inside leading companies — not only display the power of RCRC and RERO, they demonstrate why promiscuous forking and open sharing lead to better overall outcomes.
Software that thrives in such environments has a peculiar characteristic: what computer scientist Richard Gabriel described as worse is better.2 Working code that prioritizes visible simplicity, catalyzing effective collaboration and rapid experimentation, tends to spread rapidly and unpredictably. Overwrought code that prioritizes authoritarian, purist concerns such as formal correctness, consistency, and completeness tends to die out.
In the real world, teams form through self-selection around great code written by one or two linchpin programmers rather than contest challenges. Team members typically know each other at least casually, which means product teams tend to grow to a few dozen at most. Programmers who fail to integrate well typically leave in short order. If they cannot or do not leave, they are often explicitly told to do nothing and stay out of the way, and actively shunned and cut out of the loop if they persist.
While the precise size of an optimal team is debatable, Jeff Bezos’ two-pizza rule suggests that the number is no more than about a dozen.3
In stark contrast to the quality code developed by “worse is better” processes, software developed by teams of anonymous, interchangeable programmers, with bureaucratic top-down staffing, tends to be of terrible quality. Turning Gabriel’s phrase around, such software represents a “better is worse” outcome: utopian visions that fail predictably in implementation, if they ever progress beyond vaporware at all.
The IBM OS/2 project of the early nineties,4 conceived as a replacement for the then-dominant operating system, MS-DOS, provides a perfect illustration of “better is worse.” Each of the thousands of programmers involved was expected to design, write, debug, document, and support just 10 lines of code per day. Writing more than 10 lines was considered a sign of irresponsibility. Project estimates were arrived at by first estimating the number of lines of code in the finished project, dividing by the number of days allocated to the project, and then dividing by 10 to get the number of programmers to assign to the project. Needless to say, programmers were considered completely interchangeable. The nominal “planning” time required to complete a project could be arbitrarily halved at any time, by doubling the number of assigned engineers.5 At the same time, dozens of managers across the company could withhold approval and hold back a release, a process ominously called “nonconcurrence.”
“Worse is better” can be a significant culture shock to those used to industrial-era work processes. The most common complaint is that a few rapidly growing startups and open-source projects typically corner a huge share of the talent supply in a region at any given time, making it hard for other projects to grow. To add insult to injury, the process can at times seem to over-feed the most capricious and silly projects while starving projects that seem more important. This process of the best talent unpredictably abandoning other efforts and swarming a few opportunities is a highly unforgiving one. It creates a few exceptional winning products and vast numbers of failed ones, leaving those with strong authoritarian opinions about “good” and “bad” technology deeply dissatisfied.
But not only does the model work, it creates vast amounts of new wealth through both technology startups and open-source projects. Today, its underlying concepts like rough consensus, pivot, fast failure, perpetual beta, promiscuous forking, opt-in and worse is better are carrying over to domains beyond software and regions beyond Silicon Valley. Wherever they spread, limiting authoritarian visions and purist ideologies retreat.
There are certainly risks with this approach, and it would be pollyannaish to deny them. The state of the Internet today is the sum of millions of pragmatic, expedient decisions made by hundreds of thousands of individuals delivering running code, all of which made sense at the time. These decisions undoubtedly contributed to the serious problems facing us today, ranging from the poor security of Internet protocols to the issues now being debated around net neutrality. But arguably, had the pragmatic approach not prevailed, the Internet would not have evolved significantly beyond the original ARPANET at all. Instead of a thriving Internet economy that promises to revitalize the old economy, the world at large might have followed the Japanese down the dead-end purist path of fifth-generation mainframe computing.
Today, moreover, several solutions to such serious legacy problems are being pursued, such as blockchain technology (the software basis for cryptocurrencies like Bitcoin). These are vastly more creative than solutions that were debated in the early days of the Internet, and reflect an understanding of problems that have actually been encountered, rather than the limiting anxieties of authoritarian high-modernist visions. More importantly, they validate early decisions to resist premature optimization and leave as much creative room for future innovators as possible. Of course, if emerging solutions succeed, more lurking problems will surface that will in turn need to be solved, in the continuing pragmatic tradition of perpetual beta.
Our account of the nature of software ought to suggest an obvious conclusion: it is a deeply subversive force. For those caught on the wrong side of this force, being on the receiving end of Blitzkrieg operations by a high-functioning agile software team can feel like mounting zemblanity: a sense of inevitable doom.
This process has by now occurred often enough that a general sense of zemblanity has overcome the traditional economy at large. Every aggressively growing startup seems like a special-forces team with an occupying army of job-eating machine-learning programs and robots following close behind.
Internally, the software-eaten economy is even more driven by disruption: the time it takes for a disruptor to become a disruptee has been radically shrinking in the last decade — and startups today are highly aware of that risk. That awareness helps explain the raw aggressiveness that they exhibit.
It is understandable that to people in the traditional economy, software eating the world sounds like a relentless war between technology and humanity.
But exactly the opposite is the case. Technological progress, unlike war or Wall Street-style high finance, is not a zero-sum game, and that makes all the difference. The Promethean force of technology is today, and always has been, the force that has rescued humanity from its worst problems just when it seemed impossible to avert civilizational collapse. With every single technological advance, from the invention of writing to the invention of television, those who have failed to appreciate the non-zero-sum nature of technological evolution have prophesied doom, making some version of the argument that this time it is different. Every time, they have been proven wrong.
Instead of enduring civilizational collapse, humanity has instead ascended to a new level of well-being and prosperity each time.
Of course, this poor record of predicting collapses is not by itself proof that it is no different this time. There is no necessary reason the future has to be like the past. There is no fundamental reason our modern globalized society is uniquely immune to the sorts of game-ending catastrophes that led to the fall of the Roman empire or the Mayan civilization. The case for continued progress must be made anew with each technological advance, and new concerns, such as climate change today, must be seriously considered.
But concerns that the game might end should not lead us to limit ourselves to what philosopher James Carse6 called finite game views of the world, based on “winning” and arriving at a changeless, pure and utopian state as a prize. As we will argue in the next essay, the appropriate mindset is what Carse called an infinite game view, based on the desire to continue playing the game in increasingly generative ways. From an infinite game perspective, software eating the world is in fact the best thing that can happen to the world.
The unique characteristics of software as a technological medium have an impact beyond the profession itself. To understand the broader impact of software eating the world, we have to begin by examining the nature of technology adoption processes.
A basic divide in the world of technology is between those who believe humans are capable of significant change, and those who believe they are not. Prometheanism is the philosophy of technology that follows from the idea that humans can, do and should change. Pastoralism, on the other hand, is the philosophy that change is profane. The tension between these two philosophies leads to a technology diffusion process characterized by a colloquial phrase popular in the startup world: first they ignore you, then they laugh at you, then they fight you, then you win.1
Science fiction writer Douglas Adams reduced the phenomenon to a set of three sardonic rules from the point of view of users of technology:
Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
Anything invented after you’re thirty-five is against the natural order of things.
As both these folk formulations suggest, there is a certain inevitability to technological evolution, and a certain naivete to certain patterns of resistance.
To understand why this is in fact the case, consider the proposition that technological evolution is path-dependent in the short term, but not in the long term.
Major technological possibilities, once uncovered, are invariably exploited in ways that maximally unleash their potential. While there is underutilized potential left, individuals compete and keep adapting in unpredictable ways to exploit that potential. All it takes is one thing: a thriving frontier of constant tinkering and diverse value systems must exist somewhere in the world.
Specific ideas may fail. Specific uses may not endure. Localized attempts to resist may succeed, as the existence of the Amish demonstrates. Some individuals may resist some aspects of the imperative to change successfully. Entire nations may collectively decide to not explore certain possibilities. But with major technologies, it usually becomes clear very early on that the global impact is going to be of a certain magnitude and cause a corresponding amount of disruptive societal change. This is the path-independent outcome and the reason there seems to be a “right side of history” during periods of rapid technological developments.
The specifics of how, when, where and through whom a technology achieves its maximal impact are path dependent. Competing to guess the right answers is the work of entrepreneurs and investors. But once the answers are figured out, the contingent path from “weird” to “normal” will be largely forgotten, and the maximally transformed society will seem inevitable with hindsight.
The ongoing evolution of ridesharing through conflict with the taxicab industry illustrates this phenomenon well. In January 2014 for instance, striking cabdrivers in Paris attacked vehicles hired through Uber. The rioting cabdrivers smashed windshields and slashed tires, leading to immediate comparisons in the media to the original pastoralists of industrialized modernity: the Luddites of the early 19th century.2
Like the Luddite movement, the reaction to ridesharing services such as Uber and Lyft is not resistance to innovative technology per se, but something larger and more complex: an attempt to limit the scope and scale of impact in order to prevent disruption of a particular way of life. As Richard Conniff notes in a 2011 essay in the Smithsonian magazine:
As the Industrial Revolution began, workers naturally worried about being displaced by increasingly efficient machines. But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”3
In his essay, Conniff argues that the original Luddites were simply fighting to preserve their idea of human values, and concludes that “standing up against technologies that put money or convenience above other human values” is necessary for critical engagement with technology. Critics make similar arguments in every sector being eaten by software.
The apparent reasonableness of this view is deceptive: it is based on the wishful hope that entire societies can and should agree on what the term human values means, and use that consensus to decide which technologies to adopt. An unqualified appeal to “universal” human values is usually a call for an authoritarian imposition of decidedly non-universal values.
As the rideshare industry debates demonstrate, even consumers and producers within a single sector find it hard to achieve consensus on values. Protests by cab drivers in London in 2014 for instance, led to an increase in business4 for rideshare companies, clear evidence that consumers do not necessarily act in solidarity with incumbent producers based on shared “human values.”
It is tempting to analyze such conflicts in terms of classical capitalist or labor perspectives. The result is a predictable impasse: capitalists emphasize increased supply driving prices down, while progressives focus on loss of jobs in the taxicab industry. Both sides attempt to co-opt the political loyalties of rideshare drivers. Capitalists highlight increased entrepreneurial opportunities, while progressives highlight increased income precarity. Capitalists like to label rideshare drivers free agents or micro-entrepreneurs, while progressives prefer labels like precariat (by analogy to proletariat) or scab. Both sides attempt to make the future determinate by force-fitting it into preferred received narratives using loaded terms.
Both sides also operate by the same sense of proportions: they exaggerate the importance of the familiar and trivialize the new. Apps seem trivial, while automobiles loom large as a motif of an entire century-old way of life. Societies organized around cars seem timeless, normal, moral and self-evidently necessary to preserve and extend into the future. The smartphone at first seems to add no more than a minor element of customer convenience within a way of life that cannot possibly change. The value it adds to the picture is treated like a rounding error and ignored. As a result both sides see the conflict as a zero-sum redistribution of existing value: gains on one side, exactly offset by losses on the other side.
But as Marshall McLuhan observed, new technologies change our sense of proportions.
Even today’s foggy view of a smartphone-centric future suggests that ridesharing is evolving from convenience to necessity. By sustaining cheaper and more flexible patterns of local mobility, ridesharing enables new lifestyles in urban areas. Young professionals can better afford to work in opportunity-rich cities. Low-income service workers can expand their mobility beyond rigid public transit and the occasional expensive emergency taxi-ride. Small restaurants with limited working capital can use ridesharing-like services to offer delivery services. It is in fact getting hard to imagine how else transportation could work in a society with smartphones.
The impact is shifting from the path-dependent phase, when it wasn’t clear whether the idea was even workable, to the non-path-dependent phase, where it seems inevitable enough that other ideas can be built on top.
Such snowballing changes in patterns of life are due to what economists call consumer surplus5 (increased spending power elsewhere due to falling costs in one area of consumption) and positive spillover effects6 (unexpected benefits in unrelated industries or distant geographies). For technologies with a broad impact, these are like butterfly effects: deceptively tiny causes with huge, unpredictable effects. Due to the unpredictability of surplus and spillover, the bulk of the new wealth created by new technologies (on the order of 90% or more) eventually accrues to society at large,7 rather than the innovators who drove the early, path-dependent phase of evolution. This is the macroeconomic analog to perpetual beta: execution by many outrunning visioning by a few, driving more bottom-up experimentation and turning society itself into an innovation laboratory.
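To make the arithmetic behind these claims concrete, here is a small, purely illustrative sketch in Python. The numbers are invented only to show the mechanism, and the 90% figure is simply restated from above rather than derived from any data.

```python
# Toy illustration of consumer surplus and value capture.
# All numbers are hypothetical, chosen only to show the mechanism described above.

monthly_budget = 2000.00        # a household's discretionary spending
old_transport_cost = 400.00     # cost of local mobility before a new technology
new_transport_cost = 250.00     # cost after cheaper, software-mediated options appear

# Consumer surplus: spending power freed up for use elsewhere in the economy.
freed_surplus = old_transport_cost - new_transport_cost
print(f"Freed-up spending per month: ${freed_surplus:.2f}")

# The claim that ~90% of new wealth eventually accrues to society at large can be
# read as: the value captured by innovators is a small fraction of the value created.
value_created = 100.0           # total new value from an innovation (arbitrary units)
value_captured = 10.0           # portion retained by the innovators as profit
society_share = (value_created - value_captured) / value_created
print(f"Share accruing to society at large: {society_share:.0%}")
```

The point of the sketch is only that small per-person savings, multiplied across a society and respent in unpredictable ways, are what the surplus-and-spillover argument is about.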
Far from the value of the smartphone app being a rounding error in the rideshare industry debate, it in fact represents the bulk of the value. It just does not accrue directly to any of the participants in the overt, visible conflict.
If adoption models were entirely dictated by the taxicab industry, this value would not exist, and the zero-sum framing would become a self-fulfilling prophecy. Similarly, when entrepreneurs try to capture all or even most of the value they set out to create, the results are counterproductive: minor evolutionary advances that again make zero-sum outcomes a self-fulfilling prophecy. Technology publishing pioneer Tim O’Reilly captured the essence of this phenomenon with the principle, “create more value than you capture.” For the highest-impact products, the societal value created dwarfs the value captured.
These largely invisible surplus and spillover effects do more than raise broad living standards. By redirecting newly freed creative energy and resources down indeterminate paths, consumer surpluses and spillover effects actually drive further technological evolution in a non-zero-sum way. The bulk of the energy leaks away to drive unexpected innovations in unrelated areas. A fraction courses through unexpected feedback paths and improves the original innovation itself, in ways the pioneers themselves do not anticipate. Similar unexpected feedback paths improve derivative inventions as well, vastly amplifying the impact beyond simple “technology diffusion.”
The story of the steam engine is a good illustration of both effects. It is widely recognized that spillover effects from James Watt’s steam engine, originally introduced in the Cornish mining industry, helped trigger the British industrial revolution. What is less well-known8 is that the steam engine itself was vastly improved by hundreds of unknown tinkerers adding “microinventions” in the decades immediately following the expiration of James Watt’s patents. Once an invention leaks into what Robert Allen calls “collective invention settings,” with a large number of individuals and firms freely sharing information and independently tinkering with an innovation, future evolution gathers unstoppable momentum and the innovation goes from “weird” to “new normal.” Besides the Cornish mining district in the early 1800s, the Connecticut Valley in the 1870s-1890s,9 Silicon Valley since 1950 and the Shenzhen region of China since the 1990s are examples of flourishing collective invention settings. Together, such active creative regions constitute the global technology frontier: the worldwide zone of bricolage.
The path-dependent phase of evolution of a technology can take centuries, as Joel Mokyr shows in his classic The Lever of Riches. But once it enters a collective invention phase, surplus and spillover effects gather momentum and further evolution becomes simultaneously unpredictable and inevitable. Once the inevitability is recognized, it is possible to bet on follow-on ideas without waiting for details to become clear. Today, it is possible to bet on a future based on ridesharing and driverless cars without knowing precisely what those futures will look like.
As consumers, we experience this kind of evolution as what Buckminster Fuller called ephemeralization: the seemingly magical ability of technology to do more and more with less and less.
This is most visible today in the guise of Moore’s Law, but ephemeralization is in fact a feature of all technological evolution. Potable water was once so hard to come by that many societies suffered from endemic water-borne diseases and were forced to rely on expensive and inefficient procedures like boiling water at home. Today, only around 10% of the world lacks access to potable water.10 Diamonds were once worth fighting wars over. Today artificial diamonds, indistinguishable from natural ones, are becoming widely available.
The result is a virtuous cycle of increasing serendipity, driven by widespread lifestyle adaptation and cascades of self-improving innovation. Surplus and spillover creating more surplus and spillover. Brad DeLong’s slouching towards utopia for consumers and Edmund Phelps’ mass flourishing for producers. And when the virtuous cycle is powered by a soft, world-eating technology, the steady, cumulative impact is immense.
Both critics and enthusiasts of innovation deeply misunderstand the nature of this virtuous cycle. Critics typically lament lifestyle adaptations as degeneracy and call for a return to traditional values. Many enthusiasts, instead of being inspired by a sense of unpredictable, flourishing potential, are repeatedly seduced by specific visions of the Next Big Thing, sometimes derived rather literally from popular science fiction. As a result, they lament the lack of collective attention directed towards their pet societal projects. The priorities of other enthusiasts seem degenerate.
The result in both cases is the same: calls for reining in the virtuous cycle. Both kinds of lament motivate efforts to concentrate and deploy surpluses in authoritarian ways (through retention of excessive monopolistic profits by large companies or government-led efforts funded through taxation) and contain spillover effects (by restricting access to new technological capabilities). Both are ultimately attempts to direct creative energies down a few determinate paths. Both are driven by a macroeconomic version of the Luddite hope: that it is possible to enjoy the benefits of non-zero-sum innovation without giving up predictability. For critics, it is the predictability of established patterns of life. For Next Big Thing enthusiasts, it is a specific aspirational pattern of life.
Both are varieties of pastoralism, the cultural cousin of purist approaches in engineering. Pastoralism suffers from precisely the same, predictable authoritarian high-modernist failure modes. Like purist software visions, pastoralist visions too are marked by an obsessive desire to permanently win a specific, zero-sum finite game rather than to keep playing the non-zero-sum infinite game.
When the allure of pastoralist visions is resisted, and the virtuous cycle is allowed to work, we get Promethean progress. This is unpredictable evolution in the direction of maximal societal impact, unencumbered by limiting deterministic visions. Just as the principle of rough consensus and running code creates great software, consumer surplus and spillover effects create great societies. Just as pragmatic and purist development models lead to serendipity and zemblanity in engineering respectively, Promethean and pastoral models lead to serendipity and zemblanity at the level of entire societies.
When pastoralist calls for actual retreat are heeded, the technological frontier migrates elsewhere, often causing centuries of stagnation. This was precisely what happened in China and the Islamic world around the fifteenth century, when the technological frontier shifted to Europe.
Heeding the other kind of pastoralist call, to pursue a determinate Next Big Thing at the expense of many indeterminate small things, leads to somewhat better results. Such models can deliver impressive initial gains, but invariably create a hardening landscape of authoritarian, corporatist institutions. This triggers a vicious cycle that predictably stifles innovation.
The Apollo program, for instance, fulfilled John F. Kennedy’s call to put humans on the moon within the decade. It also led to the inexorable rise of the military-industrial complex that his predecessor, Dwight D. Eisenhower, had warned against. The Soviets fared even worse: they made equally impressive strides in the space race, but the society they created collapsed on itself under the weight of authoritarianism. What prevented that outcome in the United States was the regional technological frontier migrating to the West Coast, and breaking smart from the military-industrial complex in the process. This allowed some of the creative energy being gradually stifled to escape to a more favorable environment.
With software eating the world, we are again witnessing predictable calls for pastoralist development models. Once again, the challenge is to resist the easy answers on offer.
In art, the term pastoral refers to a genre of painting and literature based on romanticized and idealized portrayals of a pastoral lifestyle, usually for urban audiences with no direct experience of the actual squalor and oppression of pre-industrial rural life.
[Image: biblical pastoralism, drawing inspiration for the 21st century from shepherds.]
Within religious traditions, pastorals may also be associated with the motifs and symbols of uncorrupted states of being. In the West for instance, pastoral art and literature often evoke the Garden of Eden story. In Islamic societies, the first caliphate is often evoked in a similar way.
The notion of a pastoral is useful for understanding idealized understandings of any society, real or imagined, past, present or future. In Philip Roth’s American Pastoral for instance, the term is an allusion to the idealized American lifestyle enjoyed by the protagonist Seymour “Swede” Levov, before it is ruined by the social turmoil of the 1960s.
At the center of any pastoral we find essentialized notions of what it means to be human, like Adam and Eve or William Whyte’s Organization Man, arranged in a particular social order (patriarchal in this case). From these archetypes we get to pure and virtuous idealized lifestyles. Lifestyles that deviate from these understandings seem corrupt and vice-driven. The belief that “people don’t change” is at once an approximation and a prescription: people should not change except to better conform to the ideal they are assumed to already approximate. The belief justifies building technology to serve the predictable and changeless ideal and labeling unexpected uses of technology degenerate.
We owe our increasingly farcical yearning for jetpacks and flying cars, for instance, to what we might call the “World Fairs pastoral,” since the vision was strongly shaped by mid-twentieth-century World Fairs. Even at the height of its influence, it was already being satirized by television shows like The Flintstones and The Jetsons. The shows portrayed essentially the 1950s social order, full of Organization Families, transposed to past and future pastoral settings. The humor in the shows rested on audiences recognizing the escapist non-realism.
The World Fairs pastoral, inspired strongly by the aerospace technologies of the 1950s, represented a future imagined around flying cars, jetpacks and glamorous airlines like Pan Am. Flying cars merely updated a familiar nuclear-family lifestyle. Jetpacks appealed to the same individualist instincts as motorcycles. Airlines like Pan Am, besides being an integral part of the military-industrial complex, owed their “glamor” in part to their deliberate perpetuation of the sexist culture of the fifties. Within this vision, truly significant developments, like the rise of vastly more efficient low-cost airlines in the 70s, seemed like decline from a “Golden Age” of air travel.
Arguably, the aerospace future that actually unfolded was vastly more interesting than the one envisioned in the World Fairs pastoral. Low-cost, long-distance air travel opened up a globalized and multicultural future, broke down barriers between insular societies, and vastly increased global human mobility. Along the way, it helped dismantle much of the institutionalized sexism behind the glamour of the airline industry. These developments were enabled in large part by post-1970s software technologies,1 rather than improvements in core aerospace engineering technologies. These were precisely the technologies that were beginning to “break smart” out of the stifling influence of the military-industrial complex.
In 2012, thanks largely to these developments, for the first time in history there were over a billion international tourist arrivals worldwide.2 Software had eaten and democratized elitist air travel. Today, software is continuing to eat airplanes in deeper ways, driving the current explosion in drone technology. Again, those fixated on jetpacks and flying cars are missing the actual, much more interesting action because it is not what they predicted. When pastoralists pay attention to drones at all, they see them primarily as morally objectionable military weapons. The fact that they replace technologies of mass slaughter such as carpet bombing, and the growing number of non-military uses, are ignored.
In fact the entire World Fairs pastoral is really a case of privileged members of society, presuming to speak for all, demanding “faster horses” for all of society (in the sense of the likely apocryphal3 quote attributed to Henry Ford, “If I’d asked my customers what they wanted, they would have demanded faster horses.”)
Fortunately for the vitality of the United States and the world at large, the future proved wiser than any limiting pastoral vision of it. The aerospace story is just one among many that suddenly appear in a vastly more positive light once we drop pastoral obsessions and look at the actual unfolding action. Instead of the limited things we could imagine in the 1950s, we got much more impactful things. Software eating aerospace technology allowed it to continue progressing in the direction of maximum potential.
If pastoral visions are so limiting, why do we get so attached to them? Where do they even come from in the first place? Ironically, they arise from Promethean periods of evolution that are too successful.
The World Fairs pastoral, for instance, emerged out of a Promethean period in the United States, heralded by Alexander Hamilton in the 1790s. Hamilton recognized the enormous potential of industrial manufacturing, and in his influential 1791 Report on Manufactures,4 argued that the then-young United States ought to strive to become a manufacturing superpower. For much of the nineteenth century, Hamilton’s ideas competed for political influence5 with Thomas Jefferson’s pastoral vision of an agrarian, small-town way of life, a romanticized, sanitized version of the society that already existed.
For free Americans alive at the time, Jefferson’s vision must have seemed tangible, obviously valuable and just within reach. Hamilton’s must have seemed speculative, uncertain and profane, associated with the grime and smoke of early industrializing Britain. For almost 60 years, it was in fact Jefferson’s parochial sense of proportions that dominated American politics. It was not until the Civil War that the contradictions inherent in the Jeffersonian pastoral led to its collapse as a political force. Today, while it still supplies powerful symbolism to politicians’ speeches, all that remains of the Jeffersonian Pastoral is a nostalgic cultural memory of small-town agrarian life.
During the same period, Hamilton’s ideas, through their overwhelming success, evolved from a vague sense of direction in the 1790s into a rapidly maturing industrial social order by the 1890s. By the 1930s, this social order was already being pastoralized into an alluring vision of jetpacks and flying cars in a vast, industrialized, centralized society. A few decades later, this had turned into a sense of dead-end failure associated with the end of the Apollo program, and the reality of a massive, overbearing military-industrial complex straddling the technological world. The latter has now metastasized into an entire too-big-to-fail old economy. One indicator of the freezing of the sense of direction is that many contemporary American politicians still remain focused on physical manufacturing the way Alexander Hamilton was in 1791. What was a prescient sense of direction then has turned into nostalgia for an obsolete utopian vision today. But where we have lost our irrational attachment to the Jeffersonian Pastoral, the World Fairs pastoral is still too real to let go.
We get attached to pastorals because they offer a present condition of certainty and stability and a utopian future promise of absolutely perfected certainty and stability. Arrival at the utopia seems like a well-deserved reward for hard-won Promethean victories. Pastoral utopias are where the victors of particular historical finite games hope to secure their gains and rest indefinitely on their laurels. The dark side, of course, is that pastorals also represent fantasies of absolute and eternal power over the fate of society: absolute utopias for believers that necessarily represent dystopias for disbelievers. Totalitarian ideologies of the twentieth century, such as communism and fascism, are the product of pastoral mindsets in their most toxic forms. The Jeffersonian pastoral was a nightmare for black Americans.
When pastoral fantasies start to collapse under the weight of their own internal contradictions, long-repressed energies are unleashed. The result is a societal condition marked by widespread lifestyle experimentation based on previously repressed values. To those faced with a collapse of the World Fairs pastoral project today, this seems like an irreversible slide towards corruption and moral decay.
Because they serve as stewards of dominant pastoral visions, cultural elites are most prone to viewing unexpected developments as degeneracy. From the Greek philosopher Plato1 (who lamented the invention of writing in the 4th century BC) to the Chinese scholar Zhang Xian Wu2 (who lamented the invention of printing in the 12th century AD), alarmist commentary on technological change has been a constant in history. A contemporary example can be found in a 2014 article3 by Paul Verhaeghe in The Guardian:
There are constant laments about the so-called loss of norms and values in our culture. Yet our norms and values make up an integral and essential part of our identity. So they cannot be lost, only changed. And that is precisely what has happened: a changed economy reflects changed ethics and brings about changed identity. The current economic system is bringing out the worst in us.
Viewed through any given pastoral lens, any unplanned development is more likely to subtract rather than add value. In an imagined world where cars fly, but driving is still a central rather than peripheral function, ridesharing can only be seen as subtracting taxi drivers from a complete vision. Driverless cars — the name is revealing, like “horseless carriage” — can only be seen as subtracting all drivers from the vision. And with such apparent subtraction, values and humans can only be seen as degenerating (never mind that we still ride horses for fun, and will likely continue driving cars for fun).
This tendency to view adaptation as degeneracy is perhaps why cultural elites are startlingly prone to the Luddite fallacy. This is the idea that technology-driven unemployment is a real concern, an idea that arises from the more basic assumption that there is a fixed amount of work (“lump of labor”) to be done. By this logic, if a machine does more, then there is less for people to do.
Prometheans often attribute this fallacious argument to a lack of imagination, but the roots of its appeal lie much deeper. Pastoralists are perfectly willing and able to imagine many interesting things, so long as they bring reality closer to the pastoral vision. Flying cars — and there are very imaginative ways to conceive of them — seem better than land-bound ones because drivers predictably evolving into pilots conforms to the underlying notion of human perfectibility. Drivers unpredictably evolving into smartphone-wielding free agents, and breaking smart from the Organization Man archetype, does not. Within the Jeffersonian pastoral, faster horses (not exactly trivial to breed) made for more empowered small-town yeoman farmers. Drivers of early horseless carriages were degenerate dependents, beholden to big corporations, big cities and Standard Oil.
In other words, pastoralists can imagine sustaining changes to the prevailing social order, but disruptive changes seem profane. As a result, those who adapt to disruption in unexpected ways seem like economic and cultural degenerates, rather than representing employment rebounding in unexpected ways.
History of course, has shown that the idea of technological unemployment is not just wrong, it is wildly wrong. Contemporary fears of software eating jobs are just the latest version of the argument that “people cannot change” and that this time, the true limits of human adaptability have been discovered.
This argument is absolutely correct — within the pastoral vision in which it is made.
Once we remove pastoral blinders, it becomes obvious that the future of work lies in the unexpected and degenerate-seeming behaviors of today. Agriculture certainly suffered a devastating permanent loss of employment to machinery within the Jeffersonian pastoral by 1890. Fortunately, Hamilton’s profane ideas, and the degenerate citizens of the industrial world he foresaw, saved the day. The ideal Jeffersonian human, the noble small-town yeoman farmer, did in fact become practically extinct as the Jeffersonians feared. Today the pastoral-ideal human is a high-IQ credentialist Organization Man, headed for gradual extinction, unable to compete with higher-IQ machines. The degenerate, breaking-smart humans of the software-eaten world on the other hand, have no such fears. They are too busy tinkering with new possibilities to bemoan imaginary lost utopias.
John Maynard Keynes was too astute to succumb to the Luddite fallacy in this naive form. In his 1930 conception of the leisure society,4 he noted that the economy could arbitrarily expand to create and satisfy new needs, and with a lag, absorb labor as fast as automation freed it up. But Keynes too failed to recognize that with new lifestyles come new priorities, new lived values and new reasons to want to work. As a result, he saw the Promethean pattern of progress as a necessary evil on the path to a utopian leisure society based on traditional, universal religious values:
I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue — that avarice is a vice, that the exaction of usury is a misdemeanour, and the love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.
But beware! The time for all this is not yet. For at least another hundred years we must pretend to ourselves and to every one that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.
Perceptions of moral decline however, have no necessary relationship with actual moral decline. As Joseph Tainter observes in The Collapse of Complex Societies:
Values of course, vary culturally, socially and individually…What one individual, society, or culture values highly another does not…Most of us approve, in general, of that which culturally is most like or most pleasing, or at least most intelligible to us. The result is a global bedlam of idiosyncratic ideologies, each claiming exclusive possession of ‘truth.’…
The ‘decadence’ concept seems particularly detrimental [and is] notoriously difficult to define. Decadent behavior is that which differs from one’s own moral code, particularly if the offender at some former time behaved in a manner of which one approves. There is no clear causal link between the morality of behavior and political fortunes.
While there is no actual moral decline in any meaningful absolute sense, the anxiety experienced by pastoralists is real. For those who yearn for paternalistic authority, more lifestyle possibilities lead to a sense of anomie rather than freedom. This triggers what the philosopher George Steiner called nostalgia for the absolute.5 Calls for a retreat to tradition or a collectivist drive towards the Next Big Thing (often an Updated Old Thing, as in the case of President Obama’s call for a “new Sputnik moment” a few years ago) share a yearning for a simpler world. But, as Steiner notes:
I do not think it will work. On the most brutal, empirical level, we have no example in history…of a complex economic and technological system backtracking to a more simple, primitive level of survival. Yes, it can be done individually. We all, I think, in the universities now have a former colleague or student somewhere planting his own organic food, living in a cabin in the forest, trying to educate his family far from school. Individually it might work. Socially, I think, it is moonshine.
In 1974, the year of peak centralization, Steiner was presciently observing the beginnings of the transformation. Today, the angst he observed on university campuses has turned into a society-wide condition of pastoral longing, and a pervasive sense of moral decay.
For Prometheans, on the other hand, not only is there no decay, there is actual moral progress.
Prometheans understand technological evolution in terms of increasing diversity of lived values, in the form of more varied actual lifestyles. From any given pastoral perspective, such increasing pluralism is a sign of moral decline, but from a Promethean perspective, it is a sign of moral progress catalyzed by new technological capabilities.
Emerging lifestyles introduce new lived values into societies. Hamilton did not just suggest a way out of the rural squalor1 that was the reality of the Jeffersonian pastoral. His way also led to the dismantlement of slavery, the rise of modern feminism and the gradual retreat of colonial oppression and racism. Today, we are not just leaving the World Fairs pastoral behind for a richer technological future. We are also leaving behind its paternalistic institutions, narrow “resource” view of nature, narrow national identities and intolerance of non-normative sexual identities.
Promethean attitudes begin with an acknowledgment of the primacy of lived values over abstract doctrines. This does not mean that lived values must be uncritically accepted or left unexamined. It just means that lived values must be judged on their own merit, rather than through the lens of a prejudiced pastoral vision.
The shift from car-centric to smartphone-centric priorities in urban transportation is just one aspect of a broader shift from hardware-centric to software-centric lifestyles. Rideshare driver, carless urban professional and low-income-high-mobility are just the tip of an iceberg that includes many other emerging lifestyles, such as eBay or Etsy merchant, blogger, indie musician and search-engine marketer. Each new software-enabled lifestyle adds a new set of lived values and more apparent profanity to society. Some, like rent-over-own values, are shared across many emerging lifestyles and threaten pastorals like the “American Dream,” built around home ownership. Others, such as dietary preferences, are becoming increasingly individualized and weaken the very idea of a single “official food pyramid” pastoral script for all.
Such broad shifts have historically triggered change all the way up to the global political order. Whether or not emerging marginal ideologies2 achieve mainstream prominence, their sense of proportions and priorities, driven by emerging lifestyles and lived values, inevitably does.
These observations are not new among historians of technology, and have led to endless debates about whether societal values drive technological change (social determinism) or whether technological change drives societal values (technological determinism). In practice, the fact that people change and disrupt the dominant prevailing ideal of “human values” renders the question moot. New lived values and new technologies simultaneously irrupt into society in the form of new lifestyles. Old lifestyles do not necessarily vanish: there are still Jeffersonian small farmers and traditional blacksmiths around the world for instance. Rather, they occupy a gradually diminishing role in the social order. As a result, new and old technologies and an increasing number of value systems coexist.
In other words, human pluralism eventually expands to accommodate the full potential of technological capabilities.3
We call this the principle of generative pluralism. Generative pluralism is what allows the virtuous cycle of surplus and spillover to operate. Ephemeralization — the ability to gradually do more with less — creates room for the pluralistic expansion of lifestyle possibilities and individual values, without constraining the future to a specific path.
The inherent unpredictability in the principle implies that both technological and social determinism are incomplete models driven by zero-sum thinking. The past cannot “determine” the future at all, because the future is more complex and diverse. It embodies new knowledge about the world and new moral wisdom, in the form of a more pluralistic and technologically sophisticated society.
Thanks to a particularly fertile kind of generative pluralism that we know as network effects, soft technologies like language and money have historically caused the greatest broad increases in complexity and pluralism. When more people speak a language or accept a currency, the potential of that language or currency increases in a non-zero-sum way. Shared languages and currencies allow more people to harmoniously co-exist, despite conflicting values, by allowing disputes to be settled through words or trade4 rather than violence. We should therefore expect software eating the world to cause an explosion in the variety of possible lifestyles, and society as a whole becoming vastly more pluralistic.
And this is in fact what we are experiencing today.
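A rough way to see why shared soft technologies grow in this non-zero-sum way is to count the pairwise connections a common language or currency makes possible. The quadratic growth in the sketch below is a standard Metcalfe's-law-style gloss, offered purely as an illustration; the counting function and the sample sizes are arbitrary.

```python
# Potential pairwise connections among n users of a shared medium
# (a language, a currency, a protocol): n * (n - 1) / 2.

def potential_links(n: int) -> int:
    """Number of distinct pairs that could converse or trade."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} participants -> {potential_links(n):>12,} potential links")

# Each new participant adds links for everyone already in the network, which is
# why adoption increases the medium's potential for all rather than redistributing it.
```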
The principle also resolves the apparent conflict between human agency and “what technology wants”: Far from limiting human agency, technological evolution in fact serves as the most complete expression of it. Technological evolution takes on its unstoppable and inevitable character only after it breaks smart from authoritarian control and becomes part of an unpredictable and unscripted culture of collective invention. The existence of thousands of individuals and firms working relatively independently on the same frontier means that every possibility will not only be uncovered, it will be uncovered by multiple individuals, operating with different value systems, at different times and places. Even if one inventor chooses not to pursue a possibility, chances are, others will. As a result, all pastoralist forms of resistance are eventually overwhelmed. But the process retains rational resistance to paths that carry risk of ending the infinite game for all, in proportion to their severity. As global success in limiting the spread of nuclear and biological weapons shows, generative pluralism is not the same as mad scientists and James Bond villains running amok.
Prometheans who discover high-leverage unexpected possibilities enter a zone of serendipity. The universe seems to conspire to magnify their agency to superhuman levels. Pastoralists who reject change altogether as profanity turn lack of agency into a self-fulfilling prophecy, and enter a zone of zemblanity. The universe seems to conspire to diminish whatever agency they do have, resulting in the perception that technology diminishes agency.
Power, unlike capability, is zero-sum, since it is defined in terms of control over other human beings. Generative pluralism implies that on average, pastoralists are constantly ceding power to Prometheans. In the long term, however, the loss of power is primarily a psychological rather than material loss. To the extent that ephemeralization frees us of the need for power, we have less use for a disproportionate share.
As a simple example, consider a common twentieth-century battleground: public signage. Today, different languages contend for signaling power in public spaces. In highly multilingual countries, this contention can turn violent. But automated translation and augmented reality technologies5 can make it unnecessary to decide, for instance, whether public signage in the United States ought to be in English, Spanish or both. An arbitrary number of languages can share the same public spaces, and there is much less need for linguistic authoritarianism. Like physical sports in an earlier era, soft technologies such as online communities, video games and augmented reality are all slowly sublimating our most violent tendencies. The 2014 protests in Ferguson, MO, are a powerful example: compared to the very similar civil rights riots of the 1960s, the primary medium of influence was information, in the form of social media coverage, rather than violence.
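A toy sketch of the signage idea follows, with a hand-written lookup table standing in for a real machine-translation service and a formatted string standing in for an augmented-reality overlay; everything named here is a hypothetical placeholder.

```python
# Toy sketch: one physical sign, many per-viewer renderings.
# The translation table is a hypothetical stand-in for a real machine-translation
# service; the "overlay" is just the text an AR display would show each viewer.

SIGN_TEXT = "No parking"

TRANSLATIONS = {
    "en": "No parking",
    "es": "Prohibido estacionar",
    "fr": "Stationnement interdit",
}

def render_overlay(sign_text: str, viewer_language: str) -> str:
    """Return the text an AR viewer would see overlaid on the physical sign."""
    return TRANSLATIONS.get(viewer_language, sign_text)

for lang in ("en", "es", "fr"):
    print(f"{lang}: {render_overlay(SIGN_TEXT, lang)}")
```

The design point is that the contested decision, which language “wins” the physical sign, simply disappears once the rendering is per-viewer rather than per-sign.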
The broader lesson of the principle of generative pluralism is this: through technology, societies become intellectually capable of handling progressively more complex value-based conflicts. As societies gradually awaken to resolution mechanisms that do not require authoritarian control over the lives of others, they gradually substitute intelligence and information for power and coercion.
So far, we have tried to convey a visceral sense of what is essentially an uneven global condition of explosive positive change: change that is progressing at all levels, from individuals to businesses to communities to the global societal order. Perhaps the most important part of this change is that we are experiencing a systematic substitution of intelligence for brute authoritarian power in problem solving, allowing a condition of vastly increased pluralism to emerge.
Paradoxically, due to the roots of vocal elite discontent in pastoral sensibilities, this analysis is valid only to the extent that it feels viscerally wrong. And going by the headlines of the past few years, it certainly does.
Much of our collective sense of looming chaos and paradises being lost is in fact a clear and unambiguous sign of positive change in the world. By this model, if our current collective experience of the human condition felt utopian, with cultural elites extolling its virtues, we should be very worried indeed. Societies that present a facade of superficial pastoral harmony, as in the film The Stepford Wives, tend to be sustained by authoritarian, non-pluralistic polities, hidden demons, and invisible violence.
Innovation can in fact be defined as ongoing moral progress achieved by driving directly towards the regimes of greatest moral ambiguity, where our collective demons lurk. These are also the regimes where technology finds its maximal expressions, and it is no accident that the two coincide. Genuine progress feels like onrushing obscenity and profanity, and also requires new technological capabilities to drive it.
The subjective psychological feel of this evolutionary process is what Marshall McLuhan described in terms of a rear-view mirror effect: “we see the world through a rear-view mirror. We march backwards into the future.”
Our aesthetic and moral sensibilities are oriented by default towards romanticized memories of paradises lost. Indeed, this is the only way we can enter the future. Our constantly pastoralizing view of the world, grounded in the past, is the only one we have. The future, glimpsed only through a small rear-view mirror, is necessarily framed by the past. To extend McLuhan’s metaphor, the great temptation is to slam on the brakes and shift from what seems like reverse gear into forward gear. The paradox of progress is that what seems like the path forward is in fact the reactionary path of retreat. What seems like the direction of decline is in fact the path forward.
Today, our collective rear-view mirror is packed with seeming profanity, in the form of multiple paths of descent into hell. Among the major ones that occupy our minds are themes such as inequality and surveillance, along with a tangle of related anxieties.
These are such complex and strongly coupled themes that conversations about any one of them quickly lead to a jumbled discussion of all of them, in the form of an ambiguous “inequality, surveillance and everything” non-question. Dickens’ memorable opening paragraph in A Tale of Two Cities captures this state of confused urgency and inchoate anxiety perfectly:
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
Such a state of confused urgency often leads to hasty and ill-conceived grand pastoralist schemes by way of the well-known politician’s syllogism:1
Something must be done
This is something
This must be done
Promethean sensibilities suggest that the right response to the sense of urgency is not the politician’s syllogism, but counter-intuitive courses of action: driving straight into the very uncertainties the ambiguous problem statements frame. Often, when only reactionary pastoralist paths are under consideration, this means doing nothing, and allowing events to follow a natural course.
In other words, our basic answer to the non-question of “inequality, surveillance and everything” is this: the best way through it is through it. It is an answer similar in spirit to the stoic principle that “the obstacle is the way” and the Finnish concept of sisu: meeting adversity head-on by cultivating a capacity for managing stress, rather than figuring out schemes to get around it. Seemingly easier paths, as the twentieth century’s utopian experiments showed, create a great deal more pain in the long run.
Broken though they might seem, the mechanisms we need for working through “inequality, surveillance and everything” are the generative, pluralist ones we have been refining over the last century: liberal democracy, innovation, entrepreneurship, functional markets and the most thoughtful and limited new institutions we can design.
This answer will strike many as deeply unsatisfactory and perhaps even callous. Yet, time and again, when the world has been faced with seemingly impossible problems, these mechanisms have delivered.
Beyond doing the utmost possible to shield those most exposed to, and least capable of enduring, the material pain of change, it is crucial to limit ourselves and avoid the temptation of reactionary paths suggested by utopian or dystopian visions, especially those that appear in futurist guises. The idea that forward is backward and sacred is profane will never feel natural or intuitive, but innovation and progress depend on acting by these ideas anyway.
In the remaining essays in this series, we will explore what it means to act by these ideas.
Part-way through Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, we learn that Earth is not a planet, but a giant supercomputer built by a race of hyperintelligent aliens. Earth was designed by a predecessor supercomputer called Deep Thought, which in turn had been built to figure out the answer to the ultimate question of “Life, the Universe and Everything.” Much to the annoyance of the aliens, the answer turns out to be a cryptic and unsatisfactory “42.”
We concluded the previous essay with our own ultimate question of “Inequality, Surveillance and Everything.” The basic answer we offered — “the best way through it is through it” — must seem as annoying, cryptic and unsatisfactory as Deep Thought’s “42.”
In Adams’ tale, Deep Thought gently suggests to the frustrated aliens that perhaps the answer seemed cryptic because they never understood the question in the first place. Deep Thought then proceeds to design Earth to solve the much tougher problem of figuring out the actual question.
First performed as a radio show in 1978, Adams’ absurdist epic precisely portrayed the societal transformation that was gaining momentum at the time. Rapid technological progress due to computing was accompanied by cryptic and unsatisfactory answers to confused and urgent-seeming questions about the human condition. Our “Inequality, Surveillance and Everything” form of the non-question is not that different from the corresponding non-question of the late 1970s: “Cold War, Globalization and Everything.” Then, as now, the frustrating but correct answer was “the best way through it is through it.”
The Hitchhiker’s Guide can be read as a satirical anti-morality tale about pastoral sensibilities, utopian solutions and perfect answers. In their dissatisfaction with the real “Ultimate Answer,” the aliens failed to notice the truly remarkable development: they had built an astoundingly powerful computer, which had then proceeded to design an even more powerful successor.
Like the aliens, we may not be satisfied with the answers we find to timeless questions, but simply by asking the questions and attempting to answer them, we are bootstrapping our way to a more advanced society.
As we argued in the last essay, the advancement is both technological and moral, allowing for a more pluralistic society to emerge from the past.
Adams died in 2001, just as his satirical visions, which had inspired a generation of technologists, started to actually come true. Just as Deep Thought had given rise to a fictional “Earth” computer, centralized mainframe computing of the industrial era gave way to distributed, networked computing. In a rather perfect case of life imitating art, IBM researchers named a powerful chess-playing supercomputer Deep Thought in the 1990s, in honor of Adams’ fictional computer. A later version, Deep Blue, became the first computer to beat a reigning human world chess champion, Garry Kasparov, in 1997. But the true successor to the IBM era of computing was the planet-straddling distributed computer we call the Internet.
Science fiction writer Neal Stephenson noted the resulting physical transformation as early as 1996, in his essay on the undersea cable-laying industry, Mother Earth Mother Board.1 By 2004, Kevin Kelly had coined a term and launched a new site to talk about the idea of digitally integrated technology as a single, all-subsuming social reality,2 emerging on this motherboard:
I’m calling this site The Technium. It’s a word I’ve reluctantly coined to designate the greater sphere of technology – one that goes beyond hardware to include culture, law, social institutions, and intellectual creations of all types. In short, the Technium is anything that springs from the human mind. It includes hard technology, but much else of human creation as well. I see this extended face of technology as a whole system with its own dynamics.
The metaphor of the world as a single interconnected entity that subsumes human existence is an old one, and in its modern form, can be traced at least to Hobbes’ Leviathan (1651), and Herbert Spencer’s The Social Organism (1853). What is new about this specific form is that it is much more than a metaphor. The view of the world as a single, connected, substrate for computation is not just a poetic way to appreciate the world: It is a way to shape it and act upon it. For many software projects, the idea that “the network is the computer” (due to John Gage, a computing pioneer at Sun Microsystems) is the only practical perspective.
While the pre-Internet world can also be viewed as a programmable planetary computer based on paperware, what makes today’s planetary computer unique in history is that almost anyone with an Internet connection can program it at a global scale, rather than just powerful leaders with the ability to shape organizations.
The kinds of programming possible on such a vast, democratic scale have been rapidly increasing in sophistication. In November 2014 for instance, within a few days of the Internet discovering and becoming outraged by a sexist 2013 Barbie book titled Barbie: I Can Be a Computer Engineer, hacker Kathleen Tuite had created a web app (using an inexpensive cloud service called Heroku) allowing anyone to rewrite the text of the book. The hashtag #FeministHackerBarbie immediately went viral. Coupled with the web app, the hashtag unleashed a flood of creative rewrites of the Barbie book. What would have been a short-lived flood of outrage only a few years ago had turned into a breaking-smart moment for the entire software industry.
To appreciate just how remarkable this episode was, consider this: a hashtag is effectively an instantly defined soft network within the Internet, with capabilities comparable to the entire planet’s telegraph system a century ago. By associating a hashtag with the right kind of app, Tuite effectively created an entire temporary publishing company, with its own distribution network, in a matter of hours rather than decades. In the process, reactive sentiment turned into creative agency.
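To give a sense of how little machinery such a temporary publishing company requires, here is a minimal sketch of the kind of remix app described, written with Flask. It is an illustrative stand-in, not Tuite’s actual application: the endpoints, data model and in-memory storage are all assumptions made for the sake of the example.

```python
# Minimal sketch of a "rewrite the text" web app of the kind described above.
# Illustrative only: not Kathleen Tuite's actual application or API.
# Requires: pip install flask

from flask import Flask, request, jsonify

app = Flask(__name__)

# In-memory store of user-submitted rewrites, keyed by page number.
# A real deployment (for example on a service like Heroku) would use a database.
rewrites: dict[int, list[str]] = {}

@app.route("/rewrite/<int:page>", methods=["POST"])
def submit_rewrite(page: int):
    """Accept a user's rewritten caption for a given page of the book."""
    text = request.get_json(force=True).get("text", "")
    rewrites.setdefault(page, []).append(text)
    return jsonify({"page": page, "total_rewrites": len(rewrites[page])})

@app.route("/rewrite/<int:page>", methods=["GET"])
def list_rewrites(page: int):
    """Return all rewrites submitted so far for a given page."""
    return jsonify({"page": page, "rewrites": rewrites.get(page, [])})

if __name__ == "__main__":
    app.run(debug=True)
```

Paired with a hashtag, an app like this gets discovery and distribution essentially for free; the app itself only has to supply the creative agency.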
These capabilities emerged in just 15 years: practically overnight by the normal standards of technological change.
In 1999, SETI@home,3 the first distributed computing project to capture the popular imagination, merely seemed like a weird way to donate spare personal computing power to science. By 2007, Facebook, Twitter, YouTube, Wikipedia and Amazon’s Mechanical Turk4 had added human creativity, communication and money into the mix, and the same engineering approaches had created the social web. By 2014, experimental mechanisms developed in the culture of cat memes5 were influencing elections. The penny-ante economy of Amazon’s Mechanical Turk had evolved into a world where bitcoin miners were making fortunes, car owners were making livable incomes through ridesharing on the side, and canny artists were launching lucrative new careers on Kickstarter.
Even as the old planet-scale computer declines, the new one it gave birth to is coming of age.
In our Tale of Two Computers, the parent is a four-century-old computer whose basic architecture was laid down in the zero-sum mercantile age. It runs on paperware, credentialism, and exhaustive territorial claims that completely carve up the world with strongly regulated boundaries. Its structure is based on hierarchically arranged container-like organizations, ranging from families to nations. In this order of things, there is no natural place for a free frontier. Ideally, there is a place for everything, and everything is in its place. It is a computer designed for stability, within which innovation is a bug rather than a feature.
We’ll call this planet-scale computer the geographic world.
The child is a young, half-century-old computer whose basic architecture was laid down during the Cold War. It runs on software, the hacker ethos, and soft networks that wire up the planet in ever-richer, non-exclusive, non-zero-sum ways. Its structure is based on streams like Twitter: open, non-hierarchical flows of real-time information from multiple overlapping networks. In this order of things, everything from banal household gadgets to space probes becomes part of a frontier for ceaseless innovation through bricolage. It is a computer designed for rapid, disorderly and serendipitous evolution, within which innovation, far from being a bug, is the primary feature.
We’ll call this planet-scale computer the networked world.
The networked world is not new. It is at least as old as the oldest trade routes, which have been spreading subversive ideas alongside valuable commodities throughout history. What is new is its growing ability to dominate the geographic world. The story of software eating the world is also the story of networks eating geography.
There are two major subplots to this story. The first subplot is about bits dominating atoms. The second subplot is about the rise of a new culture of problem-solving.
In 2015, it is safe to say that the weird problem-solving mechanisms of SETI@home and kitten-picture sharing have become normal problem-solving mechanisms for all domains.
Today it seems strange to not apply networked distributed computing involving both neurons and silicon to any complex problem. The term social media is now unnecessary: Even when there are no humans involved, problem-solving on this planet-scale computer almost necessarily involves social mechanisms. Whatever the mix of humans, software and robots involved, solutions tend to involve the same “social” design elements: real-time information streams, dynamically evolving patterns of trust, fluid identities, rapidly negotiated collaborations, unexpected emergent problem decompositions, efficiently allocated intelligence, and frictionless financial transactions.
Each time a problem is solved using these elements, the networked world is strengthened.
As a result of this new and self-reinforcing normal in problem-solving, the technological foundation of our planet is evolving with extraordinary rapidity. The process is a branching, continuous one rather than the staged, sequential process suggested by labels like Web 2.0 and Web 3.0, which reflect an attempt to understand it in somewhat industrial terms. Some recently sprouted extensions and branches have already been identified and named: the Mobile Web, the Internet of Things (IoT), streaming media, Virtual Reality (VR), Augmented Reality (AR) and the blockchain. Others will no doubt emerge in profusion, further blurring the line between real and virtual.
Surprisingly, as a consequence of software eating the technology industry itself, the specifics of the hardware are not important in this evolution. Outside of the most demanding applications, data, code, and networking are all largely hardware-agnostic today.
The Internet Wayback Machine, developed by Brewster Kahle and Bruce Gilliat in 1996, has already preserved a history of the web across a few generations of hardware. While such efforts can sometimes seem woefully inadequate with respect to pastoralist visions of history preservation, it is important to recognize the magnitude of the advance they represent over paper-based collective memories.
Crashing storage costs and continuously upgraded datacenter hardware allow corporations to save all the data they generate, indefinitely. This is turning out to be cheaper than deciding what to do with it in real time, resulting in the Big Data approach to business. At a personal level, cloud-based services like Dropbox make your personal data trivial to move across computers.
Most code today, unlike fifty years ago, is in hardware-independent high-level programming languages rather than hardware-specific machine code. As a result of virtualization (technology that allows one piece of hardware to emulate another, a fringe technology until around 2000), most cloud-based software runs within virtual machines and “code containers” rather than directly on hardware. Containerization in shipping drove nearly a seven-fold increase in trade among industrialized nations over 20 years. Containerization of code is shaping up to be even more impactful in the economics of software.
Networks, too, are defined primarily in software today. It is not just extremely high-level networks, such as the transient, disposable ones defined by hashtags, that exist in software. Low-level networking software can also persist across generations of switching equipment and different kinds of physical links, such as telephone lines, optic fiber cables and satellite links. Thanks to the emerging technology of software-defined networking (SDN), functions that used to be performed by network hardware are increasingly performed by software.
In other words, we don’t just live on a networked planet. We live on a planet networked by software, a distinction that makes all the difference. The software-networked planet is an entity that can exist in a continuous and coherent way despite continuous hardware churn, just as we humans experience a persistent identity, even though almost every atom in our bodies gets swapped out every few years.
This is a profound development. We are used to thinking of atoms as enduring and bits as transient and ephemeral, but in fact the reverse is more true today.
[Figure: bits over atoms]
The emerging planetary computer has the capacity to retain an evolving identity and memory across evolutionary epochs in hardware, both silicon and neural. Like money and writing, software is only dependent on hardware in the short term, not in the long term. Like the US dollar or the plays of Shakespeare, software and software-enabled networks can persist through changes in physical technology.
By contrast it is challenging to preserve old hard technologies even in museums, let alone in working order as functional elements of society. When software eats hardware, however, we can physically or virtually recreate hardware as necessary, imbuing transient atoms with the permanence of bits.
For example, the Reuleaux collection of 19th century engineering mechanisms, a priceless part of mechanical engineering heritage, is now available as a set of 3d printable models from Cornell University for students anywhere in the world to download, print and study. A higher-end example is NASA’s reverse engineering of 1970s-vintage Saturn V rocket engines. The complex project used structured light 3d scanning to reconstruct accurate computer models, which were then used to inform a modernized design. Such resurrection capabilities even extend to computing hardware itself. In 1997, using modern software tools, researchers at the University of Pennsylvania led by Jan Van der Spiegel recreated ENIAC, the first modern electronic computer — in the form of an 8mm by 8mm chip.
As a result of such capabilities, the very idea of hardware obsolescence is becoming obsolete. Rapid evolution does not preclude the persistence of the past in a world of digital abundance.
The potential in virtual and augmented reality is perhaps even higher, and it goes far beyond consumption devices like the Oculus VR, Magic Leap, Microsoft Hololens and the Leap 3d motion sensor. The more exciting story is that production capabilities are being democratized. In the early decades of prohibitively expensive CGI and motion capture technology, only big-budget Hollywood movies and video games could afford to create artificial realities. Today, with technologies like Microsoft’s Photosynth (which allows you to capture 3d imagery with smartphones), SketchUp (a powerful and free 3d modeling tool), 3d Warehouse (a public repository of 3d virtual objects), Unity (a powerful game-design tool) and 3d scanning apps such as Trimensional, it is becoming possible for anyone to create living historical records and inhabitable fictions in the form of virtual environments. The Star Trek “holodeck” is almost here: our realities can stay digitally alive long after they are gone in the physical world.
These are more than cool toys. They are soft technological capabilities of enormous political significance. Software can preserve the past in the form of detailed, relivable memories that go far beyond the written word. In 1964, only the “Big 3” network television crews had the ability to film the civil rights riots in America, making the establishment record of events the only one. A song inspired by the movement was appropriately titled The Revolution Will Not Be Televised. In 1991, a lone witness with a personal camcorder videotaped the tragic beating of Rodney King, footage that helped trigger the Los Angeles riots.
Fast-forward more than two decades to 2014: smartphones were capturing at least fragments of nearly every important development surrounding the death of Michael Brown in Ferguson, and thousands of video cameras were being deployed to challenge the perspectives offered by the major television channels. In a rare display of consensus, civil libertarians on both the right and left began demanding that all police officers and cars be equipped with cameras that cannot be turned off. Around the same time, the director of the FBI was reduced to conducting a media roadshow to attempt to stall the spread of cryptographic technologies capable of limiting government surveillance.
Just a year after the revelations of widespread surveillance by the NSA, the tables were already being turned.
It is only a matter of time before all participants in every event of importance will be able to record and share their experiences from their perspective as comprehensively as they want. These can then turn into collective, relivable, 3d memories that are much harder for any one party to manipulate in bad faith. History need no longer be written by past victors.
Even authoritarian states are finding that surveillance capabilities cut both ways in the networked world. During the 2014 #Occupy protests in Hong Kong, for instance, drone imagery allowed news agencies to make independent estimates of crowd sizes, limiting the ability of the government to spin the story as a minor protest. Software was being used to record history from the air, even as it was being used to drive the action on the ground.
When software eats history this way, as it is happening, the ability to forget becomes a more important political, economic and cultural concern than the ability to remember.
When bits begin to dominate atoms, it no longer makes sense to think of virtual and physical worlds as separate, detached spheres of human existence. It no longer makes sense to think of machine and human spheres as distinct non-social and social spaces. When software eats the world, “social media,” including both human and machine elements, becomes the entire Internet. “The Internet” in turn becomes the entire world. And in this fusion of digital and physical, it is the digital that dominates.
The fallacious idea that the online world is separate from and subservient to the offline world (an idea called digital dualism, the basis for entertaining but deeply misleading movies such as Tron and The Matrix) yields to an understanding of the Internet as an alternative basis for experiencing all reality, including the old basis: geography.
Science fiction writer Bruce Sterling captured the idea of bits dominating atoms with his notion of “spimes” — enduring digital master objects that can be flexibly realized in different physical forms as the need arises. A book, for instance, is a spime rather than a paper object today, existing as a master digital copy that can evolve indefinitely, and persist beyond specific physical copies.
At a more abstract level, the idea of a “journey” becomes a spime that can be flexibly realized in many ways, through specific physical vehicles or telepresence technologies. A “television news show” becomes an abstract spime that might be realized through the medium of a regular television crew filming on location, an ordinary citizen livestreaming events she is witnessing, drone footage, or official surveillance footage obtained by activist hackers.
Spimes in fact capture the essential spirit of bricolage: turning ideas into reality using whatever is freely or cheaply available, instead of through dedicated resources controlled by authoritarian entities. This capability highlights the economic significance of bits dominating atoms. When the value of a physical resource is a function of how openly and intelligently it can be shared and used in conjunction with software, it becomes less contentious. In a world organized by atoms-over-bits logic, most resources are by definition what economists call rivalrous: if I have it, you don’t. Such captive resources are limited by the imagination and goals of one party. An example is a slice of the electromagnetic spectrum reserved for a television channel. Resources made intelligently open to all, on the other hand, such as Twitter, are limited only by collective technical ingenuity. The rivalrousness of goods becomes a function of the amount of software and imagination used to leverage them, individually or collectively.
When software eats the economy, the so-called “sharing economy” becomes the entire economy, and renting, rather than ownership, becomes the default logic driving consumption.
The fact that all this follows from “social” problem-solving mechanisms suggests that the very meaning of the word has changed. As sociologist Bruno Latour has argued, “social” is now about more than the human. It includes ideas and objects flexibly networked through software. Instead of being an externally injected alien element, technology and innovation become part of the definition of what it means to be social.
What we are living through today is a hardware and software upgrade for all of civilization. It is, in principle, no different from buying a new smartphone and moving music, photos, files and contacts to it. And like a new smartphone, our new planet-scale hardware comes with powerful but disorienting new capabilities. Capabilities that test our ability to adapt.
And of all the ways we are adapting, the single most important one is the adaptation in our problem-solving behaviors.
This is the second major subplot in our Tale of Two Computers. Wherever bits begin to dominate atoms, we solve problems differently. Instead of defining and pursuing goals, we create and exploit luck.
Upgrading a planet-scale computer is, of course, a more complex matter than trading in an old smartphone for a new one, so it is not surprising that it has already taken us nearly half a century, and we’re still not done.
Since 1974, the year of peak centralization, we have been trading in a world whose functioning is driven by atoms in geography for one whose functioning is driven by bits on networks. The process has been something like vines growing all over an aging building, creeping in through the smallest cracks in the masonry to establish a new architectural logic.
The difference between the two is simple: the geographic world solves problems in goal-driven ways, through literal or metaphoric zero-sum territorial conflict. The networked world solves them in serendipitous ways, through innovations that break assumptions about how resources can be used, typically making them less rivalrous and unexpectedly abundant.
Goal-driven problem-solving follows naturally from the politician’s syllogism: we must do something; this is something; we must do this. Such goals usually follow from gaps between reality and utopian visions. Solutions are driven by the deterministic form-follows-function principle, which emerged with authoritarian high-modernism in the early twentieth century. At its simplest, the process looks roughly like this:
Problem selection: Choose a clear and important problem
Resourcing: Capture resources by promising to solve it
Solution: Solve the problem within promised constraints
This model is so familiar that it seems tautologically equivalent to “problem solving”. It is hard to see how problem-solving could work any other way. This model is also an authoritarian territorial claim in disguise. A problem scope defines a boundary of claimed authority. Acquiring resources means engaging in zero-sum competition to bring them into your boundary, as captive resources. Solving the problem generally means achieving promised effects within the boundary without regard to what happens outside. This means that unpleasant unintended consequences — what economists call social costs — are typically ignored, especially those which impact the least powerful.
We have already explored the limitations of this approach in previous essays, so we can just summarize them here. Choosing a problem based on “importance” means uncritically accepting pastoral problem frames and priorities. Constraining the solution with an alluring “vision” of success means limiting creative possibilities for those who come later. Innovation is severely limited: You cannot act on unexpected ideas that solve different problems with the given resources, let alone pursue the direction of maximal interestingness indefinitely. This means unseen opportunity costs can be higher than visible benefits. You also cannot easily pursue solutions that require different (and possibly much cheaper) resources than the ones you competed for: problems must be solved in pre-approved ways.
This is not a process that tolerates uncertainty or ambiguity well, let alone thrives on it. Even positive uncertainty becomes a problem: an unexpected budget surplus must be hurriedly used up, often in wasteful ways, otherwise the budget might shrink next year. Unexpected new information and ideas, especially from novel perspectives — the fuel of innovation — are by definition a negative, to be dealt with like unwanted interruptions. A new smartphone app not anticipated by prior regulations must be banned.
In the last century, the most common outcome of goal-directed problem solving in complex cases has been failure.
The networked world approach is based on a very different idea. It does not begin with utopian goals or resources captured through specific promises or threats. Instead it begins with open-ended, pragmatic tinkering that thrives on the unexpected. The process is not even recognizable as a problem-solving mechanism at first glance:
Immersion in relevant streams of ideas, people and free capabilities
Experimentation to uncover new possibilities through trial and error
Leverage to double down on whatever works unexpectedly well
Where the politician’s syllogism focuses on repairing things that look broken in relation to an ideal of changeless perfection, the tinkerer’s way focuses on possibilities for deliberate change. As Dilbert creator Scott Adams observed, “Normal people don’t understand this concept; they believe that if it ain’t broke, don’t fix it. Engineers believe that if it ain’t broke, it doesn’t have enough features yet.”
What would be seemingly pointless disruption in an unchanging utopia becomes a way to stay one step ahead in a changing environment. This is the key difference between the two problem-solving processes: in goal-driven problem-solving, open-ended ideation is fundamentally viewed as a negative. In tinkering, it is a positive.
The first phase — inhabiting relevant streams — can look like idle procrastination on Facebook and Twitter, or idle play with cool new tools discovered on Github. But it is really about staying sensitized to developing opportunities and threats. The perpetual experimentation, as we saw in previous essays, feeds via bricolage on whatever is available. Often these are resources considered “waste” by neighboring goal-directed processes: a case of social costs being turned into assets. A great deal of modern data science, for instance, begins with “data exhaust”: data of no immediate goal-directed use to an organization, which would normally get discarded in an environment of high storage costs. Since the process begins with low-stakes experimentation, the cost of failures is naturally bounded. The upside, however, is unbounded: there is no necessary limit to what unexpected leveraged uses you might discover for new capabilities.
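To make that concrete, here is a minimal sketch in Python of the kind of low-stakes experiment that turns data exhaust into signal. The log file name, the log format and the “unmet demand” interpretation are illustrative assumptions, not anyone’s actual pipeline.

from collections import Counter

def missed_demand(log_path="access.log", top_n=10):
    """Count the most-requested URLs that returned 404.

    Pages people keep asking for, but that do not exist, are a cheap,
    serendipitous signal of unmet demand -- a by-product of logs kept
    around only because storage is cheap.
    """
    misses = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            # Assumes the common log format: host - - [date] "GET /path HTTP/1.1" status size
            if len(parts) > 8 and parts[8] == "404":
                misses[parts[6]] += 1
    return misses.most_common(top_n)

if __name__ == "__main__":
    for path, count in missed_demand():
        print(f"{count:6d}  {path}")

The point is not the specific report but the shape of the move: a by-product that was about to be thrown away is queried for a question nobody planned to ask.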
Tinkerers — be they individuals or organizations — in possession of valuable but under-utilized resources tend to do something counter-intuitive. Instead of keeping idle resources captive, they open up access to as many people as possible, with as few strings attached as possible, in the hope of catalyzing spillover tinkering. Where it works, thriving ecosystems of open-ended innovation form, and steady streams of new wealth begin to flow. Those who share interesting and unique resources in such open ways gain a kind of priceless goodwill money cannot buy. The open-source movement, Google’s Android operating system, Big Data technology, the Arduino hardware experimentation kit and the OpenROV underwater robot all began this way. Most recently, Tesla voluntarily opened up access to its electric vehicle technology patents under highly liberal terms compared to automobile industry norms.
Tinkering is a process of serendipity-seeking that does not just tolerate uncertainty and ambiguity; it requires them. When conditions for it are right, the result is a snowballing effect where pleasant surprises lead to more pleasant surprises.
What makes this a problem-solving mechanism is diversity of individual perspectives coupled with the law of large numbers (the statistical observation that rare events become highly probable once enough independent trials are going on). As an increasing number of highly diverse individuals operate this way, the chances of any given problem getting solved via a serendipitous new idea slowly rise. This is the luck of networks.
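As a back-of-the-envelope illustration (the per-tinkerer odds and the population sizes below are invented numbers), the arithmetic behind this luck is just the complement rule: the probability that at least one of n independent tinkerers stumbles on a solution is 1 - (1 - p)^n, which climbs steeply as n grows. A minimal Python sketch:

def chance_someone_solves_it(p: float, n: int) -> float:
    """P(at least one success in n independent trials) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 1e-6  # assume a one-in-a-million chance per tinkerer (illustrative)
    for n in (1_000, 100_000, 1_000_000, 10_000_000):
        print(f"{n:>10,} tinkerers: {chance_someone_solves_it(p, n):.1%} chance someone solves it")

With those assumed odds, a thousand tinkerers almost certainly fail, while ten million of them almost certainly succeed.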
Serendipitous solutions are not just cheaper than goal-directed ones. They are typically more creative and elegant, and require much less conflict. Sometimes they are so creative, the fact that they even solve a particular problem becomes hard to recognize. For example, telecommuting and video-conferencing do more to “solve” the problem of fossil-fuel dependence than many alternative energy technologies, but are usually understood as technologies for flex-work rather than energy savings.
Ideas born of tinkering are not targeted solutions aimed at specific problems, such as “climate change” or “save the middle class,” so they can be applied more broadly. As a result, not only do current problems get solved in unexpected ways, but new value is created through surplus and spillover. The clearest early sign of such serendipity at work is unexpectedly rapid growth in the adoption of a new capability. This indicates that it is being used in many unanticipated ways, solving both seen and unseen problems, by both design and “luck”.
Venture capital is ultimately the business of detecting such signs of serendipity early and investing to accelerate it. This makes Silicon Valley the first economic culture to fully and consciously embrace the natural logic of networks. When the process works well, resources flow naturally towards whatever effort is growing and generating serendipity the fastest. The better this works, the more resources flow in ways that minimize opportunity costs.
From the inside, serendipitous problem solving feels like the most natural thing in the world. From the perspective of goal-driven problem solvers, however, it can look indistinguishable from waste and immoral priorities.
This perception exists primarily because access to the luck of sufficiently weak networks can be slowed down by sufficiently strong geographic world boundaries (what is sometimes called bahramdipity: serendipity thwarted by powerful forces). Where resources cannot stream freely to accelerate serendipity, they cannot solve problems through engineered luck, or create surplus wealth. The result is growing inequality between networked and geographic worlds.
This inequality superficially resembles the inequality within the geographic world created by malfunctioning financial markets, crony capitalism and rent-seeking behaviors. As a result, it can be hard for non-technologists to tell Wall Street and Silicon Valley apart, even though they represent two radically different moral perspectives and approaches to problem-solving. When the two collide on highly unequal terms, as they did in the cleantech sector in the late aughts, the overwhelming advantage enjoyed by geographic-world incumbents can prove too much for the networked world to conquer. In the case of cleantech, software was unable to eat the sector and solve its problems in large part due to massive subsidies and protections available to incumbents.
But this is just a temporary state. As the networked world continues to strengthen, we can expect very different outcomes the next time it takes on problems in the cleantech sector.
As a result of failures and limits that naturally accompany young and growing capabilities, the networked world can seem “unresponsive” to “real” problems.
So while both Wall Street and Silicon Valley can often seem tone-deaf and unresponsive to pressing and urgent pains while minting new billionaires with boring frequency, the causes are different. The problems of Wall Street are real, and symptomatic of a true crisis of social and economic mobility in the geographic world. Those of Silicon Valley on the other hand, exist because not everybody is sufficiently plugged into the networked world yet, limiting its power. The best response we have come up with for the former is periodic bailouts for “too big to fail” organizations in both the public and private sector. The problem of connectivity on the other hand, is slowly and serendipitously solving itself as smartphones proliferate.
This difference between the two problem-solving cultures carries over to macroeconomic phenomena as well.
Unlike booms and busts in the financial markets, which are often artificially created, technological booms and busts are an intrinsic feature of wealth creation itself. As Carlota Perez notes, technology busts in fact typically open up vast new capabilities that were overbuilt during booms. They radically expand access to the luck of networks to larger populations. The technology bust of 2000 for instance, radically expanded access to the tools of entrepreneurship and began fueling the next wave of innovation almost immediately.
The 2007 subprime mortgage bust, born of deceit and fraud, had no such serendipitous impact. It destroyed wealth overall, rather than creating it. The global financial crisis that followed is representative of a broader systematic crisis in the geographic world.
Structure, as the management theorist Alfred Chandler noted in his study of early industrial age corporations, follows strategy. Where a goal-driven strategy succeeds, the temporary scope of the original problem hardens into an enduring and policed organizational boundary. Temporary and specific claims on societal resources transform into indefinite and general captive property rights for the victors of specific political, cultural or military wars.
[Figure: containers]
As a result we get containers with eternally privileged insiders and eternally excluded outsiders: geographic-world organizations. By their very design, such organizations are what Daron Acemoglu and James Robinson call extractive institutions. They are designed not just to solve a specific problem and secure the gains, but to continue extracting wealth indefinitely. Whatever the broader environmental conditions, ideally wealth, harmony and order accumulate inside the victor’s boundaries, while waste, social costs, and strife accumulate outside, to be dealt with by the losers of resource conflicts.
This description does not apply just to large banks or crony capitalist corporations. Even an organization that seems unquestionably like a universal good, such as the industrial age traditional family, comes with a societal cost. In the United States for example, laws designed to encourage marriage and home-ownership systematically disadvantage single adults and non-traditional families (who now collectively form more than half the population). Even the traditional family, as defined and subsidized by politics, is an extractive institution.
Where extractive institutions start to form, it becomes progressively harder to solve future problems in goal-driven ways. Each new problem-solving effort has more entrenched boundaries to deal with. Solving new problems usually means taking on increasingly expensive conflict to redraw boundaries as a first step. In the developed world, energy, healthcare and education are examples of sectors where problem-solving has slowed to a crawl due to a maze of regulatory and other boundaries. The result has been escalating costs and declining innovation — what economist William Baumol has labeled the “cost disease.”
The cost disease is an example of how, in their terminal state, goal-driven problem-solving cultures exhaust themselves. Without open-ended innovation, the growing complexity of boundary redrawing makes most problems seem impossible. The planetary computer that is the geographic world effectively seizes up.
On the cusp of the first Internet boom, the landscape of organizations that defines the geographic world was already in deep trouble. As Gilles Deleuze noted around 1992:
We are in a generalized crisis in relation to all environments of enclosure — prison, hospital, factory, school, family…The administrations in charge never cease announcing supposedly necessary reforms…But everyone knows these environments are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of new forces knocking at the door.
The “crisis in environments of enclosure” is a natural terminal state for the geographic world. When every shared societal resource has been claimed by a few as an eternal and inalienable right, and secured behind regulated boundaries, the only way to gain something is to deprive somebody else of it through ideology-driven conflict.
This is the zero-sum logic of mercantile economic organization, and dates to the sixteenth century. In fact, because some value is lost through conflict, in the absence of open-ended innovation, it can be worse than zero-sum: what decision theorists call negative-sum (the ultimate example of which is of course war).
By the early twentieth century, mercantilist economic logic had led to the world being completely carved up in terms of inflexible land, water, air, mineral and — perhaps most relevant today — spectrum rights. Rights that could not be freely traded or renegotiated in light of changing circumstances.
This is a grim reality we have a tendency to romanticize. As the etymology of words like organization and corporation suggests, we tend to view our social containers through anthropomorphic metaphors. We extend metaphoric and legal fictions of identity, personality, birth and death far beyond the point of diminishing marginal utility. We assume the “life” of these entities to be self-evidently worth extending into immortality. We even mourn them when they do occasionally enter irreversible decline. Companies like Kodak and Radio Shack, for example, evoke such strong positive memories for many Americans that their decline seems truly tragic, despite the obvious irrelevance of the business models that originally fueled their rise. We assume that the fates of actual living humans are irreversibly tied to the fates of the artificial organisms they inhabit.
In fact, in the late crisis-ridden state of the geographic world, the “goal” of a typical problem-solving effort is often to “save” some anthropomorphically conceived part of society, without any critical attention devoted to whether it is still necessary, or whether better alternatives are already serendipitously emerging. If innovation is considered a necessary ingredient in the solution at all, only sustaining innovations — those that help preserve and perfect the organization in question — are considered.
Whether the intent is to “save” the traditional family, a failing corporation, a city in decline, or an entire societal class like the “American middle class,” the idea that the continued existence of any organization might be both unnecessary and unjustifiable is rejected as unthinkable. The persistence of geographic world organizations is prized for its own sake, whatever the changes in the environment.
The dark side of such anthropomorphic romanticization is what we might call geographic dualism: a stable planet-wide separation of local utopian zones secured for a privileged few and increasingly dystopian zones for many, maintained through policed boundaries. The greater the degree of geographic dualism, the clearer the divides between slums and high-rises, home owners and home renters, developing and developed nations, wrong and right sides of the tracks, regions with landfills and regions with rent-controlled housing. And perhaps the most glaring divide: secure jobs in regulated sectors with guaranteed lifelong benefits for some, at the cost of needlessly heightened precarity in a rapidly changing world for others.
In a changing environment, organizational stability valued for its own sake becomes a kind of immorality. Seeking such stability means allowing the winners of historic conflicts to enjoy the steady, fixed benefits of stability by imposing increasing adaptation costs on the losers.
In the late eighteenth century, two important developments planted the seeds of a new morality, which sparked the industrial revolution. As a result new wealth began to be created despite the extractive, stability-seeking nature of the geographic world.
With the benefit of a century of hindsight, the authoritarian high-modernist idea that form can follow function in a planned way, via coercive control, seems like wishful thinking beyond a certain scale and complexity. Two phrases popularized by the open-source movement, free as in beer and free as in speech, get at the essence of problem solving through serendipity, an approach that does work in large-scale and complex systems.
The way complex systems — such as planet-scale computing capabilities — evolve is perhaps best described by a statement known as Gall’s Law:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
Gall’s Law is in fact much too optimistic. It is not just non-working complex systems designed from scratch that cannot be patched up. Even naturally evolved complex systems that used to work, but have now stopped working, generally cannot be patched into working order again.
The idea that a new, simpler system can revitalize a complex system in a state of terminal crisis is the essence of Promethean thinking. Though the geographic world has reached a state of terminal crisis only recently, the seeds of a simpler working system to replace it were actually planted in the eighteenth century, nearly 200 years before software joined the party. The industrial revolution itself was driven by two elements of our world being partially freed from geographic world logic: people and ideas.
In the eighteenth century, the world gradually rejected the idea that people could be property, to be exclusively claimed by other people or organizations as a problem-solving “resource,” and held captive within specific boundaries. Individual rights and at-will employment models emerged in liberal democracies, in place of institutions like slavery, serfdom and caste-based hereditary professions.
The second was ideas. Again, in the late eighteenth century, modern intellectual property rights, in the form of patents with expiration dates, became the norm. In ancient China, those who revealed the secrets of silk-making were put to death by the state. In late eighteenth century Britain, the expiration of James Watt’s patents sparked the industrial revolution.
Thanks to these two enlightened ideas, a small trickle of individual inventions turned into a steady stream of non-zero sum intellectual and capitalist progress within an otherwise mercantilist, zero-sum world. In the process, the stability-seeking logic of mercantilism was gradually replaced by the adaptive logic of creative destruction.
People and ideas became increasingly free in two distinct ways. As Richard Stallman, the pioneer of the free software movement, famously expressed it: The two kinds of freedom are free as in beer and free as in speech.
First, people and ideas were increasingly free in the sense of no longer being considered “property” to be bought and sold like beer by others.
Second, people and ideas became increasingly free in the sense of not being restricted to a single purpose. They could potentially play any role they were capable of fulfilling. For people, this second kind of freedom is usually understood in terms of specific rights such as freedom of speech, freedom of association and assembly, and freedom of religion. What is common to all these specific freedoms is that they represent freedom from the constraints imposed by authoritarian goals. This second kind of freedom is so new, it can be alarming to those used to being told what to do by authority figures.
Where both kinds of freedom exist, networks begin to form. Freedom of speech, for instance, tends to create a thriving literary and journalistic culture, which exists primarily as a network of individual creatives rather than specific organizations. Freedom of association and assembly creates new political movements, in the form of grassroots political networks.
Free people and ideas can associate in arbitrary ways, creating interesting new combinations and exploring open-ended possibilities. They can make up their own minds about whether problems declared urgent by authoritarian leaders are actually the right focus for their talents. Free ideas are even more powerful, since unlike the talents of free individuals, they are not restricted to one use at a time.
Free people and free ideas formed the “working simple system” that drove two centuries of disruptive industrial age innovation.
Tinkering — the steady operation of this working simple system — is a much more subversive force than we usually recognize, since it poses an implicit challenge to authoritarian priorities.
This is what makes tinkering an undesirable, but tolerable bug in the geographic world. So long as material constraints limited the amount of tinkering going on, the threat to authority was also limited. Since the “means of production” were not free, either as in beer or as in speech, the anti-authoritarian threat of tinkering could be contained by restricting access to them.
With software eating the world, this is changing. Tinkering is becoming much more than a minority activity pursued by the lucky few with access to well-stocked garages and junkyards. It is becoming the driver of a global mass flourishing.
As Karl Marx himself realized, the end-state of industrial capitalism is in fact the condition where the means of production become increasingly available to all. Of course, it is already becoming clear that the result is neither the utopian collectivist workers’ paradise he hoped for, nor the utopian leisure society that John Maynard Keynes hoped for. Instead, it is a world where increasingly free people, working with increasingly free ideas and means of production, operate by their own priorities. Authoritarian leaders, used to relying on coercion and policed boundaries, find it increasingly hard to enforce their priorities on others in such a world.
Chandler’s principle of structure following strategy allows us to understand what is happening as a result. If non-free people, ideas and means of production result in a world of container-like organizations, free people, ideas and means of production result in a world of streams.
A stream is simply a life context formed by all the information flowing towards you via a set of trusted connections — to free people, ideas and resources — from multiple networks. If in a traditional organization nothing is free and everything has a defined role in some grand scheme, in a stream, everything tends steadily towards free as in both beer and speech. “Social” streams enabled by computing power in the cloud and on smartphones are not a compartmentalized location for a particular kind of activity. They provide an information and connection-rich context for all activity.
[Figure: streams]
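One way to picture this, as a minimal sketch rather than a description of any real platform: a stream is what you get when several independently ordered feeds are merged into a single time-ordered context. The networks, items and timestamps in the Python below are invented for illustration.

import heapq
from typing import NamedTuple

class Item(NamedTuple):
    timestamp: float  # seconds since some epoch
    network: str      # which network the item came from
    text: str

# Three unrelated, individually time-ordered feeds.
family = [Item(1.0, "family", "Dinner on Sunday?"),
          Item(4.0, "family", "Photos from the trip")]
work = [Item(2.0, "work", "Build is green again"),
        Item(5.0, "work", "Standup moved to 10am")]
twitter = [Item(3.0, "twitter", "Obscure but important news item")]

# heapq.merge lazily interleaves the feeds by timestamp, producing the
# juxtaposition of contexts that makes streams generative.
stream = heapq.merge(family, work, twitter, key=lambda item: item.timestamp)

for item in stream:
    print(f"[{item.network:>7}] {item.text}")

The point of the sketch is the juxtaposition: each item is read against neighbors from networks it has nothing to do with.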
Unlike organizations defined by boundaries, streams are what Acemoglu and Robinson call pluralist institutions. These are the opposite of extractive: they are open, inclusive and capable of creating wealth in non-zero-sum ways.
On Facebook for example, connections are made voluntarily (unlike reporting relationships on an org chart) and pictures or notes are usually shared freely (unlike copyrighted photos in a newspaper archive), with few restrictions on further sharing. Most of the capabilities of the platform are free-as-in-beer. What is less obvious is that they are also free-as-in-speech. Except at the extremes, Facebook does not attempt to dictate what kinds of groups you are allowed to form on the platform.
If the three most desirable things in a world defined by organizations are location, location and location, in the networked world they are connections, connections and connections.
Streams are not new in human culture. Before the Silk Road was a Darknet site, it was a stream of trade connecting Asia, Africa and Europe. Before there were lifestyle-designing free agents, hackers and modern tinkerers, there were the itinerant tinkers of early modernity. The collective invention settings we discussed in the last essay, such as the Cornish mining district in James Watt’s time and Silicon Valley today, are examples of early, restricted streams. The main streets of thriving major cities are also streams, where you might run into friends unexpectedly, learn about new events through posted flyers, and discover new restaurants or bars.
What is new is the idea of a digital stream created by software. While geography dominates physical streams, digital streams can dominate geography. Access to the stream of innovation that is Silicon Valley is limited by geographic factors such as cost of living and immigration barriers. Access to the stream of innovation that is Github is not. On a busy main street, you can only run into friends who also happen to be out that evening, but with Augmented Reality glasses on, you might also “run into” friends from around the world and share your physical experiences with them.
What makes streams ideal contexts for open-ended innovation through tinkering is that they constantly present unrelated people, ideas and resources in unexpected juxtapositions. This happens because streams emerge as the intersection of multiple networks. On Facebook, or even your personal email, you might be receiving updates from both family and coworkers. You might also be receiving imported updates from structurally distinct networks, such as Twitter or the distribution network of a news source. This means each new piece of information in a stream is viewed against a backdrop of overlapping, non-exclusive contexts, and a plurality of unrelated goals. At the same time, your own actions are being viewed by others in multiple unrelated ways.
As a result of such unexpected juxtapositions, you might “solve” problems you didn’t realize existed and do things that nobody realized were worth doing. For example, seeing a particular college friend and a particular coworker in the same stream might suggest a possibility for a high-value introduction: a small act of social bricolage. Because you are seen by many others from different perspectives, you might find people solving problems for you without any effort on your part. A common experience on Twitter, for example, is a Twitter-only friend tweeting an obscure but important news item, which you might otherwise have missed, just for your benefit.
When a stream is strengthened through such behaviors, every participating network is strengthened.
While Twitter and Facebook are the largest global digital streams today, there are thousands more across the Internet. Specialized ones such as Github and Stack Overflow cater to specific populations, but are open to anyone willing to learn. Newer ones such as Instagram and Whatsapp tap into the culture of younger populations. Reddit has emerged as an unusual venue for keeping up with science by interacting with actual working scientists. The developers of every agile software product in perpetual beta inhabit a stream of unexpected uses discovered by tinkering users. Slack turns the internal life of a corporation into a stream.
Streams are not restricted to humans. Twitter already has a vast population of interesting bots, ranging from House of Coates (an account that is updated by a smart house) to space probes and even sharks tagged with transmitters by researchers. Facebook offers pages that allow you to ‘like’ and follow movies and books.
By contrast, when you are sitting in a traditional office, working with a laptop configured exclusively for work use by an IT department, you receive updates only from one context, and can only view them against the backdrop of a single, exclusive and totalizing context. Despite the modernity of the tools deployed, the architecture of information is not very different from the paperware world. If information from other contexts leaks in, it is generally treated as a containment breach: a cause for disciplinary action in the most old-fashioned businesses. People you meet have pre-determined relationships with you, as defined by the organization chart. If you relate to a coworker in more than one way (as both a team member and a tennis buddy), that weakens the authority of the organization. The same is true of resources and ideas. Every resource is committed to a specific “official” function, and every idea is viewed from a fixed default perspective and has a fixed “official” interpretation: the organization’s “party line” or “policy.”
This has a radical consequence. When organizations work well and there are no streams, we view reality in what behavioral psychologists call functionally fixed ways: people, ideas and things have fixed, single meanings. This makes them less capable of solving new problems in creative ways. In a dystopian stream-free world, the most valuable places are the innermost sanctums: these are typically the oldest organizations, most insulated from new information. But they are also the locus of the most wealth, and offer the most freedom for occupants. In China, for instance, the innermost recesses of the Communist Party are still the best place to be. In a Fortune 500 company, the best place to be is still the senior executive floor.
When streams work well on the other hand, reality becomes increasingly intertwingled (a portmanteau of intertwined and tangled), as Ted Nelson evocatively labeled the phenomenon. People, ideas and things can have multiple, fluid meanings depending on what else appears in juxtaposition with them. Creative possibilities rapidly multiply, with every new network feeding into the stream. The most interesting place to be is usually the very edge, rather than the innermost sanctums. In the United States, being a young and talented person in Silicon Valley can be more valuable and interesting than being a senior staffer in the White House. Being the founder of the fastest growing startup may offer more actual leverage than being President of the United States.
We instinctively understand the difference between the two kinds of context. In an organization, if conflicting realities leak in, we view them as distractions or interruptions, and react by trying to seal them out better. In a stream, if things get too homogeneous and non-pluralistic, we complain that things are getting boring, predictable, and turning into an echo chamber. We react by trying to open things up, so that more unexpected things can happen.
What we do not understand as instinctively is that streams are problem-solving and wealth-creation engines. We view streams as zones of play and entertainment, through the lens of the geographic-dualist assumption that play cannot also be work.
In our Tale of Two Computers, the networked world will become firmly established as the dominant planetary computer when this idea becomes instinctive, and work and play become impossible to tell apart.
The first sustainable socioeconomic order of the networked world is just beginning to emerge, and the experience of being part of a system that is growing smarter at an exponential rate is deeply unsettling to pastoralists and immensely exciting to Prometheans.
[Figure: trashed planet]
Our geographic-world intuitions and our experience of the authoritarian institutions of the twentieth century lead us to expect that any larger system we are part of will either plateau into some sort of impersonal, bureaucratic stupidity, or turn “evil” somehow and oppress us.
The first kind of apocalyptic expectation is at the heart of movies like Idiocracy and Wall-E, set in trashed futures inhabited by a degenerate humanity that has irreversibly destroyed nature.
The second kind is the fear behind the idea of the Singularity: the rise of a self-improving systemic intelligence that might oppress us. Popular literal-minded misunderstandings of the concept, rooted in digital dualism, result in movies such as Terminator. These replace the fundamental humans-against-nature conflict of the geographic world with an imagined humans-against-machines conflict of the future. As a result, believers in such dualist singularities, rather ironically for extreme technologists, are reduced to fearfully awaiting the arrival of a God-like intelligence with fingers crossed, hoping it will be benevolent.
Both fears are little more than technological obscurantism. They are motivated by a yearning for the comforting certainties of the geographic world, with its clear boundaries, cohesive identities, and idealized heavens and hells.
Neither is a meaningful fear. The networked world blurs the distinction between wealth and waste. This undermines the first fear. The serendipity of the networked world depends on free people, ideas and capabilities combining in unexpected ways: “Skynet” cannot be smarter than humans unless the humans within it are free. This undermines the second fear.
To the extent that these fears are justified at all, they reflect the terminal trajectory of the geographic world, not the early trajectory of the networked world.
An observation due to Arthur C. Clarke offers a way to understand this second trajectory: any sufficiently advanced technology is indistinguishable from magic. The networked world evolves so rapidly through innovation, it seems like a frontier of endless magic.
Clarke’s observation has inspired a number of snowclones that shed further light on where we might be headed. The first, due to Bruce Sterling, is that any sufficiently advanced civilization is indistinguishable from its own garbage. The second, due to futurist Karl Schroeder, is that any sufficiently advanced civilization is indistinguishable from nature.
To these we can add one from social media theorist Seb Paquet, which captures the moral we drew from our Tale of Two Computers: any sufficiently advanced kind of work is indistinguishable from play.
Putting these ideas together, we are messily slouching towards a non-pastoral utopia on an asymptotic trajectory where reality gradually blurs into magic, waste into wealth, technology into nature and work into play.
This is a world that is breaking smart, with Promethean vigor, from its own past, like the precocious teenagers who are leading the charge. In broad strokes, this is what we mean by software eating the world.
For Prometheans, the challenge is to explore how to navigate and live in this world. A growing non-geographic-dualist understanding of it is leading to a network culture view of the human condition. If the networked world is a planet-sized distributed computer, network culture is its operating system.
Our task is like Deep Thought’s task when it began constructing its own successor: to develop an appreciation for the “merest operational parameters” of the new planet-sized computer to which we are migrating all our civilizational software and data.
This may not be popular, but it’s how I feel. First, some background and disclaimers. I run a small games company making strategy games for the PC, with an up-front payment. We don’t make ‘free to play’ games or have microtransactions. Also, I’m pretty much a capitalist. I am not a big fan of government regulation in general; I am a ‘get rid of red tape’ kind of guy. I actually oppose tax breaks for game development. I am no friend of regulation. But nevertheless.
I awoke this morning to read about this:
[image]
Some background: Star Citizen is a space game. It’s being made by someone who made space games years ago, and they ‘crowd-funded’ the money to make this one. The game is way behind schedule and is, of course, not finished yet. They just passed $100,000,000 in money raised. They can do this because individual ships in the game are for sale, even after you have bought the game. I guess at this point we could just say ‘a fool and his money are soon parted’, yet we do not say that about gambling addiction. In fact, some countries have extremely strict laws on gambling, precisely because they know addiction is a thing, and that people need to be saved from themselves.
Can spending money on games be a problem? Frankly yes, and it’s because games marketing and the science of advertising have changed beyond recognition since games first appeared. Game ads have often been dubious and tacky, but now that games are such a huge business, the stakes are higher and people are prepared to go further. On the fringes we have this crap:
[image]
But in the mainstream, even in prime-time TV ad spots, we have this crap:
[image]
And this stuff works. ‘Game of War’ makes a lot of money. That ad campaign cost them $40,000,000 (source). Expensive? Not when you earn a million dollars A DAY (source).
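Put those two figures together and the arithmetic is simple: at a million dollars a day, a $40,000,000 campaign pays for itself in roughly 40 days, assuming the quoted numbers are accurate.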
Now if you don’t play games, you might be thinking ‘So what? They must be good games, you’re just jealous!’ But no! In fact, all the coverage of games like Evony and Game of War illustrates just how bad they are. They earn so much because the makers of that type of game have an incredibly fine-tuned and skillful marketing operation bent on psychological manipulation. You think I’m exaggerating? Read this. A choice quote:
“We take Facebook stalking to a whole new level. You spend enough money, we will friend you. Not officially, but with a fake account. Maybe it’s a hot girl who shows too much cleavage? That’s us. We learned as much before friending you, but once you let us in, we have the keys to the kingdom.”
Let’s think about this for a minute. A company hires people to stalk its customers and befriend them, so it can build up a psychological profile of each customer and extract more money from them. This is not market research, this is not game design. This is psychological warfare. Lines have been crossed so thoroughly that we cannot even see them behind us with binoculars. We need to rein this stuff in. It’s not just psychological warfare, but warfare where you, the customer, are woefully outgunned, and losing. Some people are losing catastrophically.
You know how much you hate those ads that track you around the internet, reminding you of stuff you looked at but didn’t buy? That is amateur hour compared to the crap that some games companies are pulling these days. The problem is, we have NO regulation. AFAIK no law prevents a company stalking its customers on Facebook. We live in an age where marketers have already tried using MRI scans on live subjects to test advertising responsiveness. You think you are not manipulated by ads? Get real; read some of the latest books on the topic. We are only a short step away from convincing AI bots that pretend to be our new flirty in-game friends and urge us to keep playing, keep upgrading, keep spending.
Modern advertising is so powerful we should be legislating the crap out of this sort of thing. How bad do we let it get before we get some government-imposed rules? We are in the early days of mass-population study and manipulation, days when we gamers describe a game as ‘addicting’ and mean it as a positive. Maybe it isn’t such a positive after all. Maybe we need to start worrying about whether a game is actually good, rather than just ‘addicting’. Maybe we need people to step in and save us from ourselves. We are basically still just hairless apes. We do not possess anything like the self-control or free will that we think we do.
As with alcohol, gambling, smoking or eating, most of us do not find gaming addictive, so we fail to see the problem. It depends how you are wired. See this ‘awards screen’ in Company of Heroes 2:
[image: Company of Heroes 2 awards screen]
To most of us, that’s just silly, and too big, and OTT. But if you suffer from OCD, that can be a BIG BIG problem for you. They KNOW this. It’s why it is done. It works. Keep playing, kid. Keep playing. KEEP PLAYING. This sort of thing doesn’t need to work on everyone. If it works on just 1% and we can get them to spend $1,000 a month on our game (who cares if they can afford it?), then it’s worth doing.
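To make that concrete with a made-up player count: if a million people play, that 1% is 10,000 players, and $1,000 a month from each of them is $10,000,000 a month.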
I hate regulation, but sometimes you need it. Stopping a business from dumping waste in a river is a good idea. Stopping companies from treating their customers like animals that can be psychologically trapped and exploited is a good idea too. This stuff is too easy. Save us from ourselves.
Hey Friend, been a long time. Usually this would be a conversation I’d have with you over some instant messaging medium. We would argue, because I need to have my views challenged, and you would help me step back a little and try to force me to take care of myself.
This conversation would probably be split across several media and several people, because that is how I function: in weird ways and without focus.
On the 13th of November, coming back from the Louvre to Saint-Denis - where I live - you sent me an SMS asking if I was safe. I did hear a loud noise from the Stade de France as I was coming out of the subway towards home, but since there was a match on I just flagged it as "weird noise made by sports fans". I didn’t understand why I had received this text.
Then, once home. I started a web browser. After receiving half a dozen a tweet of various instance of you, I reassured you by posting that I was home and safe on twitter. And then, with my room-mate and coworker we just thin about the huge amount of work that we would have to do on Monday - and even before that.
I told you, I work in strange ways. I wasn't emotionally affected by the death of 300 people. It's random and I knew no one there. The shooting happened in places I can happen to go, but it's as random as a plane crash (and in fact there's a higher probability to be killed in a plane crash than being hit in a terrorist event).
I checked upon friends (or waited for news)(yeah, I suck at maintaining friendship, I think you're kind of aware of that now) to be sure everyone was mostly safe. And then I waited for the political disaster that will ensure. Until the next Monday I really hoped that our politicians would do something clever, like calling for respect and fraternity and unity.
You called me naïve, but if I'm not that naïve, then I turn cynical. I tried very hard to shut down my inner voices warning me of what would come next. And since you told me that being cynical might hurt you, I try to avoid that. Also it's better for my moral and my depression.
And then our Beloved Socialist President of the Republican Democratic Palpatine ordered the Senate to vote the martial law … Mmm, no, I'm on the wrong movie here. It was the talk of Mr. Hollande in front of the congress - higher and lower chamber gathered at Versailles - when he asserted that we were at war. And that we need to form an alliance with Putin and Assad to fight ISIS. And that we need to extend and modify the State of Emergency, and the Constitution.
This is where I broke up. Syria is still a hard political subject for me. You know that since I talk a lot about it. You even asked me to get diagnosed because I might have some sort of trauma. SO, yes, this is where my emotions finally set me adrift.
What people call an emotional wave or surge is, in my case, a chaotic tsunami destroying anything that might be related to reason. That's my poison. That's what will kill me in the end. You matter there, in that you help me resurface in those situations and somehow freeze the emotional disaster.
We talked about it. I see no hope in our current situation. Warrantless searches and warrantless house arrests; a total halt of support of any kind for the refugees, who already had a hard time; suspension of the right to protest and, more generally, confiscation of the political debate by the politicians - Mr. Valls said he won't accept any discussion of the impact of social or economic factors on terrorism. That is what we live with now.
I mean, I'm used to seeing the army in the streets of Paris. In fact, I have never known the city without troops - the bombings of 1995 happened at a time when I wasn't in Paris much, and since then troops have always been in the streets. But now they're in battle dress, helmets and bulletproof vests, with far too many weapons for my sanity, etc.
The cops have changed too. They weren't on a short leash before, but now they're out for blood and revenge. Usually, even at the few forbidden protests I attended, there was always a way out if you asked nicely; they would let you go without hassle - they're basically filtering you to be sure you won't sucker-punch them, but in the end you can slip away before they arrest everyone. But on the 28th of November, there was no such escape. They wanted to fight.
There was a public announcement that unemployment was on the rise just before COP21. And no one in the government deemed it important to say anything about it. I mean, they're supposed to be socialists, for fuck's sake. They should at least say that they will work on a new way of counting unemployed people, or that they will do something about it. But they only speak about security, Mr. Valls even stating that "security is the first of liberties", which, ironically, is a quote JM. Le Pen used as a slogan for his presidential campaign back in the eighties.
We have a socialist prime minister defending a security-only programme, based on principles established by the far right.
That's about the state of our politics in France. But don't get me wrong, the FN is a bit worse than the PS, in that they will actually do what they say they're going to do, and they plan to cut funding for family planning (which depends largely on regional funding), among other nice stuff.
Politicians want me to vote to block the National Front, in a national movement against fascism. But I won't. I do not see the point of voting for a lack of response to social issues just for the sake of protecting us against fascism. Politicians who enabled the police state, who are asking for a republican merger, who say that young people in the suburbs should cultivate themselves, who plan to bomb people in collaboration with the Turkish, Russian and Syrian governments - all extremely democratic - who reduce democratic life to voting, who won't do a thing about unemployment: they want my vote to oppose fascism?
You see, my dearest friend, you asked me to look on the bright side. But it's more than hard to do that. You told me that bitterness is like Beaujolais Nouveau. You can drink a bit of it, it can even be good - though I disagree that Beaujolais Nouveau is ever a good wine - but too much of it will kill you. Or hurt you.
I don't know.
I work at La Quadrature du Net now. And I really try to avoid the repetitive self-destructive pattern that leads me to chain burnouts. Me or other staffers. Or you.
During the attacks on the 13th of November, I focused on the solidarity part of it. That's what I'm trying to do. That's why I keep informed on the Syrian situation by following the White Helmets.
But there's something absent from our political life in France. We have traditional organisations that cover for themselves without caring about anything but their own path to power: unions, political parties. We have old-style NGOs, advocating and lobbying behind the scenes. We have radical groups busy fighting cops. But we do not have organisations that leave room for fun. Activism in France is a serious business, and if you're not working yourself to death you're doing it wrong. And you end up with no one willing to take up the fight, to think about long-term strategies, to federate the smaller groups that exhaust themselves beyond repair.
And I hear you. I need to focus on the positive side. So that's what I'm trying to do. There's some good stuff happening. LQDN is finally getting a nicer and more inclusive community - there's still a lot of work to do, but it's in progress. I'm working there to build tools to bother our deputies - the piphone and similar stuff - and to provide tools that flatten the democratic process, or at least help information circulate.
And that's my target. You told me we're in for a long fight. I'm not even sure we can win it, and the nihilistic part of me keeps thinking it's useless. But since I'm trying not to kill myself, I need something. If I can bother an intelligence officer, a head of office somewhere, deputies or senators, ministers or heads of state, that's a win.
If, when they see us in the press or elsewhere, or when they hear about us, those people think "Oh no... not them again... my day is now ruined", then it's a win. It won't make them stop doing shit, but at least I'll smile when I think about all the pain they'll get.
And in the meantime, we should try harder to work with other small organisations specialised in other aspects of the fight. There's a lot to do with queer, feminist and anti-racist groups. And I really think that's where I can help - beyond the purely technical side.
So, you see, I'm trying to stop sipping the bitterness part of things. It's hard 'cause I've turned cynical/realist. And because I love the bitterness. But you're right. I should stop drinking it.
I'm happy you're here. Because at least I can talk to you. And there's this place too. This post is fucked up and makes no sense. But I think that's a bit like what political life looks like: socialists calling on voters to vote for traditionalists.
It's fucked up. But I'm gonna ignore that, because it's useless and I can't spend any more energy on that. I'll focus on building things.
Thanks for still being here.
First, a figure to set the record straight: 91%. That is the percentage of French people who did not vote for the FN. Fewer than one French person in ten gave this party their vote. And in fact, the FN being "the leading party in France" is not in itself a symbol of a rightward drift or a creeping radicalisation of French society. It is a symbol of the death of representative democracy, the ultimate sign that it no longer represents anything or anyone.
Yesterday, I did not vote. I will not go next Sunday either. Voting friend, I know that you probably despise me, that you want to scream at me, to tell me it is shameful, that people died so that I could vote, that because of me fascism could take hold. I do not hold it against you; I was just the same barely four years ago.
The stages of grief
You may know Elisabeth Kübler-Ross's five stages of grief. They do not necessarily have much scientific value, but they help sketch out certain emotional mechanisms. Let me list them for you:
Denial
Anger
Bargaining
Depression
Acceptance
Voting friend, I know you are already past the denial stage: you know perfectly well that representative democracy is dead. Otherwise you would vote for ideas that match your own, you would vote to move society forward, to give your opinion on the direction to take. But you do not do that: on the contrary, you vote "tactically", you vote to block a party, you vote for "the least bad". That is already an admission that the system is dead.
In fact, you oscillate between stages 2 and 3. Between anger at a system that could not care less about you, anger at the abstainers who will not play the game... and bargaining. "Come on, system, if I vote for the least bad, will you keep limping along? Come on, maybe if we vote PS this time, it will carry out a real left-wing policy? Come on, system, won't you keep pretending to work a little if I make some concessions on my side? If I set my convictions aside, will you agree not to be completely pathetic?"
Once again, I understand the principle; I was at exactly the same point during the last presidential election. I called on people to vote, I criticised the abstainers who allowed themselves to complain when, damn it, they had not even bothered to do their civic duty. I knew perfectly well that the PS in power would work no miracles, that fundamentally nothing would change compared with the UMP, except at the margins. But we had to choose the least bad. Representative democracy was already dead, and I knew it. Tactical voting had been drummed into us since before I even had the right to vote. Not to mention the 2005 referendum, which already reeked of the coffin. But I had not finished my grieving. And then Hollande got in.
The last shovelfuls of earth
I could never thank François Hollande enough. He helped me finish my grieving. By throwing my vote back in my face, by pressing my head deep into the stinking, decomposing remains of our political system. François Hollande's five-year term will have been the most perfect, most magnificent demonstration that voting is a scam and that the power of the people is one giant illusion. "Change is now!" Remember, the PS held every lever of power in 2012: the presidency, the Assembly, the cities, the regions... hell, even the Senate had swung to the left! A first! These guys had a free hand and carte blanche for everything. You should have heard Copé, that "deeply shocked" crybaby, explaining what an enormous danger such full powers represented. Fight finance? Tax income from capital like income from work? Ban the holding of multiple offices?
LOL NOPE.
Instead of that, we got the same crap as before. Sometimes worse. A race for growth even though we already produce too much for the planet. A race for full employment even though work is doomed to disappear (which, let me remind you, ought to be good news). A race for productivity even as burnout multiplies and workers' malaise becomes the norm. Cuts to what is hammered into us as "the cost of labour" but which any sensible employee should understand as "my standard of living". The methodical unravelling of public services that ought, on the contrary, to be strengthened.
We expected nothing from Hollande; he managed to do worse. Liberticidal laws in the name of a security they will not even guarantee. A State of Emergency of indefinite duration. Activists placed under house arrest for their convictions. Political demonstrations banned. Kids taken into police custody for not observing a minute's silence. Thank goodness all this is happening under a party that calls itself "republican", otherwise we might slowly start to worry.
You call me irresponsible because I did not go and vote on Sunday? I consider myself irresponsible for having legitimised our current government by voting in 2012. Since 2012 I have done what a lot of people did: I went through stage 4, depression. Telling myself that we were well and truly screwed, that if even a party claiming total opposition to the previous one could wallow to this extent in the same unbearable policies, there was no solution left. That democracy was dead, and that we were going to die with it. Voting friend, admit it, you had the same reaction. But as always, with every vote, you regress, you go back to stage 3, to bargaining, to telling yourself that maybe, by sitting on our convictions, we can tilt the system a little.
Me, I have moved past that. I am at stage 5, acceptance. Representative democracy is dead, full stop. Whether that is a good thing or not, the future will tell, but the fact remains: this system is dead. You think that going back to the bargaining stage means keeping hope alive, and that accepting the death of our system means despair. I disagree. Grieving is a good thing. It is even necessary in order to move on to something else and, at last, to go forward.
Democracy is dead, long live democracy!
You will notice that I keep adding "representative" when I talk about the death of democracy. Because I do not believe democracy itself is dead: I think real democracy has never lived in France. The system we live in is closer to an "elective aristocracy": we select our leaders from a panel of self-proclaimed elites that never changes, whereas democracy would have citizens take turns governing and being governed. The mere fact that we speak of a "political class" is the very negation of the notion of representation that is supposed to make our representative democracy work: logic would demand that these politicians come from the same classes they govern. Careful, let us not spit in the soup: our system is far better than a dictatorship, no doubt about it. But it is not a democracy. On this subject I refer you to the documentary J'ai pas voté, which everyone should watch before going for the throats of abstainers.
People died so that we could vote? No, they died because they wanted to give the people the right to self-determination, because they wanted democracy. Do we seriously think, watching the great freak show that election campaigns have become, that this is what people died for? So that clowns in ties can parade around for weeks before we all trudge off, heavy-hearted, to pick the one we hope will screw us over the least? I find this system far more insulting to the memory of those who fought for democracy than abstention is.
So yes, I have finished my grieving, and that allows me to have hope for what comes next. Because while the great political imposture carries on across the TV studios, we, citizens of all stripes, are trying to find solutions. The more time passes, the more people finish their own grieving, the more they take a genuine interest in politics and discover new political and social ideas: sortition, single non-renewable terms, basic income, and so on. Workable solutions, scraps of knowledge, of political culture... popular education, in short. Nothing says these solutions will work, but everything tells us the current system does not. And when this system collapses, it is these little scraps of knowledge scattered throughout the population that we will have to hold on to. The urgent task today is to spread these ideas in order to prepare what comes next. Voting friend, you have everything to gain by joining us, because you obviously have a political conscience and it is being wasted, used to tilt at windmills.
Our system is an old, half-broken computer. You can keep imagining that by reinstalling the same software (PS or LR, pick your side, comrade) it will eventually work. Others use the good old method of slapping the machine (the FN vote): we all know it achieves nothing and certainly will not improve the state of the computer, but it brings relief. Some imagine that by unbolting the case and hacking the system bit by bit, things will eventually move (the MP Isabelle Attard is a good example; personally I call her the outlier, the data point that does not fit the statistical model of the politician). That is not the worst of ideas. There has even been talk of rebooting France. Who knows, if we manage to put such a strategy together for 2017, I might even dig my voter's card out of the cupboard. But the most numerous, the abstainers, have given up on the old computer and are simply looking for a new one that works.
So what do we do? Let us be clear, I am like everyone else: I have no idea how we move on to something else, how we establish a real democracy. A democratic transition could happen gently, by modifying the institutions little by little: everyone would have something to gain. Politicians included, because the alternative may well be an explosion, and that is an alternative with a very uncertain outcome. But clearly, we are not heading towards a non-violent transition.
For my part, I continue to think that, as Asimov put it, "violence is the last refuge of the incompetent". But every day we see a little more clearly how powerless we are within this system, and today's politicians would be well advised to correct course before it is too late. Before the citizens rush into that last refuge.
Reading Bernard Stiegler's book "Aimer, s'aimer, nous aimer" (Galilée, 2003), one can come away with a feeling of discouragement.
In the book, the philosopher explains that FN voters are, like many of us in this sick society, victims of narcissistic disorders. Their particular way of coping is to designate scapegoats. It is a symptom, a way of venting their malaise.
It is impossible to argue with disorders and symptoms (only therapists know how). Journalists can therefore keep busying themselves, "fact-checking", investigating, trying to understand through profiles; they have no purchase on anything, I told myself.
I went to ask Bernard Stiegler what the press can and must do in the aftermath of the European elections, which saw the FN reach a score of 25% of the vote.
A conference this Saturday
Ars Industrialis, Bernard Stiegler's working group, is organising a conference this Saturday entitled "Extrême nouveauté, extrême désenchantement, extrême droite". It will take place at the Théâtre Gérard Philipe in Saint-Denis (Seine-Saint-Denis), from 2pm to 6pm. Admission is free.
Rue89: In "Aimer, s'aimer, nous aimer", you say that FN voters suffer from a lack of "primordial narcissism". Can they, then, change their minds the way a rational person would?
Bernard Stiegler: I talk with people from the Front National; there are even some I rather like. I will tell you quite frankly: some of them are actually pleasant. Most are not racists or antisemites, but deeply unhappy people. As for your question, though, the answer is no. I never try to dissuade them from voting for the Front National. The more I tried, the more they would vote for it. It is completely useless.
It is all the more ineffective because, in part, they are not wrong to express a suffering. The paranoiac, the psychotic, the neurotic never talk pure nonsense. There is always a kernel of truth. The problem is that this kernel of truth, in becoming pathological, expresses an illness that is not only these voters' illness: it is our society's.
What is specific to the pathology of Front National voters is that through their vote, whether they want it or not, they go after scapegoats.
How did you come to understand what makes them suffer?
In the book you mention, I talk about Richard Durn [the man responsible for the killings at the Nanterre municipal council in 2002, editor's note]. I became interested in the subject after reading an extract from his diary quoted in Le Monde, in which Durn said he had "lost the feeling of existing". Those words struck me enormously. I too sometimes have the feeling of not existing. And I too acted out: I robbed banks...
Reading the article, I told myself that this man was extremely dangerous, but that there were millions of us like him. And I told myself that one day the people who lose the feeling of existing, who are more and more numerous, would vote for the Front National instead of killing people or robbing banks.
I was scandalised by the attitude of Eva Joly who, the day after the first round of the presidential election, called the Front National's score an "indelible stain on the values of democracy". Those are shameful words.
How do you explain this liquidation of primordial narcissism?
It comes from the unlimited organisation of consumption through marketing and television. Have you seen the film "Le Festin de Babette" [Babette's Feast]? It is a magnificent story: a Frenchwoman living in Denmark decides to prepare an immense, sumptuous meal. The film recounts the preparation of this costly meal by a person of modest means, and it is extraordinary.
When I was a child, Sunday lunch mattered a great deal. It was common in working-class families to hold feasts, like Gervaise and her goose in "L'Assommoir". Hosting people, gathering together, is very important.
That is what consumerism has concretely destroyed: there is nothing left but ready-to-wear and ready-to-eat, junk food and no more feasting.
How can FN voters be made to realise that this suffering has nothing to do with immigration figures?
Telling them is pointless: they will never hear it. More precisely, they hear something else if you tell them that. They hear that you have not listened to their problem. And they are right. Going after a scapegoat [what Bernard Stiegler calls a "pharmakos" in "Pharmacologie du Front national", Flammarion, 2013, editor's note] is a symptom. It is a horrifying, extremely dangerous symptom, and Nazism was the exploitation of this symptom on the nightmarish scale of the twentieth century. Such horror can perfectly well return; it is even more than probable: if nothing decisive happens, that is what will end up happening. It is up to us to make sure it does not, but it is not by insulting FN voters that things will improve.
It is totally futile to tell people to stop symptomatising: they must be cared for, by which I mean taking care of them (in the sense I gave that word in "Prendre soin"), attending to them, giving them prospects, offering them a different discourse from that of François Hollande and Nicolas Sarkozy...
How can journalists take part in this care?
I think it is urgent for the press to take up its role again, which is to defend ideas, to make them confront one another and, in doing so, to build opinions. That means making political, aesthetic, intellectual and social choices, and owning them. Le Monde diplomatique still does this work, which is why I never miss a chance to read it, even though it often annoys me.
Today, despair is the Front National's stock in trade. To restore hope, we must give a voice to those who have something to say and are ready for public debate, and thereby rebuild thought, concepts and prospects, and socialise them.
The idea that people do not want to think is completely false: when the Collège de France put its lectures online, millions of hours of lectures were downloaded. Ars Industrialis, whose conferences are often difficult, has a very large audience. What people reject is not thought: it is empty official language, wherever it comes from.
If consumerist capitalism collapses and no work is done to invent an alternative to what was the basis of that consumerism, namely Fordo-Keynesianism ("growth"), which is now definitively exhausted, the far right will take hold everywhere, far beyond France and Europe.
Do you think that is about to happen?
In the 1980s something very important took place. There was the "conservative revolution", founded on the idea that it was better to liquidate the State and financialise capitalism while letting production develop outside the West, and that was the beginning of mass unemployment.
This liquidation created a mass insolvency concealed by subprime and credit default swap schemes that were highly profitable for speculators but ruinous for the economy, a hyperconsumerism that is extremely toxic for the environment, a great symbolic misery on the mental level, and a generalised precariousness producing a very real feeling of insecurity and a social disintegration.
This disintegration makes integration impossible, not for immigrants, but for the population itself as a whole, with immigrants obviously more exposed to it than anyone else.
The 2008 crisis laid bare this insolvency and this extreme, structural fragility. And it durably ruined trust, to which Snowden, but also Fukushima and many other catastrophes, have added their effects.
An economic system cannot function without trust, and there is no trust left. How could there be, when 55% of young Spaniards are unemployed and nobody cares, while automation is reducing employment in every sector and every country? Who talked about any of this during the European election campaign?
Supermarket cashiers are disappearing...
Yes, cashiers are no longer needed, and soon truck drivers will not be either, nor will many technicians, engineers and so on. What is coming is the disappearance of employment. Not a word on this question in the very recent Pisani-Ferry report, if the press is to be believed, any more than in the Gallois report of almost two years ago already... So much time lost! And so much fury building up!
Automation is now going to develop massively, in particular because digital technology makes it possible to integrate all sorts of previously isolated automatic processes, and because this is driving a rapid fall in the cost of robots.
Jeff Bezos, the boss of Amazon, is installing them everywhere in all his warehouses. Arnaud Montebourg announced a year ago that he was going to launch a French robotics plan.
The cost of automation will keep falling, and French SMEs will increasingly be able to adopt it, even if they do not want to, because of competition, and unemployment will soar. There is only one way to counter the proportional rise of the FN: to create an alternative to the Keynesian model, a contributory model.
Can you give a concrete example of a contributory model?
In the contributory economy there is no longer wage labour or industrial property in the classical sense. To give you an example, a few years ago I worked with fashion-design students on a model for a contributory fashion company. The company became a club of fashion lovers, some of whom contributed ideas, others purchases, others garment-making work, and others all of these at once or by turns.
In its distant era, now mythical and completely gone, the Fnac was a kind of cooperative where the sales staff were first and foremost music or photography enthusiasts, and where Fnac members were not consumers but amateurs.
There are people who express themselves extremely well in the way they dress. They have taste; they know how to put garments together. I think their knowledge can be shared and given value.
And how would they be paid?
It is not at the micro-economic level of the firm that this problem should be posed and solved: it is a macro-economic question, one that must go beyond the pairing of use value and exchange value and promote what we call practical value (that is, knowledge) and societal value (that is, what functionally reinforces solidarity).
What forms the basis of a contributory economy is the valuation, both mutual and by a reinvented public authority, of what Amartya Sen calls "capabilities", that is, practical know-how, social know-how and formal knowledge. It is in fact the generalisation of the model of France's intermittents du spectacle, who cultivate their knowledge with the help of their intermittent income and turn it to account when they go into production, a model that some now want to destroy at the very moment when its remarkably intelligent spirit ought to be generalised.
To come back to my question: what role can the press play in this reflection on the current economic model?
First, it should invent such contributory arrangements for itself. The press subsidy fund should serve that purpose, and journalists should fight for it. Next, the press must talk about automation and, more generally, about digital technology in a serious way, not as a "trend" or in the "geek" column, and it must not be in denial. Automation is coming; we must face up to it and stop saying that we are going to reverse the unemployment curve. Unemployment is going to rise considerably.
All sorts of people are thinking about scenarios that would allow us to enter a new world, in Latin America for example, but also in North America. They must be given a voice. And we must appeal to readers' intelligence rather than presuppose that all they want is scoops or sensational, vulgar information.
Nowadays the FN also presents itself as one of these alternative scenarios to ultraliberalism...
Yes, and it is very clever. This morning I was greatly surprised to read a statement by Florian Philippot [vice-president of the Front National, editor's note] defending the SNCF strike in Libération, in the name of public service. Imagine the dismay of the CGT and SUD unionists.
The Front National is an ultraliberal ideology disguised as anti-ultraliberalism. Jean-Marie Le Pen is an ultraliberal. He has always said so, and he is more so than ever. He is absolutely against the State, against civil servants.
As for Marine Le Pen, whatever she says, she needs ultraliberalism in order to grow: it is her breeding ground, because what attracts her voters to her is the designation of scapegoats, and what produces this search for scapegoats is ultraliberalism in the service of a drive-driven, speculative, financialised capitalism. What is the FN? The great specialist in inverted causality.
The FN lives off the idea that people's suffering can be blamed on immigrants, because nobody has the courage to lay out the real, new causal models that are needed.
The FN spreads fear by talking about the thousands of latent Mohamed Merahs. But don't the young people leaving for Syria suffer from the same narcissistic disorder as FN voters?
Of course. I have called this the Antigone complex. "Antigone" is an absolutely fundamental text.
I maintain that fundamentalist terrorists, whether of North African descent or white, born and raised in France, who all of a sudden start becoming Muslims, are little Antigones. I am not defending them by saying that. What I mean is that an adolescent needs to sublimate, and to do so, as always, "in the name of the law". Antigone is an adolescent who defends the "divine law". Merah, too, is an adolescent.
At a certain point, these kids need to identify with their father, then with a figure of rupture with the father, whom they then accuse of not embodying the law correctly and sincerely. So they look for other figures to identify with. But if they can no longer find any possibility of identification in society, and if they live in a society that is collapsing, they are ready to throw themselves into what I have called a negative sublimation, which can lead to the worst. These, again, are symptoms.
You can do whatever you like: if society does not quickly produce new capacities for positive identification with republican, constructive ideas that genuinely carry a future, this will keep developing, for a long time and inevitably.
My flight from Athens, where I had the pleasure of attending the first "reproducible world summit", landed at the scheduled time in Paris (CDG). While people were getting their bags and putting their coats on, a member of the cabin crew announced that a police check would take place outside the plane and that we should have our identity documents ready. My seat was at the back of the plane, so I had time to wait in the cold of the jetway while all the passengers were checked one after the other.
Three cops were set up right at the entrance of the terminal. One was taking cards or passports and looking at people's faces. He then passed them to another cop sitting in front of a suitcase that seemed to contain a scanner and a computer. The third one was just standing against the wall, watching. When it was my turn, after scanning my passport, the cop made this nice gesture where she started to move the hand holding the passport toward me, just as she had done a hundred times before, then pulled it back when the result appeared on the screen.
I had confirmation that I was registered as a dangerous political activist in 2012, when David Dufresne published Magasin Général. A report from the interior intelligence service, dating from 2008, was leaked to promote the book. My name had not been properly redacted from the very first version that went online, and it was associated with a self-organised political space in Dijon. Some Debian Developers had the pleasure of visiting that space in 2005, 2006, or 2007. The report was full of mistakes, like almost all police files, so I don't want to comment on it.
The good news is that since then, I have stopped being paranoid. I knew, and thus could take appropriate precautions. Just like every time I have to approach an airport, all my (encrypted) electronic devices were turned off. I had shaved a couple of hours before. I know a lawyer ready to represent me. I am fully aware that it's best to say as little as possible.
Still, it had been a while since I had had such a blatant confirmation that I was still a registered anarchist. It should not be a surprise, though. Once you are in, there's no way out.
I was then asked to step aside while they proceeded with the rest of the queue. I put my backpack down and leaned against the wall. Once they were over, one of the cops asked me to follow him. We walked through the corridors to reach the office of the border police. While we were walking, they asked me a series of questions. I'm not mentioning the pauses in between, but here's what I can remember:
— Do you have a connection?
— No.
— Are you going to Paris?
— To my parents' in the suburbs.
— How long have you been staying in Greece?
— 5 days.
— (flipping the pages of my passport) And you came back from the U.S. in February 2015?
— No, that was the maximum stay. I was there in August 2014.
— Why were you in Greece? Vacations?
— Work. I was at a meeting.
— What do you do?
— Free software.
— What is that?
— I am a developer.
— Oh, computers.
— Yes.
— Is that why you also were in the US?
— Yes, it was another conference.
— And so you travel because of that. That's nice.
— …
— Are you a freelancer?
— I work with a coop, but yes.
The cop also commented that they had to do some simple checks and that they would then let me go as I was coming back. I did not trust this but said nothing.
We then passed through a door which the cop had to unlock with their badge. I was asked to sit on a chair in the corridor between two offices (as far as I could see). I could hear one cop explaining the situation to the next: “— Il a une fiche. — Ah, une fiche.” (“He has a file.” “Ah, a file.”) They seemed quite puzzled that I had not been checked when I flew out on Monday.
After a few minutes, another cop came back asking me for my boarding pass. A few minutes later, he came back asking whether the address on my passport was still valid. I replied “no”. They gave me a piece of paper asking for my current address, a phone number and an email address. As this information is all easy to find, I thought it was easier to comply. I gave the @irq7.fr address that I use for all public administrations. When the cop saw it, he asked:
— What is it?
— I don't understand.
— Is it your company?
— It is a non-profit.
He gave me my passport back and showed me the way out.
(I will spare you the details of the conversation I had to listen to while waiting between two cops, about how one of them loved building models of military weapons used in wars against communism, because of his origins, and how he was pissed off because fucking Europe had banned some of the (toxic) paints he was used to.)
To the best of my understanding, what happened is that they made a phone call and were just asked to update my personal details by the intelligence service.
I don't know, but I'm left wondering whether all these people were checked only because I was on the flight.
All in all, this didn't take too long: one hour after leaving the plane I was on the platform for the regional trains. The cops stayed polite the whole time. I am privileged: a French citizen, white, able to speak French with a teacher's accent. I am pretty sure it would not have gone so smoothly if I had been sporting a long beard or a djellaba.
I took the time to document this because I know too many people who think that what the French government is doing doesn't concern them. It does. For a couple of years now, antiterrorism has been how governments keep people in check. But we are reaching a whole new level. We are talking about cops keeping their guns while off duty, house searches at any hour without judicial oversight, and a government that wants to change the constitution to make the “state of emergency” permanent. We have seen so many abuses in just two weeks. It will not go well. Meanwhile, instead of asking themselves why young people are killing others and themselves, state officials prefer dropping bombs. Which will surely deter people who are ready to die from using suicidal tactics, right?
We are at the dawn of an environmental crisis that will end humanity. Every human on this planet is concerned. People get beaten up when they march to pressure governments to do something about it. We need to unite and resist. And yes, we are going to get hurt but freedom is not free.
In La Médiocratie (Lux), the philosopher Alain Deneault criticises the mediocrity of a world in which everything is now done solely to satisfy the market. An interview.
"The mediocre are back in the fertile valley," the journalist Daniel Mermet told Les Inrocks when he was pushed out of France Inter in June 2014. The philosopher Alain Deneault, looking at the global picture, goes further: "There has been no storming of the Bastille, nothing comparable to the Reichstag fire, and the Aurora has yet to fire a single shot. And yet the assault has well and truly been launched and crowned with success: the mediocre have taken power." It is this silent revolution that he analyses at length in La Médiocratie (Lux), a hard-hitting book. Passing through Paris, this political science lecturer at the Université de Montréal explained his thinking to us.
How, in your view, did the mediocre take power? Since when has being average been something to value?
Alain Deneault: The genealogy of this seizure of power has two branches. One goes back to the nineteenth century, to the time when "trades" were progressively turned into "jobs". That presupposed a standardisation of work, that is, turning it into something average. A kind of standardised average was generated, required in order to organise work on a large scale in the alienating mode we know, the one described by Marx. This average work was made into something disembodied, which loses its meaning and is no longer anything but a "means" for capital to grow and for workers to subsist.
The other branch of this seizure of power lies in the transformation of politics into a culture of management. The progressive abandonment of great principles, of orientations and of coherence, in favour of a case-by-case approach in which only "partners" intervene on narrowly defined projects, with no notion of the common good, has turned us into citizens who "play the game", who bend to all sorts of practices foreign to the fields of conviction, competence and initiative. This art of management is called "governance".
These two phenomena led thinkers in the twentieth century to observe that mediocrity was no longer a marginal affair, a matter of not especially bright people managing to make themselves useful, but that it had become a system. As a professor, an administrator or an artist, you are forced to bend to hegemonic rules in order to subsist.
At the political level, the consequence is that every issue is analysed through the lens of problem solving. What is happening in France right now is emblematic: in response to the terrorist attacks, we bomb, we respond with a strategy of the "solution" in the surgical sense of the word, when we ought to step back and be more subtle.
Should the advent of mediocracy be linked to the liberal revolution of the 1980s, to conformism within companies and to the disciplining of the world of work that followed from it?
Yes, all the more so because the governance put in place by Margaret Thatcher's technocrats turned ultraliberalism into the "realistic" approach. The neoliberal option is no longer an option, but something as normal as breathing. Governance has managed to disguise ultraliberal ideology as knowledge, as a way of living in society, as if it were the ground from which we should deliberate, when it ought to be the object of deliberation.
We no longer talk about the common good; we act as if the general interest were nothing more than the sum of particular interests that each of us is periodically invited to defend. We are reduced to being the little lobbyists of our private interests, or of our clan's interests. That is where the culture of scheming and shady arrangements takes root.
According to you, "the expert is the central figure of mediocracy". How do you explain this paradox?
The expert does not simply make knowledge available to people who are deliberating. He is an ideologue who disguises a discourse of interests as knowledge. At university, a student will now have to ask, when choosing a path, whether to become an expert or an intellectual, knowing that expertise will above all consist in selling one's brain to actors who have an interest in calibrating the output of our intellectual work in a particular direction, so as to satisfy particular interests.
You quote in this connection the rector of the Université de Montréal, who said in 2011: "Brains must match the needs of business."
Exactly. It is like Patrick Le Lay [former CEO of TF1, editor's note], who declared in 2004: "What we sell to Coca-Cola is available human brain time." This rector sees his institution, one of the most important universities in the French-speaking world, as a company that sells brains to industry. Industry, moreover, holds several seats on the university's board and therefore partly decides its direction.
We live in a world where knowledge is generated to satisfy business, whereas the role of intellectuals is to make business an object of thought. Edward Said puts it very well: the expert does not concern himself with what his knowledge produces. You can perfectly well be a geologist, go looking for zinc or copper in Katanga, and yet be totally incompetent when it comes to thinking through the consequences of that activity at the scale of the Congo. Industry does not want experts to be competent in that way, because it is not in its interest.
The intellectual, by contrast, acts as an "amateur", that is, someone who loves his subject and feels concerned by all its dimensions, which necessarily calls for interdisciplinarity.
You explain that political discourse has been colonised by a centrist vocabulary, that of "governance". Isn't what you deplore under the term "mediocracy" ultimately the end of utopias?
I would not go that far. What has developed is not a centrist terminology but one of the extreme centre, which is almost the opposite. A centrist discourse knowingly places itself on a left/right axis, whereas the discourse of the extreme centre tolerates nothing but itself. It does not position itself on a spectrum; rather, it denies the spectrum's very reality and legitimacy.
The proponents of governance are far from moderate, whatever their vocabulary might suggest. They are modern-day sophists, skilled at coaxing the unions by making them believe they want to take their aspirations into account at "social conferences". In reality they are working to win the unions over to their own positions from the outset. Their supposed synthesis is in fact a radical discourse, often in step with inegalitarian and antidemocratic practices. An order that endangers 80% of ecosystems, and that allows the richest 1% to hold 50% of the world's assets, is anything but moderate.
Mediocracy does indeed seem endowed with a formidable capacity to depoliticise everything, even though what it proposes is radical: you cite, for example, Bill 78, strictly restricting the right to demonstrate, which was passed in Quebec in 2012. How can society be repoliticised?
I campaign for a return to words invested with meaning, all those that governance has sought to abolish, caricature or co-opt: citizenship, the people, conflict, classes, debate, collective rights, public service, the common good... These notions have been turned into "partnership", "civil society", "corporate social responsibility", "social acceptability", "human security" and so on. So many catch-all terms that have expelled from the political field rational references that meant something. The word "democracy" itself is gradually being replaced by "governance". These words deserve to be rehabilitated, like "patient", "user", "subscriber" and "viewer", all of which have been replaced by "customer". This reduction of everything to commercial logic abolishes politics and leads to the fading away of the references that allow people to act.
We do not get to choose between acting and thinking: when we have acted, it is because we have thought, and to think we need the right words. What is being destroyed is not utopias but traditions that have mobilised people throughout history. Today, states are nothing more than the partners of companies that enjoy an equivalent status. Small interests are grafted onto big ones, and in the meantime there is nothing held in common.
Is COP21 a good example of this process of governance, given that it is sponsored by companies and banks that engage in tax avoidance?
What is emblematic of governance in COP21 is the whole preparatory process, which consisted in putting on the agenda as many proposals from environmentalists as from Total. Since, from a climate point of view, gas is better than oil, Total proposes to convert itself, even if that means ruining the water tables. That is governance: the organisation of a power struggle is dressed up as a great society-wide debate in which, to erase any opposition, the strongest try to get the weakest to sign up to their projects in a masquerade of consultation and deliberation.
Rancière has written that we are all equally endowed with what is required to govern. Is sortition a way of reasserting the idea of the common good?
Rancière was my thesis supervisor. Sortition should not be seen as a panacea. What is interesting is the whole body of thought underlying the proposal. In La Haine de la démocratie, Rancière develops this idea without any activist agenda. If, for example, in France, Quebec or Canada the Senate were chosen by lot, that would considerably change where people stand. We would have a different relationship to our institutions. We would then rediscover that, where the general issues of public life are concerned, no one is more competent than anyone else.
Rancière is right to say, in this respect, that very few people are democrats. The word has come to be so awkward, despite the abusive uses that have been made of it, that it is being replaced by "governance", which sits more comfortably with those who want to use consultation and opinion for the purposes of manipulation.
Personally, I am not in favour of great utopian leaps. We are not going to select all our representatives by lot overnight. Starting with the Senate, an upper chamber that has only a blocking power and no power of proposal, would reassure people. It would be a way of giving citizens responsibility, provided mechanisms are invented to ensure there is no influence peddling.
Since the attacks of 13 November, the fight against terrorism has made critical voices rather inaudible in general. It encourages the population to hand the common good over to a government, or even to a providential man, rather than to take hold of it themselves...
That is most certainly what the government aspires to. The fight against terrorism is a conceptual absurdity, equivalent to saying that we are going to fight grenades. To say we are waging war on terrorism is to erect a martial discourse against an adversary who, once again, "has no face", which is a godsend for those in power. From a tactical point of view it is madness, because under current historical conditions it will generate even more tension, which risks exposing the French even more to the barbarity that unfolded on 13 November. Domestically, it will lead the country to adopt ever more drastic and liberticidal emergency measures against ever vaguer adversaries. And, in the end, to suspend the thing that is so unbearable to people in power: democracy.
What we are living through deserves that each of us sit for a moment at the terrace of ourselves and raise our head to look at the society we live in. And who knows... perhaps a little further off, in a scrap of white sky caught between the buildings, we will glimpse the society we hope for.
We are at war.
We are at war against a harmful and extremely violent system (cache) that we feed every day, built around capitalism. A system that deepens inequalities and leads to a loss of identity that some go looking for in fanaticism or religion. Yet there are alternatives for less degrading work.
We are at war against our animal nature, which is why there are rapes, mass graves, mutilations and murders. Every day. Our human nature needs kindness, education and a shared vision in order to prevail. It asks us to step back from our emotions.
We are at war against our own self-destruction as a species. Conflicts are climate-driven (cache) and cannot subside in an ecosystem that is changing at such speed. The danger is not overpopulation but the over-concentration of that population, which crystallises tensions and hatreds.
We have been at war forever; it is what pushes us to progress. Digital technology is only a catalyst that shrinks time and space, confiscating our attention in the service of marketing. It can be turned to other ends: to the common good, to mutual aid, to distributed thinking.
So yes, we are at war, and this war is called living together in a finite space. I do not believe in a lost carefreeness that we should try to recover (cache), even collectively. Let us seize this chance to become fully aware of this state, together, with hindsight and discernment. We rebuild this world every day, and I believe in our capacity to make it evolve so as to bring together the conditions for a dignified life for everyone. The choice is ours: to see our children live in peace, or rest in peace.
‘He turned out to be the same as every other politician.’ That was the complaint I kept hearing in Athens shortly after the leftist Prime Minister Alexis Tsipras had signed up to exactly the kind of bailout deal he once vehemently opposed. In reality, it was not so much that he was the same as other politicians as that he faced the same constraints as other Greek politicians: a failing economy, implacable creditors and frustrated fellow leaders. It was these external constraints that compelled him to act against his campaign rhetoric. The fact that Athenians were surprised and dismayed by this reflects a deeper problem with contemporary Western democracy – one to which their ancient forebears knew the solution.
Modern states are plagued by the problem of ‘rational ignorance’. The chance that any individual’s vote will make a difference is so vanishingly small that it would be irrational for anyone to bother taking a serious interest in the issues and candidates. And so, many people don’t – and then fall for implausible rhetoric. In this way, democracy has come to mean little more than electing politicians on the basis of their promises, then watching them fail to keep them.
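To get a feel for just how vanishingly small that chance is, here is a minimal sketch (not from the original article) using a toy binomial model in which every other voter is assumed to flip a fair, or nearly fair, coin; the electorate size and support levels below are purely illustrative.

```python
# Toy estimate of the probability that a single ballot decides an election,
# under an assumed binomial model: every other voter independently supports
# one side with probability p. All numbers below are illustrative only.
from math import exp, lgamma, log

def pivotal_probability(n_other_voters: int, p_support: float = 0.5) -> float:
    """Probability that the other voters split exactly evenly, so that
    one extra ballot breaks the tie. Computed in log space to avoid underflow."""
    n = n_other_voters - (n_other_voters % 2)  # a tie needs an even count
    half = n // 2
    log_p = (lgamma(n + 1) - 2 * lgamma(half + 1)
             + half * log(p_support) + half * log(1 - p_support))
    return exp(log_p)

# Even in a modest electorate of 100,000 the chance is about 0.25%,
# and it collapses to practically nothing once support is not exactly 50/50.
print(f"{pivotal_probability(100_000):.1e}")        # ~2.5e-03
print(f"{pivotal_probability(100_000, 0.51):.1e}")  # ~5e-12
```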
This was not the case in the Athens of two and a half thousand years ago. Then, democracy – rule by the people – meant active participation in the running of the state, if not continually, then at least periodically throughout one’s life. As Aristotle put it: ‘to rule and be ruled in turn.’ This participation was a right but also a responsibility. It was intended not only to create a better state, but to create better citizens: engagement in the political process was an education in the soberingly complex realities of decision-making.
Male citizens were expected to serve not only in the army or on juries, as is the case with some modern states, but also to attend the main decision-making assembly in person. And while some executive offices were elected, most were selected by lottery – including that of Prime Minister, whose term of office was one day. Any male citizen could find himself representing his community, or receiving foreign dignitaries.
There is much we can learn from this. Modern states are of course much bigger than ancient Athens and, thankfully, have wider suffrage. Numbers alone mean we couldn’t all be Prime Minister for a day. But there is scope for government at every level – local, regional, national and international – to be made radically participatory. For example: legislative bodies could be wholly or partially selected by lottery. Obvious candidates are second chambers in bicameral parliaments, such as the British House of Lords.
Even better might be separate assemblies summoned to review each proposed new law or area of government. This would hugely increase the number of people involved in the legislative system. The ancient Athenians managed exactly this; today, digital technology would make it even easier.
In addition, civil services could be more open to internships and short-term appointments. Serving the state in this way (at some level) could even be compulsory, just as military service has been. Ad hoc one‑issue assemblies, citizen initiative programmes and wider consultation on legislation are all ways of including more people in the decision-making process.
Democracy need not mean voting for politicians who all turn out to be the same. It can mean ordinary people actively participating in governing themselves. As Aristotle knew, this would make for both wiser decisions and wiser citizens.
In the past few decades, the fortunate among us have recognised the hazards of living with an overabundance of food (obesity, diabetes) and have started to change our diets. But most of us do not yet understand that news is to the mind what sugar is to the body. News is easy to digest. The media feeds us small bites of trivial matter, tidbits that don't really concern our lives and don't require thinking. That's why we experience almost no saturation. Unlike reading books and long magazine articles (which require thinking), we can swallow limitless quantities of news flashes, which are bright-coloured candies for the mind. Today, we have reached the same point in relation to information that we faced 20 years ago in regard to food. We are beginning to recognise how toxic news can be.
News misleads. Take the following event (borrowed from Nassim Taleb). A car drives over a bridge, and the bridge collapses. What does the news media focus on? The car. The person in the car. Where he came from. Where he planned to go. How he experienced the crash (if he survived). But that is all irrelevant. What's relevant? The structural stability of the bridge. That's the underlying risk that has been lurking, and could lurk in other bridges. But the car is flashy, it's dramatic, it's a person (non-abstract), and it's news that's cheap to produce. News leads us to walk around with the completely wrong risk map in our heads. So terrorism is over-rated. Chronic stress is under-rated. The collapse of Lehman Brothers is over-rated. Fiscal irresponsibility is under-rated. Astronauts are over-rated. Nurses are under-rated.
We are not rational enough to be exposed to the press. Watching an airplane crash on television is going to change your attitude toward that risk, regardless of its real probability. If you think you can compensate with the strength of your own inner contemplation, you are wrong. Bankers and economists – who have powerful incentives to compensate for news-borne hazards – have shown that they cannot. The only solution: cut yourself off from news consumption entirely.
News is irrelevant. Out of the approximately 10,000 news stories you have read in the last 12 months, name one that – because you consumed it – allowed you to make a better decision about a serious matter affecting your life, your career or your business. The point is: the consumption of news is irrelevant to you. But people find it very difficult to recognise what's relevant. It's much easier to recognise what's new. The relevant versus the new is the fundamental battle of the current age. Media organisations want you to believe that news offers you some sort of a competitive advantage. Many fall for that. We get anxious when we're cut off from the flow of news. In reality, news consumption is a competitive disadvantage. The less news you consume, the bigger the advantage you have.
News has no explanatory power. News items are bubbles popping on the surface of a deeper world. Will accumulating facts help you understand the world? Sadly, no. The relationship is inverted. The important stories are non-stories: slow, powerful movements that develop below journalists' radar but have a transforming effect. The more "news factoids" you digest, the less of the big picture you will understand. If more information leads to higher economic success, we'd expect journalists to be at the top of the pyramid. That's not the case.
News is toxic to your body. It constantly triggers the limbic system. Panicky stories spur the release of cascades of glucocorticoid (cortisol). This deregulates your immune system and inhibits the release of growth hormones. In other words, your body finds itself in a state of chronic stress. High glucocorticoid levels cause impaired digestion, lack of growth (cell, hair, bone), nervousness and susceptibility to infections. The other potential side-effects include fear, aggression, tunnel-vision and desensitisation.
News increases cognitive errors. News feeds the mother of all cognitive errors: confirmation bias. In the words of Warren Buffett: "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact." News exacerbates this flaw. We become prone to overconfidence, take stupid risks and misjudge opportunities. It also exacerbates another cognitive error: the story bias. Our brains crave stories that "make sense" – even if they don't correspond to reality. Any journalist who writes, "The market moved because of X" or "the company went bankrupt because of Y" is an idiot. I am fed up with this cheap way of "explaining" the world.
News inhibits thinking. Thinking requires concentration. Concentration requires uninterrupted time. News pieces are specifically engineered to interrupt you. They are like viruses that steal attention for their own purposes. News makes us shallow thinkers. But it's worse than that. News severely affects memory. There are two types of memory. Long-range memory's capacity is nearly infinite, but working memory is limited to a certain amount of slippery data. The path from short-term to long-term memory is a choke-point in the brain, but anything you want to understand must pass through it. If this passageway is disrupted, nothing gets through. Because news disrupts concentration, it weakens comprehension. Online news has an even worse impact. In a 2001 study two scholars in Canada showed that comprehension declines as the number of hyperlinks in a document increases. Why? Because whenever a link appears, your brain has to at least make the choice not to click, which in itself is distracting. News is an intentional interruption system.
News works like a drug. As stories develop, we want to know how they continue. With hundreds of arbitrary storylines in our heads, this craving is increasingly compelling and hard to ignore. Scientists used to think that the dense connections formed among the 100 billion neurons inside our skulls were largely fixed by the time we reached adulthood. Today we know that this is not the case. Nerve cells routinely break old connections and form new ones. The more news we consume, the more we exercise the neural circuits devoted to skimming and multitasking while ignoring those used for reading deeply and thinking with profound focus. Most news consumers – even if they used to be avid book readers – have lost the ability to absorb lengthy articles or books. After four, five pages they get tired, their concentration vanishes, they become restless. It's not because they got older or their schedules became more onerous. It's because the physical structure of their brains has changed.
News wastes time. If you read the newspaper for 15 minutes each morning, then check the news for 15 minutes during lunch and 15 minutes before you go to bed, then add five minutes here and there when you're at work, then count distraction and refocusing time, you will lose at least half a day every week. Information is no longer a scarce commodity. But attention is. You are not that irresponsible with your money, reputation or health. Why give away your mind?
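(A quick back-of-the-envelope check of that "half a day" claim, for the sceptical: the sketch below takes the three 15-minute sessions from the paragraph above, while the number of incidental checks and the cost of refocusing after each interruption are illustrative assumptions of mine, not figures from the article.)

```python
# Rough check of the "at least half a day every week" claim.
# The three 15-minute sessions come from the paragraph above; the
# incidental checks and the refocusing cost are illustrative assumptions.

deliberate = 15 + 15 + 15   # morning paper, lunch, before bed (minutes)
incidental = 3 * 5          # assumed "five minutes here and there" at work
refocusing = 6 * 5          # assumed ~5 minutes to refocus after each of ~6 interruptions

minutes_per_day = deliberate + incidental + refocusing
hours_per_week = minutes_per_day * 7 / 60
print(f"{minutes_per_day} min/day -> {hours_per_week:.1f} hours/week")
# 90 min/day -> 10.5 hours/week, i.e. more than half of a 16-hour waking day.
```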
News makes us passive. News stories are overwhelmingly about things you cannot influence. The daily repetition of news about things we can't act upon makes us passive. It grinds us down until we adopt a worldview that is pessimistic, desensitised, sarcastic and fatalistic. The scientific term is "learned helplessness". It's a bit of a stretch, but I would not be surprised if news consumption, at least partially, contributes to the widespread disease of depression.
News kills creativity. Finally, things we already know limit our creativity. This is one reason that mathematicians, novelists, composers and entrepreneurs often produce their most creative works at a young age. Their brains enjoy a wide, uninhabited space that emboldens them to come up with and pursue novel ideas. I don't know a single truly creative mind who is a news junkie – not a writer, not a composer, mathematician, physician, scientist, musician, designer, architect or painter. On the other hand, I know a bunch of viciously uncreative minds who consume news like drugs. If you want to come up with old solutions, read news. If you are looking for new solutions, don't.
Society needs journalism – but in a different way. Investigative journalism is always relevant. We need reporting that polices our institutions and uncovers truth. But important findings don't have to arrive in the form of news. Long journal articles and in-depth books are good, too.
I have now gone without news for four years, so I can see, feel and report the effects of this freedom first-hand: less disruption, less anxiety, deeper thinking, more time, more insights. It's not easy, but it's worth it.
Behind this dynamic is a monoculture of money optimizing for more money. An investment mentality that hollows out our culture. Real estate is just one example. It’s happening across many segments of our society. And in each case, the existing community pays the price for the investor’s upside.
There are different forms of this dynamic.
A New York Times investigation found that just 158 families have provided nearly half the funding for presidential campaigns. What better investment than your own politician?
In music, 80% of the concert industry is owned by Ticketmaster. A diverse universe of record labels is steadily consolidating down. A shocking percentage of Top 40 hits are written by four Scandinavian men.
In Hollywood, it’s sequels, prequels, and risk-averse exploitations of existing IP — now in IMAX and 3D!
In tech, many investors’ first question for entrepreneurs is “what’s your exit strategy?” Big rounds, big burn rates, and big valuations push startups in the same direction. Maximize growth so you can eventually maximize money for yourself and somebody else.
When everyone is optimizing for money, the effects on society are horrific. It produces graphs that are up and to the right for all the wrong reasons.
We can’t assume that this will work itself out. As money maximization continues, all of us — and the poor and disempowered especially — face a bleak future. This model is only interested in supporting those that can afford to buy in.
It feels like we’ve been auto-subscribed to a newsletter that’s sending increasingly depressing emails. How do we get off this ride?
Do we stay opted in? Or do we opt out?
If you stay opted in and play the game, the ultimate best case is you’re one of the few that gets rich. Later you can give some money away to charity. But other than your bank account, little has changed. The existing structure is reinforced.
Do we opt out? Imagining opting out is emotionally satisfying.
“I might delete Facebook today.”
“I’ll go back to my Razr phone.”
“Maybe I’ll try homesteading.”
But to do any of these means becoming a ghost to your community. It’s impractical. Very few of us ever follow through.
Is there a third option? I think so. I don’t have a fully-fledged plan, but I have some thoughts on where we can start.
Number 1: Don’t sell out.
At some point in the past ten years, selling out lost its stigma. I come from the Kurt Cobain/“corporate rock still sucks” school where selling out was the worst thing you could ever do. We should return to that.
Don’t sell out your values, don’t sell out your community, don’t sell out the long term for the short term. Do something because you believe it’s wonderful and beneficial, not to get rich.
And — very important — if you plan to do something on an ongoing basis, ensure its sustainability. This means your work must support your operations, and that you don't try to grow beyond that without careful planning. If you do those things, you can easily maintain your independence.
Number 2: Be idealistic.
Always act with integrity. Really be clear about the things that drive you. Remember the lessons your parents and grandparents taught you about how to treat people and make sure your business lives up to that.
Don’t sink into the morass of “industry standards.” Don’t succumb to the inertia of the status quo. Don’t stop exploring new ideas. A small number of people can change how society works. It’s happened before and it will happen again.
There are some great examples to look to for inspiration.
Patagonia is a Benefit Corporation that will share proprietary information with competitors if it will help the environment.
REI is a co-op that announced they’re closed on Black Friday and they’re encouraging their employees to “opt-outside” instead.
Basecamp and the Hype Machine are independent software companies that put their products and life experience ahead of creating massive growth curves. Ten years in and they’re independent and going strong.
Another inspiration is Fugazi and their label, Dischord Records. From playing all-ages $5 shows to running an independent label for 30 years, we can recontextualize them as entrepreneurial heroes. Look at that photo — that could be a founding tech team. There’s even an office dog!
What these businesses have in common is that they are clear on their purpose and they follow a strict code in its pursuit. They don’t want to be everything to everybody. They just want to be themselves.
This thinking is very contrary to the current business zeitgeist, which is all about aggression and being big and fast. Everyone wants to be Napoleon. And we all know how that turned out.
Look at the language on that cover: “be paranoid,” “go to war.” Its violence suggests that being ruthless is the only way to survive. We all hear this tone all around us.
When I became the CEO of Kickstarter two years ago, this tone created a crisis for me. I had never approached my work as something to be done aggressively, but with the weight of the new job and those external voices on my shoulders, I suddenly had doubts. Is that who I needed to be as CEO? Everywhere I looked I saw messages of anxiety and fear. I questioned my instincts and who I was as a person.
Then I read Not For Bread Alone. Konosuke Matsushita ran a company in Japan for many years with a clear ethos. His philosophy was to always act creatively and with integrity, to pursue a positive impact on society, and to encourage collaboration among his team. It’s an ethos that’s as right today as it was then. It confirmed that I didn’t have to play the fear game.
Approaching your work with thoughtfulness at the core is challenging. You're going against the grain. Your tools of measurement are very different from your peers'. It's easy to doubt yourself — I still do it all the time.
But in more important ways, it’s so much easier. You’re free to act with conviction. You can say and do what you believe is right. Your principles will still be tested, but you can respond in ways that will make you, your community, and your family proud.
It’s not about conquering the world, it’s about doing the right thing. When done correctly, this creates the ultimate product-market fit.
Community supported agriculture is a great example of this. A farm produces its crop for a community of people who receive the bounty every week. The value created and shared is balanced.
We want Kickstarter to be similarly in sync with society. Earlier this year we became a Public Benefit Corporation. This means we are legally obligated to consider the impact of our decisions on society, not just our shareholders. It's very different from the expectation that for-profit companies maximize shareholder value above all. It acknowledges and embraces that you are a part of a larger community.
We don’t expect everyone doing a Kickstarter project to become a Public Benefit Corporation, or to even care. We want artists and creators to be able to create and build for their own reasons — not just for money. No single mentality is forced on anyone. It’s a polyculture of aspirations and motivations — just as it should be.
Walking around NYC and seeing a bank on every corner is depressing, but the monoculture’s reign is impermanent. As more of us challenge the status quo, change will spark and spread. The hollowness and corruption of the pursuit of profit above all is obvious to even those who practice it. A new approach founded on a diversity of thought and experience can and will thrive.
I don’t know what the exact right steps are to change all of this. This is just me thinking out loud about something that doesn’t get talked about enough. My hope in sharing it is that someone here can build on these ideas, and make them even better. Ultimately this is going to have to be a group effort.
But we want to be very clear on where we at Kickstarter stand on this. Internally we have a Mission & Philosophy handbook that was written by our founder, Perry Chen. Its final page says it all:
Thanks for your time, and thanks for listening.
The mind…can make a heaven of hell, a hell of heaven. ― John Milton
The mind is certainly its own cosmos. — Alan Lightman
You go to school, study hard, get a degree, and you’re pleased with yourself. But are you wiser?
You get a job, achieve things at the job, gain responsibility, get paid more, move to a better company, gain even more responsibility, get paid even more, rent an apartment with a parking spot, stop doing your own laundry, and you buy one of those $9 juices where the stuff settles down to the bottom. But are you happier?
You do all kinds of life things—you buy groceries, read articles, get haircuts, chew things, take out the trash, buy a car, brush your teeth, shit, sneeze, shave, stretch, get drunk, put salt on things, have sex with someone, charge your laptop, jog, empty the dishwasher, walk the dog, buy a couch, close the curtains, button your shirt, wash your hands, zip your bag, set your alarm, fix your hair, order lunch, act friendly to someone, watch a movie, drink apple juice, and put a new paper towel roll on the thing.
But as you do these things day after day and year after year, are you improving as a human in a meaningful way?
In the last post, I described the way my own path had led me to be an atheist—but how in my satisfaction with being proudly nonreligious, I never gave serious thought to an active approach to internal improvement—hindering my own evolution in the process.
This wasn’t just my own naiveté at work. Society at large focuses on shallow things, so it doesn’t stress the need to take real growth seriously. The major institutions in the spiritual arena—religions—tend to focus on divinity over people, making salvation the end goal instead of self-improvement. The industries that do often focus on the human condition—philosophy, psychology, art, literature, self-help, etc.—lie more on the periphery, with their work often fragmented from each other. All of this sets up a world that makes it hard to treat internal growth as anything other than a hobby, an extra-curricular, icing on the life cake.
Considering that the human mind is an ocean of complexity that creates every part of our reality, working on what’s going on in there seems like it should be a more serious priority. In the same way a growing business relies on a clear mission with a well thought-out strategy and measurable metrics, a growing human needs a plan—if we want to meaningfully improve, we need to define a goal, understand how to get there, become aware of obstacles in the way, and have a strategy to get past them.
When I dove into this topic, I thought about my own situation and whether I was improving. The efforts were there—apparent in many of this blog’s post topics—but I had no growth model, no real plan, no clear mission. Just kind of haphazard attempts at self-improvement in one area or another, whenever I happened to feel like it. So I’ve attempted to consolidate my scattered efforts, philosophies, and strategies into a single framework—something solid I can hold onto in the future—and I’m gonna use this post to do a deep dive into it.
So settle in, grab some coffee, and get your brain out and onto the table in front of you—you’ll want to have it there to reference as we explore what a weird, complicated object it is.
The Goal
Wisdom. More on that later.
How Do We Get to the Goal?
By being aware of the truth. When I say “the truth,” I’m not being one of those annoying people who says the word truth to mean some amorphous, mystical thing—I’m just referring to the actual facts of reality. The truth is a combination of what we know and what we don’t know—and gaining and maintaining awareness of both sides of this reality is the key to being wise.
Easy, right? We don’t have to know more than we know, we only have to be aware of what we know and what we don’t know. Truth is in plain sight, written on the whiteboard—we just have to look at the board and reflect upon it. There’s just this one thing—
What’s in Our Way?
The fog.
To understand the fog, let’s first be clear that we’re not here:
[image: Evolution]
We’re here:
[image: Evolution Plus]
And this isn’t the situation:
[image: consciousness binary]
This is:
[image: consciousness spectrum]
This is a really hard concept for humans to absorb, but it’s the starting place for growth. Declaring ourselves “conscious” allows us to call it a day and stop thinking about it. I like to think of it as a consciousness staircase:
[image: big staircase]
An ant is more conscious than a bacterium, a chicken more than an ant, a monkey more than a chicken, and a human more than a monkey. But what’s above us?
A) Definitely something, and B) Nothing we can understand better than a monkey can understand our world and how we think.
There’s no reason to think the staircase doesn’t extend upwards forever. The red alien a few steps above us on the staircase would see human consciousness the same way we see that of an orangutan—they might think we’re pretty impressive for an animal, but that of course we don’t actually begin to understand anything. Our most brilliant scientist would be outmatched by one of their toddlers.
To the green alien up there higher on the staircase, the red alien might seem as intelligent and conscious as a chicken seems to us. And when the green alien looks at us, it sees the simplest little pre-programmed ants.
We can’t conceive of what life higher on the staircase would be like, but absorbing the fact that higher stairs exist and trying to view ourselves from the perspective of one of those steps is the key mindset we need to be in for this exercise.
For now, let’s ignore those much higher steps and just focus on the step right above us—that light green step. A species on that step might think of us like we think of a three-year-old child—emerging into consciousness through a blur of simplicity and naiveté. Let’s imagine that a representative from that species was sent to observe humans and report back to his home planet about them—what would he think of the way we thought and behaved? What about us would impress him? What would make him cringe?
I think he’d very quickly see a conflict going on in the human mind. On one hand, all of those steps on the staircase below the human are where we grew from. Hundreds of millions of years of evolutionary adaptations geared toward animal survival in a rough world are very much rooted in our DNA, and the primitive impulses in us have birthed a bunch of low-grade qualities—fear, pettiness, jealousy, greed, instant-gratification, etc. Those qualities are the remnants of our animal past and still a prominent part of our brains, creating a zoo of small-minded emotions and motivations in our heads:
[image: normal animal brain]
But over the past six million years, our evolutionary line has experienced a rapid growth in consciousness and the incredible ability to reason in a way no other species on Earth can. We’ve taken a big step up the consciousness staircase, very quickly—let’s call this burgeoning element of higher consciousness our Higher Being.
[image: Higher Being]
The Higher Being is brilliant, big-thinking, and totally rational. But on the grand timescale, he’s a very new resident in our heads, while the primal animal forces are ancient, and their coexistence in the human mind makes it a strange place:
[image: animal + Higher Being]
So it’s not that a human is the Higher Being and the Higher Being is three years old—it’s that a human is the combination of the Higher Being and the low-level animals, and they blend into the three-year-old that we are. The Higher Being alone would be a more advanced species, and the animals alone would be one far more primitive, and it’s their particular coexistence that makes us distinctly human.
As humans evolved and the Higher Being began to wake up, he looked around your brain and found himself in an odd and unfamiliar jungle full of powerful primitive creatures that didn’t understand who or what he was. His mission was to give you clarity and high-level thought, but with animals tramping around his work environment, it wasn’t an easy job. And things were about to get much worse. Human evolution continued to make the Higher Being more and more sentient, until one day, he realized something shocking:
WE’RE GOING TO DIE
It marked the first time any species on planet Earth was conscious enough to understand that fact, and it threw all of those animals in the brain—who were not built to handle that kind of information—into a complete frenzy, sending the whole ecosystem into chaos:
[image: chaotic brain]
The animals had never experienced this kind of fear before, and their freakout about this—one that continues today—was the last thing the Higher Being needed as he was trying to grow and learn and make decisions for us.
The adrenaline-charged animals romping around our brain can take over our mind, clouding our thoughts, judgment, sense of self, and understanding of the world. The collective force of the animals is what I call “the fog.” The more the animals are running the show and making us deaf and blind to the thoughts and insights of the Higher Being, the thicker the fog is around our head, often so thick we can only see a few inches in front of our face:
[image: fog head]
Let’s think back to our goal above and our path to it—being aware of the truth. The Higher Being can see the truth just fine in almost any situation. But when the fog is thick around us, blocking our eyes and ears and coating our brain, we have no access to the Higher Being or his insight. This is why being continually aware of the truth is so hard—we’re too lost in the fog to see it or think about it.
And when the alien representative is finished observing us and heads back to his home planet, I think this would be his sum-up of our problems:
The battle of the Higher Being against the animals—of trying to see through the fog to clarity—is the core internal human struggle.
This struggle in our heads takes place on many fronts. We’ve examined a few of them here: the Higher Being (in his role as the Rational Decision Maker) fighting the Instant Gratification Monkey; the Higher Being (in the role of the Authentic Voice) battling against the overwhelmingly scared Social Survival Mammoth; the Higher Being’s message that life is just a bunch of Todays getting lost in the blinding light of fog-based yearning for better tomorrows. Those are all part of the same core conflict between our primal past and our enlightened future.
The shittiest thing about the fog is that when you’re in the fog, it blocks your vision so you can’t see that you’re in the fog. It’s when the fog is thickest that you’re the least aware that it’s there at all—it makes you unconscious. Being aware that the fog exists and learning how to recognize it is the key first step to rising up in consciousness and becoming a wiser person.
So we’ve established that our goal is wisdom, that to get there we need to become as aware as possible of the truth, and that the main thing standing in our way is the fog. Let’s zoom in on the battlefield to look at why “being aware of the truth” is so important and how we can overcome the fog to get there:
The Battlefield
No matter how hard we tried, it would be impossible for humans to access that light green step one above us on the consciousness staircase. Our advanced capability—the Higher Being—just isn’t there yet. Maybe in a million years or two. For now, the only place this battle can happen is on the one step where we live, so that’s where we’re going to zoom in. We need to focus on the mini spectrum of consciousness within our step, which we can do by breaking our step down into four substeps:
[image: substeps]
Climbing this mini consciousness staircase is the road to truth, the way to wisdom, my personal mission for growth, and a bunch of other cliché statements I never thought I’d hear myself say. We just have to understand the game and work hard to get good at it.
Let’s look at each step to try to understand the challenges we’re dealing with and how we can make progress:
Step 1: Our Lives in the Fog
Step 1 is the lowest step, the foggiest step, and unfortunately, for most of us it’s our default level of existence. On Step 1, the fog is all up in our shit, thick and close and clogging our senses, leaving us going through life unconscious. Down here, the thoughts, values, and priorities of the Higher Being are completely lost in the blinding fog and the deafening roaring, tweeting, honking, howling, and squawking of the animals in our heads. This makes us 1) small-minded, 2) short-sighted, and 3) stupid. Let’s discuss each of these:
1) On Step 1, you’re terribly small-minded because the animals are running the show.
When I look at the wide range of motivating emotions that humans experience, I don’t see them as a scattered range, but rather falling into two distinct bins: the high-minded, love-based, advanced emotions of the Higher Being, and the small-minded, fear-based, primitive emotions of our brain animals.
And on Step 1, we’re completely intoxicated by the animal emotions as they roar at us through the dense fog.
[image: animals in fog]
This is what makes us petty and jealous and what makes us so thoroughly enjoy the misfortune of others. It’s what makes us scared, anxious, and insecure. It’s why we’re self-absorbed and narcissistic; vain and greedy; narrow-minded and judgmental; cold, callous, and even cruel. And only on Step 1 do we feel that primitive “us versus them” tribalism that makes us hate people different than us.
You can find most of these same emotions in a clan of capuchin monkeys—and that makes sense, because at their core, these emotions can be boiled down to the two keys of animal survival: self-preservation and the need to reproduce.
Step 1 emotions are brutish and powerful and grab you by the collar, and when they’re upon you, the Higher Being and his high-minded, love-based emotions are shoved into the sewer.
2) On Step 1, you’re short-sighted, because the fog is six inches in front of your face, preventing you from seeing the big picture.
The fog explains all kinds of totally illogical and embarrassingly short-sighted human behavior.
Why else would anyone ever take a grandparent or parent for granted while they’re around, seeing them only occasionally, opening up to them only rarely, and asking them barely any questions—even though after they die, you can only think about how amazing they were and how you can’t believe you didn’t relish the opportunity to enjoy your relationship with them and get to know them better when they were around?
Why else would people brag so much, even though if they could see the big picture, it would be obvious that everyone finds out about the good things in your life eventually either way—and that you always serve yourself way more by being modest?
Why else would someone do the bare minimum at work, cut corners on work projects, and be dishonest about their efforts—when anyone looking at the big picture would know that in a work environment, the truth about someone’s work habits eventually becomes completely apparent to both bosses and colleagues, and you’re never really fooling anyone? Why would someone insist on making sure everyone knows when they did something valuable for the company—when it should be obvious that acting that way is transparent and makes it seem like you’re working hard just for the credit, while just doing things well and having one of those things happen to be noticed does much more for your long term reputation and level of respect at the company?
If not for thick fog, why would anyone ever pinch pennies over a restaurant bill or keep an unpleasantly-rigid scorecard of who paid for what on a trip, when everyone reading this could right now give each of their friends a quick and accurate 1-10 rating on the cheap-to-generous (or selfish-to-considerate) scale, and the few hundred bucks you save over time by being on the cheap end of the scale is hardly worth it considering how much more likable and respectable it is to be generous?
What other explanation is there for the utterly inexplicable decision by so many famous men in positions of power to bring down the career and marriage they spent their lives building by having an affair?
And why would anyone bend and loosen their integrity for tiny insignificant gains when integrity affects your long-term self-esteem and tiny insignificant gains affect nothing in the long term?
How else could you explain the decision by so many people to let the fear of what others might think dictate the way they live, when if they could see clearly they'd realize that A) that's a terrible reason to do or not do something, and B) no one's really thinking about you anyway—they're buried in their own lives?
And then there are all the times when someone’s opaque blinders keep them in the wrong relationship, job, city, apartment, friendship, etc. for years, sometimes decades, only for them to finally make a change and say “I can’t believe I didn’t do this earlier,” or “I can’t believe I couldn’t see how wrong that was for me.” They should absolutely believe it, because that’s the power of the fog.
3) On Step 1, you’re very, very stupid.
One way this stupidity shows up is in us making the same obvious mistakes over and over and over again.1
The most glaring example is the way the fog convinces us, time after time after time, that certain things will make us happy that in reality absolutely don’t. The fog lines up a row of carrots, tells us that they’re the key to happiness, and tells us to forget today’s happiness in favor of directing all of our hope to all the happiness the future will hold because we’re gonna get those carrots.
And even though the fog has proven again and again that it has no idea how human happiness works—even though we’ve had so many experiences finally getting a carrot and feeling a ton of temporary happiness, only to watch that happiness fade right back down to our default level a few days later—we continue to fall for the trick.
It’s like hiring a nutritionist to help you with your exhaustion, and they tell you that the key is to drink an espresso shot anytime you’re tired. So you’d try it and think the nutritionist was a genius until an hour later when it dropped you like an anvil back into exhaustion. You go back to the nutritionist, who gives you the same advice, so you try it again and the same thing happens. That would probably be it, right? You’d fire the nutritionist. So why are we so gullible when it comes to the fog’s advice on happiness and fulfillment?
The fog is also much more harmful than the nutritionist because not only does it give us terrible advice—but the fog itself is the source of unhappiness. The only real solution to exhaustion is to sleep, and the only real way to improve happiness in a lasting way is to make progress in the battle against the fog.
There’s a concept in psychology called The Hedonic Treadmill, which suggests that humans have a stagnant default happiness level and when something good or bad happens, after an initial change in happiness, we always return to that default level. And on Step 1, this is completely true of course, given that trying to become permanently happier while in the fog is like trying to dry your body off while standing under the shower with the water running.
But I refuse to believe the same species that builds skyscrapers, writes symphonies, flies to the moon, and understands what a Higgs boson is is incapable of getting off the treadmill and actually improving in a meaningful way.
I think the way to do it is by learning to climb this consciousness staircase to spend more of our time on Steps 2, 3, and 4, and less of it mired unconsciously in the fog.
Step 2: Thinning the Fog to Reveal Context
Humans can do something amazing that no other creature on Earth can do—they can imagine. If you show an animal a tree, they see a tree. Only a human can imagine the acorn that sunk into the ground 40 years earlier, the small flimsy stalk it was at three years old, how stark the tree must look when it’s winter, and the eventual dead tree lying horizontally in that same place.
This is the magic of the Higher Being in our heads.
On the other hand, the animals in your head, like their real world relatives, can only see a tree, and when they see one, they react instantly to it based on their primitive needs. When you’re on Step 1, your unconscious animal-run state doesn’t even remember that the Higher Being exists, and his genius abilities go to waste.
Step 2 is all about thinning out the fog enough to bring the Higher Being’s thoughts and abilities into your consciousness, allowing you to see behind and around the things that happen in life. Step 2 is about bringing context into your awareness, which reveals a far deeper and more nuanced version of the truth.
There are plenty of activities or undertakings that can help thin out your fog. To name three:
1) Learning more about the world through education, travel, and life experience—as your perspective broadens, you can see a clearer and more accurate version of the truth.
2) Active reflection. This is what a journal can help with, or therapy, which is basically examining your own brain with the help of a fog expert. Sometimes a hypothetical question can be used as “fog goggles,” allowing you to see something clearly through the fog—questions like, “What would I do if money were no object?” or “How would I advise someone else on this?” or “Will I regret not having done this when I’m 80?” These questions are a way to ask your Higher Being’s opinion on something without the animals realizing what’s going on, so they’ll stay calm and the Higher Being can actually talk—like when parents spell out a word in front of their four-year-old when they don’t want him to know what they’re saying.2
3) Meditation, exercise, yoga, etc.—activities that help quiet the brain’s unconscious chatter, i.e. allowing the fog to settle.
But the easiest and most effective way to thin out the fog is simply to be aware of it. By knowing that fog exists, understanding what it is and the different forms it takes, and learning to recognize when you’re in it, you hinder its ability to run your life. You can’t get to Step 2 if you don’t know when you’re on Step 1.
The way to move onto Step 2 is by remembering to stay aware of the context behind and around what you see, what you come across, and the decisions you make. That’s it—remaining cognizant of the fog and remembering to look at the whole context keeps you conscious, aware of reality, and as you’ll see, makes you a much better version of yourself than you are on Step 1. Some examples—
Here’s what a rude cashier looks like on Step 1 vs. Step 2:
[image: cashier]
Here’s what gratitude looks like:
[image: gratitude]
Something good happening:
[image: good thing]
Something bad happening:
[image: bad thing]
That phenomenon where everything suddenly seems horrible late at night in bed:
[image: late night]
A flat tire:
[image: flat tire]
Long-term consequences:
[image: consequences]
Looking at context makes us aware how much we actually know about most situations (as well as what we don’t know, like what the cashier’s day was like so far), and it reminds us of the complexity and nuance of people, life, and situations. When we’re on Step 2, this broader scope and increased clarity makes us feel calmer and less fearful of things that aren’t actually scary, and the animals—who gain their strength from fear and thrive off of unconsciousness—suddenly just look kind of ridiculous:
[image: animals clump]
When the small-minded animal emotions are less in our face, the more advanced emotions of the Higher Being—love, compassion, humility, empathy, etc.—begin to light up.
The good news is there’s no learning required to be on Step 2—your Higher Being already knows the context around all of these life situations. It doesn’t take hard work, and no additional information or expertise is needed—you only have to consciously think about being on Step 2 instead of Step 1 and you’re there. You’re probably there right now just by reading this.
The bad news is that it’s extremely hard to stay on Step 2 for long. The Catch-22 here is that it’s not easy to stay conscious of the fog because the fog makes you unconscious.
That’s the first challenge at hand. You can’t get rid of the fog, and you can’t always keep it thin, but you can get better at noticing when it’s thick and develop effective strategies for thinning it out whenever you consciously focus on it. If you’re evolving successfully, as you get older, you should be spending more and more time on Step 2 and less and less on Step 1.
Step 3: Shocking Reality
I . . . a universe of atoms . . . an atom in the universe. —Richard Feynman
Step 3 is when things start to get weird. Even on the more enlightened Step 2, we kind of think we’re here:
[image: happy earth land]
As delightful as that is, it’s a complete delusion. We live our days as if we’re just here on this green and brown land with our blue sky and our chipmunks and our caterpillars. But this is actually what’s happening:
[image: Little Earth]
But even more actually, this is happening:
[image]
We also tend to kind of think this is the situation:
[image: life timeline]
When really, it’s this:
[image: long timeline]
You might even think you’re a thing. Do you?
[image: a thing]
No, you’re a ton of these:
[image: atoms]
This is the next iteration of truth on our little staircase, and our brains can’t really handle it. Asking a human to internalize the vastness of space or the eternity of time or the tininess of atoms is like asking a dog to stand up on its hind legs—you can do it if you focus, but it’s a strain and you can’t hold it for very long.3
You can think about the facts anytime—The Big Bang was 13.8 billion years ago, which is about 130,000 times longer than humans have existed; if the sun were a ping pong ball in New York, the closest star to us would be a ping pong ball in Atlanta; the Milky Way is so big that if you made a scale model of it that was the size of the US, you would still need a microscope to see the sun; atoms are so small that there are about as many atoms in one grain of salt as there are grains of sand on all the beaches on Earth. But once in a while, when you deeply reflect on one of these facts, or when you’re in the right late night conversation with the right person, or when you’re staring at the stars, or when you think too hard about what death actually means—you have a Whoa moment.
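Those scale comparisons are easy to skim past, so here is a minimal sketch checking the first one. The ball size and star distance are standard rounded values; the New York-to-Atlanta distance (roughly 1,200 km) is my own assumption for the comparison.

```python
# Sanity check of the "ping pong ball in New York / ping pong ball in Atlanta" analogy.
# Approximate values; the NYC-Atlanta distance (~1,200 km) is an assumption.

SUN_DIAMETER_M = 1.39e9    # diameter of the sun, metres
BALL_DIAMETER_M = 0.04     # regulation 40 mm ping pong ball, metres
NEAREST_STAR_M = 4.0e16    # Proxima Centauri, ~4.25 light-years in metres

scale = BALL_DIAMETER_M / SUN_DIAMETER_M
scaled_km = NEAREST_STAR_M * scale / 1000
print(f"Nearest star at ping-pong-ball scale: ~{scaled_km:,.0f} km")
# ~1,150 km, which is roughly the New York-to-Atlanta distance.
```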
A true Whoa moment is hard to come by and even harder to maintain for very long, like our dog’s standing difficulties. Thinking about this level of reality is like looking at an amazing photo of the Grand Canyon; a Whoa moment is like being at the Grand Canyon—the two experiences are similar but somehow vastly different. Facts can be fascinating, but only in a Whoa moment does your brain actually wrap itself around true reality. In a Whoa moment, your brain for a second transcends what it’s been built to do and offers you a brief glimpse into the astonishing truth of our existence. And a Whoa moment is how you get to Step 3.
I love Whoa moments. They make me feel some intense combination of awe, elation, sadness, and wonder. More than anything, they make me feel ridiculously, profoundly humble—and that level of humility does weird things to a person. In those moments, all those words religious people use—awe, worship, miracle, eternal connection—make perfect sense. I want to get on my knees and surrender. This is when I feel spiritual.
And in those fleeting moments, there is no fog—my Higher Being is in full flow and can see everything in perfect clarity. The normally-complicated world of morality is suddenly crystal clear, because the only fathomable emotions on Step 3 are the most high-level. Any form of pettiness or hatred is a laughable concept up on Step 3—with no fog to obscure things, the animals are completely naked, exposed for the sad little creatures that they are.
[image: animals embarrassed]
On Step 1, I snap back at the rude cashier, who had the nerve to be a dick to me. On Step 2, the rudeness doesn’t faze me because I know it’s about him, not me, and that I have no idea what his day or life has been like. On Step 3, I see myself as a miraculous arrangement of atoms in vast space that for a split second in endless eternity has come together to form a moment of consciousness that is my life…and I see that cashier as another moment of consciousness that happens to exist on the same speck of time and space that I do. And the only possible emotion I could have for him on Step 3 is love.
[image: cashier 2]
In a Whoa moment’s transcendent level of consciousness, I see every interaction, every motivation, every news headline in unusual clarity—and difficult life decisions are much more obvious. I feel wise.
Of course, if this were my normal state, I’d be teaching monks somewhere on a mountain in Myanmar, and I’m not teaching any monks anywhere because it’s not my normal state. Whoa moments are rare and very soon after one, I’m back down here being a human again. But the emotions and the clarity of Step 3 are so powerful, that even after you topple off the step, some of it sticks around. Each time you humiliate the animals, a little bit of their future power over you is diminished. And that’s why Step 3 is so important—even though no one that I know can live permanently on Step 3, regular visits help you dramatically in the ongoing Step 1 vs Step 2 battle, which makes you a better and happier person.
Step 3 is also the answer to anyone who accuses atheists of being amoral or cynical or nihilistic, or wonders how atheists find any meaning in life without the hope and incentive of an afterlife. That’s a Step 1 way to view an atheist, where life on Earth is taken for granted and it’s assumed that any positive impulse or emotion must be due to circumstances outside of life. On Step 3, I feel immensely lucky to be alive and can’t believe how cool it is that I’m a group of atoms that can think about atoms—on Step 3, life itself is more than enough to make me excited, hopeful, loving, and kind. But Step 3 is only possible because science has cleared the way there, which is why Carl Sagan said that “science is not only compatible with spirituality; it is a profound source of spirituality.” In this way, science is the “prophet” of this framework—the one who reveals new truth to us and gives us an opportunity to alter ourselves by accessing it.
So to recap so far—on Step 1, you’re in a delusional bubble that Step 2 pops. On Step 2, there’s much more clarity about life, but it’s within a much bigger delusional bubble, one that Step 3 pops. But Step 3 is supposed to be total, fog-free clarity on truth—so how could there be another step?
Step 4: The Great Unknown
If we ever reach the point where we think we thoroughly understand who we are and where we came from, we will have failed. —Carl Sagan
The game so far has for the most part been clearing out fog to become as conscious as possible of what we as people and as a species know about truth:
[image: Step 1-3 circles]
On Step 4, we’re reminded of the complete truth—which is this:
[image: Step 4 circle]
The fact is, any discussion of our full reality—of the truth of the universe or our existence—is a complete delusion without acknowledging that big purple blob that makes up almost all of that reality.
But you know humans—they don’t like that purple blob one bit. Never have. The blob frightens and humiliates humans, and we have a rich history of denying its existence entirely, which is like living on the beach and pretending the ocean isn’t there. Instead, we just stamp our foot and claim that now we’ve finally figured it all out. On the religious side, we invent myths and proclaim them as truth—and even a devout religious believer reading this who stands by the truth of their particular book would agree with me about the fabrication of the other few thousand books out there. On the science front, we’ve managed to be consistently gullible in believing that “realizing you’ve been horribly wrong about reality” is a phenomenon only of the past.
Having our understanding of reality overturned by a new groundbreaking discovery is like a shocking twist in this epic mystery novel humanity is reading, and scientific progress is regularly dotted with these twists—the Earth being round, the solar system being heliocentric, not geocentric, the discovery of subatomic particles or galaxies other than our own, and evolutionary theory, to name a few. So how is it possible, with the knowledge of all those breakthroughs, that Lord Kelvin, one of history’s greatest scientists, said in the year 1900, “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”4—i.e. this time, all the twists actually are finished.
Of course, Kelvin was as wrong as every other arrogant scientist in history—the theory of general relativity and then the theory of quantum mechanics would both topple science on its face over the next century.
Even if we acknowledge today that there will be more twists in the future, we’re probably kind of inclined to think we’ve figured out most of the major things and have a far closer-to-complete picture of reality than the people who thought the Earth was flat. Which, to me, sounds like this:
[image: laughing]
The fact is, let’s remember that we don’t know what the universe is. Is it everything? Is it one tiny bubble in a multiverse frothing with bubbles? Is it not a bubble at all but an optical illusion hologram? And we know about the Big Bang, but was that the beginning of everything? Did something arise from nothing, or was it just the latest in a long series of expansion/collapse cycles?5 We have no clue what dark matter is, only that there’s a shit-ton of it in the universe, and when we discussed The Fermi Paradox, it became entirely clear that science has no idea about whether there’s other life out there or how advanced it might be. How about String Theory, which claims to be the secret to unifying the two grand but seemingly-unrelated theories of the physical world, general relativity and quantum mechanics? It’s either the grandest theory we’ve ever come up with or totally false, and there are great scientists on both sides of this debate. And as laypeople, all we need to do is take a look at those two well-accepted theories to realize how vastly different reality can be from how it seems: like general relativity telling us that if you flew to a black hole and circled around it a few times in intense gravity and then returned to Earth a few hours after you left, decades would have passed on Earth while you were gone. And that’s like an ice cream cone compared to the insane shit quantum mechanics tells us—like two particles across the universe from one another being mysteriously linked to each other’s behavior, or a cat that’s both alive and dead at the same time, until you look at it.
And the thing is, everything I just mentioned is still within the realm of our understanding. As we established earlier, compared to a more evolved level of consciousness, we might be like a three-year-old, a monkey, or an ant—so why would we assume that we’re even capable of understanding everything in that purple blob? A monkey can’t understand that the Earth is a round planet, let alone that the solar system, galaxy, or universe exists. You could try to explain it to a monkey for years and it wouldn’t be possible. So what are we completely incapable of grasping even if a more intelligent species tried its hardest to explain it to us? Probably almost everything.
There are really two options when thinking about the big, big picture: be humble or be absurd.
The nonsensical thing about humans feigning certainty because we’re scared is that in the old days, when it seemed on the surface that we were the center of all creation, uncertainty was frightening because it made our reality seem so much bleaker than we had thought—but now, with so much more uncovered, things look highly bleak for us as people and as a species, so our fear should welcome uncertainty. Given my default outlook that I have a small handful of decades left and then an eternity of nonexistence, the fact that we might be totally wrong sounds tremendously hopeful to me.
Ironically, when my thinking reaches the top of this rooted-in-atheism staircase, the notion that something that seems divine to us might exist doesn’t seem so ridiculous anymore. I’m still totally atheist when it comes to all human-created conceptions of a divine higher force—which all, in my opinion, proclaim far too much certainty. But could a super-advanced force exist? It seems more than likely. Could we have been created by something/someone bigger than us or be living as part of a simulation without realizing it? Sure—I’m a three-year-old, remember, so who am I to say no?
To me, complete rational logic tells me to be atheist about all of the Earth’s religions and utterly agnostic about the nature of our existence or the possible existence of a higher being. I don’t arrive there via any form of faith, just by logic.
I find Step 4 mentally mind-blowing but I’m not sure I’m ever quite able to access it in a spiritual way like I sometimes can with Step 3—Step 4 Whoa moments might be reserved for Einstein-level thinkers—but even if I can’t get my feet up on Step 4, I can know it’s there, what it means, and I can remind myself of its existence. So what does that do for me as a human?
Well, remember that powerful humility I mentioned in Step 3? It multiplies that by 100. For reasons I just discussed, it makes me feel more hopeful. And it leaves me feeling pleasantly resigned to the fact that I will never understand what’s going on, which makes me feel like I can take my hand off the wheel, sit back, relax, and just enjoy the ride. In this way, I think Step 4 can make us live more in the present—if I’m just a molecule floating around an ocean I can’t understand, I might as well just enjoy it.
The way Step 4 can serve humanity is by helping to crush the notion of certainty. Certainty is primitive, leads to “us versus them” tribalism, and starts wars. We should be united in our uncertainty, not divided over fabricated certainty. And the more humans turn around and look at that big purple blob, the better off we’ll be.
Why Wisdom is the Goal
Nothing clears fog like a deathbed, which is why it’s then that people can always see with more clarity what they should have done differently—I wish I had spent less time working; I wish I had communicated with my wife more; I wish I had traveled more; etc. The goal of personal growth should be to gain that deathbed clarity while your life is still happening so you can actually do something about it.
The way you do that is by developing as much wisdom as possible, as early as possible. To me, wisdom is the most important thing to work towards as a human. It's the big objective—the umbrella goal under which all other goals fall. I believe I have one and only one chance to live, and I want to do it in the most fulfilled and meaningful way possible—that's the best outcome for me, and I do a lot more good for the world that way. Wisdom gives people the insight to know what "fulfilled and meaningful" actually means and the courage to make the choices that will get them there.
And while life experience can contribute to wisdom, I think wisdom is mostly already in all of our heads—it’s everything the Higher Being knows. When we’re not wise, it’s because we don’t have access to the Higher Being’s wisdom because it’s buried in fog. The fog is anti-wisdom, and when you move up the staircase into a clearer place, wisdom is simply a by-product of that increased consciousness.
One thing I learned at some point is that growing old or growing tall is not the same as growing up. Being a grownup is about your level of wisdom and the size of your mind’s scope—and it turns out that it doesn’t especially correlate with age. After a certain age, growing up is about overcoming your fog, and that’s about the person, not the age. I know some supremely wise older people, but there are also a lot of people my age who seem much wiser than their parents about a lot of things. Someone on a growth path whose fog thins as they age will become wiser with age, but I find the reverse happens with people who don’t actively grow—the fog hardens around them and they actually become even less conscious, and even more certain about everything, with age.
When I think about people I know, I realize that my level of respect and admiration for a person is almost entirely in line with how wise and conscious a person I think they are. The people I hold in the highest regard are the grownups in my life—and their ages completely vary.
Another Look at Religion in Light of this Framework:
This discussion helps clarify my issues with traditional organized religion. There are plenty of good people, good ideas, good values, and good wisdom in the religious world, but to me that seems like something happening in spite of religion and not because of it. Using religion for growth requires an innovative take on things, since at a fundamental level, most religions seem to treat people like children instead of pushing them to grow. Many of today’s religions play to people’s fog with “believe in this or else…” fear-mongering and books that are often a rallying cry for ‘us vs. them’ divisiveness. They tell people to look to ancient scripture for answers instead of the depths of the mind, and their stubborn certainty when it comes to right and wrong often leaves them at the back of the pack when it comes to the evolution of social issues. Their certainty when it comes to history ends up actively pushing their followers away from truth—as evidenced by the 42% of Americans who have been deprived of knowing the truth about evolution. (An even worse staircase criminal is the loathsome world of American politics, with a culture that lives on Step 1 and where politicians appeal directly to people’s animals, deliberately avoiding anything on Steps 2-4.)
So What Am I?
Yes, I’m an atheist, but atheism isn’t a growth model any more than “I don’t like rollerblading” is a workout strategy.
So I’m making up a term for what I am—I’m a Truthist. In my framework, truth is what I’m always looking for, truth is what I worship, and learning to see truth more easily and more often is what leads to growth.
In Truthism, the goal is to grow wiser over time, and wisdom falls into your lap whenever you’re conscious enough to see the truth about people, situations, the world, or the universe. The fog is what stands in your way, making you unconscious, delusional, and small-minded, so the key day-to-day growth strategy is staying cognizant of the fog and training your mind to try to see the full truth in any situation.
Over time, you want your [Time on Step 2] / [Time on Step 1] ratio to go up a little bit each year, and you want to get better and better at inducing Step 3 Whoa moments and reminding yourself of the Step 4 purple blob. If you do those things, I think you’re evolving in the best possible way, and it will have profound effects on all aspects of your life.
That’s it. That’s Truthism.
Am I a good Truthist? I’m okay. Better than I used to be with a long way to go. But defining this framework will help—I’ll know where to put my focus, what to be wary of, and how to evaluate my progress, which will help me make sure I’m actually improving and lead to quicker growth.
To help keep me on mission, I made a Truthism logo:
[Truthism logo]
That’s my symbol, my mantra, my WWJD—it’s the thing I can look at when something good or bad happens, when a big decision is at hand, or on a normal day as a reminder to stay aware of the fog and keep my eye on the big picture.
And What Are You?
My challenge to you is to decide on a term for yourself that accurately sums up your growth framework.
If Christianity is your thing and it’s genuinely helping you grow, that word can be Christian. Maybe you already have your own clear, well-defined advancement strategy and you just need a name for it. Maybe Truthism hit home for you, resembles the way you already think, and you want to try being a Truthist with me.
Or maybe you have no idea what your growth framework is, or what you’re using isn’t working. If either A) you don’t feel like you’ve evolved in a meaningful way in the past couple years, or B) you aren’t able to corroborate your values and philosophies with actual reasoning that matters to you, then you need to find a new framework.
To do this, just ask yourself the same questions I asked myself: What’s the goal that you want to evolve towards (and why is that the goal), what does the path look like that gets you there, what’s in your way, and how do you overcome those obstacles? What are your practices on a day-to-day level, and what should your progress look like year-to-year? Most importantly, how do you stay strong and maintain the practice for years and years, not four days? After you’ve thought that through, name the framework and make a symbol or mantra. (Then share your strategy in the comments or email me about it, because articulating it helps clarify it in your head, and because it’s useful and interesting for others to hear about your framework.)
I hope I’ve convinced you how important this is. Don’t wait until your deathbed to figure out what life is all about.
We now know the climate consequences of our massive use of fossil fuels. As a replacement, nuclear power, of whatever generation, is credible neither industrially nor morally. We undeniably can and must develop renewable energies. But let us not imagine that they will be able to replace fossil fuels while sustaining our current energy binge.
The problems we face cannot be solved simply by a string of technological innovations and industrial roll-outs of alternative solutions, because we are going to run into a resource problem, essentially for two reasons: capturing renewable energy requires metals, and those metals can only be imperfectly recycled, a problem that worsens as we rely on high technology. The climate solution can therefore only come through sobriety and through adapted, less resource-hungry technologies.
Energy and resources are intimately linked
The arguments are well known: renewable energies have enormous potential; and even if they are diffuse, partly intermittent and still somewhat too expensive today, continuous progress in production, storage and transmission, together with massive deployment, should bring costs down and make them affordable.
It is true that the Earth receives, every day, thousands of times more solar energy than humanity needs. Scenarios for "energy-virtuous" worlds abound: futurist Jeremy Rifkin's third industrial revolution, the Wind Water Sun plan of Professor Jacobson of Stanford University, the Desertec industrial project, or, at the French scale, the simulations of the Negawatt association and of ADEME.
All of them rest on extremely ambitious industrial deployments. Wind Water Sun proposes to cover the energy needs of the entire world with renewables alone by 2030. That would take 3.8 million 5 MW wind turbines and 89,000 solar plants of 300 MW each, which means installing 19,000 GW of wind capacity in 15 years (30 times the current rate of at most 40 GW per year) and inaugurating some fifteen solar plants every day.
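To make the scale of those numbers concrete, here is a quick back-of-the-envelope check in Python. It is only a sketch: it uses the figures quoted above, a flat 15-year horizon and a 365-day year as simplifying assumptions.

```python
# Rough sanity check of the Wind Water Sun deployment figures quoted above.
turbines = 3.8e6               # number of 5 MW wind turbines
turbine_mw = 5                 # MW per turbine
solar_plants = 89_000          # number of 300 MW solar plants
years = 15                     # roughly the time left until 2030
current_wind_gw_per_year = 40  # quoted current installation rate

wind_gw_total = turbines * turbine_mw / 1000           # ~19,000 GW
wind_gw_per_year = wind_gw_total / years               # ~1,270 GW/year
speedup = wind_gw_per_year / current_wind_gw_per_year  # ~30x today's pace
plants_per_day = solar_plants / (years * 365)          # ~16 plants per day

print(f"total wind capacity needed : {wind_gw_total:,.0f} GW")
print(f"installation pace          : {wind_gw_per_year:,.0f} GW/year (~{speedup:.0f}x today)")
print(f"solar plants to commission : {plants_per_day:.0f} per day")
```

Run as written, this reproduces the roughly thirty-fold acceleration and the order of fifteen solar plants per day mentioned in the text.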
A war economy
Nothing impossible on paper, but it would take a genuine war economy to organise the supply of raw materials (steel, cement, polyurethane resins, copper, rare earths: to supply the neodymium for the permanent-magnet generators of those wind turbines, annual production would have to be multiplied by 15, assuming the reserves even exist!), the manufacturing of the equipment, the logistics and installation (ships, cranes, storage yards...), the training of the workforce... Not to mention the systems needed to transport and store the electricity!
But the unrealism has more to do with resources than with industrial or financial constraints. Capturing, converting and exploiting renewable energy takes metals. Because renewables are less concentrated and more intermittent, they deliver fewer kWh per unit of metal (copper, steel) mobilised than fossil sources do. Some technologies use rarer metals, such as dysprosium-doped neodymium for high-power wind turbines, or indium, selenium and tellurium for part of the high-efficiency photovoltaic panels. Metals are also needed for the associated equipment: cables, inverters, batteries.
We still have plenty of metal resources, just as there remains an enormous amount of conventional and unconventional oil and gas, methane hydrates and coal... far beyond, alas, what the planet's climate regulation can bear.
But, as with oil and gas, the quality and accessibility of these mineral resources are deteriorating (for oil and gas, the ratio of energy recovered to energy invested in extraction has fallen from 30-50 in onshore fields to 5-7 in deep or ultra-deep offshore operations, and even 2-4 for Alberta's tar sands). We are in fact drawing down a stock of ores that were created and enriched by the "living" nature of the planet: plate tectonics, volcanism, the water cycle, biological activity...
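A small illustration of what those energy-return figures mean in practice: the fraction of gross energy actually left over for society is 1 - 1/EROI. The sketch below uses rough midpoints of the ranges quoted above; the exact values are an assumption for illustration, not data from the article.

```python
# Net energy actually delivered to society, for the EROI figures quoted above.
# net fraction = 1 - 1/EROI (the energy invested in extraction is subtracted).
for source, eroi in [("onshore oil & gas", 40),
                     ("deep offshore", 6),
                     ("Alberta tar sands", 3)]:
    net = 1 - 1 / eroi
    print(f"{source:18s} EROI ~{eroi:2d} -> ~{net:.0%} of gross energy left for society")
```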
Two problems at the same time
Logically, we exploited the most concentrated, easiest-to-extract resources first. New mines have lower ore grades than the depleted ones (copper, for instance, has gone from an average of 1.8-2% in the 1930s to 0.5% in new mines), or are less accessible, harder to work, deeper.
Tar sands mining in Canada (Jørgen Schyberg/flickr/CC)
Yet whether mines are deeper or less concentrated, more energy has to be spent, because ever more mining waste has to be moved, or because depth brings constraints, notably temperature, that make operations more complex.
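A rough illustration of the grade effect, assuming (as a simplification) that the energy spent scales with the tonnage of ore that has to be dug up and processed:

```python
# Ore that must be dug and processed per tonne of copper, at the grades quoted above.
for era, grade in [("1930s mines", 0.02), ("new mines", 0.005)]:
    ore_tonnes = 1 / grade
    print(f"{era}: {grade:.1%} grade -> ~{ore_tonnes:.0f} t of ore per tonne of copper")
```

Under that assumption, dropping from 2% to 0.5% grade means moving roughly four times as much rock for the same tonne of metal.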
There is thus a very strong interaction between the availability of energy and the availability of metals, and neglecting it would set us up for major disillusionment.
If we only had an energy (and climate!) problem, it would "suffice" to plaster the world with solar panels, wind turbines and smart grids ("intelligent" transmission networks that optimise consumption and, above all, balance variable demand at every moment against the intermittent supply of renewables).
If we only had a metals problem, but access to concentrated and abundant energy, we could keep working the Earth's crust at ever lower concentrations.
But we face both problems at the same time, and they reinforce each other: more energy is needed to extract and refine the metals, and more metals are needed to produce energy that is less and less accessible.
The circular economy is a well-meaning utopia
Metal resources do not disappear once extracted. The circular economy, based in particular on eco-design and recycling, should therefore be a logical answer to metal scarcity. But it can only work very partially unless we radically change the way we produce and consume.
Of course we can and must recycle more than we do today, and current recycling rates are often so low that the room for improvement is enormous. But we can never reach 100% or recycle "indefinitely", even if we recovered every bit of the available resource and always treated it in the most modern plants with the best-mastered processes (we are very far from that).
First, because the resource has to be physically recoverable in order to be recycled, which is impossible for dispersive or dissipative uses. Metals are routinely used as chemicals and additives in glass, plastics, inks, paints, cosmetics, fungicides, lubricants and many other industrial or everyday products (around 5% of zinc, 10-15% of manganese, lead and tin, 15-20% of cobalt and cadmium, and, the extreme case, 95% of titanium, whose dioxide serves as the universal white pigment).
Second, because recycling properly is hard. We design products of staggering diversity and complexity, built from composites, alloys and ever more miniaturised, integrated components, yet our technological and economic capacity to identify the different metals and separate them is limited.
The non-ferrous metals contained in alloyed steels from a first melt are scrapped indiscriminately and end up in less noble uses such as construction rebar. They have indeed been recycled, but they are functionally lost; future generations will no longer have access to them, they are "diluted". The use of the material is degraded: the "noble" metal ends up in low-grade steel, just as the plastic bottle ends up as a garden chair.
The truly clean car is the bicycle!
"Clean car" is thus an absurd expression, even if cars ran on "100% clean" or "zero emission" energy. Without a deep rethink of their design, there will always be dispersive uses (various metals in the paint, tin in the PVC, zinc and cobalt in the tyres, platinum shed by the catalytic converter...), plus a body shell, metal parts and on-board electronics that will be poorly recycled... The truly clean car, or nearly so, is the bicycle!
Entropic or dispersive losses (at the source or in use), "mechanical" losses (abandonment in nature, landfill or incineration), functional losses (inefficient recycling): recycling is not a circle but a leaky pipe, and with each production-use-consumption cycle part of the resource is lost for good. We can always improve. But without a drastic overhaul of how we operate, rates will remain desperately low for many of the small high-tech metals and other rare earths (below 1% today for most of them), while for the major metals we will plateau at a typical 50 to 80%, which will remain far from enough.
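A minimal sketch of the "leaky pipe": assuming a constant recovery rate per production-use-recycling cycle (70% here, an illustrative value inside the 50-80% range quoted above), the share of the original metal still in functional use shrinks geometrically.

```python
# The "leaky pipe": share of the original metal still in functional use after
# n production-use-recycling cycles, assuming a constant recovery rate.
recycling_rate = 0.7  # illustrative value within the 50-80% range quoted above
for cycles in (1, 3, 5, 10):
    remaining = recycling_rate ** cycles
    print(f"after {cycles:2d} cycles: ~{remaining:.0%} of the original metal remains in use")
```

Under that assumption, less than 3% of the initial stock is still doing useful work after ten cycles, which is why even "good" recycling rates cannot sustain a growing material throughput on their own.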
Compacted aluminium cans awaiting recycling (SB/Rue89 Bordeaux)
"Green" growth will be deadly
"Green" growth, at least as currently understood, relies on an all-technology approach. It will only aggravate the phenomena just described and push the system into overdrive, because these "green" innovations are generally based on scarcer metals, increase product complexity, and call on high-tech components that are harder to recycle. Witness the latest generation of renewables, "smart" buildings, and electric, hybrid or hydrogen cars...
A sufficiently massive deployment of decentralised renewables and of an "energy internet" is unrealistic. The metaphor has the sweet smell of the "dematerialised" economy, but that is to forget rather quickly that electrons cannot be moved around like photons, and that energy cannot be stored as easily as bytes. Producing, storing and transporting electricity, even "green" electricity, takes a great deal of metal. And there is no Moore's law (which posits a doubling of transistor density roughly every two years) in the physical world of energy.
But a purely technological fight against climate change will be just as hopeless.
Take cars, where the need to preserve comfort, performance and safety demands ever more precise alloyed steels to shave off a little weight and cut CO2 emissions. Whereas we should be limiting speed and capping engine power, which would in turn allow weight to come down and fuel consumption with it. The one-litre-per-100 km car is within reach! It just has to weigh 300 or 400 kg and stay under 80 km/h.
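To see why capping weight and speed matters so much, here is a rough steady-speed fuel estimate from basic drag and rolling-resistance physics. All parameter values (drag coefficients, frontal areas, rolling-resistance coefficients, 25% engine efficiency, 34 MJ per litre of petrol) are illustrative assumptions, not figures from the article.

```python
# Rough steady-speed fuel estimate from aerodynamic drag + rolling resistance.
# All parameter values below are illustrative assumptions, not measured data.
RHO, G = 1.225, 9.81                   # air density (kg/m^3), gravity (m/s^2)
FUEL_MJ_PER_L, ENGINE_EFF = 34, 0.25   # petrol energy content, assumed engine efficiency

def litres_per_100km(mass_kg, cd, area_m2, crr, speed_kmh):
    v = speed_kmh / 3.6
    power_w = 0.5 * RHO * cd * area_m2 * v**3 + crr * mass_kg * G * v  # drag + rolling
    energy_mj = power_w * (100 / speed_kmh) * 3600 / 1e6               # wheel energy for 100 km
    return energy_mj / ENGINE_EFF / FUEL_MJ_PER_L

print(f"typical 1,400 kg car at 130 km/h: ~{litres_per_100km(1400, 0.30, 2.2, 0.010, 130):.1f} L/100 km")
print(f"light 350 kg car at 80 km/h:      ~{litres_per_100km(350, 0.25, 1.6, 0.008, 80):.1f} L/100 km")
```

With these assumed values, the light, slow car lands in the 1-2 L/100 km range, while the conventional car at motorway speed needs several times more, which is the point the paragraph above is making.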
Or buildings, where ever more demanding comfort standards require rare materials (low-emissivity glazing) and generalised electronics to optimise consumption (building management systems, sensors, motors and automation, controlled mechanical ventilation).
With "green" growth, we would like to tap the brake timidly while keeping the accelerator floored: more than ever, our economy favours the disposable, obsolescence, acceleration, the replacement of service jobs by machines stuffed with electronics, with drones and robots on the way. What awaits us in the short term is a devastating, deadly acceleration of resource extraction, electricity consumption and unmanageable waste, as nanotechnologies, big data and connected objects are rolled out everywhere. The sacking of the planet is only beginning.
The climate solution will come through "low tech"
We need to take the true measure of the necessary transition and admit that there will be no escape from above through technological innovation, or that such an escape is in any case so improbable that it would be perilous to bet everything on it. Nor can we settle for the emerging business models, based on the sharing economy or on product-service systems, which may be wonderful but are neither generalisable nor sufficient.
We will have to reduce, in absolute terms, the quantity of energy and materials consumed. We must work on lowering demand, not on replacing supply, while maintaining an acceptable level of "comfort".
That is the whole idea behind low tech, "low technologies", as opposed to the high tech that is driving us into the wall, since high tech consumes more rare resources and takes us further away from effective recycling and a circular economy. Promoting low tech is above all an approach, neither obscurantist nor necessarily opposed to innovation or "progress", but oriented towards the economy of resources, and it consists of asking three questions.
Why do we produce? The first step is to question our needs intelligently and to reduce at the source, as far as possible, the resources drawn and the pollution generated. This is a delicate exercise, because human needs, fed by mimetic rivalry, are in principle infinitely extensible, and it is impossible to decree "scientifically" where fundamental needs end and the "superfluous", which is also the salt of life, begins. All the more delicate in that this exercise would preferably be conducted democratically, while we are at it.
There is a whole range of conceivable actions, more or less complicated, more or less acceptable.
Some should logically command near-consensus, provided the arguments are laid out properly (eliminating certain disposable objects, advertising media, bottled water...).
Others will be a bit harder to push through, though frankly we would lose almost no "comfort" (bringing back deposit-return schemes, reusing objects, composting waste, vehicle speed limits...).
Others still promise some stormy debates (drastically cutting back the car in favour of the bicycle, adjusting temperatures in buildings, rethinking urban planning to reverse the trend towards hypermobility...).
Who is really killing freedom?
An attack on freedom? Certainly, but our societies already restrict freedom. A limit on power and weight, set by the public authorities, already exists for registering vehicles. Why could it not evolve? One of the founding principles of life in society is that the freedom of some should stop where the freedom of others begins. Since we have only one planet, and since our extravagant consumption endangers the very conditions of human life on Earth (and that of many other species), who is really killing freedom: the 4×4 driver, the private-jet user, the yacht owner, or the person who proposes to ban these engines of deferred death?
What do we produce? We must then considerably extend product lifetimes, ban most disposable or dispersive products unless they are made entirely from renewable, non-polluting resources, and rethink the design of objects from the ground up: repairable, reusable, easy to identify and dismantle, recyclable at end of life without losses, using as few rare and irreplaceable resources as possible, containing as little electronics as possible, even if that means revising our "specifications" and accepting ageing or the reuse of what already exists, a plainer look for functional objects, sometimes lower performance or a loss of efficiency... In short, grandma's coffee grinder and stovetop espresso pot rather than the latest espresso machine. In the energy field, this could take the form of micro and mini hydro, small intermittent "village" wind turbines, solar thermal for hot water and cooking, heat pumps, biomass...
How do we produce? Finally, we need to think about our modes of production. Should we pursue the race for productivity and economies of scale in giga-factories, or are workshops and human-scale businesses preferable? Shouldn't we reconsider the place of humans, the degree of mechanisation and robotisation, and the way we currently arbitrate between labour and resources or energy? And our relationship to work (better sharing among everyone, the value of extreme specialisation, the split between salaried work and domestic activities, and so on)?
Then there is the acute question of where production is located. After decades of globalisation made easy by sufficiently cheap oil and container shipping, the system has become absurd.
In an era of coming disruptions, of the social and international tensions and geopolitical risks that climate change or resource shortages may trigger, not to mention possible health scandals, is a system built on China as the "factory of the world" really resilient?
A project for society
To succeed in such a shift, indispensable but so much against the current, many questions will have to be resolved, starting with employment. "Growth means jobs" has been hammered home so relentlessly that it is hard to talk about sobriety without frightening people.
Despite the obviousness of the environmental emergencies, any ecological radicalism, any far-reaching regulatory or fiscal change, however gradual, indeed any in-depth reflection at all, is forbidden by the (legitimate) terror of destroying jobs. Once we accept that growth is not coming back (we are slowly getting there), and so much the better given its environmental effects, we will have to convince ourselves that full employment, or full activity, is perfectly attainable in a resource-frugal post-growth world.
We will also have to ask at what territorial scale to conduct this transition, somewhere between global governance, impossible within the time available, and local individual and collective experiments, wonderful but insufficient. Even embedded in the global trading system, a country or a small group of countries could take the lead and, protected by well-designed customs measures, set a real movement in motion, one carrying both hope and radicalism.
Given the forces at play, there is of course a utopian element in such a project for society. But let us not forget that the status quo scenario is probably even more unrealistic, with promises of technological bliss that will not be kept and a world sinking into endless crisis, not to mention the risk of political upheavals fed by ever greater frustration. Why not try another route? We have ample means, technical, organisational, financial, societal and cultural, to carry out such a transition. Provided we want to.
It's no scoop: I took part, in my own small way, in the fight against the iniquitous surveillance law (the loi sur le renseignement). Nor is it a surprise: I was more than disappointed by the Conseil Constitutionnel's ruling on it. But that is not the subject of this post. With hindsight, it seems obvious that we (the law's opponents) failed to make ourselves understood by the general public. The text was complex, its stakes highly technical or highly philosophical, and we chose to explain them: that was probably a mistake.
While the law's proponents preferred to play on emotion ("If you don't vote for this bill, you will be responsible for the next attack") or demagogy ("if you are against us, you are with the terrorists"), we wore ourselves out dissecting the danger of the "black boxes" and their algorithms, invoking Foucault and the panopticon, and recalling how essential privacy is to freedom of thought.
On that footing, the battle for public support was lost in advance: against populism, betting on intelligence is usually a losing wager.
Unanimity like never before
And yet one point remains remarkable: never, in 20 years of fighting for civil liberties, had I seen such unanimity of the ill-named "civil society" against a bill. Never. From the SNJ to the Syndicat de la Magistrature, from the UN to the Council of Europe, from La Quadrature du Net to the LDH, by way of judge Trevidic and the Défenseur des Droits Jacques Toubon: all opposed the text, with more or less the same reservations. It would in fact be much quicker to list the organisations and associations that defended it: there are none.
And of this outcry, which (another novelty) was well covered in the media, the government saw nothing and heard nothing. Before both chambers it was waved away with the back of a hand, when it was not disparaged or caricatured.
We were accused in turn of having put "odious pressure" on MPs (because it is, as everyone knows, shameful for citizens to try to influence the votes of their representatives), of being "amateur exegetes" who understood nothing of the law's "legalese", or of being mere "digital types" (as if understanding the stakes of new technologies could only disqualify those who try).
As for the rare members of parliament who tried to relay these concerns, they were mocked, denigrated and ridiculed by ministers standing "firm in their boots" and totally deaf to the arguments being made. Not a single amendment, not a single reconsideration of the text as presented was accepted. And always in the name of the sacrosanct fight against terrorism (which, it bears repeating, was not even the law's main purpose).
For someone who follows parliamentary debates more or less regularly, I had never seen anything like it. Never had I seen so much rejection from every concerned part of society meet so much immobility from the government. When you see, in parallel, how the same government backed down without the slightest hesitation before the revolt of the bonnets rouges, the FNSEA and other less well-known lobbies over bills that did not touch fundamental freedoms, and when you see the hatred with which ministers and the great majority of elected officials spoke about the Internet during the debates, to the point of turning it into an insult, it strikes me as deeply symptomatic.
A rancid smell.
But symptomatic of what?
Before reacting to all this, I wanted to step back. And that distance has perhaps allowed me to connect this symptom to others, unrelated to the surveillance law, all of which seem to me to stem from the same disease: a galloping neoconservatism, a reactionary way of thinking so "uninhibited" that it has spread well beyond its natural habitat on the right and seeped deep into the major parties said to be "of the left".
When Jean-Jacques Urvoas rejoices on Twitter (https://twitter.com/JJUrvoas/status/624324424393592832) at the Conseil Constitutionnel's decision on (sic) 'la loi "rens."', the slip is revealing. What could be more rancid ("rance" in French), indeed, than this reaffirmed desire for social control, for mass surveillance capable of imposing a moral order from above, once guaranteed by the Church, whose return a whole section of society, itself well and truly rancid, longs for?
What I see, well beyond this law and the way it was passed, is a rupture. A fracture that is far from being merely "digital".
The temporal fracture.
When a large part of society is searching for new modes of consumption, more respectful of the environment and more ethical too, and is developing a culture of sharing (of resources, music, knowledge...), while the state abandons the ecotax, backs intensive agriculture at the expense of small farms (http://www.politis.fr/Un-gouvernement-a-la-botte-de-la,32260.html), and fights every innovation that might threaten rents dating back to the last century (the private-copy levy extended to the "cloud", the TV licence fee extended to set-top boxes, the Thevenoud law imposing a 15-minute wait on VTC chauffeur services, and so many others...).
When another part of society, the most deprived, stops thinking about the future for lack of any way to project itself into one, and has no hope left but a return to a past it believes was better, egged on by every demagogue and populist the political class can muster, and dragging along a few ageing self-proclaimed intellectuals, overtaken by the modern world, who cannot find words harsh enough for what they lack the means to understand.
Everything happens as if we had, on one side, a population turned towards the future, imagining a modernised democracy and a collaborative, social and solidarity-based economy, adapting to digital novelties (like Michel Serres's "Petite Poucette") but equally capable of imagining a public debate on universal basic income, the decriminalisation of soft drugs or the welcoming of refugees; and, on the other, a political class resolutely turned towards an archaic past, dreaming of school uniforms, moral instruction in schools, a ban on equal marriage, and a paternalism resting on the accumulation of elected offices, cronyism and corruption.
While some wish for the censorship of online pornography, or the return of the "saint of the day" and of something to replace the Church in its role of moral guardian, others are thinking startups, liquid democracy, freedom of expression, post-capitalism and the protection of privacy.
And, alas, this "temporal fracture" drags along with it all of society's excluded and left-behind and all its old, rancid hatreds of the other, whoever that other may be, pushing them to believe in the good old scapegoat (yesterday the Jew, today the Muslim) responsible for all their ills, to hope that a return to old "values" will give them back a power (which they never had) over their own future, and to vote for whoever best strikes the marshal-like pose of the supreme saviour.
That, I believe, is the meaning to be given to our rulers' manifest desire, on whichever side they sit, to "civilise" (read: control, monitor and censor) the Internet, as the symbol of all their fears, of all their ignorance, and of all the hopes of a social innovation they blindly reject.
One could call it the quarrel of the Ancients and the Moderns 2.0, were it not, alas, one more symptom of the rot of the Fifth Republic and of our democracy.
Make no mistake: the "barbarian invasion" dear to Nicolas Colin is under way, and backward-looking postures will not protect a society that seems to prefer withdrawing into itself to opening up to others. Without a radical transformation of political discourse, if we cannot put imagination in power rather than nostalgia for a past that never existed, it is not only our laws that will be rancid.
It will be our entire society.
I have gone to Burning Man 15 years in a row. When I went the first time, back in 2000, I was a journalist on assignment for Rolling Stone. That was an amazing introduction to the event, as I was able to go “back stage” and meet the organizers, artists, and geniuses behind the sculptures, lasers, and camps. I was immediately hooked. I couldn’t believe such a place existed – that tens of thousands of people shared the same ideals, and worked together to realize their visions.
I wrote this piece about my experiences. I also wrote a feature about the festival for ArtForum. By proposing that Burning Man had validity as an artistic expression – I discussed Joseph Beuys’ idea of “social sculpture” – I got banned from ArtForum after they published my piece. I also wrote about the festival, personally and philosophically, in Breaking Open the Head, my first book, and 2012: The Return of Quetzalcoatl, my second. Burning Man has had a profound effect on my life, in many ways.
This year, I am skipping it. There are a few reasons for this, but the main one is that I feel Burning Man – an institution in its own process of ongoing change and evolution – has lost its way. Hopefully, this is temporary. I know and love many of the people who create and run the festival, and believe in their intentions and their vision.
Burning Man has accomplished amazing things, opening up whole new realms of individual freedom and cultural expression. At the same time the festival has become a bit of a victim of its own success. It has become a massive entertainment complex, a bit like Disney World for a contingent made up mostly of the wealthy elite. It always had this vibe, to some extent, but it seems more pronounced in recent years. It feels like there is more and more of less and less. The potential for some kind of authentic liberation or awakening seems increasingly obscure and remote.
The change in Burning Man – admittedly it is subtle – is happening as our world slides toward ecological catastrophe. The ecological crisis has become my almost monomaniacal focus recently. From my perspective, it is crucial that people awaken to what is happening to our Earth. We need to quickly understand and then start making the changes necessary to ensure the continuity of our ecosystems. Part of my enthusiasm for Burning Man was that it seemed a place where a new human community could arise – a new way of being. This potential is still there – but it seems like it has been co-opted, distorted.
At Burning Man, there was always a tension between two world views, which I would characterize as libertarian hedonism and mystical anarchism. I feel, as a result of its rapid growth and, also, as the festival has become a magnet for the wealthy elite (the Silicon Valley crowd, the media moguls and their entourages, the Ibiza crowd, etc), it has tilted too far toward libertarian hedonism. Art cars have become the new yachts, representing expressions of massively inflated egos. Wealthy camps will drop hundreds of thousands on a vehicle, then parade it around, with a velvet rope vibe. Increasingly, the culture of Burning Man feels like an offshoot of the same mindless, self-interested, nihilistic worldview and neoliberal economics that are rapidly annihilating our shared life-world.
I remember, a few years back, I stayed near a camp that had been built for the founder of Cirque du Soleil, Guy Laliberté, and his friends. The camp was empty throughout the week. There were many beautiful gypsy caravan-style tents set up, awaiting the weekend visitors from Europe and Ibiza. There were also a few Mexican workers who labored over the course of the week, building shade structures and decorating the art cars. Nobody had offered these workers a place to stay in one of the carefully shaded luxury tents, so they had pitched their small nylon tent directly in the hot sun. That image seems to sum up where Burning Man has drifted, inexorably.
We lack a moral center in our society, and we are rapidly caroming toward the abyss. It is absolutely extraordinary – in itself, miraculous – that the new Pope, Pope Francis, has shown up as one of the only people in our entire planetary culture able to speak directly to the needs of our moment – he calls for an “ecological conversion,” for shared sacrifice on the part of the wealthy elite, a new mode of empathic and compassionate action for us all. In the Encyclical, Care for Our Common Home, Francis writes:
All-powerful God, you are present in the whole universe
and in the smallest of your creatures.
You embrace with your tenderness all that exists.
Pour out upon us the power of your love,
that we may protect life and beauty.
Fill us with peace, that we may live as brothers and sisters, harming no one.
O God of the poor,
help us to rescue the abandoned and forgotten of this earth,
so precious in your eyes.
Bring healing to our lives, that we may protect the world and not prey on it,
that we may sow beauty, not pollution and destruction.
Is it possible that Pope Francis could rehabilitate the Catholic tradition, which seemed utterly hopeless, corrupt and antiquated, and turn it into a progressive force for good? We are going to need a number of miraculous conversions and transformations such as this one, if we are going to survive as a species, and learn to flourish together with nature, in the short time before it is too late to do anything but undergo a universal, horrific meltdown – a Chod ritual, on a planetary scale.
As I wrote in my books, I believe Burning Man represents an organic expression of something innate to human being-ness: We need initiatory experiences – centers where non-ordinary states of consciousness can be explored and, also, interpreted, with a shared context for understanding and integration. Emerging from the psychedelic culture of the Bay Area, Burning Man is, to a certain extent, a postmodern reinvention of centers of Mystery School wisdom, like Eleusis, which the artists, philosophers, and leaders of the Classical World visited each year. However, at this point, it lacks a deeper awareness of its own value and purpose. Without this, it is in danger of becoming another appendage of the military-industrial-entertainment complex – another distraction factory.
I find that many people I know are living on the razor-edge of nihilism right now, skating the edge of the Void. In my own life, I have lived through the eruption and the projection of my own shadow material – and I see many people undergoing their own versions of this, in different areas of their lives. I can’t help but see this as a perfectly appropriate and even necessary part of a process that could lead to our apotheosis as a species (the birth of the Übermensch, who, according to Nietzsche, represents the fusion of “the mind of Caesar” with “the soul of Christ”) or our collective dissolution. It is exciting that this process seems to be happening within our current lifespans.
The infusion of Eastern metaphysics into the Western worldview is not necessarily helping, and it may actually be exacerbating our current crisis of values. The popular Buddhist monk Thich Nhat Hanh has recently noted that, within 100 years, the human race may go extinct. His perspective is accurate, according to scientific predictions. He notes, with an accelerated warming cycle like the one that caused the Permian Mass Extinction, 250 million years ago, “another 95 per cent of species will die out, including Homo sapiens. That is why we have to learn to touch eternity with our in breath and out breath. Extinction of species has happened several times. Mass extinction has already happened five times and this one is the sixth. According to the Buddhist tradition there is no birth and no death. After extinction things will reappear in other forms, so you have to breathe very deeply in order to acknowledge the fact that we humans may disappear in just 100 years on earth.”
There is a kind of fatalism to Buddhist thought that doesn’t mesh with our Western approach to reality. Personally, I find myself resonating far more deeply with the Pope’s call for a new spiritual mission that unifies humanity behind protecting life and nature, than I do with Hanh’s view, although I recognize the validity of his statement. Ultimately, there is only the white light of the Void, which certain psychedelic experiences – particularly 5-meo-DMT – experientially confirm. However, there are many other dimensions of being and levels of consciousness we can know and experience. We also possess creative, empathic, and imaginative capacities, which seem to be a divine power and dispensation. I think it would be truly amazing if we chose to make use of our deepest abilities to reverse the current direction of our society – to confront the ecological mega-crisis as a true initiation, and offer ourselves as vessels of this transformation.
In order to accomplish this, we would need to overcome our desire for spectacular distraction and insatiable consumption. Burning Man has always drawn its imaginative power from the paradoxes which are essential to it. A huge amount of money, energy, time, and fossil fuel is expended to create conditions which are difficult and force people (except for those wealthy enough to have air-tight sanctuaries built for them) to undergo a certain level of inner confrontation. I think we could further generalize from this, realizing that difficult and uncomfortable conditions are, in fact, necessary for our own development.
I will wrap this up, for now. The main point is there are many crucial lessons to learn from Burning Man: In many ways, it reveals our innate capacities to build a new society, a redesigned society, based on creativity, community, inspiration, and compassion. At the same time, Burning Man has become another spectacle – another cultural phenomenon, in a sense, a cult – and one that sucks a huge amount of energy and time from people who could re-focus their talents and genius on what we must do to escape ecological collapse (building a resilient or regenerative society). The organization, itself, needs to undergo another level of self-analysis and transformation – much like the Catholic Church appears to be doing, under Pope Francis’ lead.
In order to survive what’s coming, we must find a way to awaken a new spiritual impulse in the human community, beginning with our cultural, technocratic, and financial elites. And we don’t have time to waste.