I keep having this idea, not that I think it’s true, that when you die you appear in a talk show studio, and everyone is clapping. A host shakes your hand and asks you to sit down, and both of you go over how you think you did.
On a large screen, they play a long montage containing some of the more significant moments in your life. You and the host, along with the audience, look on as you make pivotal choices, overcome dilemmas, and meet the people who would become your friends and partners.
The film includes a lot of personality-defining moments, such as when you made the choice to embrace what became your art or your calling, if you had one, or when you took on a long-term responsibility that became a part of who you were. You also get to see, for only the second time, the moments in which your most important relationships went from superficial to true. Everyone in the studio is moved.
The members of the audience have seen many episodes of this show, and were once on it themselves. The overall tone of the production is quite pleasant and earnest. Clearly everyone is happy for you, celebrating your life rather than judging it, and probably remembering similar moments from their own reel.
The montage also covers things you missed—many of the experiences and relationships that didn’t happen, but could have, if you had accepted or extended a particular invitation, if you had made a particular effort at small talk instead of sinking into another painful silence, if you had bought that piano after all, if you had attended the indoor climbing center’s open house instead of telling yourself you’d go next year.
Of all the missed possibilities, the missed human connections stand out above the other kinds—the missed career and travel opportunities, cultural experiences, even the creative achievements—because by the end of your life the only thing that seemed relevant was the people you loved, or ended up loving. When you died all the value in your world resided there, in the simple and all-important fact that you really knew other people and other people really knew you.
And this part lasts forever, because, as you learn quickly, you missed many more connections than you made. Maybe fifty or a hundred times more. In fact, many times a wonderful connection with another person was just one simple action away from you, but you pulled back.
Such an incredible wealth of human connection—the greatest part of life, you know now—hinged on a phone call you didn’t bother with, a conversation you shut down, or an apology you’d make in an instant if they sent you back now. There was so much available to you, and it was so much closer than it seemed at the time.
In most of these moments, you pulled away from a budding connection because you wanted to protect yourself from some mildly uncomfortable possibility—that you might be bored at an acquaintance’s party and have to excuse yourself early, that a conversation you start might be difficult to escape from, that your act of openness might be taken advantage of. So you stayed home, said no, made excuses, and avoided many conversations. This small amount of uneasiness you avoided, you realize now, cost you many friendships as deep and rich as the best ones you did manage to have.
But you’re not going back, and there’s nothing left to cling to, and nothing left to protect yourself from. So the feeling you get watching all these missed connections isn’t regret, it’s abundance. It seems really wonderful that a human life could have contained fifty undeveloped relationships for every one that was allowed to thrive, given how rich and fulfilling some of those connections were. You’re happy to see that those chances were there, even though you didn’t quite recognize them in time to take advantage.
This all rests fine with you, knowing that you don’t need any more life advantages, because you’re done with the whole thing. Your lifelong wish of being safe from everything you fear has been granted. For the first time there is truly nothing to worry about.
It was all tradeoffs anyway. One thing you didn’t do allowed for something else to happen. But you can’t deny that there is a pattern in these tradeoffs: you frequently chose another dose of the predictable and comfortable over developing a relationship with another person.
After your segment finishes, new guests come on the show and you see the same thing in most of their clips. There are a few people who apparently had no reservations about being open and proactive towards others, and a few people whose reticence clearly helped them get by. But for the most part, you see people who really valued friendship and connection—more than anything else, they would say now—but let it pass them by again and again, because of some comfort-related concern that seemed more important at the time. It is the perfect example of John Lennon’s “making other plans” remark.
Happily, a little bit of this kind of wealth goes a long way. Even one great friendship is enough to make a person feel blessed that life went the way it did. So you don’t feel bad for the new guests. But it is endlessly fascinating to watch people learn that there was so much more out there, just a little bit beyond what felt perfectly safe.
So, fun fact! Zootopia underwent some major plot changes late in production … and it is so much better for it.
Only 17 months before completion, Zootopia had a very different premise. It featured Nick as its protagonist, a world-weary fox scraping by in a Zootopia in which predators were forced to wear “tame collars,” which shocked them every time they felt primitive (and thus dangerous) emotions like anger or excitement. You can see Nick wearing his in some of the concept art.
Nick was going to be framed for a crime he didn’t commit, and a tough-as-nails Judy was going to chase him down. Though the story did conclude with the abolishment of the tame collar mandate (there’s even a screen test of Nick’s reaction when the collar is removed), the filmmakers eventually decided that they’d created a world too unjust to save and characters too unlikable to root for. So, they rewrote the script to make Judy the main character and the discrimination in Zootopia more covert.
While the early version’s dystopian (dys-zootopian?) setting and darker story might have been as good as what we ended up with (the world shall, sadly, never know), I think it’s safe to say that the movie’s depiction of bigotry would have been much less interesting. The tricky thing about writing an allegory about prejudice in which the minority group faces codified oppression is that it tends to evoke thoughts of historical injustices rather than present ones.
It’s not that Zootopia’s anti-predator sentiment doesn’t draw parallels to current issues like Islamophobia, but there’s a knee-jerk reaction in people to connect allegories of institutionalized injustice with past wrongs, perhaps because it’s comforting to think that That Sort of Thing Doesn’t Happen Anymore.
Ultimately, a governmental mandate to collar all predators would have been perceived as an allegory for the registration of Jews and Romani under the Third Reich or the internment of Japanese Americans rather than for the casual prejudice and covertly bigoted legislation (e.g., “Voter ID laws are only intended to prevent voter fraud, we swear”) that are prevalent today.
What makes the final cut of Zootopia so brilliant is that its depiction of prejudice is distinctly modern and therefore much more uncomfortable to deal with. The city of Zootopia is integrated and progressive enough that, on a superficial level, it appears that its citizens aren’t prejudiced at all.
Clearly, predators aren’t regarded with suspicion, because the mayor is a lion. Small animals aren’t belittled or abused, because a sheep made it to assistant mayor. Rabbits aren’t dismissed as weak and dumb, because the ZPD just took on their first rabbit officer, and foxes aren’t excluded, because there’s no legislation prohibiting a fox pup from joining the Junior Ranger Scouts.
Needless to say, if you’ve seen the movie, you know things are a lot more complicated.
We can’t know how deftly the earlier drafts of Zootopia tackled its subject matter (I really do feel bad about criticizing a script I haven’t even read), but I can’t imagine that Judy’s openly anti-fox sentiment would have had the same cringe-factor as her condescension in calling Nick “a real articulate fella” or that her revelation that it’s wrong to forcibly collar 10% of the population would have been as heartbreaking and ugly as the moment Judy instinctively reaches for fox repellent and she and Nick realize that—despite everything they’ve accomplished together—she’s afraid of him.
I don’t want to be unfair. Had the earlier draft made it to the final cut, Zootopia might have still been a great film that addressed the origins and consequences of bigotry with intelligence, empathy and humor. Again, we’ll never know.
However, the changes made two-thirds of the way through production fundamentally altered the film’s approach to prejudice. They made it more relevant and therefore more complicated. It was a risk, and that risk paid off. To Byron Howard, Rich Moore, and everyone else who worked on this film, thank you for that.
When I was young, there was nothing so bad as being asked to work. Now I find it hard to conjure up that feeling, but I see it in my five-year-old daughter. “Can I please have some water, daddy?”
“You can get it yourself, you’re a big girl.”
“WHY DOES EVERYONE ALWAYS TREAT ME LIKE A MAID?”
That was me when I was young, rolling on the ground in agony on being asked to clean my room. As a child, I wonderingly observed the hours my father worked. The stoical way he went off to the job, chin held high, seemed a beautiful, heroic embrace of personal suffering. The poor man! How few hours he left himself to rest on the couch, read or watch American football.
My father had his own accounting firm in Raleigh, North Carolina. His speciality was helping people manage their tax and financial affairs as they started, expanded, or in some cases shut down their businesses. He has taken his time retiring, and I now realise how much he liked his work. I can remember the glowing terms in which his clients would tell me about the help he’d given them, as if he’d performed life-saving surgery on them. I also remember the way his voice changed when he received a call from a client when at home. Suddenly he spoke with a command and facility that I never heard at any other time, like a captive penguin released into open water, swimming in his element with natural ease.
At 37, I see my father’s routine with different eyes. I live in a terraced house in Wandsworth, a moderately smart and wildly expensive part of south-west London, and a short train ride from the headquarters of The Economist, where I write about economics. I get up at 5.30am and spend an hour or two at my desk at home. Once the children are up I join them for breakfast, then go to work as they head off to school. I can usually leave the office in time to join the family for dinner and put the children to bed. Then I can get a bit more done at home: writing, if there is a deadline looming, or reading, which is also part of the job. I work hard, doggedly, almost relentlessly. The joke, which I only now get, is that work is fun.
Not all work, of course. When my father was a boy on the family farm, the tasks he and his father did in the fields – the jobs many people still do – were gruelling and thankless. I once visited the textile mill where my grandmother worked for a time. The noise of the place was so overpowering that it was impossible to think. But my work – the work we lucky few well-paid professionals do every day, as we co-operate with talented people while solving complex, interesting problems – is fun. And I find that I can devote surprising quantities of time to it.
What is less clear to me, and to so many of my peers, is whether we should do so much of it. One of the facts of modern life is that a relatively small class of people works very long hours and earns good money for its efforts. Nearly a third of college-educated American men, for example, work more than 50 hours a week. Some professionals do twice that amount, and elite lawyers can easily work 70 hours a week almost every week of the year.
Work, in this context, means active, billable labour. But in reality, it rarely stops. It follows us home on our smartphones, tugging at us during an evening out or in the middle of our children’s bedtime routines. It makes permanent use of valuable cognitive space, and chooses odd hours to pace through our thoughts, shoving aside whatever might have been there before. It colonises our personal relationships and uses them for its own ends. It becomes our lives if we are not careful. It becomes us.
When John Maynard Keynes mused in 1930 that, a century hence, society might be so rich that the hours worked by each person could be cut to ten or 15 a week, he was not hallucinating, just extrapolating. The working week was shrinking fast. Average hours worked dropped from 60 at the turn of the century to 40 by the 1950s. The combination of extra time and money gave rise to an age of mass leisure, to family holidays and meals together in front of the television. There was a vision of the good life in this era. It was one in which work was largely a means to an end – the working class had become a leisured class. Households saved money to buy a house and a car, to take holidays, to finance a retirement at ease. This was the era of the three-Martini lunch: a leisurely, expense-padded midday bout of hard drinking. This was when bankers lived by the 3-6-3 rule: borrow at 3%, lend at 6%, and head off to the golf course by 3pm.
The vision of a leisure-filled future took shape against the backdrop of the competition with communism, but it is a capitalist dream: one in which the productive application of technology rises steadily, until material needs can be met with just a few hours of work. It is a story of the triumph of innovation and markets, and one in which the details of a post-work world are left somewhat hazy. Keynes, in his essay on the future, reckoned that when the end of work arrived:
For the first time since his creation man will be faced with his real, his permanent problem – how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.
Karl Marx had a different view: that being occupied by good work was living well. Engagement in productive, purposeful work was the means by which people could realise their full potential. He’s not credited with having got much right about the modern world, but maybe he wasn’t so wrong about our relationship with work.
In those decades after the second world war, Keynes seemed to have the better of the argument. As productivity rose across the rich world, hourly wages for typical workers kept rising and hours worked per week kept falling – to the mid-30s, by the 1970s. But then something went wrong. Less-skilled workers found themselves forced to accept ever-smaller pay rises to stay in work. The bargaining power of the typical blue-collar worker eroded as technology and globalisation handed bosses a whole toolkit of ways to squeeze labour costs. At the same time, the welfare state ceased its expansion and began to retreat, swept back by governments keen to boost growth by cutting taxes and removing labour-market restrictions. The income gains that might have gone to workers, that might have kept living standards rising even as hours fell, that might have kept society on the road to the Keynesian dream, flowed instead to those at the top of the income ladder. Willingly or unwillingly, those lower down the ladder worked fewer and fewer hours. Those at the top, meanwhile, worked longer and longer.
It was not obvious that things would turn out this way. You might have thought that whereas, before, a male professional worked 50 hours a week while his wife stayed at home with the children, a couple of married professionals might instead each opt to work 35 hours a week, sharing more of the housework, and ending up with both more money and more leisure. That didn’t happen. Rather, both are now more likely to work 60 hours a week and pay several people to care for the house and children.
Why? One possibility is that we have all got stuck on a treadmill. Technology and globalisation mean that an increasing number of good jobs are winner-take-most competitions. Banks and law firms amass extraordinary financial returns, directors and partners within those firms make colossal salaries, and the route to those coveted positions lies through years of round-the-clock work. The number of firms with global reach, and of tech start-ups that dominate a market niche, is limited. Securing a place near the top of the income spectrum in such a firm, and remaining in it, is a matter of constant struggle and competition. Meanwhile the technological forces that enable a few elite firms to become dominant also allow work, in the form of those constantly pinging emails, to follow us everywhere.
This relentless competition increases the need to earn high salaries, for as well-paid people cluster together they bid up the price of the resources for which they compete. In the brainpower-heavy cities where most of them live, getting on the property ladder requires the sort of sum that can be built up only through long hours in an important job. Then there is conspicuous consumption: the need to have a great-looking car and a home out of Interiors magazine, the competition to place children in good (that is, private) schools, the need to maintain a coterie of domestic workers – you mean you don’t have a personal shopper? And so on, and on.
The dollars and hours pile up as we aim for a good life that always stays just out of reach. In moments of exhaustion we imagine simpler lives in smaller towns with more hours free for family and hobbies and ourselves. Perhaps we just live in a nightmarish arms race: if we were all to disarm, collectively, then we could all live a calmer, happier, more equal life.
But that is not quite how it is. The problem is not that overworked professionals are all miserable. The problem is that they are not.
Drinking coffee one morning with a friend from my home town, we discuss our fathers’ working habits. Both are just past retirement age. Both worked in an era in which a good job was not all-consuming. When my father began his professional career, the post-war concept of the good life was still going strong. He was a dedicated, even passionate worker. Yet he never supposed that work should be the centre of his life.
Work was a means to an end; it was something you did to earn the money to pay for the important things in life. This was the advice I was given as a university student, struggling to figure out what career to pursue in order to have the best chance at an important, meaningful job. I think my parents were rather baffled by my determination to find satisfaction in my professional life. Life was what happened outside work. Life, in our house, was a week’s holiday at the beach or Pop standing on the sidelines at our baseball games. It was my parents at church, in the pew or volunteering in some way or another. It was having kids who gave you grandkids. Work merely provided more people to whom to show pictures of the grandkids.
This generation of workers, on the early side of the baby boom, is marching off to retirement now. There are things to do in those sunset years. But the hours will surely stretch out and become hard to fill. As I sit with my friend it dawns on us that retirement sounds awful. Why would we stop working?
Here is the alternative to the treadmill thesis. As professional life has evolved over the past generation, it has become much more pleasant. Software and information technology have eliminated much of the drudgery of the workplace. The duller sorts of labour have gone, performed by people in offshore service-centres or by machines. Offices in the rich world’s capitals are packed not with drones filing paperwork or adding up numbers but with clever people working collaboratively.
The pleasure lies partly in flow, in the process of losing oneself in a puzzle with a solution on which other people depend. The sense of purposeful immersion and exertion is the more appealing given the hands-on nature of the work: top professionals are the master craftsmen of the age, shaping high-quality, bespoke products from beginning to end. We design, fashion, smooth and improve, filing the rough edges and polishing the words, the numbers, the code or whatever is our chosen material. At the end of the day we can sit back and admire our work – the completed article, the sealed deal, the functioning app – in the way that artisans once did, and those earning a middling wage in the sprawling service-sector no longer do.
The fact that our jobs now follow us around is not necessarily a bad thing, either. Workers in cognitively demanding fields, thinking their way through tricky challenges, have always done so at odd hours. Academics in the midst of important research, or admen cooking up a new creative campaign, have always turned over the big questions in their heads while showering in the morning or gardening on a weekend afternoon. If more people find their brains constantly and profitably engaged, so much the better.
Smartphones do not just enable work to follow us around; they also make life easier. Tasks that might otherwise require you to stay late in the office can be taken home. Parents can enjoy dinner and bedtime with the children before turning back to the job at hand. Technology is also lowering the cost of the support staff that make long hours possible. No need to employ a full-time personal assistant to run the errands these days: there are apps to take care of the shopping, the laundry and the dinner, walk the dog, fix the car and mend the hole in the roof. All of these allow us to focus ever more of our time and energy on doing what our jobs require of us.
There are downsides to this life. It does not allow us much time with newborn children or family members who are ill; or to develop hobbies, side-interests or the pleasures of particular, leisurely rituals – or anything, indeed, that is not intimately connected with professional success. But the inadmissible truth is that the eclipsing of life’s other complications is part of the reward.
It is a cognitive and emotional relief to immerse oneself in something all-consuming while other difficulties float by. The complexities of intellectual puzzles are nothing to those of emotional ones. Work is a wonderful refuge.
This life is a package deal. Cities are expensive. Less prestigious work that demands less commitment from those who do it pays less – often much less. For those without independent wealth, dialling back professional ambition and effort means moving away, to smaller and cheaper places.
But stepping off the treadmill does not just mean accepting a different vision of one’s prospects with a different salary trajectory. It means upending one’s life entirely: changing locations, tumbling out of the community, losing one’s identity. That is a difficult thing to survive. One must have an extremely strong, secure sense of self to negotiate it.
I’ve watched people try. In 2009 good friends of ours packed their things and moved away from Washington, DC, where we lived at the time, to the small college town of Charlottesville, Virginia. It was an idyllic little place, nestled in the Appalachian foothills, surrounded by horse farms and vineyards, with cheap, charming homes. He persuaded his employer to let him telework; she left her high-pressure job as vice-president at a big web firm near Washington to take a position at a local company.
My wife and I were intrigued by the thought of doing the same. She could teach there, we reckoned, and I could write. It was a reasonable train ride from Washington, if I needed to meet editors. We would be able to enjoy the fresh air, and the peace and quiet. Perhaps at some point we would open our own shop on the main street or try our hand at winemaking, if we could save a little money.
Yet the more seriously we thought about it, the less I liked the idea. I want hours of quiet to write in, not days and weeks. I would miss, desperately, being in an office and arguing about ideas. More than that, I could anticipate with perfect clarity how the rhythm of life would slow as we left the city, how the external pressure to keep moving would diminish. I didn’t want more time to myself; I wanted to feel pushed to be better and achieve more. It wasn’t the stress of being on the fast track that caused my chest to tighten and my heart rate to rise, but the thought of being left behind by those still on it.
Less than a year after moving away, our friends moved back. They had found themselves bored and lonely. We were glad, and relieved as well: their return justified our decision to stay in the city. One reason the treadmill is so hard to walk away from is that life off it is not what it once was. When I was a child, our neighbourhood was rich with social interaction. My father played on the church softball team until his back got too bad. My mother helped with charity food-and-toy drives. They both taught classes and chaperoned youth choir trips. They socialised with neighbours who did these things too.
Those elements of life persist, of course, but they are somewhat diminished, as Robert Putnam, a social scientist, observed in 1995 in “Bowling Alone: America’s Declining Social Capital”. He described the shrivelling of civic institutions, which he blamed on many of the forces that coincided with, and contributed to, our changing relationship to work: the entry of women into the workforce; the rise of professional ghettoes; longer working hours.
One of the civic groups that Putnam cites as an important contributor to social capital in ages past was the labour union. In the post-war era, unions thrived because of healthy demand for blue-collar workers who shared a strong sense of class identity. That allowed the unions’ members to capture an outsize share of the gains from economic growth, while also providing workers and their families with a strong sense of community – indeed, of solidarity.
The labour movement has unravelled in recent decades, and with it the network that supported its members; but these days a similar virtuous circle supports the professional classes instead. Our social networks are made up not just of neighbours and friends, but also of clients and colleagues. This interlaced world of work and social life enriches us, exposing us to people who do fascinating things, keeping us informed of professional gossip and providing those who have good ideas with the connections to help turn them into reality. It also traps us. The suspicion that one might be missing out on a useful opportunity or idea helps prod us off the sofa when an evening with “True Detective” beckons seductively.
This mixing of the social and professional is not new. It is not unlike Hollywood, where friends have always become collaborators, actors marry directors, and an evening out on the town has always been a public act that shapes the brand value of the star. Or like Washington, DC, in which public officials, journalists and policy experts swap jobs every few years and go to the same parties at night: befriending and sleeping with each other, exchanging ideas, living a life in which all behaviour is professional to some extent. But as hours have lengthened and work has become more engaging, this social pattern has swallowed other worlds.
There is a psychic value to the intertwining of life and work as well as an economic one. The society of people like us reinforces our belief in what we do.
Working effectively at a good job builds up our identity and esteem in the eyes of others. We cheer each other on, we share in (and quietly regret) the successes of our friends, we lose touch with people beyond our network. Spending our leisure time with other professional strivers buttresses the notion that hard work is part of the good life and that the sacrifices it entails are those that a decent person makes. This is what a class with a strong sense of identity does: it effortlessly recasts the group’s distinguishing vices as virtues.
Life within this professional community has its impositions. It makes failure or error a more difficult, humiliating experience. Social life ceases to be a refuge from the indignities of work. The sincerity of relationships becomes questionable when people are friends of convenience. A friend – a real one – muses to me that those who become immersed in lives like this suffer from Stockholm Syndrome: they befriend their clients because they spend too much time with them to know there are other, better options available. The fact that I find it hard to pass judgment on this statement suggests that I, too, may be a victim.
My parents have not quite managed to retire, but they are getting there. Even with one foot in and one foot out of retirement, their post-career itinerary is becoming clear. They mean to see parts of the world they couldn’t when they were young and had no money, or when they were older and had no time. Their travels occasionally bring them to London to see me and my family. On a recent visit the talk shifted, as it often does, to when I might be planning to return to the east coast of America, much closer to the Carolinas, which is where they and most of the rest of my extended family still live. As my father walks around the house, my three-year-old son trotting adoringly behind him, they ask whether I couldn’t do my job as easily closer to home.
I get hung up on “as easily”. The writing I could do as easily, just about. Building my career, away from our London headquarters, would not be so easy. As I explain this, a circularity threatens to overtake my point: to build my career is to make myself indispensable, demonstrating indispensability means burying myself in the work, and the upshot of successfully demonstrating my indispensability is the need to continue working tirelessly. Not only can I not do all that elsewhere; outside London, the obvious brilliance of a commitment to this course of action is underappreciated. It looks pointless – daft, even.
And I begin to understand the nature of the trouble I’m having communicating to my parents precisely why what I’m doing appeals to me. They are asking about a job. I am thinking about identity, community, purpose – the things that provide meaning and motivation. I am talking about my life.
WAR is a racket. It always has been.
It is possibly the oldest, easily the most profitable, surely the most vicious. It is the only one international in scope. It is the only one in which the profits are reckoned in dollars and the losses in lives.
A racket is best described, I believe, as something that is not what it seems to the majority of the people. Only a small "inside" group knows what it is about. It is conducted for the benefit of the very few, at the expense of the very many. Out of war a few people make huge fortunes.
In the World War [I] a mere handful garnered the profits of the conflict. At least 21,000 new millionaires and billionaires were made in the United States during the World War. That many admitted their huge blood gains in their income tax returns. How many other war millionaires falsified their tax returns no one knows.
How many of these war millionaires shouldered a rifle? How many of them dug a trench? How many of them knew what it meant to go hungry in a rat-infested dug-out? How many of them spent sleepless, frightened nights, ducking shells and shrapnel and machine gun bullets? How many of them parried a bayonet thrust of an enemy? How many of them were wounded or killed in battle?
Out of war nations acquire additional territory, if they are victorious. They just take it. This newly acquired territory promptly is exploited by the few -- the selfsame few who wrung dollars out of blood in the war. The general public shoulders the bill.
And what is this bill?
This bill renders a horrible accounting. Newly placed gravestones. Mangled bodies. Shattered minds. Broken hearts and homes. Economic instability. Depression and all its attendant miseries. Back-breaking taxation for generations and generations.
For a great many years, as a soldier, I had a suspicion that war was a racket; not until I retired to civil life did I fully realize it. Now that I see the international war clouds gathering, as they are today, I must face it and speak out.
Again they are choosing sides. France and Russia met and agreed to stand side by side. Italy and Austria hurried to make a similar agreement. Poland and Germany cast sheep's eyes at each other, forgetting, for the nonce [for the time being], their dispute over the Polish Corridor.
The assassination of King Alexander of Jugoslavia [Yugoslavia] complicated matters. Jugoslavia and Hungary, long bitter enemies, were almost at each other's throats. Italy was ready to jump in. But France was waiting. So was Czechoslovakia. All of them are looking ahead to war. Not the people -- not those who fight and pay and die -- only those who foment wars and remain safely at home to profit.
There are 40,000,000 men under arms in the world today, and our statesmen and diplomats have the temerity to say that war is not in the making.
Hell's bells! Are these 40,000,000 men being trained to be dancers?
Not in Italy, to be sure. Premier Mussolini knows what they are being trained for. He, at least, is frank enough to speak out. Only the other day, Il Duce in "International Conciliation," the publication of the Carnegie Endowment for International Peace, said:
"And above all, Fascism, the more it considers and observes the future and the development of humanity quite apart from political considerations of the moment, believes neither in the possibility nor the utility of perpetual peace. . . . War alone brings up to its highest tension all human energy and puts the stamp of nobility upon the people who have the courage to meet it."
Undoubtedly Mussolini means exactly what he says. His well-trained army, his great fleet of planes, and even his navy are ready for war -- anxious for it, apparently. His recent stand at the side of Hungary in the latter's dispute with Jugoslavia showed that. And the hurried mobilization of his troops on the Austrian border after the assassination of Dollfuss showed it too. There are others in Europe too whose sabre rattling presages war, sooner or later.
Herr Hitler, with his rearming Germany and his constant demands for more and more arms, is an equal if not greater menace to peace. France only recently increased the term of military service for its youth from a year to eighteen months.
Yes, all over, nations are camping in their arms. The mad dogs of Europe are on the loose. In the Orient the maneuvering is more adroit. Back in 1904, when Russia and Japan fought, we kicked out our old friends the Russians and backed Japan. Then our very generous international bankers were financing Japan. Now the trend is to poison us against the Japanese. What does the "open door" policy to China mean to us? Our trade with China is about $90,000,000 a year. Or the Philippine Islands? We have spent about $600,000,000 in the Philippines in thirty-five years and we (our bankers and industrialists and speculators) have private investments there of less than $200,000,000.
Then, to save that China trade of about $90,000,000, or to protect these private investments of less than $200,000,000 in the Philippines, we would be all stirred up to hate Japan and go to war -- a war that might well cost us tens of billions of dollars, hundreds of thousands of lives of Americans, and many more hundreds of thousands of physically maimed and mentally unbalanced men.
Of course, for this loss, there would be a compensating profit -- fortunes would be made. Millions and billions of dollars would be piled up. By a few. Munitions makers. Bankers. Ship builders. Manufacturers. Meat packers. Speculators. They would fare well.
Yes, they are getting ready for another war. Why shouldn't they? It pays high dividends.
But what does it profit the men who are killed? What does it profit their mothers and sisters, their wives and their sweethearts? What does it profit their children?
What does it profit anyone except the very few to whom war means huge profits?
Yes, and what does it profit the nation?
Take our own case. Until 1898 we didn't own a bit of territory outside the mainland of North America. At that time our national debt was a little more than $1,000,000,000. Then we became "internationally minded." We forgot, or shunted aside, the advice of the Father of our country. We forgot George Washington's warning about "entangling alliances." We went to war. We acquired outside territory. At the end of the World War period, as a direct result of our fiddling in international affairs, our national debt had jumped to over $25,000,000,000. Our total favorable trade balance during the twenty-five-year period was about $24,000,000,000. Therefore, on a purely bookkeeping basis, we ran a little behind year for year, and that foreign trade might well have been ours without the wars.
It would have been far cheaper (not to say safer) for the average American who pays the bills to stay out of foreign entanglements. For a very few this racket, like bootlegging and other underworld rackets, brings fancy profits, but the cost of operations is always transferred to the people -- who do not profit.
CHAPTER TWO
Who Makes The Profits?
The World War, rather our brief participation in it, has cost the United States some $52,000,000,000. Figure it out. That means $400 to every American man, woman, and child. And we haven't paid the debt yet. We are paying it, our children will pay it, and our children's children probably still will be paying the cost of that war.
The normal profits of a business concern in the United States are six, eight, ten, and sometimes twelve percent. But war-time profits -- ah! that is another matter -- twenty, sixty, one hundred, three hundred, and even eighteen hundred per cent -- the sky is the limit. All that traffic will bear. Uncle Sam has the money. Let's get it.
Of course, it isn't put that crudely in war time. It is dressed into speeches about patriotism, love of country, and "we must all put our shoulders to the wheel," but the profits jump and leap and skyrocket -- and are safely pocketed. Let's just take a few examples:
Take our friends the du Ponts, the powder people -- didn't one of them testify before a Senate committee recently that their powder won the war? Or saved the world for democracy? Or something? How did they do in the war? They were a patriotic corporation. Well, the average earnings of the du Ponts for the period 1910 to 1914 were $6,000,000 a year. It wasn't much, but the du Ponts managed to get along on it. Now let's look at their average yearly profit during the war years, 1914 to 1918. Fifty-eight million dollars a year profit we find! Nearly ten times that of normal times, and the profits of normal times were pretty good. An increase in profits of more than 950 per cent.
Take one of our little steel companies that patriotically shunted aside the making of rails and girders and bridges to manufacture war materials. Well, their 1910-1914 yearly earnings averaged $6,000,000. Then came the war. And, like loyal citizens, Bethlehem Steel promptly turned to munitions making. Did their profits jump -- or did they let Uncle Sam in for a bargain? Well, their 1914-1918 average was $49,000,000 a year!
Or, let's take United States Steel. The normal earnings during the five-year period prior to the war were $105,000,000 a year. Not bad. Then along came the war and up went the profits. The average yearly profit for the period 1914-1918 was $240,000,000. Not bad.
There you have some of the steel and powder earnings. Let's look at something else. A little copper, perhaps. That always does well in war times.
Anaconda, for instance. Average yearly earnings during the pre-war years 1910-1914 of $10,000,000. During the war years 1914-1918 profits leaped to $34,000,000 per year.
Or Utah Copper. Average of $5,000,000 per year during the 1910-1914 period. Jumped to an average of $21,000,000 yearly profits for the war period.
Let's group these five, with three smaller companies. The total yearly average profits of the pre-war period 1910-1914 were $137,480,000. Then along came the war. The average yearly profits for this group skyrocketed to $408,300,000.
A little increase in profits of approximately 200 per cent.
Does war pay? It paid them. But they aren't the only ones. There are still others. Let's take leather.
For the three-year period before the war the total profits of Central Leather Company were $3,500,000. That was approximately $1,167,000 a year. Well, in 1916 Central Leather returned a profit of $15,000,000, a small increase of 1,100 per cent. That's all. The General Chemical Company averaged a profit for the three years before the war of a little over $800,000 a year. Came the war, and the profits jumped to $12,000,000, a leap of 1,400 per cent.
International Nickel Company -- and you can't have a war without nickel -- showed an increase in profits from a mere average of $4,000,000 a year to $73,000,000 yearly. Not bad? An increase of more than 1,700 per cent.
American Sugar Refining Company averaged $2,000,000 a year for the three years before the war. In 1916 a profit of $6,000,000 was recorded.
Listen to Senate Document No. 259. The Sixty-Fifth Congress, reporting on corporate earnings and government revenues. Considering the profits of 122 meat packers, 153 cotton manufacturers, 299 garment makers, 49 steel plants, and 340 coal producers during the war. Profits under 25 per cent were exceptional. For instance the coal companies made between 100 per cent and 7,856 per cent on their capital stock during the war. The Chicago packers doubled and tripled their earnings.
And let us not forget the bankers who financed the great war. If anyone had the cream of the profits it was the bankers. Being partnerships rather than incorporated organizations, they do not have to report to stockholders. And their profits were as secret as they were immense. How the bankers made their millions and their billions I do not know, because those little secrets never become public -- even before a Senate investigatory body.
But here's how some of the other patriotic industrialists and speculators chiseled their way into war profits.
Take the shoe people. They like war. It brings business with abnormal profits. They made huge profits on sales abroad to our allies. Perhaps, like the munitions manufacturers and armament makers, they also sold to the enemy. For a dollar is a dollar whether it comes from Germany or from France. But they did well by Uncle Sam too. For instance, they sold Uncle Sam 35,000,000 pairs of hobnailed service shoes. There were 4,000,000 soldiers. Eight pairs, and more, to a soldier. My regiment during the war had only one pair to a soldier. Some of these shoes probably are still in existence. They were good shoes. But when the war was over Uncle Sam had a matter of 25,000,000 pairs left over. Bought -- and paid for. Profits recorded and pocketed.
There was still lots of leather left. So the leather people sold your Uncle Sam hundreds of thousands of McClellan saddles for the cavalry. But there wasn't any American cavalry overseas! Somebody had to get rid of this leather, however. Somebody had to make a profit in it -- so we had a lot of McClellan saddles. And we probably have those yet.
Also somebody had a lot of mosquito netting. They sold your Uncle Sam 20,000,000 mosquito nets for the use of the soldiers overseas. I suppose the boys were expected to put it over them as they tried to sleep in muddy trenches -- one hand scratching cooties on their backs and the other making passes at scurrying rats. Well, not one of these mosquito nets ever got to France!
Anyhow, these thoughtful manufacturers wanted to make sure that no soldier would be without his mosquito net, so 40,000,000 additional yards of mosquito netting were sold to Uncle Sam.
There were pretty good profits in mosquito netting in those days, even if there were no mosquitoes in France. I suppose, if the war had lasted just a little longer, the enterprising mosquito netting manufacturers would have sold your Uncle Sam a couple of consignments of mosquitoes to plant in France so that more mosquito netting would be in order.
Airplane and engine manufacturers felt they, too, should get their just profits out of this war. Why not? Everybody else was getting theirs. So $1,000,000,000 -- count them if you live long enough -- was spent by Uncle Sam in building airplane engines that never left the ground! Not one plane, or motor, out of the billion dollars worth ordered, ever got into a battle in France. Just the same the manufacturers made their little profit of 30, 100, or perhaps 300 per cent.
Undershirts for soldiers cost 14¢ [cents] to make and Uncle Sam paid 30¢ to 40¢ each for them -- a nice little profit for the undershirt manufacturer. And the stocking manufacturer and the uniform manufacturers and the cap manufacturers and the steel helmet manufacturers -- all got theirs.
Why, when the war was over some 4,000,000 sets of equipment -- knapsacks and the things that go to fill them -- crammed warehouses on this side. Now they are being scrapped because the regulations have changed the contents. But the manufacturers collected their wartime profits on them -- and they will do it all over again the next time.
There were lots of brilliant ideas for profit making during the war.
One very versatile patriot sold Uncle Sam twelve dozen 48-inch wrenches. Oh, they were very nice wrenches. The only trouble was that there was only one nut ever made that was large enough for these wrenches. That is the one that holds the turbines at Niagara Falls. Well, after Uncle Sam had bought them and the manufacturer had pocketed the profit, the wrenches were put on freight cars and shunted all around the United States in an effort to find a use for them. When the Armistice was signed it was indeed a sad blow to the wrench manufacturer. He was just about to make some nuts to fit the wrenches. Then he planned to sell these, too, to your Uncle Sam.
Still another had the brilliant idea that colonels shouldn't ride in automobiles, nor should they even ride on horseback. One has probably seen a picture of Andy Jackson riding in a buckboard. Well, some 6,000 buckboards were sold to Uncle Sam for the use of colonels! Not one of them was used. But the buckboard manufacturer got his war profit.
The shipbuilders felt they should come in on some of it, too. They built a lot of ships that made a lot of profit. More than $3,000,000,000 worth. Some of the ships were all right. But $635,000,000 worth of them were made of wood and wouldn't float! The seams opened up -- and they sank. We paid for them, though. And somebody pocketed the profits.
It has been estimated by statisticians and economists and researchers that the war cost your Uncle Sam $52,000,000,000. Of this sum, $39,000,000,000 was expended in the actual war itself. This expenditure yielded $16,000,000,000 in profits. That is how the 21,000 billionaires and millionaires got that way. This $16,000,000,000 profits is not to be sneezed at. It is quite a tidy sum. And it went to a very few.
The Senate (Nye) committee probe of the munitions industry and its wartime profits, despite its sensational disclosures, hardly has scratched the surface.
Even so, it has had some effect. The State Department has been studying "for some time" methods of keeping out of war. The War Department suddenly decides it has a wonderful plan to spring. The Administration names a committee -- with the War and Navy Departments ably represented under the chairmanship of a Wall Street speculator -- to limit profits in war time. To what extent isn't suggested. Hmmm. Possibly the profits of 300 and 600 and 1,600 per cent of those who turned blood into gold in the World War would be limited to some smaller figure.
Apparently, however, the plan does not call for any limitation of losses -- that is, the losses of those who fight the war. As far as I have been able to ascertain there is nothing in the scheme to limit a soldier to the loss of but one eye, or one arm, or to limit his wounds to one or two or three. Or to limit the loss of life.
There is nothing in this scheme, apparently, that says not more than 12 per cent of a regiment shall be wounded in battle, or that not more than 7 per cent in a division shall be killed.
Of course, the committee cannot be bothered with such trifling matters.
CHAPTER THREE
Who Pays The Bills?
Who provides the profits -- these nice little profits of 20, 100, 300, 1,500 and 1,800 per cent? We all pay them -- in taxation. We paid the bankers their profits when we bought Liberty Bonds at $100.00 and sold them back at $84 or $86 to the bankers. These bankers collected $100 plus. It was a simple manipulation. The bankers control the security marts. It was easy for them to depress the price of these bonds. Then all of us -- the people -- got frightened and sold the bonds at $84 or $86. The bankers bought them. Then these same bankers stimulated a boom and government bonds went to par -- and above. Then the bankers collected their profits.
But the soldier pays the biggest part of the bill.
If you don't believe this, visit the American cemeteries on the battlefields abroad. Or visit any of the veterans' hospitals in the United States. On a tour of the country, in the midst of which I am at the time of this writing, I have visited eighteen government hospitals for veterans. In them are a total of about 50,000 destroyed men -- men who were the pick of the nation eighteen years ago. The very able chief surgeon at the government hospital at Milwaukee, where there are 3,800 of the living dead, told me that mortality among veterans is three times as great as among those who stayed at home.
Boys with a normal viewpoint were taken out of the fields and offices and factories and classrooms and put into the ranks. There they were remolded; they were made over; they were made to "about face"; to regard murder as the order of the day. They were put shoulder to shoulder and, through mass psychology, they were entirely changed. We used them for a couple of years and trained them to think nothing at all of killing or of being killed.
Then, suddenly, we discharged them and told them to make another "about face"! This time they had to do their own readjustment, sans [without] mass psychology, sans officers' aid and advice and sans nation-wide propaganda. We didn't need them any more. So we scattered them about without any "three-minute" or "Liberty Loan" speeches or parades. Many, too many, of these fine young boys are eventually destroyed, mentally, because they could not make that final "about face" alone.
In the government hospital in Marion, Indiana, 1,800 of these boys are in pens! Five hundred of them in a barracks with steel bars and wires all around outside the buildings and on the porches. These already have been mentally destroyed. These boys don't even look like human beings. Oh, the looks on their faces! Physically, they are in good shape; mentally, they are gone.
There are thousands and thousands of these cases, and more and more are coming in all the time. The tremendous excitement of the war, the sudden cutting off of that excitement -- the young boys couldn't stand it.
That's a part of the bill. So much for the dead -- they have paid their part of the war profits. So much for the mentally and physically wounded -- they are paying now their share of the war profits. But the others paid, too -- they paid with heartbreaks when they tore themselves away from their firesides and their families to don the uniform of Uncle Sam -- on which a profit had been made. They paid another part in the training camps where they were regimented and drilled while others took their jobs and their places in the lives of their communities. They paid for it in the trenches where they shot and were shot; where they were hungry for days at a time; where they slept in the mud and the cold and in the rain -- with the moans and shrieks of the dying for a horrible lullaby.
But don't forget -- the soldier paid part of the dollars and cents bill too.
Up to and including the Spanish-American War, we had a prize system, and soldiers and sailors fought for money. During the Civil War they were paid bonuses, in many instances, before they went into service. The government, or states, paid as high as $1,200 for an enlistment. In the Spanish-American War they gave prize money. When we captured any vessels, the soldiers all got their share -- at least, they were supposed to. Then it was found that we could reduce the cost of wars by taking all the prize money and keeping it, but conscripting [drafting] the soldier anyway. Then soldiers couldn't bargain for their labor. Everyone else could bargain, but the soldier couldn't.
Napoleon once said,
"All men are enamored of decorations . . . they positively hunger for them."
So by developing the Napoleonic system -- the medal business -- the government learned it could get soldiers for less money, because the boys liked to be decorated. Until the Civil War there were no medals. Then the Congressional Medal of Honor was handed out. It made enlistments easier. After the Civil War no new medals were issued until the Spanish-American War.
In the World War, we used propaganda to make the boys accept conscription. They were made to feel ashamed if they didn't join the army.
So vicious was this war propaganda that even God was brought into it. With few exceptions our clergymen joined in the clamor to kill, kill, kill. To kill the Germans. God is on our side . . . it is His will that the Germans be killed.
And in Germany, the good pastors called upon the Germans to kill the allies . . . to please the same God. That was a part of the general propaganda, built up to make people war conscious and murder conscious.
Beautiful ideals were painted for our boys who were sent out to die. This was the "war to end all wars." This was the "war to make the world safe for democracy." No one mentioned to them, as they marched away, that their going and their dying would mean huge war profits. No one told these American soldiers that they might be shot down by bullets made by their own brothers here. No one told them that the ships on which they were going to cross might be torpedoed by submarines built with United States patents. They were just told it was to be a "glorious adventure."
Thus, having stuffed patriotism down their throats, it was decided to make them help pay for the war, too. So, we gave them the large salary of $30 a month.
All they had to do for this munificent sum was to leave their dear ones behind, give up their jobs, lie in swampy trenches, eat canned willy (when they could get it) and kill and kill and kill . . . and be killed.
But wait!
Half of that wage (just a little more than a riveter in a shipyard or a laborer in a munitions factory safe at home made in a day) was promptly taken from him to support his dependents, so that they would not become a charge upon his community. Then we made him pay what amounted to accident insurance -- something the employer pays for in an enlightened state -- and that cost him $6 a month. He had less than $9 a month left.
Then, the most crowning insolence of all -- he was virtually blackjacked into paying for his own ammunition, clothing, and food by being made to buy Liberty Bonds. Most soldiers got no money at all on pay days.
We made them buy Liberty Bonds at $100 and then we bought them back -- when they came back from the war and couldn't find work -- at $84 and $86. And the soldiers bought about $2,000,000,000 worth of these bonds!
Yes, the soldier pays the greater part of the bill. His family pays too. They pay it in the same heart-break that he does. As he suffers, they suffer. At nights, as he lay in the trenches and watched shrapnel burst about him, they lay home in their beds and tossed sleeplessly -- his father, his mother, his wife, his sisters, his brothers, his sons, and his daughters.
When he returned home minus an eye, or minus a leg or with his mind broken, they suffered too -- as much as and even sometimes more than he. Yes, and they, too, contributed their dollars to the profits that the munitions makers and bankers and shipbuilders and the manufacturers and the speculators made. They, too, bought Liberty Bonds and contributed to the profit of the bankers after the Armistice in the hocus-pocus of manipulated Liberty Bond prices.
And even now the families of the wounded men and of the mentally broken and those who never were able to readjust themselves are still suffering and still paying.
CHAPTER FOUR
How To Smash This Racket!
WELL, it's a racket, all right.
A few profit -- and the many pay. But there is a way to stop it. You can't end it by disarmament conferences. You can't eliminate it by peace parleys at Geneva. Well-meaning but impractical groups can't wipe it out by resolutions. It can be smashed effectively only by taking the profit out of war.
The only way to smash this racket is to conscript capital and industry and labor before the nation's manhood can be conscripted. One month before the Government can conscript the young men of the nation -- it must conscript capital and industry and labor. Let the officers and the directors and the high-powered executives of our armament factories and our munitions makers and our shipbuilders and our airplane builders and the manufacturers of all the other things that provide profit in war time as well as the bankers and the speculators, be conscripted -- to get $30 a month, the same wage as the lads in the trenches get.
Let the workers in these plants get the same wages -- all the workers, all presidents, all executives, all directors, all managers, all bankers -- yes, and all generals and all admirals and all officers and all politicians and all government office holders -- everyone in the nation be restricted to a total monthly income not to exceed that paid to the soldier in the trenches!
Let all these kings and tycoons and masters of business and all those workers in industry and all our senators and governors and mayors pay half of their monthly $30 wage to their families and pay war risk insurance and buy Liberty Bonds.
Why shouldn't they?
They aren't running any risk of being killed or of having their bodies mangled or their minds shattered. They aren't sleeping in muddy trenches. They aren't hungry. The soldiers are!
Give capital and industry and labor thirty days to think it over and you will find, by that time, there will be no war. That will smash the war racket -- that and nothing else.
Maybe I am a little too optimistic. Capital still has some say. So capital won't permit the taking of the profit out of war until the people -- those who do the suffering and still pay the price -- make up their minds that those they elect to office shall do their bidding, and not that of the profiteers.
Another step necessary in this fight to smash the war racket is the limited plebiscite to determine whether a war should be declared. A plebiscite not of all the voters but merely of those who would be called upon to do the fighting and dying. There wouldn't be very much sense in having a 76-year-old president of a munitions factory or the flat-footed head of an international banking firm or the cross-eyed manager of a uniform manufacturing plant -- all of whom see visions of tremendous profits in the event of war -- voting on whether the nation should go to war or not. They never would be called upon to shoulder arms -- to sleep in a trench and to be shot. Only those who would be called upon to risk their lives for their country should have the privilege of voting to determine whether the nation should go to war.
There is ample precedent for restricting the voting to those affected. Many of our states have restrictions on those permitted to vote. In most, it is necessary to be able to read and write before you may vote. In some, you must own property. It would be a simple matter each year for the men coming of military age to register in their communities as they did in the draft during the World War and be examined physically. Those who could pass and who would therefore be called upon to bear arms in the event of war would be eligible to vote in a limited plebiscite. They should be the ones to have the power to decide -- and not a Congress few of whose members are within the age limit and fewer still of whom are in physical condition to bear arms. Only those who must suffer should have the right to vote.
A third step in this business of smashing the war racket is to make certain that our military forces are truly forces for defense only.
At each session of Congress the question of further naval appropriations comes up. The swivel-chair admirals of Washington (and there are always a lot of them) are very adroit lobbyists. And they are smart. They don't shout that "We need a lot of battleships to war on this nation or that nation." Oh no. First of all, they let it be known that America is menaced by a great naval power. Almost any day, these admirals will tell you, the great fleet of this supposed enemy will strike suddenly and annihilate 125,000,000 people. Just like that. Then they begin to cry for a larger navy. For what? To fight the enemy? Oh my, no. Oh, no. For defense purposes only.
Then, incidentally, they announce maneuvers in the Pacific. For defense. Uh, huh.
The Pacific is a great big ocean. We have a tremendous coastline on the Pacific. Will the maneuvers be off the coast, two or three hundred miles? Oh, no. The maneuvers will be two thousand, yes, perhaps even thirty-five hundred miles, off the coast.
The Japanese, a proud people, of course will be pleased beyond expression to see the United States fleet so close to Nippon's shores. Even as pleased as would be the residents of California were they to dimly discern, through the morning mist, the Japanese fleet playing at war games off Los Angeles.
The ships of our navy, it can be seen, should be specifically limited, by law, to within 200 miles of our coastline. Had that been the law in 1898 the Maine would never have gone to Havana Harbor. She never would have been blown up. There would have been no war with Spain with its attendant loss of life. Two hundred miles is ample, in the opinion of experts, for defense purposes. Our nation cannot start an offensive war if its ships can't go further than 200 miles from the coastline. Planes might be permitted to go as far as 500 miles from the coast for purposes of reconnaissance. And the army should never leave the territorial limits of our nation.
To summarize: Three steps must be taken to smash the war racket.
We must take the profit out of war.
We must permit the youth of the land who would bear arms to decide whether or not there should be war.
We must limit our military forces to home defense purposes.
CHAPTER FIVE
To Hell With War!
I am not such a fool as to believe that war is a thing of the past. I know the people do not want war, but there is no use in saying we cannot be pushed into another war.
Looking back, Woodrow Wilson was re-elected president in 1916 on a platform that he had "kept us out of war" and on the implied promise that he would "keep us out of war." Yet, five months later he asked Congress to declare war on Germany.
In that five-month interval the people had not been asked whether they had changed their minds. The 4,000,000 young men who put on uniforms and marched or sailed away were not asked whether they wanted to go forth to suffer and die.
Then what caused our government to change its mind so suddenly?
Money.
An allied commission, it may be recalled, came over shortly before the war declaration and called on the President. The President summoned a group of advisers. The head of the commission spoke. Stripped of its diplomatic language, this is what he told the President and his group:
"There is no use kidding ourselves any longer. The cause of the allies is lost. We now owe you (American bankers, American munitions makers, American manufacturers, American speculators, American exporters) five or six billion dollars.
If we lose (and without the help of the United States we must lose) we, England, France and Italy, cannot pay back this money . . . and Germany won't.
So . . . "
Had secrecy been outlawed as far as war negotiations were concerned, and had the press been invited to be present at that conference, or had radio been available to broadcast the proceedings, America never would have entered the World War. But this conference, like all war discussions, was shrouded in utmost secrecy. When our boys were sent off to war they were told it was a "war to make the world safe for democracy" and a "war to end all wars."
Well, eighteen years after, the world has less of democracy than it had then. Besides, what business is it of ours whether Russia or Germany or England or France or Italy or Austria live under democracies or monarchies? Whether they are Fascists or Communists? Our problem is to preserve our own democracy.
And very little, if anything, has been accomplished to assure us that the World War was really the war to end all wars.
Yes, we have had disarmament conferences and limitations of arms conferences. They don't mean a thing. One has just failed; the results of another have been nullified. We send our professional soldiers and our sailors and our politicians and our diplomats to these conferences. And what happens?
The professional soldiers and sailors don't want to disarm. No admiral wants to be without a ship. No general wants to be without a command. Both mean men without jobs. They are not for disarmament. They cannot be for limitations of arms. And at all these conferences, lurking in the background but all-powerful, just the same, are the sinister agents of those who profit by war. They see to it that these conferences do not disarm or seriously limit armaments.
The chief aim of any power at any of these conferences has not been to achieve disarmament to prevent war but rather to get more armament for itself and less for any potential foe.
There is only one way to disarm with any semblance of practicability. That is for all nations to get together and scrap every ship, every gun, every rifle, every tank, every war plane. Even this, if it were possible, would not be enough.
The next war, according to experts, will be fought not with battleships, not by artillery, not with rifles and not with machine guns. It will be fought with deadly chemicals and gases.
Secretly each nation is studying and perfecting newer and ghastlier means of annihilating its foes wholesale. Yes, ships will continue to be built, for the shipbuilders must make their profits. And guns still will be manufactured and powder and rifles will be made, for the munitions makers must make their huge profits. And the soldiers, of course, must wear uniforms, for the manufacturers must make their war profits too.
But victory or defeat will be determined by the skill and ingenuity of our scientists.
If we put them to work making poison gas and more and more fiendish mechanical and explosive instruments of destruction, they will have no time for the constructive job of building greater prosperity for all peoples. By putting them to this useful job, we can all make more money out of peace than we can out of war -- even the munitions makers.
So...I say,
TO HELL WITH WAR!
Silicon Valley rarely talks politics – except, perhaps, to discuss the quickest ways of disrupting it. On the rare occasions that its leaders do speak out, it is usually to disparage the homeless, celebrate colonialism or complain about the hapless city regulators who are out to strangle the fragile artisans who gave us Uber and Airbnb.
Thus it is puzzling that America’s tech elites have become the world’s loudest proponents of basic income – an old but radical idea that has been embraced, for very different reasons and in very different forms, by both left and right. From Marc Andreessen to Tim O’Reilly, Silicon Valley’s royalty seems intrigued by the prospect of handing out cash to ordinary citizens, regardless of whether they work or not.
Y Combinator, one of Silicon Valley’s premier startup incubators, has announced it wants to provide funding to a group of volunteers and hire a researcher – for five years, no less – to study the issue.
Albert Wenger, a partner in Union Square Ventures, a prominent venture capital firm, is so taken with the idea that he is currently at work on a book. So, why all the fuss – and in Silicon Valley, of all places?
First, there is the traditional libertarian argument against the intrusiveness and inefficiency of the welfare state – a problem that basic income, once combined with the full-blown dismantling of public institutions, might solve. Second, the coming age of automation might result in even more people losing their jobs – and the prospect of a guaranteed and unconditional basic income might reduce the odds of another Luddite uprising. Better to have everyone learning how to code, receiving basic income and hoping to meet an honest venture capitalist.
Third, the precarious nature of employment in the gig economy no longer looks as terrifying if you receive basic income of some kind. Driving for Uber, after all, could be just a hobby that occasionally yields some material benefits. Think fishing, but a bit more social. And who doesn’t like fishing?
Basic income, therefore, is often seen as the Trojan horse that would allow tech companies to position themselves as progressive, even caring – the good cop to Wall Street’s bad cop – while eliminating the hurdles that stand in the way of further expansion.
Goodbye to all those cumbersome institutions of the welfare state, employment regulations that guarantee workers’ rights or subversive attempts to question the status quo with regards to the ownership of data or the infrastructure that produces it.
And yet there is something else to Silicon Valley’s advocacy: the sudden realisation that, should it fail to define the horizons of the basic income debate, the public might eventually realise that the main obstacle in the way of this radical idea is none other than Silicon Valley itself.
To understand why, it is best to examine the most theoretically and technologically sophisticated version of the basic income argument.
This is the work of radical Italian economists – Carlo Vercellone, Andrea Fumagalli and Stefano Lucarelli – who for decades have been penning pungent critiques of “cognitive capitalism” – that is, the current stage of capitalism, characterised by the growing importance of cognitive labour and the declining importance of material production.
Unlike other defenders of basic income who argue that it is necessary on moral or social grounds, these economists argue that it makes sound economic sense during our transition to cognitive capitalism. It is a way to avoid structural instability – generated, among other things, by the increasing precariousness of work and growing income polarisation – and to improve the circulation of ideas (as well as their innovative potential) in the economy.
How so? First, it is a way to compensate workers for the work they do while not technically working – which, as we enter cognitive capitalism, often produces far more value than paid work. Think of Uber drivers who are generating useful data, which helps Uber in making resource allocation decisions, in between their trips.
Second, because much of our labour today is collective – do you know by how much your individual search improves Google’s search index? Or how much a line of code you contribute to a free software project enhances the overall product? – it is often impossible to determine the share of individual contribution in the final product. Basic income simply acknowledges that much of modern cognitive labour is social in character.
Finally, it is a way to ensure that some of the productivity gains associated with the introduction of new techniques for rationalising the work process – which used to be passed on to workers through mechanisms such as wage indexing – can still be passed on, even as collective bargaining and other forms of employment rights are weakened. This, in turn, could lead to higher investments and higher profits, creating a virtuous circle.
The cognitive capitalism argument for basic income is more complex than this crude summary, but it requires two further conditions.
First, that the welfare state, in a somewhat reformed form, must survive and flourish – it is a key social institution that, with its generous investments in health and education, gives us the freedom to be creative.
Second, that there must be a fundamental reform of the tax system to fund it – with taxes not just on financial transactions, but also on the use of instruments such as patents, trademarks and, increasingly, various rights claims over data that prevent the optimal utilisation of knowledge.
This more radical interpretation of the basic income agenda suggests that Silicon Valley, far from being its greatest champion, is its main enemy. It actively avoids paying taxes; it keeps finding new ways to estrange data from the users who produce it; it wants to destroy the welfare state, either by eliminating it or by replacing it with its own, highly privatised and highly individualised alternatives (think preventive health-tracking with FitBit versus guaranteed free healthcare).
Besides, it colonises, usurps and commodifies whatever new avenues of genuine social cooperation – the much-maligned sharing economy – are opened up by the latest advances in communication.
In short, you can either have a radical agenda of basic income, where people are free to collaborate as they wish because they no longer have to work, or the kind of platform capitalism that seeks to turn everyone into a precarious entrepreneur. But you can’t have both.
In fact, Silicon Valley can easily make the first step towards the introduction of basic income: why not make us, the users, the owners of our own data? At the very minimum, it could help us to find alternative, non-commercial users of this data. At its most ambitious, you can think of a mechanism whereby cities, municipalities and eventually nation states, starved of the data that now accrues almost exclusively to the big tech firms, would compensate citizens for their data with some kind of basic income, which might be either direct (cash) or indirect (free services such as transportation).
This, however, is not going to happen because data is the very asset that makes Silicon Valley impossible to disrupt – and it knows it. What we get instead is Silicon Valley’s loud, but empty, advocacy of an agenda it is aggressively working to suppress.
Somehow our tech elites want us to believe that governments will scrape enough cash together to make it happen. Who will pay for it, though? Clearly, it won’t be the radical moguls of Silicon Valley: they prefer to park their cash offshore.
Suddenly, it feels like 2000 again. Back then, surveillance programs like [Carnivore](https://en.wikipedia.org/wiki/Carnivore_(software%29), Echelon, and Total Information Awareness helped spark a surge in electronic privacy awareness. Now a decade later, the recent discovery of programs like [PRISM](https://en.wikipedia.org/wiki/PRISM_(surveillance_program%29), Boundless Informant, and FISA orders are catalyzing renewed concern.
The programs of the past can be characterized as “proximate” surveillance, in which the government attempted to use technology to directly monitor communication themselves. The programs of this decade mark the transition to “oblique” surveillance, in which the government more often just goes to the places where information has been accumulating on its own, such as email providers, search engines, social networks, and telecoms.
Both then and now, privacy advocates have typically run up against the same persistent objection: many individuals don’t understand why they should be concerned about surveillance if they have nothing to hide. The objection is even harder to answer in the world of “oblique” surveillance, given that apologists will always frame our use of information-gathering services like a mobile phone plan or GMail as a choice.
We’re All One Big Criminal Conspiracy
As James Duane, a professor at Regent Law School and former defense attorney, notes in his excellent lecture on why it is never a good idea to talk to the police:
Estimates of the current size of the body of federal criminal law vary. It has been reported that the Congressional Research Service cannot even count the current number of federal crimes. These laws are scattered in over 50 titles of the United States Code, encompassing roughly 27,000 pages. Worse yet, the statutory code sections often incorporate, by reference, the provisions and sanctions of administrative regulations promulgated by various regulatory agencies under congressional authorization. Estimates of how many such regulations exist are even less well settled, but the ABA thinks there are “[n]early 10,000.”
If the federal government can’t even count how many laws there are, what chance does an individual have of being certain that they are not acting in violation of one of them?
As Supreme Court Justice Breyer elaborates:
The complexity of modern federal criminal law, codified in several thousand sections of the United States Code and the virtually infinite variety of factual circumstances that might trigger an investigation into a possible violation of the law, make it difficult for anyone to know, in advance, just when a particular set of statements might later appear (to a prosecutor) to be relevant to some such investigation.
For instance, did you know that it is a federal crime to be in possession of a lobster under a certain size? It doesn’t matter if you bought it at a grocery store, if someone else gave it to you, if it’s dead or alive, if you found it after it died of natural causes, or even if you killed it while acting in self defense. You can go to jail because of a lobster.
If the federal government had access to every email you’ve ever written and every phone call you’ve ever made, it’s almost certain that they could find something you’ve done which violates a provision in the 27,000 pages of federal statutes or 10,000 administrative regulations. You probably do have something to hide, you just don’t know it yet.
We Should Have Something To Hide
Over the past year, there have been a number of headline-grabbing legal changes in the US, such as the legalization of marijuana in CO and WA, as well as the legalization of same-sex marriage in a growing number of US states.
As a majority of people in these states apparently favor these changes, advocates for the US democratic process cite these legal victories as examples of how the system can provide real freedoms to those who engage with it through lawful means. And it’s true, the bills did pass.
What’s often overlooked, however, is that these legal victories would probably not have been possible without the ability to break the law.
The state of Minnesota, for instance, legalized same-sex marriage this year, but sodomy laws had effectively made homosexuality itself completely illegal in that state until 2001. Likewise, before the recent changes making marijuana legal for personal use in WA and CO, it was obviously not legal for personal use.
Imagine if there were an alternate dystopian reality where law enforcement was 100% effective, such that any potential law offenders knew they would be immediately identified, apprehended, and jailed. If perfect law enforcement had been a reality in MN, CO, and WA since their founding in the 1850s, it seems quite unlikely that these recent changes would have ever come to pass. How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same sex marriage should be permitted, if nobody had ever seen or participated in a same sex relationship?
The cornerstone of liberal democracy is the notion that free speech allows us to create a marketplace of ideas, from which we can use the political process to collectively choose the society we want. Most critiques of this system tend to focus on the ways in which this marketplace of ideas isn’t totally free, such as the ways in which some actors have substantially more influence over what information is distributed than others.
The more fundamental problem, however, is that living in an existing social structure creates a specific set of desires and motivations in a way that merely talking about other social structures never can. The world we live in influences not just what we think, but how we think, in a way that a discourse about other ideas isn’t able to. Any teenager can tell you that life’s most meaningful experiences aren’t the ones you necessarily desired, but the ones that actually transformed your very sense of what you desire.
We can only desire based on what we know. It is our present experience of what we are and are not able to do that largely determines our sense for what is possible. This is why same sex relationships, in violation of sodomy laws, were a necessary precondition for the legalization of same sex marriage. This is also why those maintaining positions of power will always encourage the freedom to talk about ideas, but never to act.
Technology And Law Enforcement
Law enforcement used to be harder. If a law enforcement agency wanted to track someone, it required physically assigning a law enforcement agent to follow that person around. Tracking everybody would be inconceivable, because it would require having as many law enforcement agents as people.
Today things are very different. Almost everyone carries a tracking device (their mobile phone) at all times, which reports their location to a handful of telecoms, which are required by law to provide that information to the government. Tracking everyone is no longer inconceivable, and is in fact happening all the time. We know that Sprint alone responded to 8 million law enforcement requests for real time customer location just in 2008. They got so many requests that they built an automated system to handle them.
Combined with ballooning law enforcement budgets, this trend towards automation, which includes things like license plate scanners and domestically deployed drones, represents a significant shift in the way that law enforcement operates.
Police already abuse the immense power they have, but if everyone’s every action were being monitored, and everyone technically violates some obscure law at some time, then punishment becomes purely selective. Those in power will essentially have what they need to punish anyone they’d like, whenever they choose, as if there were no rules at all.
Even ignoring this obvious potential for new abuse, it’s also substantially closer to that dystopian reality of a world where law enforcement is 100% effective, eliminating the possibility to experience alternative ideas that might better suit us.
Compromise
Some will say that it’s necessary to balance privacy against security, and that it’s important to find the right compromise between the two. Even if you believe that, a good negotiator doesn’t begin a conversation with someone whose position is at the exact opposite extreme by leading with concessions.
And that’s exactly what we’re dealing with. Not a balance of forces which are looking for the perfect compromise between security and privacy, but an enormous steam roller built out of careers and billions in revenue from surveillance contracts and technology. To negotiate with that, we can’t lead with concessions, but rather with all the opposition we can muster.
All The Opposition We Can Muster
Even if you believe that voting is more than a selection of meaningless choices designed to mask the true lack of agency we have, there is a tremendous amount of money and power and influence on the other side of this equation. So don’t just vote or petition.
To the extent that we’re “from the internet,” we have a certain amount of power of our own that we can leverage within this domain. It is possible to develop user-friendly technical solutions that would stymie this type of surveillance. I help work on Open Source security and privacy apps at Open Whisper Systems, but we all have a long ways to go. If you’re concerned, please consider finding some way to directly oppose this burgeoning worldwide surveillance industry (we could use help at Open Whisper Systems!). It’s going to take all of us.
In the VR community, “presence” is a term of art. It’s the idea that once VR reaches a certain quality level your brain is actually tricked — at the lowest, most primal level — into believing that what you see in front of you is reality. Studies show that even if you rationally believe you’re not truly standing at the edge of a steep cliff, and even if you try with all your might to jump, your legs will buckle. Your low-level lizard brain won’t let you do it.
With presence, your brain goes from feeling like you have a headset on to feeling like you’re immersed in a different world.
Computer enthusiasts and science fiction writers have dreamed about VR for decades. But earlier attempts to develop it, especially in the 1990s, were disappointing. It turns out the technology wasn’t ready yet. What’s happening now — because of Moore’s Law, and also the rapid improvement of processors, screens, and accelerometers, driven by the smartphone boom — is that VR is finally ready to go mainstream.
Once VR achieves presence, we start to believe.
We use the phrase “suspension of disbelief” about the experience of watching TV or movies. This implies that our default state watching TV and movies is disbelief. We start to believe only when we become sufficiently immersed.
With VR, the situation is reversed: our brains believe, by default, that what we see is real. The risk isn’t that it’s boring but that it’s overwhelmingly intense. We need to suspend belief and remind ourselves that what we think we’re experiencing isn’t real.
As Chris Milk, an early VR pioneer, says:
You read a book; your brain reads letters printed in ink on paper and transforms that into a world. You watch a movie; you’re seeing imagery inside of a rectangle while you’re sitting inside a room, and your brain translates that into a world. And you connect to this even though you know it’s not real, but because you’re in the habit of suspending disbelief.
With virtual reality, you’re essentially hacking the visual-audio system of your brain and feeding it a set of stimuli that’s close enough to the stimuli it expects that it sees it as truth. Instead of suspending your disbelief, you actually have to remind yourself not to believe.
This has implications for the kinds of software that will succeed in VR. For example, a popular video game like Call of Duty ported to VR would be frightening and disorienting for most people.
What will likely succeed instead are relatively simple experiences. Some examples: go back in time and walk around ancient Rome; overcome your fear of heights by climbing skyscrapers; execute precision moves as you train to safely land planes; return to places you “3D photographed” on your last vacation; have a picnic on a sunny afternoon with a long-lost friend; build trust with virtual work colleagues in a way that today you can only do in person.
These experiences will be dreamt up by “experience makers” — the VR version of filmmakers. The next few decades of VR will be similar to the first few decades of film. Filmmakers had no idea what worked and what didn’t: how to write, how to shoot, how to edit, etc. After decades of experiments they established the grammar of film. We’re about to enter a similar period of exploration with VR.
There will be great games made in VR, and gaming will probably dominate the VR narrative for the next few years. But longer term, we won’t think of games as essential to the medium. The original TV shows were newscasts and game shows, but today we think of TV screens as content-agnostic input-output devices.
VR will be the ultimate input-output device. Some people call VR “the last medium” because any subsequent medium can be invented inside of VR, using software alone. Looking back, the movie and TV screens we use today will be seen as an intermediate step between the invention of electricity and the invention of VR. Kids will think it’s funny that their ancestors used to stare at glowing rectangles hoping to suspend disbelief.
One day in the early 1980s, I was flipping through the TV channels, when I stopped at a news report. The announcer was grey-haired. His tone was urgent. His pronouncement was dire: between the war in the Middle East, famine in Africa, AIDS in the cities, and communists in Afghanistan, it was clear that the Four Horsemen of the Apocalypse were upon us. The end had come.
We were Methodists and I’d never heard this sort of prediction. But to my grade-school mind, the evidence seemed ironclad, the case closed. I looked out the window and could hear the drumming of hoof beats.
Life went on, however, and those particular horsemen went out to pasture. In time, others broke loose, only to slow their stride as well. Sometimes, the end seemed near. Others it would recede. But over the years, I began to see it wasn’t the end that was close. It was our dread of it. The apocalypse wasn’t coming: it was always with us. It arrived in a stampede of our fears, be they nuclear or biological, religious or technological.
In the years since, I watched this drama play out again and again, both in closed communities such as Waco and Heaven’s Gate, and in the larger world with our panics over SARS, swine flu, and Y2K. In the past, these fears made for some of our most popular fiction. The alien invasions in H G Wells’s War of the Worlds (1898); the nuclear winter in Nevil Shute’s On the Beach (1957); God’s wrath in the Left Behind series of books, films and games. In most versions, the world ended because of us, but these were horrors that could be stopped, problems that could be solved.
But today something is different. Something has changed. Judging from its modern incarnation in fiction, a new kind of apocalypse is upon us, one that is both more compelling and more terrifying. Today our fears are broader, deeper, woven more tightly into our daily lives, which makes it feel like the seeds of our destruction are all around us. We are more afraid, but less able to point to a single source for our fear. At the root is the realisation that we are part of something beyond our control.
I noticed this change recently when I found myself reading almost nothing but post-apocalyptic fiction, of which there has been an unprecedented outpouring. I couldn’t seem to get enough. I tore through one after another, from the tenderness and brutality of Peter Heller’s The Dog Stars (2013) to the lonely wandering of Emily St John Mandel’s survivors in Station Eleven (2014), to the magical horrors of Benjamin Percy’s The Dead Lands (2015). There were the remnants of humanity trapped in the giant silos of Hugh Howey’s Wool (2013), the bizarre biotech of Paolo Bacigalupi’s The Windup Girl (2009), the desolate realism of Cormac McCarthy’s The Road (2006).
I read these books like my life depended on it. It was impossible to look away from the ruins of our civilisation. There seemed to be no end to them, with nearly every possible depth being plumbed, from Tom Perrotta’s The Leftovers (2011), about the people not taken by the rapture, to Nick Holdstock’s The Casualties (2015), about people’s lives just before the apocalypse. The young-adult aisle was filled with similar books, such as Veronica Roth’s Divergent (2011), James Dashner’s The Maze Runner (2009) and Michael Perry’s The Scavengers (2014). Movie screens were alight with the apocalyptic visions of Snowpiercer (2013), Mad Max: Fury Road (2015), The Hunger Games (2012-15), Z for Zachariah (2015). There was also apocalyptic poetry in Sara Eliza Johnson’s Bone Map (2014), apocalyptic essays in Joni Tevis’s The World Is on Fire (2015), apocalyptic non-fiction in Alan Weisman’s The World Without Us (2007). Even the academy was on board with dense parsings, such as Eugene Thacker’s In the Dust of This Planet: Horror of Philosophy (2010) and Samuel Scheffler’s Death and the Afterlife (2013).
In the annals of eschatology, we are living in a golden age. The end of the world is on everyone’s mind. Why now? In the recent past we were arguably much closer to the end – just a few nuclear buttons had to be pushed.
The current wave of anxiety might be obvious on the surface, but it runs much deeper. It’s a feeling I’ve had for a long time, and one that has been building over the years. The first time I remember it was when I lived in a house next to a 12-lane freeway. Sometimes I would stand on the overpass and watch the cars flow underneath. They never slowed. They never stopped. There was never a time when the road was empty, when there were no cars driving on it.
When I tried to wrap my mind around this endlessness, it filled me with a kind of panic. It felt like something was careening out of control. But they were just cars. I drove one myself every day. It made no sense. It was like there was something my mind was trying to reach around, but couldn’t.
This same ungraspable feeling has hit me at odd times since: on a train across Hong Kong’s New Territories through the endless apartment towers. In an airplane rising over the Midwest, watching the millions of small houses and yards merge into a city, a state, a country. Seeing dumpster after dumpster being carried off from a construction site to a pile growing somewhere.
When I began my apocalyptic binge, I could channel that same feeling and let it run all the way through. It echoed through those stories, through the dead landscapes. And now after reading so many, I believe I am starting to understand the nature of this new fear.
Humans have always been an organised species. We have always functioned as a group, as something larger than ourselves. But in the recent past, the scale of that organisation has grown so much, the pace of that growth is so fast, the connective tissue between us so dense, that there has been a shift of some kind. Namely, we have become so powerful that some scientists argue we have entered a new era, the Anthropocene, in which humans are a geological force. That feeling, that panic, comes from those moments when this fact is unavoidable. It comes from being unable to not see what we’ve become – a planet-changing superorganism. It is from the realisation that I am part of it.
Most days, I don’t feel like I’m part of anything with much power to create and destroy. Day to day, my own life feels chaotic and hard, trying to collect money I’m owed, or get my car fixed, or pay for health insurance, or feed my kids. Most of the time, making it through the day feels like a victory, not like I’m playing any part in a larger drama, or that the errands I’m running, the things I’m buying, the electricity I’m wasting could be bringing about our doom.
Yet apocalyptic fictions of the current wave feed off precisely this fear: the feeling that we are part of something over which we have no control, of which we have no real choice but to keep being part. The bigger it grows, the more we rely on it, the deeper the anxiety becomes. It is the curse of being a self-aware piece of a larger puzzle, of an emergent consciousness in a larger emergent system. It is as hard to fathom as the colony is to the ant.
Emergence, or the way that complex systems come from simple parts, is a well-known phenomenon in science and nature. It is how everything from slime moulds to cities to our sense of self arises. It is how bees become a hive, how cells become an organism, how a brain becomes a mind. And it is how humans become humanity.
But what’s less well-known is that there are two ways of interpreting emergence. The first is known as weak emergence, which is more intuitive. In this view you should be able to trace the lines of causation from the bottom of a system all the way to the top. In looking at an ant, you should detect the making of the colony. In examining brain cells, you should find the self. In this view, the neuroscientist Michael Gazzaniga explains in his book Who’s In Charge? Free Will and the Science of the Brain (2011), ‘the emergent property is reducible to its individual components’.
But another view is called strong emergence, in which the new system takes on qualities the parts don’t have. In strong emergence, the new system undergoes what Gazzaniga called a ‘phase shift’, not unlike the way that water changes to ice. They are made of the same stuff, but behave according to different rules. The emergent system might not be more than the sum of its parts, but it is different from them.
Strong emergence has been grudgingly accepted in physics, a field where quantum mechanics and general relativity have never been reconciled, and where they might never do so because of a phase shift: physics at our level emerges from below, but changes once it does. Quantum mechanics and general relativity operate at different levels of organisation. They work according to different rules.
Whenever I think about reconciling my own life with that of my species, I have a similar feeling. My life depends on technologies I don’t understand, signals I can’t see, systems I can’t perceive. I don’t understand how any of it works, how I could change it, or how it can last. It feels like peering across some chasm, like I am part of something I cannot quite grasp, like there has been a phase shift from humans struggling to survive to humanity struggling to survive our success.
The problems we face will not be fixed at the level of the individual life. We all know this because none of us have changed our own lives anywhere near enough to make a difference. Where would we start? With our commute? With candles? Life is already hard. Solutions will need to be implemented at a higher level of organisation. We fear this. We know it, but we have no idea what those solutions might look like. Hence the creeping sense of doom.
‘It doesn’t make sense,’ says a character in Mandel’s Station Eleven. ‘Are we supposed to believe that civilisation has just come to an end?’
Another responds: ‘Well, it was always a little fragile, wouldn’t you say?’
This is what our fiction is telling us. This is what makes it so mesmerising, so satisfying. In our stories of the post-apocalypse, the dilemma is resolved, the fragility laid bare. In these, humans are both villain and hero, disease and cure. Our doom is our salvation. In our books at least, humanity’s destruction is also its redemption.
Standing on that bridge now, I see even more cars than ever before. Beneath the roar of engines is the sound of hoof beats. As they draw closer, the old feeling rises up, but I know now that it comes not just from a fear of the end itself, but from the fear of knowing that the rider is me.
OLPC MEMO-1
Marvin Minsky, Feb 16, 2008 (revised March 22)
This is the first of several memos about how OLPC could initiate useful projects that then could grow without our further support—if adopted by groups in our Diaspora.
What makes Mathematics hard to learn?
Drawbacks of Age-Based Segregation
Role Models, Mentors, and Imprimers
Why Projects are better than Subjects
Making Intellectual Communities
Why Classics are better than Textbooks
What makes Mathematics hard to learn?
Why do some children find Math hard to learn? I suspect that this is often caused by starting with the practice and drill of a bunch of skills called Arithmetic—and instead of promoting inventiveness, we focus on preventing mistakes. I suspect that this negative emphasis leads many children not only to dislike Arithmetic, but also later to become averse to everything else that smells of technology. It might even lead to a long-term distaste for the use of symbolic representations.
Anecdote: A parent once asked me to tutor a student who was failing to learn the multiplication table. When the child complained that this was a big job, I tried to explain that because of diagonal symmetry, there are fewer than 50 facts to learn.
However, that child had a larger-scale complaint:
“Last year I had to learn the addition table and it was really boring. This year I have to learn another, harder one, and I figure if I learn it then next year there will be another one and there’ll never be any end to this stupid nonsense.”
This child imagined ‘Math’ to be a continuous string of mechanical tasks—an unending prospect of practice and drill. It was hard to convince him that there would not be any more tables in subsequent years.
To deal with the immediate problem, I made a deck of “flash cards,” each of which showed two digits on the front and their product on the back. The process was to guess each answer and, if it was correct, then to remove that card from the deck. This made the task seem more like a game in which one can literally feel one’s progress as the size and weight of the deck diminishes. Shortly the child excitedly said, “This deck is a really smart teaching machine! It remembers which products I’ve learned, and then only asks for the ones I don’t know, so it saves me from wasting a lot of time!”
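The procedure described in that anecdote is small enough to state as code. Here is a minimal sketch in Python; the card range (2 through 9) and the text-prompt interface are my own choices for illustration, not part of the memo:

```python
import random

def drill(deck):
    """Repeatedly quiz the user; drop a card once it is answered correctly,
    so the deck shrinks as facts are learned (the 'teaching machine' effect)."""
    while deck:
        a, b = random.choice(deck)
        answer = input(f"{a} x {b} = ")
        if answer.strip() == str(a * b):
            deck.remove((a, b))          # learned: take the card out of the deck
            print(f"Right! {len(deck)} cards left.")
        else:
            print(f"Not quite: {a} x {b} = {a * b}")  # keep the card in the deck

# Thanks to the symmetry a*b = b*a, pairs with a <= b cover the whole table.
deck = [(a, b) for a in range(2, 10) for b in range(a, 10)]
drill(deck)
```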
However, a more serious problem was that this child had no good image or “cognitive map” of what might result from learning this subject. What function might Math serve in later years? What goals and ambitions might it help to achieve?
Anecdote: I asked a certain 6-year-old child “how much is 15 and 15” and she quickly answered, “I think it’s 30.” I asked how she figured that out so fast and she replied, “Well, everyone knows that 16 and 16 is 32, so then I subtracted the extra two 1’s.”
Traditional teacher: “Your answer is right but your method was wrong: you should add the two 5’s to make a 10; then write down the 0 and carry the 1, and then add it to the other two 1’s.”
The traditional emphasis on accuracy weakens the ability to make order-of-magnitude estimates—whereas this particular child already knew and could use enough powers of 2 to make approximations that rivaled some adults’ abilities. Why should children learn only “fixed-point” arithmetic, when “floating point” thinking is usually better for problems of everyday life!
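One sketch of what such “floating point” thinking could look like as a program: round each quantity to its nearest power of two and compute with the rounded values. The helper names and the examples below are mine, purely for illustration:

```python
def nearest_power_of_two(n):
    """Return the power of two closest to n (e.g. 15 -> 16, 30 -> 32)."""
    p = 1
    while p * 2 <= n:
        p *= 2
    return p if n - p <= 2 * p - n else 2 * p

def estimate_product(a, b):
    """Order-of-magnitude estimate: round both factors to powers of two."""
    return nearest_power_of_two(a) * nearest_power_of_two(b)

# 15 * 15 is exactly 225; the power-of-two estimate 16 * 16 = 256 is close enough
# to check the order of magnitude in one's head.
print(estimate_product(15, 15), 15 * 15)
print(estimate_product(30, 33), 30 * 33)   # 32 * 32 = 1024 versus 990
```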
More generally, we need to develop better ways to answer the questions that kids are afraid to ask, like “What am I doing here, and why?” “What can I expect to happen next?” or “Where and when could I find any use for this?”
I’ll conclude with a perceptive remark from MIT’s Phil Sung: “Students are being led to think that they dislike math when they actually just dislike whatever it is that they're being taught in math classes.”
Students need Cognitive Maps of their Subjects
Until the 20th century, mathematics was mainly composed of Arithmetic, Geometry, Algebra, and Calculus. Then the fields of Logic and Topology started to rapidly grow, and in the 1950s we saw a great explosion of new ideas about the nature of information and computation. Today, these new concepts have become so useful and empowering that our math curriculum is out of date by a century. We need to find ways to introduce these ideas into our children’s earlier years.
In the traditional curriculum, Arithmetic was seen as so absolutely foundational that all other mathematical thinking depended on it. Accordingly, we sentenced all our children to two or three year terms of hard labor at doing addition, multiplication, and division! However, today it might be better to regard those tasks as little more than particular examples of algorithms—and this suggests that we could start, instead, with some simpler and more interesting ones!
For example, we could engage our children’s early minds with simple examples and ideas about Formal Languages and Finite State Machines. This would provide them with thoughtful and interesting ways to think about programs that they could create with the low-cost computers that they possess. Languages like Logo and Scratch can help children to experiment not only with simple arithmetic, but also with more interesting aspects of geometry, physics, math, and linguistics! What’s more, this would also empower them to apply those ideas to develop their own ideas about graphics, games, and languages—which in turn could lead them to contribute practical applications that their communities can develop and share.
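As a hint of how small such a first machine can be, here is a toy finite state machine written in Python rather than Logo or Scratch; the particular machine, which accepts strings containing an even number of a's, is my own example:

```python
# A finite state machine as a transition table: state -> {symbol: next state}.
# This toy machine accepts strings over {'a', 'b'} with an even number of a's.
TRANSITIONS = {
    "even": {"a": "odd",  "b": "even"},
    "odd":  {"a": "even", "b": "odd"},
}
ACCEPTING = {"even"}

def accepts(text):
    state = "even"                      # start state
    for symbol in text:
        state = TRANSITIONS[state][symbol]
    return state in ACCEPTING

for word in ["", "ab", "aab", "ababa"]:
    print(word or "(empty)", accepts(word))
```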
Similarly in the realm of Geometry, we can provide young children with interactive graphical programs that can lead them to observe and explore various sorts of symmetries—and thus begin to grasp the higher-level ideas that mathematicians call the “Theory of Groups”—which can be seen as a conceptual basis not only for Arithmetic, but for many aspects of other subjects. (To see examples of such things, type “Geometer's Sketchpad” into Google.)
Similarly in the realm of Physics, children can have access to programs that simulate the dynamics of structures, and thus become familiar with such important concepts as stress and strain, acceleration, momentum, energy—and vibration, damping, and dimensional scaling.
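A sketch of how little code such a simulation needs, assuming nothing beyond plain Python; the constants are arbitrary illustrative values:

```python
# A mass on a damped spring, stepped forward with simple Euler integration.
k, c, m = 4.0, 0.3, 1.0        # spring stiffness, damping, mass (arbitrary values)
x, v = 1.0, 0.0                # start stretched one unit, at rest
dt = 0.05                      # time step

for step in range(200):
    a = (-k * x - c * v) / m   # acceleration from spring force plus damping
    v += a * dt
    x += v * dt
    if step % 20 == 0:
        # crude text plot: the '*' swings back and forth and slowly settles
        print(" " * int((x + 1.5) * 20) + "*")
```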
In any case, we need to provide our children with better cognitive maps of the subjects we want them to learn. I asked several grade-school teachers how often they actually used long division. One of them said, "I use it each year to compute the average grade.” Another teacher claimed to have used it for filling out tax forms—but couldn’t recall a specific example. But none of them seemed to have clear images of mathematics as a potential lifetime activity. Here is a simple but striking example of a case in which a child lacked a cognitive map:
A child was sent to me for tutoring because of failing a geometry class, and gave this excuse: "I must have been absent on the day when they explained how to prove a theorem."
No wonder this child was confused—and seemed both amazed and relieved when I explained that there was no standard way to make proofs—and that “you have to figure it out for yourself”. One could say that this child simply wasn’t told the rules of the game he was asked to play. However, this is a very peculiar case in which the ‘rule’ is that there are no rules! (In fact, automatic theorem-provers do exist, but I would not recommend their use.)
Bringing Mathematics to Life
What is mathematics, anyway? I once was in a classroom where some children were writing LOGO programs. One program was making colored flowers grow on the screen, and someone asked if the program was using mathematics. The child replied, “Oh, mathematics isn’t anything special: it’s just the smart way to understand things.” Here are a few kinds of questions that pupils should ask about the mathematical concepts we ask them to learn:
Arithmetic: Why does “compound interest” tend to add more digits at constant rates? How do populations grow? How does recursion lead to exponentiation? It is easy to understand such things when one experiments with computer programs, but not when a child is constrained to the tedious rate of boring numerical calculation.
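A sketch of the kind of computer experiment meant here (the 7% rate and the doubling rule are arbitrary illustrative choices): a constant growth rate compounds into exponential growth, and a recursive doubling rule is exponentiation in disguise.

```python
# Compound interest: a fixed growth rate gives exponential growth,
# so the balance gains digits at a roughly constant pace.
for year in range(0, 51, 10):
    print(year, round(100.0 * 1.07 ** year))    # $100 at 7% per year

# Recursion leading to exponentiation: a population that doubles each generation.
def population(generations):
    if generations == 0:
        return 1
    return 2 * population(generations - 1)      # recursive doubling = 2**generations

print([population(g) for g in range(8)])        # [1, 2, 4, 8, 16, 32, 64, 128]
```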
Geometry: How many different ways can you paint 6 colors on the faces of a cube? Can you envision how to divide a cube into three identical five-sided objects? We know that gloves come in left- and right-hand forms—but why are there only two such versions of things? We all live in a 3-D world, but few people learn good ways to think about 3-D objects. Shouldn’t this be seen as a handicap?
Logic: If most A’s are B’s, and most B’s are C’s, does this imply that many A’s must also be C’s? Many adults will give the wrong answer to this! Is it possible that when John Smith moved from Apple to Microsoft, this raised the average IQ of both companies? We all try to use logical arguments, but we also need to learn about the most common mistakes!
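The answer is no, and a counterexample can be checked by simple enumeration. In the sketch below, the three sets are invented purely for illustration; most A's are B's and most B's are C's, yet not a single A is a C:

```python
A = {1, 2, 3}
B = {2, 3, 4, 5, 6}
C = {4, 5, 6, 7}

def most(xs, ys):
    """True if more than half of xs are also in ys."""
    return len(xs & ys) * 2 > len(xs)

print(most(A, B))        # True: 2 of the 3 A's are B's
print(most(B, C))        # True: 3 of the 5 B's are C's
print(len(A & C))        # 0 -- not a single A is a C
```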
Mechanics: What makes a physical structure stronger when one braces it with triangular struts? That’s because two triangles are congruent when their corresponding sides are equal—which means that there’s no way to change a triangle’s shape, once the lengths of its sides are constrained. Today most children grow into adults without ever having learned to use the basic concept of “degrees of freedom.”
Statistics: Few mathematical subjects rival Statistics in the range of its everyday applications. How do effects accumulate? What kinds of knowledge and experience could help children to make better generalizations? How should one evaluate evidence? What’s the difference between correlation and cause? Every child should learn about the most common forms of biases—and also about why one needs to be skeptical of anecdotes.
A very few fragments of knowledge about statistics can illuminate most other subjects. In particular, it seems to me that we should try to get children to learn to use the “T-test” method, which is an extremely simple statistical test, yet one that handles huge ranges of situations. (To use it, one only needs to know enough about the powers of 2!) Also they should understand using square roots to assess variations. (You can estimate a square root simply by halving the number of digits!) Example: Basketball scores often turn out to be number pairs like 103 to 97—which are not statistically significant!
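Under a simple model, which is my assumption rather than the memo's, in which each team's score fluctuates like a count whose variance is roughly its mean, the basketball example works out like this:

```python
import math

home, away = 103, 97
difference = home - away

# If each score fluctuates like a count (variance ~ mean), the difference between
# the two scores has a standard deviation of about sqrt(home + away).
sd = math.sqrt(home + away)
print(difference, round(sd, 1))   # 6 versus ~14: well inside one standard deviation

# The memo's quick mental version: a square root has about half as many digits,
# so the square root of the 3-digit number 200 is a 2-digit number (~14), which
# is already bigger than the 6-point gap.
```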
Combinatorics: Consider that, when we teach about democracy, few pupils ever recognize that, in an electoral-college voting system, a 26% minority can win an election—and if there are 2 tiers of this, then a mere 7% minority could win! How do cultural memes manage to propagate? How does economics work? At what point should we try to teach at least the simplest aspects of the modern Theory of Games?
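The arithmetic behind those two figures can be checked in a couple of lines. One reading that reproduces them (my reading; the memo does not spell out the calculation) is that carrying just over half the districts with just over half the votes in each needs about 26% of the total, and that applying the same squeeze across two nested tiers needs about 26% of 26%:

```python
# Just over half the votes in just over half the districts, and none elsewhere:
one_tier = 0.51 * 0.51
print(round(one_tier, 3))     # ~0.26: a 26% minority can carry one tier

# Applying the same squeeze at two nested tiers of indirect voting:
two_tiers = one_tier * one_tier
print(round(two_tiers, 3))    # ~0.068: roughly the 7% figure in the text
```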
Abstract Algebra and Topology: These are considered to be very advanced, even postgraduate. Yet there are many phenomena that are hard to describe if one lacks access to those ideas—such as fixed-points, symmetries, singularities, and other features of dynamic trajectories, all of which appear in many real-world phenomena. Every large society is a complex organization that can only be well described by using representations at many different levels of abstraction—e.g., in terms of person, family, village, town, city, country, and whole-world economy—and “higher mathematics” has many concepts that could help to better understand such structures.
How can we encourage children to invent and carry out more elaborate processes in their heads? Teachers often insist that pupils “show their work”—which means to make them “write down every step.” This is convenient for making grades, as well as for diagnosing mistakes, but I suspect that this focus on ‘writing things down’ could lead to mental slowness and awkwardness, by discouraging pupils from trying to learn to perform those processes inside their heads—so that they can use mathematical thinking in ‘real time’. It isn’t merely a matter of speed, but of being able to keep in mind an adequate set of alternative goals and being able to quickly switch among different strategies and representations. This suggests that OLPC should promote the development of programs that help pupils to improve their working memories, and to refine the ways that they represent things in their minds:
The Impoverished Language of School-Mathematics.
There’s something peculiar about how we teach math. If you look at each subject in elementary school—History, English, Social Studies, etc.—you'll see that each pupil learns hundreds of new words in every term. You learn the names of many organizations, leaders, and wars; the titles of many books and their authors; and terms for all sorts of ideas and concepts—thousands of new words every year.
However, in the case of school-mathematics, the vocabulary is remarkably small. The children do learn words for various objects and processes—such as addition, multiplication, fraction, quotient, divisor, rectangle, parallelogram, and cylinder, equation, variable, function, and graph. But they learn only a few such terms per year—which means that in the realm of mathematics, our children are mentally starved, by having to live in a “linguistic desert.” It is hard to think about something until one learns enough terms to express the important ideas in that area.
Specifically, it isn’t enough just to learn nouns; one also needs adequate adjectives! What’s the word for when you should use addition? It’s when a phenomenon is linear. What’s the word for when you should use multiplication? That’s when something is quadratic or bilinear. How does one describe processes that change suddenly or gradually? One needs terms like discrete and continuous. To talk about similarities, one needs terms like isomorphic and homotopic. Our children need better ways to talk not only about Arithmetic and Geometry, but also about the ideas one needs in order to think about statistics, logic, and topology. This suggests an opportunity for the OLPC children’s community: to try to set up discussion groups that encourage the everyday use of mathematical terms—communities in which a child can say “nonlinear” and have others admire her rather than discourage her.
Mentors and Communities:
If one tries to learn a substantial skill without a good conceptual map, one is likely to end up with several collections of scripts and facts, without good ways to know which of them to use, and when—or how to find good alternatives when what one tried has failed to work. But how can our children acquire such maps? In the times before our modern schools, most young children mainly learned by being forced to work on particular jobs, and ended up without very much ‘general’ competence. However, there always were children who somehow absorbed their supervisors’ knowledge and skills—and there always were people who knew how to teach the children who were apprenticed to them.
I’ll come back to this in another Memo about the disadvantages of modern age-based classes. Today most education is broader, but apprenticeship itself is now rare, because few teachers ever have enough time to interact very much with each of their students: a modern teacher can only do so much. The result is that no one has time to deal thoroughly with questions like “What am I doing here, and why?” “What can I expect to happen next?” or “Where and when am I likely to use this?”
However, now we can open new networks through which every child can communicate. This means that we can begin to envision, for each of our children, a competent adult with enough “spare time” to serve as a mentor or friend to help them develop their projects and skills. From where will all those new mentors come? Perhaps that problem will solve itself, because our lifespans are rapidly growing. The current rate of increase in longevity is one more year for every four, so soon we may have more retired persons than active ones!
Of course, each child will be especially good at learning particular ways to think—so we’ll also need to develop ways to match up good “apprenticeship pairs.” In effect we’ll need to develop “intellectual dating services” for finding the right persons to emulate!
In any case, no small school or community can teach all possible subjects, or serve the needs of individuals whose abilities are atypical. If a child develops a specialized interest, it is unlikely that any local person can be of much help in developing that child’s special talents and abilities. (Nor can any small community offer the range of resources to serve children with limited abilities.) However, with more global connections, it will be easier to reach others with similar interests, so that each child can join (or help form) an interactive community that offers good opportunities.
(Some existing communities will find this hard to accept, because most cultures have evolved to reward those who think about the same subjects in the same ways as the rest do! This will pose difficult problems for children who want to acquire new ways to think and do things that their neighbors and companions don't do—and thus escape or break out of the cultures in which they were born. To deal with this, OLPC will need to develop great new skills of diplomacy.)
Emphasizing Novelty rather than Drudgery?
Actually, I loved arithmetic in school. You had to add up a column of numbers and this was fun because there were so many different ways to do it. You could look here and there and notice three 3's and think, “that’s almost a 10, so I’ll take a 1 off that 7 and make it a 6 and make that 9 into a 10.” But how do you keep from counting some numbers twice? Well, you could think: “Now I won’t count any more 3's.” How many children did these things exactly as they were told to do? Surely not those who became engineers or mathematicians! For when you use the same procedure again, there’s little chance to learn anything new—whereas each new method that you invent will leave you with some new mental skill (—such as a new way to use your memory).
For example, when you add 6 and 7 and write down a 3, how do you remember to “carry” a 1? Sometimes I’d mentally put it on my shoulder. How do you remember a telephone number? Most people don’t have too much trouble with remembering a ‘local’ 7-digit number, but reach for a pen when there’s also an area code. However, you can easily learn to mentally put those three other digits into your pocket—or in your left ear, if you don’t have a pocket!
Why are so many people averse to Math? Perhaps this often happens because our usual ways to teach arithmetic only insist on using certain rigid skills, while discouraging each child from trying to invent new ways to do those things. Indeed, perhaps we should study this subject when we want to discover ways to teach aversions to things!
Negative Expertise
There is a popular idea that, in order to understand something well, it is best to begin by getting things right—because then you'll never make any mistakes. We tend to think of knowledge in positive terms—and of experts as people who know exactly what to do. But one could argue that much of an expert’s competence stems from having learned to avoid the most common bugs. How much of what each person learns has this negative character? It would be hard for scientists to measure this, because a person’s knowledge about what not to do doesn’t overtly show in that individual’s behavior.
This issue is important because it is possible that our mental accumulations of counterexamples are larger and more powerful than our collections of instances and examples. If so, then it is also possible that people might learn more from negative than from positive reinforcement. Many educators have been taught that learning works best when it seems pleasant and enjoyable—but that discounts the value of experiencing frustrations, failures and disappointments. Besides, many feelings that we regard as positive (such as beauty, humor, pleasure, and decisiveness) may result from the censorship of other ideas, inhibition of competing activities, and the suppression of more ambitious goals (so that, instead of being positive, those feelings actually may reflect the workings of unconscious double negatives). See the longer discussions of this in Sections 1-1 and 9-4 of The Emotion Machine. Also see “Introduction to LogoWorks” at web.media.mit.edu/~minsky/papers/Logoworks.html
Difference between work and play, interest in work, etc.
Chapter 1
I've been lucky enough to have been born before computers and video games were ubiquitous. I had the luck to play outdoors with friends and my brother, and to invent our own games.
We could be our own heroes, use a twig that would instantly become a bow, a gun, a sword, or a telescope. It could be anything, except maybe a boomerang because once you throw the stick away, you have to go fetch it back.
comparison between a boomerang and a stick
At some point I grew up, and it became embarrassing to play that way. You can't treat a pinecone as a grenade and pretend to have magical powers when other kids think being an adult is cool. You just don't fit in anymore. You eventually get pressured into growing up. Still, that's a very lucky childhood.
At some point I got the chance to play video games, and to use computers. There it was: the imaginary world you had wanted all this time, materialized in front of you. It consumes you, and for a moment you live a different life.
But there's something particular about most video games: you don't create, you react, you consume. I eventually did improvisational theatre as a teenager. Then, again, it was okay to be with people and create and pretend out of nothing.
an improv rink with people playing
Of course, improvisational theatre in Quebec is different; there's an ice rink in there — everything's hockey.
When I got to a vocational college to study multimedia from 2005 to 2008, I eventually tripped into programming work. I found it amazing! Creativity was there again, and it could get me money! I then designed the mechanism of my first game, and it blew my mind.
HTML form of an old browser game named 'DANGER IL Y A UN HOMME ARMÉ DANS CET ÉDIFICE 3'
That's not a real video game, I was told. That's just an HTML form. You should have used an array for the text and options; it would have been better. The code needs cleaning up.
I was a bit disheartened; the game was really about the 11 pages of text I had written for the "choose your adventure" aspect of it. But I realized that if I wanted to make stuff more people thought was good, I'd have to learn a lot.
I'd have to learn "real programming". Move from JScript in a GUI toolkit to something better, like PHP. So I learned that, along with JavaScript. Then eventually I was told to learn how to do real programming again; PHP is terrible. I was told to maybe try Python, which I then learned.
But real programmers knew fancier stuff: Python's lambdas didn't cut it, and object-oriented programming was not where you wanted to be. Reading SICP would be the next good step, I was told, because it was like the bible of computer science.
Book cover of Structure and Interpretation of Computer Programs
That got me to Scheme. And I got the K&R book because real programmers in the real world did C, and I registered for part time classes at my local university while juggling them with work, because real programmers knew data structures and math, which I learned to some extent. I started reading papers and books, because real programmers stayed up to date and knew fancy algorithms.
Somewhere through that I picked up Erlang and started making a career out of it. I wrote a book on it. Curiously enough, nobody ever questioned if I were a real author, or a real writer, or a real illustrator. Hell, I got a job teaching Erlang without ever having used it in a production system.
Chapter 2
So I lived my life flying around the world, telling people how to do things I had sometimes never done myself, while everyone suddenly seemed to believe I was a real programmer because of things I did that were mostly not related to programming in the first place.
One day, I was stuck in an airport coming back from a conference, furiously typing at a terminal, when an odd, gentle voice asked me:
If you please, design me a system!
What?!
Design me a system!
I looked up from my screen, surprised by the request. I looked around and saw this kid who aspired to be a developer and wanted me to call him "printf", which I felt was very stupid and gimmicky. He looked a bit like this:
little printf, with a red and yellow tuque, similarly colored scarf, green coat, red mittens, and beige-yellow pants, standing in snow with a broken laptop at his sides
I don't know computers much yet, but it seems you do. I want to write programs and blog about them and have people use and read them. Please, design me a system!
Now that was a surprising request, and I had been awake for 20 hours by then, not too sure I fully understood or felt like it. I told him systems were hard. I didn't know what he wanted to do, how he wanted it to fail, how many readers it should support, where he'd want to host it, and I could therefore not design a proper system with so little information.
That doesn't matter. Design me a system.
So I made the following architecture diagram:
somewhat complex architecture diagram
He looked at it and said No, this system is not good enough. Make me another.
So I did:
a rather complex architecture diagram
and I gave him a rundown of how it would work.
My new friend smiled politely. That is not what I want; it's way too complex and does a lot of stuff I don't need.
I felt a bit insulted, having considered redundancy, monitoring, backups, caches and other mechanisms to reduce load, external payment processor for legal protection, failovers, easy deployment, and so on. I could have charged decent money as a consulting fee for that! Out of patience, I just drew this:
a black box with the text 'enjoy!' written under.
And I added: this is your design. The system you want is inside the black box, hoping this shitty answer would have him leave me alone. But I was surprised to hear back:
That is exactly the way I wanted it!
And that is how I made the acquaintance of the little printf.
Chapter 3
I soon learned of this little guy's portfolio. In his repositories were only small programs, simple web pages with forms, trivial command line utils. They would be unspectacular, would come into being, and as quickly disappear.
Then at some point, he started working on a bigger program that used multiple modules. It needed sockets, accessed the disk, talked to an actual database. When it first built and ran properly, little printf was amazed. But the program was not enough yet.
It needed refactorings, better tests, documentation, linting and analysis. The program would run for a while, and one morning, it crashed.
And it crashed again, and again.
The configurations were wrong, the logs would not rotate, the disk had unpredictable speed, the network would get the hiccups, bugs would show up, the encodings would be confused, the database needed vacuuming, transactions would hang, certificates would expire, CVEs would keep coming, and the metrics would remain silent.
a plate of meatball spaghetti
It kept turning to Spaghetti.
He told me: the fact is I didn't know anything! I ought to have judged by my needs. I had the hubris to write a fancy system, and I spent so much time fixing it that it felt like it cancelled out the time it saved me. Still, I should have known what a wonderful thing it was.
One morning, he decided to leave his office. Goodbye, he said to a blinkenlight that seemed to have burnt out. He left to see what the world of software had to offer aside from his messy little server.
The logs would keep accumulating, until the hard drive would fill no more.
Chapter 4
a building
He went to a workspace, looking for experienced developers from whom to get tips and help.
The first one he met was a very proud senior engineer who seemed to feel rather superior.
a balding man in a suit, with thick glasses
Ah, here comes a learner! Welcome to my domain, of which I am the expert, he said.
An expert? Little printf asked. Does this mean you can program anything and everything?
Yes! the expert answered. He added: Well, almost; I only program programs that are worth programming. I don't waste my time on trivialities. Many programs I have never written, but could write with all the ease in the world.
Ah, so could you help me with my system? As soon as the little printf started explaining his business, the domain expert interrupted him:
I'm sorry, but I don't really see the point of doing that.
Why not?
Experience. I am good at programming the things I program, and I program things I am good at. By getting better at this fairly restricted set of things I'm already good at, I make sure I'm more valuable than ever at it. Call it job security, call it survival of the fittest, but that's how I roll.
And why can't you help me?
Well you see, taking my time away to help you means I divert important self-investment into furthering the progress of others — that's a losing strategy for me. The best way to learn for you is the way I took myself: struggle very hard and figure it out yourself. It helps forge character.
That doesn't seem very efficient...
Well you can go to school and learn, or you can learn on your own. Really what it does is weed out the lazy people who just want it easy, and forces everyone who stays here to be those who really deserve it. The moment we let moochers in, the very value of the work I produce goes down with it.
Do you not think cooperation or colleagues could help you?
Not really. I work best when left alone and not being distracted. Every time I end up forced to work with others, it's nigh impossible to get our stuff working together. Out of exasperation, I grab their work and rewrite most of it in a sane way; then it works right.
Little printf was surprised to meet an expert who seemed so uninterested in helping others, yet so annoyed by their perceived lack of skill. It was a bit sad that this man had narrowed his vision of himself to just the one area he knew, to the point where he did nothing but create problems for himself to fix!
I see... well I guess I'm happy you won't give me your help, said my little friend.
What do you mean? asked the meritocratic man, whose value seemed suddenly downgraded. Don't you think the work I do is interesting?
Oh that I do. It just seems like you would see me as a hindrance and annoyance more than anything else, and what I am looking for is help, not affliction.
And little printf left swiftly, leaving the expert to realize he had made himself untouchable in more ways than just his job security.
Chapter 5
a man sitting at his desk in front of multiple filled bookcases
On his way, little printf went in front of the door to an office occupied by a man surrounded by thick hardcover books, with fancy images on them like wizards and dragons and fractals and mathematical patterns.
Nice books, sir, said printf
Thanks. I think they're essential material for programmers. If you don't have them, you're not really a pro
I guess I'm not a pro then, said little printf. Which one is your favorite?
Oh, well I haven't read most of them.
Are you not a good programmer then?
No, I am not. The developer proudly added: In fact, I'm a terrible programmer.
That's a shame, said little printf, who continued: I'm getting better myself.
Have you heard of the Dunning-Kruger effect?, asked the man.
No, what is it?
It's a cognitive bias thing. It basically says that people who are less competent tend to overestimate their qualifications, and people who are competent tend to systematically underestimate theirs.
So if I think I'm getting better, I'm probably not great
Yeah, exactly. You're probably bad. On the other hand, I openly say I'm a terrible programmer. But according to Dunning-Kruger, I'm probably underestimating myself, and that makes me a good developer, don't you see?
I guess?
That's because self-deprecation is a vital tool of the developer. The moment you feel you're good, you relax and stop improving.
Doesn't this mean that the moment you feel good about yourself, you're on your way to failure and then you should feel bad?
Yes. But the way to go about this is to say that everything is terrible, even if you have no solutions to offer. That way you look smart, but don't have much to contribute.
What do you mean?
Say I go online and see a project I dislike. The trick is to point out everything that is wrong, and give no more information than that. You can probably subtly point out ways in which the person who did the thing is an idiot and get away with it.
And how is anyone better for this?
Well I like to think they are better for knowing they're on the wrong track, and I'm better off for showing them that. It's a bit of smoke and mirrors. Nobody knows what they're doing but that way it looks like I do.
And what happens when you are asked for help and can't do anything about it?
That's when you go back to saying everything is terrible; you have too much yak shaving to do, improving other things, and being overly pessimistic. They're on their own.
So this is all posturing? You're gaming your way through? You're the person who pretends to be incompetent at things they know, which makes people who actually know nothing there feel even worse, and you're the person who pretends to be competent at things you don't know, so that people trying to improve there also feel bad.
In any case, competence has very little to do with it. Reputation is pretty important though. People hire friends, and people who aren't liked and non-essential get fired first; try to change the system and you become disliked. It's all a very social game. It's how it works in the industry, and probably in academia too, though I wouldn't know, now would I? It's all about who you know, selling yourself, your own personal brand you know? That's how you get jobs in the business.
If this is how things are, and you must feel bad and make others feel bad to do well, maybe I don't want a job in the business, said little printf, before walking out.
Chapter 6
nondescript programmer sitting back to the viewer in the dark, with a sandwich on their desk
During the time that would have been lunch break, Printf interrupted a person who had seemingly forgotten to eat their lunch, a sandwich growing cold by the minute, while sitting at their desk and looking at their screen.
That seemed like quite a busy person who might have known what they were doing. Printf asked:
If a primary database can fail, can the follower fail too?
Everything you run, the person said, can and will sooner or later fail.
Even the things telling you things have failed?
Yes, even these ones. All large systems are in some state of partial failure at any given time.
Then, trying to make reliable systems, what use is it?
The person did not know, for at that moment, they were trying to answer a page for the sky falling out onto their head due to a broken cloud, wondering the same thing.
Then making reliable systems, what use is it? pressed little Printf again
Upset as the person was dealing with a production issue, with this kid not letting go and a sandwich going to waste, the person impatiently shot back:
It's of no use at all. Programming is all shit anyway.
Oh! he gasped.
Then there was a moment of complete silence.
a garbage can on fire with a golden plate saying 'programming' on it
The little guy responded, with a hint of resentment:
I don't believe you. Programs are fragile, but programmers can make good efforts and make things better and useful.
No answer came back. At that point the person had opened the document explaining how to boot a new copy of the whole cluster from scratch, and things seemed to go from bad to worse.
And you actually believe good reliable prog-
Oh no! the person said. No no no! I don't believe in good or reliable programs! Not anymore! They're all terrible! I just told you the first thing that came to my head because I'm dealing with one of these shitty systems right now. Don't you see I'm trying to keep this stuff running? This shit is actually of consequence.
Printf stared back, with a shocked expression.
Actually of consequence? You talk just like a 'real programmer'.
He added:
You mix everything up, confuse everything. There have been millions of programs, and for years they've been running and failing just the same. And people have used them and needed them. And I know of some programs that run nowhere but on a single laptop, and with a single mistake could destroy entire communities, without even noticing. And you think that this is not important?
The person remained silent.
Chapter 7
a man at his desk in front of two monitors
The fourth workspace my friend visited had a man whose computer was covered in so many stickers nobody could tell what brand it was.
motor-mvc, quadrangular JS, GoQuery, cometeor, some Japanese-sounding thing, ...
Hi, interrupted printf. What are you doing?
alchemist, bongodb, mochascript, walktime.js, portasql, ..., the man kept going
What are you doing?, he asked again, louder this time.
Oh, I'm trying out new frameworks, tools, databases, languages.
Whoa, you seem to be going fast, maybe as fast as 10 programmers put together!
Yes! Well, the industry moves so very fast! He looked at his phone for a second, and added: there! The cardboard.io framework came up with version 3.5, which broke compatibility with 3.4, and this yielded 4 forks in the community! I have to try them all to know which to choose!
And what do you do, learning all of these?
I'm an early adopter. If you don't stay up to date you get stuck writing COBOL or MUMPS for a living. You want to find the next big thing, and ride the wave to the top!
Has it ever worked?
Oh yes! I found out about Rails before it got big, and I figured out node.js before it was popular, and I was on the first beta copies of Redis and MongoDB and Riak! I was the first one to use Vagrant, and then I got us to switch to Docker, but of course now it's all about unikernels...
Cool, and all these things you were at the forefront of, how did it pay off?
Oh, it didn't; by the time Rails became huge I had moved on to the next big thing, so I didn't get left behind. Similarly for the other ones. Here's hoping for unikernels, though.
I see, added little printf, pensively. What problems do you solve with all of these frameworks?
Oh, I make sure we don't use something that is not going to be big, so that this company doesn't end up betting on technologies that have no future. It's very important work, because if you don't do that, you can't find anyone to hire except old greybeards behind the times, and you want self-motivated go-getters, who are also early adopters, said the man.
That is funny, chimed our friend.
It is very hard! in the startup world, if you want a-players, you need good technology to bring them in! Otherwise you're stuck with inflexible laggards. Nobody wants to be an inflexible laggard.
The little printf interjected: No, that's not what I mean, and he then added: I mean it's funny that tools are meant to solve problems for us, but for you, the tools themselves have become a problem.
And while the man stood there in silence (on his new cool treadmill desk), little printf hopped out of the room.
Chapter 8
a woman in purple hoodie, slouched over her keyboard with her desk full of empty mugs and bottles
In the next office over sat a tired employee, with dozens of empty coffee cups, slouched over, typing angrily.
Hi, said little printf.
The woman didn't stop what she was doing. She kept typing furiously.
Hello? he asked again.
The woman stopped at once, got a flask out of a drawer in her desk, and took a swig.
I have a terrible job, she said. I do devops. It started okay, where I'd mostly develop and then sometimes debug stuff, but as time moved on, it got worse and worse. I started fighting fires in our stack, and then more fires kept happening. I got rid of most of them, pulling off small miracles here and there to still meet the deadlines on the dev stuff I also had to do.
And did they hire anyone to help?
No, that's the thing. Small fires kept happening here and there, and because of the time I took to fight them, I couldn't be as careful as before with the dev stuff, so I created more fires all the time. Now I'm fighting fires all day and all night and I hate my job
Why doesn't your employer do anything?
I'm good at my job, and I managed to keep things under control long enough that everyone got used to it. When you make a habit of small miracles, people get used to it. Then you're stuck doing miracles all the time, or they will think you're not doing your job at all.
That sounds very sad
It is; and because you're the most familiar person with these fires, you get to only work on them more and more, until your employer hires someone else to cover your old job, the one you loved. If you care hard enough about your work to be the one doing the stuff everyone else hates, you're thanked by doing more and more of that work you don't like, until that's all you do. And then there's nothing left for you to enjoy.
Then you're unlucky, said little printf.
And her pager went off again.
That woman, said little printf to himself, as he continued farther on his journey, that woman would be scorned by all the others: by the senior expert, by the rockstar developer, by the serial early adopter. Nevertheless she is the only one of them all who seems helpful. Perhaps that is because she is thinking of something else besides herself.
Chapter 9
software architect sitting at his desk with reams of paper on top of it
At the corner of the building, printf found a large office with big windows giving a stunning view of the area. In it sat an old gentleman with reams of documentation on his desk.
Ah, here comes a developer, exclaimed the man, as printf stood in the doorway. Come in!
Looking through the windows, little printf noticed that they were full of writing. With the help of a dry-erase pen, the view of the outside world had been masked by tons of circles, arrows, cylinders, and clouds. While it was curious that the man needed clouds drawn where real ones could be seen outdoors, the whole ensemble was intriguing all the same.
What is this?, asked our friend, pointing at the windows.
Oh, this? This is our production system! said the man, not once thinking the question was about the outside world. I am a software architect.
What's a software architect?
Mostly, it's someone who knows how best to structure and coordinate the components of a large system so they all fit together well. It's someone who has to know about databases, languages, frameworks, editors, serialization formats, protocols, and concepts like encapsulation and separation of concerns.
That is very interesting! said little printf, here is someone who can answer all my questions! He glanced at the architecture diagrams. Your system is very impressive. Is it running very fast?
I couldn't tell you, said the architect. It should, though
How's the code then, is it good?
I couldn't tell you
Are the users happy about it?
I couldn't tell you either, I'm afraid
But you're a software architect!
Exactly! But I am not a developer. It is not the architect who goes and writes the modules and classes, combines the libraries. The software architect is much too important to go around touching code. But he talks with programmers and developers, asks them questions, provides them guidance. And if the problem is looking interesting enough, the architect takes over the planning.
And why is that?
Because we are more experienced. We know more about systems and what works or not. Developers can then be an extension of our knowledge to produce great systems!
But how do you know if things are going well without getting involved with code?
We trust the developers
So you trust them to implement your ideas correctly, but not enough to come up with their own ideas?
The software architect was visibly shaken by this comment. I guess I might have been a bit disconnected, he finally admitted. The problem is that after a while you are asked to work with ideas so much you don't have a good way to get them tested or verified... he stared down, pensively. Sometimes a software architect does neither software nor architecture, it seems.
Little printf left the room, and being done with his visit, exited the building.
Chapter 10
man in a plaid shirt, winter hat, with a clipboard and a bell
Little printf, once outside, met a man collecting money for some charity.
Hi, said the man. How would you feel about helping someone today?
It would probably make me feel better, said little printf back. I have been in this office all day, and now I'm more confused than ever.
Ah I see. These people are all developers. They are not really helpful, are they? What they love to say is that they're changing the world, and they pretty much succeed at that, in fact.
Why does it feel so awkward, then? asked little printf.
Well, the best they do is often help convert some people's jobs into programs, or make everyone's leisure more leisurely. Software is eating the world and that changes its face for sure... but deep down it's the same old world, with a mangled face. The reason it feels awkward is that changing in that way doesn't mean things are getting any better. We have the same flaws and problems we always had, the same holes to fill deep down inside.
So how can I feel better? Little Printf was visibly anxious.
The man thought for a while, and invited printf to come help him help others, as this was his own way of feeling better. During the afternoon, printf told the man about his problems and his adventure. After a long silence, the man said:
The games people play, the roles and reputations they chase and entertain, the fleeting pleasure they derive from solving intricate problems: it's all fun for a while. Ultimately though, if you do not solve anything worthwhile, if you forget about the people involved, it's never gonna be truly fulfilling.
And that may be fine, and it might not be, and you may or may not get that from somewhere else than your workplace when you grow up. Work can be work; it can be for the money, it can be for the fun of it. That's okay. As long as you manage to get that fulfillment somewhere in your life.
In the end though, it is only when you solve problems with a human face that you can feel truly right; what is essential is invisible to the computer.
It is the time you have spent on your system that makes it so important, the man added, and when you lost sight of why it made sense to spend time on it, when it became a game of pride, it caused more grief than relief.
Developers have often forgotten this truth; if you lose sight of things, working on your system becomes its own problem, and the most effective solution is to get rid of the system, given that it's the problem.
It is only when you solve problems with a human face that you can feel truly right, repeated little printf to himself, so he would remember.
Chapter 11
same image of printf as before, except he's smiling this time around
Printf, who's now sitting right in front of me, is on his way home. Talking with him made me realize how much of what I do flies in the face of what I liked, what I started programming for. Each of the people Printf met is a role I see myself taking on one day or another. I was encouraged by them to become them, and probably encouraged people to do the same.
Where I got dragged into the game of trying to become a real programmer, Printf didn't. He said he was okay with not being a real programmer, that he preferred to be a programmer with a human face.
Today I'm stuck in the situation where I look back, and have to figure out if I can, too, become a programmer with a human face; or if everything I do is just a job. There doesn't seem to be too much that's worthwhile in-between.
In any case, where printf felt he didn't need to be a real programmer, I think I feel the same now.
On a warm morning in December, a few dozen Syrians from Aleppo and Idlib—former students, teachers, vegetable sellers and farmers—gather in an abandoned firefighting training center near the Syrian-Turkish border. They have come here to learn advanced rescue skills that they will use to teach newly recruited emergency workers back home. They are members of the Syrian Civil Defense, known as the White Helmets—the largest civil society group in Syria and one that is nonsectarian, neutral and unarmed.
The site looks like a deserted campground, aside from the burned-out bus in the middle of a neighboring field and collapsed concrete buildings that they use for simulation exercises. The exact location of their training center is undisclosed, and most of them ask to be identified by only their first names, because the White Helmets have received death threats. They also know there are sleeper cells of the Islamic State militant group (ISIS) in the area, and there have been shootings and bombings nearby.
It’s Week Three of training, and soon the men will go home. The mood is somber—a recent bombing in Idlib killed more than 50 people—but there is a sense of deep bonding here. Some of these men have known one another since childhood, and they are bound by this vital and perilous work they have undertaken.
Instructors of the White Helmets, or the Syrian Civil Defense, during a training course in southern Turkey, on December 13 and 14. Volunteers are filling the void of emergency responders, simultaneously becoming firefighters, paramedics, and search and rescue teams during the ongoing Syrian Civil War. From top left: Samer Hossein (30, from Idlib, Syria), Ali (40, from Aleppo), Khaled (40, from Idlib, Syria), Yusuf Azzo (27, from Heyyan, Syria), Mahmoud Hatter (27, from Aleppo, Syria), Husam (25, from Binnish, Syria), Osama (29, from Jebel Zawiyah, Syria), Ahmad Al-Imam (30, from Anadan, Syria), and Ali Juma (39, from Al Atarib, Syria).
NICOLE TUNG FOR NEWSWEEK
Their first exercise involves building a huge oil fire near the ruins of the bus, then extinguishing it. As they pull on protective gear, including gas masks, and unravel hoses, they make a few jokes and talk about their lives before the civil war and people they know in common. One of their trainers has “better than nothing” scrawled on the back of his jacket. Khaled, a father of four from Idlib, explains, “We have a strange gallows humor. We’ve seen so much. It’s a way of releasing tension.”
“Tell her about the sheep market in Aleppo,” one man says. Samer Hussain, 30, responds, “A bomb hit the market when it was most crowded—people had come out to buy food. The animal flesh was mixed with humans,” he says. “We found arms, legs, heads. We lost around 25 people that day. Some of them were beyond recognition because of the bombs. You could no longer describe them as human.”
Several other men work on building up the fire. Then they take a break, pull off their helmets and masks and take out packs of cigarettes. They exist, they say, on cigarettes and coffee. “It’s not like we worry about dying from cigarettes,” says one. “We probably have the most dangerous jobs on earth.”
With the war in Syria now in its fifth year, average life expectancy there has dropped by two decades. More than 250,000 people have been killed and more than 1 million injured, according to the United Nations. Millions more have been driven from their homes, including more than 4 million who have fled the country as refugees.
There are more than 2,800 White Helmets, including 80 women, all volunteers who work full time and get paid a $150 monthly stipend. So far, according to Raed al-Saleh, 33, the founder of the White Helmets, they have saved more than 40,000 lives.
Although they operate largely in rebel-held areas of Syria, the White Helmets don’t discriminate between victims on one side or the other. “To save one life is to save humanity” is their motto, and from the rubble they have dug out members of Hezbollah or Iranians fighting on behalf of President Bashar al-Assad, as well as Free Syrian Army opposition fighters. But most often, they save civilians. For those who live in frequently targeted areas, the Syrian Civil Defense, or Difaa Midani in Arabic, is a symbol of hope in an exceedingly bleak conflict.
This is a war that has attracted limited international humanitarian assistance, given the risks of operating in Syria, so the civilian population has suffered terribly. Nearly all structures of society have broken down, from education to health care. Schools have not functioned for years, and if you have a chronic disease, such as cancer or diabetes, you most likely die without treatment.
The White Helmets formed in 2013 as a grassroots operation funded by the British, Danish and Japanese governments to recruit first responders. It has a budget of $30 million a year, much of which is spent on equipment, such as heavy diggers to remove bodies from under concrete that has collapsed, and the stipends.
After initially working with foreign advisers, it is now an entirely Syrian operation, with around 20 to 30 new recruits coming forward each month. “The very fact that this exists in communities gives people more of a sense of security,” says James le Mesurier, a former British soldier with Mayday, a nongovernmental organization that, along with skilled Turkish rescue workers, helped set up and train the first cadre of White Helmets.
So far, 110 White Helmets have died on the job, and four times that many have been seriously wounded. The average age is 26, although one elderly man joined the day after he buried his son, who was a White Helmet. The youngest is 17. They work at all hours of the day and night, and their centers, although in secret locations, are frequently targeted, as are their vehicles, including their ambulances. They say this has happened with more alarming frequency since Russian airstrikes in support of Assad began on September 30.
The volunteers say they often run toward a scene fully acknowledging that bombs may still be falling.
NICOLE TUNG FOR NEWSWEEK
“This is the least we can do for our country,” says Khaled, the father from Idlib. He likens it more to a “calling” than a job.
After training and pledging to abide by the code of conduct—no guns, strict neutrality and no sectarianism—they are given a white uniform and helmet and sent on their first mission. The training rarely prepares them fully for the real thing, says Abdul Khafi from Idlib. The most difficult part of the job is not the physical but the psychological impact of seeing so many dead and injured. “Killing is easy. Saving lives is much harder,” he says. “Sometimes the pressure is more than our endurance.”
Once they join, they rarely quit. One White Helmet left to become a refugee in Germany. “We need this,” says Jawad, 35, from Idlib, a married father with two children, who once worked in a fire brigade. “We need to save as many people as we can, especially as the war gets worse. It shows something. It means something. ”
“No words can adequately describe what it is like to save a life,” Saleh, the founder of the White Helmets, wrote in a Washington Post op-ed in March 2015. “But for us the elation never lasts because we are constantly under attack.” A former electronics salesman from Idlib, Saleh addressed the U.N. Security Council this past summer in an attempt to explain the misery of living under bombardment.
When they race to a location that has been bombed, they are acutely aware that more bombs—“the second tap”—are probably coming within minutes. This past summer, they had to start painting their ambulances in camouflage colors. Back then, before the Russian airstrikes began, the biggest killer was barrel bombs—rusty containers loaded with nails, glass, shrapnel, explosives and sometimes chlorine gas. (The U.N. has accused Assad of using barrel bombs, though he denies it.)
For a White Helmet from Idlib named Osama, 29, the greatest challenge was overcoming his fear. “You learn, slowly,” he says. “But now, since the Russians started coming, I am more frightened than ever. It’s a different kind of bombing.”
The White Helmets have their detractors. Regime bloggers and Russian Internet trolls accuse them of being the Nusra Front, the Al-Qaeda franchise in Syria. In the early days of their operation, one White Helmet was photographed with a gun (he was immediately dismissed). Yet at night, gathered in the hotel where they are living for these three weeks of training, most of the men don’t want to talk about religion or politics. “If you make the decision to risk your life, to save other people, it goes against radicalization,” says le Mesurier. “They’ve emerged as the representative of the average, good Syrian.”
During the evening, there are more cigarettes, along with laughter, singing, even some planning for a wedding in Aleppo. Life at home, under regular bombardment, is hellish, but few of these men seem to exhibit signs of post-traumatic stress disorder. “The fact they are part of a strong community helps them,” le Mesurier says. “They are not isolated.”
And yet, they go out when the call comes knowing there is a strong possibility they will either be injured or not return. Khafi tells of a recent raid near his home in Idlib, when a Russian bomber first targeted a civilian area, then took out the White Helmet center. It was, he says, his worst day. “It only took 10 minutes for the bombs to land, and by the end of it seven out of nine of us were badly injured,” he says. “In a matter of minutes, our center was no longer operational. That quickly, you can wipe out lives.”
A few days earlier, Khafi and his team responded to an attack outside Aleppo involving “more than 40 cluster bombs.” People were screaming for help in every direction. “Sometimes, you don’t know where to begin,” he says, describing the chaos, the confusion, the dust. Once he spent hours building a 100-foot tunnel through the rubble to reach an 8-year-old girl who had been trapped when her house crumbled.
“When we reached her, the first thing she said was, ‘Get my sister out first,’” Khafi recalls, and she pointed to another corner of what had once been her room. Her twin sister died before the White Helmets could reach her.
“My worst day so far was at the end of October,” says Osama. He says he got a call that a Russian fighter jet had hit a chicken farm where refugees were living. As he raced to the scene with a digger to trawl out the bodies, he got another call: “Our spotter saw more Russian planes coming in for a double tap,” he says.
The Syrian Civil Defense numbers roughly 2,500 members throughout various cities and villages in the country. At least 150 have been killed in the line of duty, and four times as many have suffered serious injuries, many as a result of secondary bombs that land on the site they initially responded to.
NICOLE TUNG FOR NEWSWEEK
He got out of his vehicle and watched helplessly as the jets bombed a second time, while his colleagues continued to work. When he arrived, many of them were gravely injured. “One of my colleagues was cut in half,” Osama says. How do you abandon people, he asks, who are buried under rubble, crying out for help? “It makes you feel completely helpless.”
He and his team continued to work for hours trying to excavate a mother and seven children. “But in the end, we couldn’t do it,” he says. “By the time we got to them, they were so badly burnt I couldn’t tell if they were little boys or little girls.”
Hossam, who is 25 and was studying English literature before the war, says he joined the White Helmets in 2013 after he got out of a regime detention center, where he was held for a month. He says he entered because it was the only nonviolent way to help his country. “When I look at the last three years of my life,” he says, “I feel proud.” But his memories are also gruesome: pulling up the head of what he thought was a doll from a pile of rubble only to discover it belonged to a small girl, and finding a weeping mother who had lost all her children and asked, “Where are my angels?”
After days spent dislodging mutilated bodies covered in dust and blood, Hossam is not sure how he keeps going, but he says it is important to do so. “We know we are saving,” he says. “The bombs are destroying, but we are building. The regime is killing. We are saving.”
Hossam doesn’t know what he will do after the war ends. “I’m not sure I can go back to the life we had before,” he says. “But there is one thing: We built something here, out of nothing.”
Ten years ago, I reached a point in my career that felt either like a dead-end or a turning point – I wasn’t sure which. By then, I had spent 25 years as a gerontologist, professionally occupied with everything to do with ageing. I conducted research using longitudinal data sets and sophisticated statistical analyses. I developed and evaluated programmes to improve older people’s lives. I taught courses and gave lectures on ageing. I opined on policy issues affecting our ageing society. So what was the revelation?
I never talked to old people.
My research kept me at more than an arm’s length from the living, breathing individuals who were its subject. At best, hired interviewers spoke with my respondents. Elsewhere, I used even more distant secondary data sets. My ‘engagement’ with real people involved checking codes and running statistics. The living, breathing humans who reported buoyant life satisfaction or high levels of caregiver stress were equally distant from me. And so I suddenly felt an urge to go out into the world of people in the eighth decade of life and beyond, and listen to what they had to say. What I heard changed my whole approach to life. Perhaps it will do the same for you.
In a seminar room on an Ivy League campus, I sat across from hopeful, earnest, and anxious college seniors. In a few months, they would leave the classic tree-lined campus, the football games, and the near-gourmet meals that US dining halls now serve. I had arranged the meeting to find out what these ‘emerging adults’ wanted to learn about work and careers from their elders.
Sitting with these students on a bright spring morning, I anticipated that they would want to hear about success strategies, tips for getting ahead, and suggestions for landing a high-paying dream job. So I was taken aback by the first question. It came from Josh, a future money manager dressed in a jacket and tie. He asked:
I’d like you to ask them about something that really worries me. Do I need a purpose in life? That’s what all the books say, but I guess I don’t have one. Is there something wrong with me? And how do I get a purpose if I need one?
There was furious nodding from the other participants. Because these students were driven to excel, they had devoured books about career strategies and success, many of which emphasised purpose. They had heard motivational speakers exhort them to find a single life passion, without which they were sure to drift, rudderless, through a disappointing career. But as we talked, it became clear that it just didn’t feel that way to them. They might have an interest, an inclination, an inkling for something they would enjoy – but one all-consuming life goal eluded them. They feared that this lack of a unique and compelling purpose might doom them to a life of failure and futility.
And yet, from the other end of life’s voyage, our elders give us a very different view of a life purpose – and a tip for finding one. Basically, the oldest Americans (most of whom also struggled with the question) tell you to relax. They say that you are likely to have a number of purposes, which will shift as you progress through life.
Marjorie Wilcox, aged 87, brought this lesson home to me. Marjorie is tall, fit and active. She captures a certain casual elegance – there’s a bit of Lauren Bacall in both her appearance and the tone of her voice. Marjorie devoted her career to developing affordable housing, travelling to the worst parts of industrial cities throughout the US. With this passion to make things right in the world and her own history of adversity, I expected a strong endorsement of purpose as the first condition for a good life.
In fact, I heard something different from Marjorie and many of the other elders: namely, that our focus should not be on a purpose, but on purposes. She reported that the ‘purposes’ in her life changed as her life situation, interests, and priorities shifted. She warned specifically against being railroaded in the direction of a single purpose:
You will do several different things. Do not be on one train track because the train will change. Widen your mind. That’s what you should have as your priorities as a young person. Make sure you keep flexible. Lead with your strengths, and they will get you where you want to go.
The elders recommend that we re-shape the quest for a purpose, thinking of it as a search for a general direction to pursue energetically and courageously. Determining a direction in life is easier, more spontaneous, more flexible, and less laden with overtones of a mystical revelation that sets you on an immutable life path. Times change, circumstances change – indeed, change itself is the norm rather than the exception. A grand purpose, in their view, is not only unnecessary – it can also get in the way of a fulfilling career. Instead, they have offered the idea of finding an orientation, a ‘working model’ if you will, that guides you through each phase of life.
But how should you go about finding a direction? How to settle on a purpose that fits your current life stage? One technique turns out to be immensely valuable – and yet most people ignore it. If you are searching for a direction or purpose, interview your future self.
There are in fact a host of benefits to doing this. Experiments have shown that when people are made to think in detail about their future selves, they are more likely to make better financial planning decisions, show altruistic behaviour, and make more ethical choices. But it’s hard to do. A good deal of social science research over the past decade has shown that most people feel disconnected from their future selves. It takes work to imagine oneself a decade or two from now – let alone a half-century or more. Researchers have gone so far as to invent software that ‘morphs’ the reflection of a young subject to age 70 or 80.
But this is as far as time-travel technology seems to have got, so it’s sadly not possible to meet your real future self. Yet it’s astonishing how few people do the next best thing: interview an older person who embodies the ‘self’ you would like to be. This idea came to me from Barry Fine, a highly successful serial entrepreneur who still manages a business at 89. In fact, he didn’t use the term ‘future self’. He used a word he’d learned growing up on New York’s Lower East Side. His advice was to ‘find a maven’.
Like many Yiddish expressions, ‘maven’ defies a single definition. It’s derived from a Hebrew word meaning ‘one who knows’, or ‘one who understands’. Mavens are trusted experts, reliable sources of accumulated wisdom. That’s who we need to guide us, according to Barry:
In whatever business I’ve been in, and I’ve been in about eight businesses – some successful, some not successful – the most important thing is to have a maven. Somebody who can really guide you. Where I’ve done this, where I’ve had a wonderful maven, I’ve always been successful. Where I went by myself, on my own, I’ve always failed. When I haven’t listened, I’ve lost a lot of money. Younger people may not be so aware of this. They don’t really understand that there are so many aspects of business you don’t get taught in school. They have to see long-term into the future. They need to think three years, six years, 20 years out. That is what the maven is for, steering them in the right direction, based on his or her experiences.
In any period where you feel directionless, wavering, stuck with one foot in two different worlds, and hearing in the back of your mind the song lyrics ‘Should I stay or should I go?’ – find your future self. He or she should be old – and preferably really old. You don’t want a 40-year-old if you are 20; you want someone in his or her 80s, 90s, or a centenarian if you can find one. You need your future self to have the truly long view, as well as the detachment that comes from a very long life.
This person also needs to be as close as possible to your imagined future self. Debating a career in medicine? Find a doctor who loved what she did. Worried about whether you can balance your values with a career in the financial services industry? Find an older person who struck that balance and made it to the end of life without regrets. Planning to work an undemanding day job so you have the energy to paint/write/act in your spare time? Some very old people did just that (and can tell stories of bohemian life that will sound very familiar today).
When I hit my crisis point 10 years ago, I couldn’t decide what to do, so I sought out Henry. Standing just a little over five feet tall and equipped with two hearing aids, Henry might not have seemed an imposing figure. But he was one of the leading developmental psychologists of his era, and he still came into the office every day to conduct research. Henry was cagey about his age, but I knew from talking with his wife that he had recently turned 93. On a whim, I asked him if we could have lunch. While he ate a green salad and I a cheeseburger, I let it all come out. Could I embrace this kind of risk, moving from churning out scientific articles in turgid academic prose to take the step of writing a book? A non-academic book, at that? And if I didn’t, would I regret it when I was his age?
He stopped me with the single word ‘Yes.’ Yes, he said, I would regret it if I did not take this leap, just as he regretted opportunities in his life that he had let slip by. He assured me that at his age, I would be much more likely to regret something that I had not done than something I had. And so I stepped away from the computer and the statistical software packages, and went on a search for the practical wisdom of older people. Ten years, 2,000 interviews, and two books later, I was not disappointed.
Sometimes things turn out to be less complicated than they seem. In preparation for my research, I ploughed through books that promised to help me find my life purpose in a short six or eight weeks; books that offered to show me my purpose in a set of steps or exercises; and more books that simply exhorted me to find that purpose and do it now! Along the way, I have learned that I would be helped by synchronicities, purpose boot camps, life portfolios, and a number of books by divine inspiration. Maybe, I realised, it can be much simpler than that.
Why not begin with an activity as old as the human race: asking the advice of the oldest people you know? Because older people have one thing that the rest of us do not: they have lived their lives. They have been where we haven’t. Indeed, people who have experienced most of a long life are in an ideal position to assess what ‘works’ and what doesn’t for finding a direction. It is impossible for a younger person to know about the entire course of life as deeply and intimately as an older person does. They bring to our contemporary problems and choices perspectives from a different time. These insights can make a world of difference to us. So find someone who mirrors your image of your future self, and ask about your direction – you won’t regret it.
In 2007, right before the first iPhone launched, I asked Steve Jobs the obvious question: The design of the iPhone was based on discarding every physical interface element except for a touchscreen. Would users be willing to give up the then-dominant physical keypads for a soft keyboard?
His answer was brusque: “They’ll learn.”
Steve turned out to be right. Today, touchscreens are ubiquitous and seem normal, and other interfaces are emerging. An entire generation is now coming of age with a completely different tactile relationship to information, validating all over again Marshall McLuhan’s observation that “the medium is the message”.
A great deal of product development is based on the assumption that products must adapt to unchanging human needs or risk being rejected. Yet, time and again, people adapt in unpredictable ways to get the most out of new tech. Creative people tinker to figure out the most interesting applications, others build on those, and entire industries are reshaped.
People change, then forget that they changed, and act as though they always behaved a certain way and could never change again. Because of this, unexpected changes in human behavior are often dismissed as regressive rather than as potentially intelligent adaptations.
But change happens anyway. “Software is eating the world” is the most recent historic transformation of this sort.
In 2014, a few of us invited Venkatesh Rao to spend the year at Andreessen Horowitz as a consultant to explore the nature of such historic tech transformations. In particular, we set out to answer the question: Between both the breathless and despairing extremes of viewing the future, could an intellectually rigorous case be made for pragmatic optimism?
As this set of essays argues — many of them inspired by a series of intensive conversations Venkat and I had — there is indeed such a case, and it follows naturally from the basic premise that people can and do change. To “break smart” is to adapt intelligently to new technological possibilities.
With his technological background, satirical eye, and gift for deep and different takes (as anyone who follows his Ribbonfarm blog knows!), there is perhaps nobody better suited than Venkat for telling a story of the future as it breaks smart from the past.
Whether you’re a high school kid figuring out a career or a CEO attempting to navigate the new economy, Breaking Smart should be on your go-to list of resources for thinking about the future, even as you are busy trying to shape it.
Something momentous happened around the year 2000: a major new soft technology came of age. After written language and money, software is only the third major soft technology to appear in human civilization. Fifteen years into the age of software, we are still struggling to understand exactly what has happened. Marc Andreessen’s now-familiar line, software is eating the world, hints at the significance, but we are only just beginning to figure out how to think about the world in which we find ourselves.
Only a handful of general-purpose technologies1 – electricity, steam power, precision clocks, written language, token currencies, iron metallurgy and agriculture among them – have impacted our world in the sort of deeply transformative way that deserves the description eating. And only two of these, written language and money, were soft technologies: seemingly ephemeral, but capable of being embodied in a variety of specific physical forms. Software has the same relationship to any specific sort of computing hardware as money does to coins or credit cards, or writing does to clay tablets and paper books.
But only since about 2000 has software acquired the sort of unbridled power, independent of hardware specifics, that it possesses today. For the first half century of modern computing after World War II, hardware was the driving force. The industrial world mostly consumed software to meet existing needs, such as tracking inventory and payroll, rather than being consumed by it. Serious technologists largely focused on solving the clear and present problems of the industrial age rather than exploring the possibilities of computing, proper.
Sometime around the dot com crash of 2000, though, the nature of software, and its relationship with hardware, underwent a shift. It was a shift marked by accelerating growth in the software economy and a peaking in the relative prominence of hardware.2 The shift happened within the information technology industry first, and then began to spread across the rest of the economy.
But the economic numbers only hint at3 the profundity of the resulting societal impact. As a simple example, a 14-year-old today (too young to show up in labor statistics) can learn programming, contribute significantly to open-source projects, and become a talented professional-grade programmer before age 18. This is breaking smart: an economic actor using early mastery of emerging technological leverage — in this case a young individual using software leverage — to wield disproportionate influence on the emerging future.
Only a tiny fraction of this enormously valuable activity — the cost of a laptop and an Internet connection — would show up in standard economic metrics. Based on visible economic impact alone, the effects of such activity might even show up as a negative, in the form of technology-driven deflation. But the hidden economic significance of such an invisible story is at least comparable to that of an 18-year-old paying $100,000 over four years to acquire a traditional college degree. In the most dramatic cases, it can be as high as the value of an entire industry. The music industry is an example: a product created by a teenager, Shawn Fanning’s Napster, triggered a cascade of innovation whose primary visible impact has been the vertiginous decline of big record labels, but whose hidden impact includes an explosion in independent music production and rapid growth in the live-music sector.4
Software eating the world is a story of the seen and the unseen: small, measurable effects that seem underwhelming or even negative, and large invisible and positive effects that are easy to miss, unless you know where to look.5
Today, the significance of the unseen story is beginning to be widely appreciated. But as recently as fifteen years ago, when the main act was getting underway, even veteran technologists were being blindsided by the subtlety of the transition to software-first computing.
Perhaps the subtlest element had to do with Moore’s Law, the famous 1965 observation by Intel co-founder Gordon Moore that the density with which transistors can be packed into a silicon chip doubles every 18 months. By 2000, even as semiconductor manufacturing firms began running into the fundamental limits of Moore’s Law, chip designers and device manufacturers began to figure out how to use Moore’s Law to drive down the cost and power consumption of processors rather than driving up raw performance. The results were dramatic: low-cost, low-power mobile devices, such as smartphones, began to proliferate, vastly expanding the range of what we think of as computers. Coupled with reliable and cheap cloud computing infrastructure and mobile broadband, the result was a radical increase in technological potential. Computing could, and did, become vastly more accessible, to many more people in every country on the planet, at radically lower cost and expertise levels.
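To get a feel for the compounding involved, here is a toy back-of-the-envelope calculation in Python, assuming the 18-month doubling period cited above; the numbers are illustrative, not industry data.

    # Back-of-the-envelope Moore's Law arithmetic (toy numbers, not industry data).
    # Assumes the 18-month doubling period cited above, and asks what the same
    # gains look like when spent on cost and power instead of raw performance.

    DOUBLING_PERIOD_YEARS = 1.5  # assumed doubling period

    def density_multiplier(years):
        """Relative transistor density after `years`, starting from 1.0."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for years in (5, 10, 15):
        gain = density_multiplier(years)
        # Spending the gain on cost/power instead of speed: roughly the same
        # capability on about 1/gain of the original silicon budget.
        print(f"{years:>2} years: ~{gain:,.0f}x density, "
              f"or the same chip at ~{100 / gain:.2f}% of the cost")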
One result of this increased potential was that technologists began to grope towards a collective vision commonly called the Internet of Things. It is a vision based on the prospect of processors becoming so cheap, miniaturized and low-powered that they can be embedded, along with power sources, sensors and actuators, in just about anything, from cars and light bulbs to clothing and pills. Estimates of the economic potential of the Internet of Things – of putting a chip and software into every physical item on Earth – vary from $2.7 trillion to over $14 trillion: comparable to the entire GDP of the United States today.6
By 2010, it had become clear that given connectivity to nearly limitless cloud computing power and advances in battery technologies, programming was no longer something only a trained engineer could do to a box connected to a screen and a keyboard. It was something even a teenager could do, to almost anything.
The rise of ridesharing illustrates the process particularly well.
Only a few years ago services like Uber and Lyft seemed like minor enhancements to the process of procuring and paying for cab rides. Slowly, it became obvious that ridesharing was eliminating the role of human dispatchers and lowering the level of expertise required of drivers. As data accumulated through GPS tracking and ratings mechanisms, it further became clear that trust and safety could increasingly be underwritten by data instead of brand promises and regulation. This made it possible to dramatically expand driver supply, and lower ride costs by using underutilized vehicles already on the roads.
As the ridesharing sector took root and grew in city after city, second-order effects began to kick in. The increased convenience enabled many more urban dwellers to adopt carless lifestyles. Increasing supply lowered costs and increased accessibility for people previously limited to inconvenient public transportation. And as the idea of the carless lifestyle began to spread, urban planners began to realize that century-old trends like suburbanization, driven in part by car ownership, could no longer be taken for granted.
The ridesharing future we are seeing emerge now is even more dramatic: the higher utilization of cars leads to lower demand for cars, and frees up resources for other kinds of consumption. Individual lifestyle costs are being lowered and insurance models are being reimagined. The future of road networks must now be reconsidered in light of greener and more efficient use of both vehicles and roads.
Meanwhile, the emerging software infrastructure created by ridesharing is starting to have a cascading impact on businesses, such as delivery services, that rely on urban transportation and logistics systems. And finally, by proving many key component technologies, the rideshare industry is paving the way for the next major development: driverless cars.
These developments herald a major change in our relationship to cars.
To traditionalists, particularly in the United States, the car is a motif for an entire way of life, and the smartphone just an accessory. To early adopters who have integrated ridesharing deeply into their lives, the smartphone is the lifestyle motif, and the car is the accessory. To generations of Americans, owning a car represented freedom. To the next generation, not owning a car will represent freedom.
And this dramatic reversal in our relationships to two important technologies – cars and smartphones – is being catalyzed by what was initially dismissed as “yet another trivial app.”
Similar impact patterns are unfolding in sector after sector. Prominent early examples include the publishing, education, cable television, aviation, postal mail and hotel sectors. The impact is more than economic. Every aspect of the global industrial social order is being transformed by the impact of software.
This has happened before of course: money and written language both transformed the world in similarly profound ways. Software, however, is more flexible and powerful than either.
Writing is very flexible: we can write with a finger on sand or with an electron beam on a pinhead. Money is even more flexible: anything from cigarettes in a prison to pepper and salt in the ancient world to modern fiat currencies can work. But software can increasingly go wherever writing and money can go, and beyond. Software can also eat both, and take them to places they cannot go on their own.
Partly as a consequence of how rarely soft, world-eating technologies erupt into human life, we have been systematically underestimating the magnitude of the forces being unleashed by software. While it might seem like software is constantly in the news, what we have already seen is dwarfed by what still remains unseen.
The effects of this widespread underestimation are dramatic. The opportunities presented by software are expanding, and the risks of being caught on the wrong side of the transformation are dramatically increasing. Those who have correctly calibrated the impact of software are winning. Those who have miscalibrated it are losing.
And the winners are not winning by small margins or temporarily either. Software-fueled victories in the past decade have tended to be overwhelming and irreversible faits accompli. And this appears to be true at all levels from individuals to businesses to nations. Even totalitarian dictatorships seem unable to resist the transformation indefinitely.
So to understand how software is eating the world, we have to ask why we have been systematically underestimating its impact, and how we can recalibrate our expectations for the future.
There are four major reasons we underestimate the increasing power of software. Three of these reasons drove similar patterns of miscalibration in previous technological revolutions, but one is unique to software.
First, as futurist Roy Amara noted, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Technological change unfolds exponentially, like compound interest, and we humans seem wired to think about exponential phenomena in flawed ways.1 In the case of software, we expected too much too soon from 1995 to 2000, leading to the crash. Now in 2015, many apparently silly ideas from 2000, such as home-delivery of groceries ordered on the Internet, have become a mundane part of everyday life in many cities. But the element of surprise has dissipated, so we tend to expect too little, too far out, and are blindsided by revolutionary change in sector after sector. Change that often looks trivial or banal on the surface, but turns out to have been profound once the dust settles.
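A toy calculation makes the intuition gap concrete. The 40% annual growth rate below is invented purely for illustration; only the shape of the divergence between a linear forecast and compound growth matters.

    # Toy illustration of why exponential change defeats linear intuition.
    # The 40% annual growth rate is invented for illustration; only the shape
    # of the gap matters, not the specific number.

    annual_growth = 0.40  # hypothetical growth rate of some technology's reach

    def linear_guess(years):
        """What a straight-line extrapolation of the first year predicts."""
        return 1 + annual_growth * years

    def compounded(years):
        """What compounding actually delivers."""
        return (1 + annual_growth) ** years

    for years in (2, 5, 15):
        print(f"year {years:>2}: linear guess {linear_guess(years):6.1f}x, "
              f"compounded {compounded(years):8.1f}x")

    # Early on the two curves are close, so breathless forecasts feel overblown;
    # fifteen years out the compounded curve has left every linear forecast behind.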
Second, we have shifted gears from what economic historian Carlota Perez calls the installation phase of the software revolution, focused on basic infrastructure such as operating systems and networking protocols, to a deployment phase focused on consumer applications such as social networks, ridesharing and ebooks. In her landmark study of the history of technology,2 Perez demonstrates that the shift from installation to deployment phase for every major technology is marked by a chaotic transitional phase of wars, financial scandals and deep anxieties about civilizational collapse. One consequence of the chaos is that attention is absorbed by transient crises in economic, political and military affairs, and the apocalyptic fears and utopian dreams they provoke. As a result, momentous but quiet change passes unnoticed.
Third, a great deal of the impact of software today appears in a disguised form. The genomics and nanotechnology sectors appear to be rooted in biology and materials science. The “maker” movement around 3D printing and drones appears to be about manufacturing and hardware. Dig a little deeper though, and you invariably find that the action is being driven by possibilities opened up by software more than fundamental new discoveries in those physical fields. The crashing cost of genome-sequencing is primarily due to computing, with innovations in wet chemistry playing a secondary role. Financial innovations leading to cheaper insurance and credit are software innovations in disguise. The Nest thermostat achieves energy savings not by exploiting new discoveries in thermodynamics, but by using machine learning algorithms in a creative way. The potential of this software-driven model is what prompted Google, a software company, to pay $3B to acquire Nest: a company that on the surface appeared to have merely invented a slightly better mousetrap.
These three reasons for underestimating the power of software had counterparts in previous technology revolutions. The railroad revolution of the nineteenth century also saw a transitional period marked by systematically flawed expectations, a bloody civil war in the United States, and extensive patterns of disguised change — such as the rise of urban living, grocery store chains, and meat consumption — whose root cause was cheap rail transport.
The fourth reason we underestimate software, however, is a unique one: it is a revolution that is being led, in large measure, by brash young kids rather than sober adults.3
This is perhaps the single most important thing to understand about the revolution that we have labeled software eating the world: it is being led by young people, and proceeding largely without adult supervision (though with many adults participating). This has unexpected consequences.
As in most periods in history, older generations today run or control all key institutions worldwide. They are better organized and politically more powerful. In the United States for example, the AARP is perhaps the single most influential organization in politics. Within the current structure of the global economy, older generations can, and do, borrow unconditionally from the future at the expense of the young and the yet-to-be-born.
But unlike most periods in history, young people today do not have to either “wait their turn” or directly confront a social order that is systematically stacked against them. Operating in the margins by a hacker ethos — a problem solving sensibility based on rapid trial-and-error and creative improvisation — they are able to use software leverage and loose digital forms of organization to create new economic, social and political wealth. In the process, young people are indirectly disrupting politics and economics and creating a new parallel social order. Instead of vying for control of venerable institutions that have already weathered several generational wars, young people are creating new institutions based on the new software and new wealth. These improvised but highly effective institutions repeatedly emerge out of nowhere, and begin accumulating political and economic power. Most importantly, they are relatively invisible. Compared to the visible power of youth counterculture in the 1960s for instance, today’s youth culture, built around messaging apps and photo-sharing, does not seem like a political force to reckon with. This culture also has a decidedly commercial rather than ideological character, as a New York Times writer (rather wistfully) noted in a 2011 piece appropriately titled Generation Sell.4 Yet, today’s youth culture is arguably more powerful as a result, representing as it does what Jane Jacobs called the “commerce syndrome” of values, rooted in pluralistic economic pragmatism, rather than the opposed “guardian syndrome” of values, rooted in exclusionary and authoritarian political ideologies.
Chris Dixon captured this guerrilla pattern of the ongoing shift in political power with a succinct observation: what the smartest people do on the weekend is what everyone else will do during the week in ten years.
The result is strange: what in past eras would have been a classic situation of generational conflict based on political confrontation, is instead playing out as an economy-wide technological disruption involving surprisingly little direct political confrontation. Movements such as #Occupy pale in comparison to their 1960s counterparts, and more importantly, in comparison to contemporary youth-driven economic activity.
This does not mean, of course, that there are no political consequences. Software-driven transformations directly disrupt the middle-class life script, upon which the entire industrial social order is based. In its typical aspirational form, the traditional script is based on 12 years of regimented industrial schooling, an additional 4 years devoted to economic specialization, lifetime employment with predictable seniority-based promotions, and middle-class lifestyles. Though this script began to unravel as early as the 1970s, even for the minority (white, male, straight, abled, native-born) who actually enjoyed it, the social order of our world is still based on it. Instead of software, the traditional script runs on what we might call paperware: bureaucratic processes constructed from the older soft technologies of writing and money. Instead of the hacker ethos of flexible and creative improvisation, it is based on the credentialist ethos of degrees, certifications, licenses and regulations. Instead of being based on achieving financial autonomy early, it is based on taking on significant debt (for college and home ownership) early.
It is important to note though, that this social order based on credentialism and paperware worked reasonably well for almost a century between approximately 1870 and 1970, and created a great deal of new wealth and prosperity. Despite its stifling effects on individualism, creativity and risk-taking, it offered its members a broader range of opportunities and more security than the narrow agrarian provincialism it supplanted. For all its shortcomings, lifetime employment in a large corporation like General Motors, with significantly higher standards of living, was a great improvement over pre-industrial rural life.
But by the 1970s, industrialization had succeeded so wildly that it had undermined its own fundamental premises of interchangeability in products, parts and humans. As economists Jeremy Greenwood and Mehmet Yorukoglu5 argue in a provocative paper titled 1974, that year arguably marked the end of the industrial age and the beginning of the information age. Computer-aided industrial automation was making ever-greater scale possible at ever-lower costs. At the same time, variety and uniqueness in products and services were becoming increasingly valuable to consumers in the developed world. Global competition, especially from Japan and Germany, began to directly threaten American industrial leadership. This began to drive product differentiation, a challenge that demanded originality rather than conformity from workers. Industry structures that had taken shape in the era of mass-produced products, such as Ford’s famous black Model T, were redefined to serve the demand for increasing variety. The result was arguably a peaking in all aspects of the industrial social order based on mass production and interchangeable workers roughly around 1974, a phenomenon Balaji Srinivasan has dubbed peak centralization.6
One way to understand the shift from credentialist to hacker modes of social organization, via young people acquiring technological leverage, is through the mythological tale of Prometheus stealing fire from the heavens for human use.
The legend of Prometheus has been used as a metaphor for technological progress at least since Mary Shelley’s Frankenstein: The Modern Prometheus. Technologies capable of eating the world typically have a Promethean character: they emerge within a mature social order (a metaphoric “heaven” that is the preserve of older elites), but their true potential is unleashed by an emerging one (a metaphoric “earth” comprising creative marginal cultures, in particular youth cultures), which gains relative power as a result. Software as a Promethean technology emerged in the heart of the industrial social order, at companies such as AT&T, IBM and Xerox, universities such as MIT and Stanford, and government agencies such as DARPA and CERN. But its Promethean character was unleashed, starting with the early hacker movement, on the open Internet and through Silicon-Valley style startups.
As a result of a Promethean technology being unleashed, younger and older people face a similar dilemma: should I abandon some of my investments in the industrial social order and join the dynamic new social order, or hold on to the status quo as long as possible?
The decision is obviously easier if you are younger, with much less to lose. But many who are young still choose the apparent safety of the credentialist scripts of their parents. These are what David Brooks called Organization Kids (after William Whyte’s 1956 classic, The Organization Man7): those who bet (or allow their “Tiger” parents8 to bet on their behalf) on the industrial social order. If you are an adult over 30, especially one encumbered with significant family obligations or debt, the decision is harder.
Those with a Promethean mindset and an aggressive approach to pursuing a new path can break out of the credentialist life script at any age. Those who are unwilling or unable to do so are holding on to it more tenaciously than ever.
Young or old, those who are unable to adopt the Promethean mindset end up defaulting to what we call a pastoral mindset: one marked by yearning for lost or unattained utopias. Today many still yearn for an updated version of romanticized9 1950s American middle-class life for instance, featuring flying cars and jetpacks.
How and why you should choose the Promethean option, despite its disorienting uncertainties and challenges, is the overarching theme of Season 1. It is a choice we call breaking smart, and it is available to almost everybody in the developed world, and a rapidly growing number of people in the newly-connected developing world.
These individual choices matter.
As historians such as Daron Acemoglu and James Robinson10 and Joseph Tainter11 have argued, it is the nature of human problem-solving institutions, rather than the nature of the problems themselves, that determines whether societies fail or succeed. Breaking smart at the level of individuals is what leads to organizations and nations breaking smart, which in turn leads to societies succeeding or failing.
Today, the future depends on increasing numbers of people choosing the Promethean option. Fortunately, that is precisely what is happening.
In this season of Breaking Smart, I will not attempt to predict the what and when of the future. In fact, a core element of the hacker ethos is the belief that being open to possibilities and embracing uncertainty is necessary for the actual future to unfold in positive ways. Or as computing pioneer Alan Kay put it, inventing the future is easier than predicting it.
And this is precisely what tens of thousands of small teams — small enough to be fed by no more than two pizzas, by a rule of thumb made famous by Amazon founder Jeff Bezos — are doing across the world today.
Prediction as a foundation for facing the future involves risks that go beyond simply getting it wrong. The bigger risk is getting attached to a particular what and when, a specific vision of a paradise to be sought, preserved or reclaimed. This is often a serious philosophical error — to which pastoralist mindsets are particularly prone — that seeks to limit the future.
But while I will avoid dwelling too much on the what and when, I will unabashedly advocate for a particular answer to how. Thanks to virtuous cycles already gaining in power, I believe almost all effective responses to the problems and opportunities of the coming decades will emerge out of the hacker ethos, despite its apparent peripheral role today. The credentialist ethos of extensive planning and scripting towards deterministic futures will play a minor supporting role at best. Those who adopt a Promethean mindset and break smart will play an expanding role in shaping the future. Those who adopt a pastoral mindset and retreat towards tradition will play a diminishing role, in the shrinking number of economic sectors where credentialism is still the more appropriate model.
The nature of problem-solving in the hacker mode, based on trial-and-error, iterative improvement, testing and adaptation (both automated and human-driven), allows us to identify four characteristics of how the future will emerge.
First, despite current pessimism about the continued global leadership of the United States, the US remains the single largest culture that embodies the pragmatic hacker ethos, nowhere more so than in Silicon Valley. The United States in general, and Silicon Valley in particular, will therefore continue to serve as the global exemplar of Promethean technology-driven change. And as virtual collaboration technologies improve, the Silicon Valley economic culture will increasingly become the global economic culture.
Second, the future will unfold through very small groups having very large impacts. One piece of wisdom in Silicon Valley today is that the core of the best software is nearly always written by teams of fewer than a dozen people, not by huge committee-driven development teams. This means increasing well-being for all will be achieved through small two-pizza teams beating large ones. Scale will increasingly be achieved via loosely governed ecosystems of additional participants creating wealth in ways that are hard to track using traditional economic measures. Instead of armies of Organization Men and Women employed within large corporations, and Organization Kids marching in at one end and retirees marching out at the other, the world of work will be far more diverse.
Third, the future will unfold through a gradual and continuous improvement of well-being and quality of life across the world, not through sudden emergence of a utopian software-enabled world (or sudden collapse into a dystopian world). The process will be one of fits and starts, toys and experiments, bugginess and brokenness. But the overall trend will be upwards, towards increasing prosperity for all.
Fourth, the future will unfold through rapid declines in the costs of solutions to problems, including in heavily regulated sectors historically resistant to cost-saving innovations, such as healthcare and higher education. In improvements wrought by software, poor and expensive solutions have generally been replaced by superior and cheaper (often free) solutions, and these substitution effects will accelerate.
Putting these four characteristics together, we get a picture of messy, emergent progress that economist Bradford DeLong calls “slouching towards utopia”: a condition of gradually increasing quality of life available, at gradually declining cost, to a gradually expanding portion of the global population.
A big implication is immediately clear: the asymptotic condition represents a consumer utopia. As consumers, we will enjoy far more for far less. This means that the biggest unknown today is our future as producers, which brings us to what many view as the central question today: the future of work.
The gist of a robust answer, which we will explore in Understanding Elite Discontent, was anticipated by John Maynard Keynes as far back as 1930,1 though he did not like the implications: the majority of the population will be engaged in creating and satisfying each other’s new needs in ways that even the most prescient of today’s visionaries will fail to anticipate.
While we cannot predict precisely what workers of the future will be doing — what future wants and needs workers will be satisfying — we can predict some things about how they will be doing it. Work will take on an experimental, trial-and-error character, and will take place in an environment of rich feedback, self-correction, adaptation, ongoing improvement, and continuous learning. The social order surrounding work will be a much more fluid descendant of today’s secure but stifling paycheck world on the one hand, and the liberating but precarious world of free agency and contingent labor on the other.
In other words, the hacker ethos will go global and the workforce at large will break smart. As the hacker ethos spreads, we will witness what economist Edmund Phelps calls a mass flourishing2 — a state of the economy where work will be challenging and therefore fulfilling. Unchallenging, predictable work will become the preserve of machines.
Previous historical periods of mass flourishing, such as the early industrial revolution, were short-lived, and gave way, after a few decades, to societies based on a new middle class majority built around predictable patterns of work and life. This time around, the state of mass flourishing will be a sustained one: a slouching towards a consumer and producer utopia.
If this vision seems overly dramatic, consider once again the comparison to other soft technologies: software is perhaps the most imagination-expanding technology humans have invented since writing and money, and possibly more powerful than either. To operate on the assumption that it will transform the world at least as dramatically, far from being wild-eyed optimism, is sober realism.
At the heart of the historical development of computing is the age-old philosophical impasse between purist and pragmatist approaches to technology, which is particularly pronounced in software due to its seeming near-Platonic ineffability. One way to understand the distinction is through a dinnerware analogy.
Purist approaches, which rely on alluring visions, are like precious “good” china: mostly for display, and reserved exclusively for narrow uses like formal dinners. Damage through careless use can drastically lower the value of a piece. Broken or missing pieces must be replaced for the collection to retain its full display value. To purists, mixing and matching, either with humbler everyday tableware, or with different china patterns, is a kind of sacrilege.
The pragmatic approach on the other hand, is like unrestricted and frequent use of hardier everyday dinnerware. Damage through careless play does not affect value as much. Broken pieces may still be useful, and missing pieces need not be replaced if they are not actually needed. For pragmatists, mixing and matching available resources, far from being sacrilege, is something to be encouraged, especially for collaborations such as neighborhood potlucks.
In software, the difference between the two approaches is clearly illustrated by the history of the web browser.
On January 23, 1993, Marc Andreessen sent out a short email, announcing the release of Mosaic, the first graphical web browser:
07:21:17-0800 by marca@ncsa.uiuc.edu:
By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released:
file://ftp.ncsa.uiuc.edu/Web/xmosaic/xmosaic-0.5.tar.Z
Along with Eric Bina he had quickly hacked the prototype together after becoming enthralled by his first view of the World Wide Web, which Tim Berners-Lee had unleashed from CERN in Geneva in 1991. Over the next year, several other colleagues joined the project, equally excited by the possibilities of the web. All were eager to share the excitement they had experienced, and to open up the future of the web to more people, especially non-technologists.
Over the course of the next few years, the graphical browser escaped the confines of the government-funded lab (the National Center for Supercomputing Applications at the University of Illinois) where it was born. As it matured at Netscape and later at Microsoft, Mozilla and Google, it steered the web in unexpected (and to some, undesirable) directions. The rapid evolution triggered both the legendary browser wars and passionate debates about the future of the Internet. Those late-nineties conflicts shaped the Internet of today.
To some visionary pioneers, such as Ted Nelson, who had been developing a purist hypertext paradigm called Xanadu for decades, the browser represented an undesirably messy direction for the evolution of the Internet. To pragmatists, the browser represented important software evolving as it should: in a pluralistic way, embodying many contending ideas, through what the Internet Engineering Task Force (IETF) calls “rough consensus and running code.”
While every major software project has drawn inspiration from both purists and pragmatists, the browser, like other pieces of software that became a mission critical part of the Internet, was primarily derived from the work and ideas of pragmatists: pioneers like Jon Postel, David Clark, Bob Kahn and Vint Cerf, who were instrumental in shaping the early development of the Internet through highly inclusive institutions like the IETF.
Today, the then-minority tradition of pragmatic hacking has matured into agile development, the dominant modern approach for making software. But the significance of this bit of history goes beyond the Internet. Increasingly, the pragmatic, agile approach to building things has spread to other kinds of engineering and beyond, to business and politics.
The nature of software has come to matter far beyond software. Agile philosophies are eating all kinds of building philosophies. To understand the nature of the world today, whether or not you are a technologist, it is crucial to understand agility and its roots in the conflict between pragmatic and purist approaches to computing.
The story of the browser was not exceptional. Until the early 1990s, almost all important software began life as purist architectural visions rather than pragmatic hands-on tinkering.
This was because early programming with punch-card mainframes was a highly constrained process. Iterative refinement was slow and expensive. Agility was a distant dream: programmers often had to wait weeks between runs. If your program didn’t work the first time, you might not have gotten another chance. Purist architectures, worked out on paper, helped minimize risk and maximize results under these conditions.
As a result, early programming was led by creative architects (often mathematicians and, with rare exceptions like Klari von Neumann and Grace Hopper, usually male) who worked out the structure of complex programs upfront, as completely as possible. The actual coding onto punch cards was done by large teams of hands-on programmers (mostly women1) with much lower autonomy, responsible for working out implementation details.
In short, purist architecture led the way and pragmatic hands-on hacking was effectively impossible. Trial-and-error was simply too risky and slow, which meant significant hands-on creativity had to be given up in favor of productivity.
With the development of smaller computers capable of interactive input, hands-on hacking became possible. At early hacker hubs, like MIT through the sixties, a high-autonomy culture of hands-on programming began to take root. Though the shift would not be widely recognized until after 2000, the creative part of programming was migrating from visioning to hands-on coding. Already by 1970, important and high-quality software, such as the Unix operating system, had emerged from the hacker culture growing at the minicomputer margins of industrial programming.
Through the seventies, a tenuous balance of power prevailed between purist architects and pragmatic hackers. With the introduction of networked personal computing in the eighties, however, hands-on hacking became the defining activity in programming. The culture of early hacker hubs like MIT and Bell Labs began to diffuse broadly through the programming world. The archetypal programmer had evolved: from interchangeable member of a large team, to the uniquely creative hacker, tinkering away at a personal computer, interacting with peers over networks. Instead of dutifully obeying an architect, the best programmers were devoting increasing amounts of creative energy to scratching personal itches.
The balance shifted decisively in favor of pragmatists with the founding of the IETF in 1986. In January of that year, a group of 21 researchers met in San Diego and planted the seeds of what would become the modern “government” of the Internet.
Despite its deceptively bureaucratic-sounding name, the IETF is like no standards organization in history, starting with the fact that it has no membership requirements and is open to all who want to participate. Its core philosophy can be found in an obscure document, The Tao of the IETF, little known outside the world of technology. It is a document that combines the informality and self-awareness of good blogs, the gravitas of a declaration of independence, and the aphoristic wisdom of Zen koans. This oft-quoted section illustrates its basic spirit:
In many ways, the IETF runs on the beliefs of its members. One of the “founding beliefs” is embodied in an early quote about the IETF from David Clark: “We reject kings, presidents and voting. We believe in rough consensus and running code”. Another early quote that has become a commonly-held belief in the IETF comes from Jon Postel: “Be conservative in what you send and liberal in what you accept”.
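Postel’s second dictum, often called the robustness principle, still describes how a great deal of protocol-handling code gets written. Here is a minimal sketch of the idea in Python; the accepted timestamp spellings and the canonical output format are invented for illustration, not drawn from any particular IETF standard.

    # A small sketch of Postel's robustness principle: be liberal in the
    # timestamp spellings we accept, conservative (one strict canonical form)
    # in what we emit. The accepted formats and the canonical output format are
    # invented for this illustration, not taken from any particular standard.

    from datetime import datetime, timezone

    ACCEPTED_FORMATS = (
        "%Y-%m-%dT%H:%M:%S%z",   # 2015-06-01T12:30:00+0000
        "%Y-%m-%d %H:%M:%S",     # 2015-06-01 12:30:00 (assumed UTC)
        "%d %b %Y %H:%M",        # 01 Jun 2015 12:30   (assumed UTC)
    )

    def parse_timestamp(raw):
        """Liberal on input: tolerate several spellings, normalize to UTC."""
        text = raw.strip()
        for fmt in ACCEPTED_FORMATS:
            try:
                parsed = datetime.strptime(text, fmt)
            except ValueError:
                continue
            return parsed if parsed.tzinfo else parsed.replace(tzinfo=timezone.utc)
        raise ValueError(f"unrecognized timestamp: {raw!r}")

    def emit_timestamp(value):
        """Conservative on output: always the same strict canonical form."""
        return value.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

    for raw in ("2015-06-01T12:30:00+0000", "  2015-06-01 12:30:00 ", "01 Jun 2015 12:30"):
        print(f"{raw!r:32} -> {emit_timestamp(parse_timestamp(raw))}")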
Though the IETF began as a gathering of government-funded researchers, it also represented a shift in the center of programming gravity from government labs to the commercial and open-source worlds. Over the nearly three decades since, it has evolved into the primary steward2 of the inclusive, pluralistic and egalitarian spirit of the Internet. In invisible ways, the IETF has shaped the broader economic and political dimensions of software eating the world.
The difference between purist and pragmatic approaches becomes clear when we compare the evolution of programming in the United States and Japan since the early eighties. Around 1982, Japan chose the purist path over the pragmatic path, by embarking on the ambitious “fifth-generation computing” effort. The highly corporatist government-led program, which caused much anxiety in America at the time, proved to be largely a dead-end. The American tradition on the other hand, outgrew its government-funded roots and gave rise to the modern Internet. Japan’s contemporary contributions to software, such as the hugely popular Ruby language designed by Yukihiro Matsumoto, belong squarely within the pragmatic hacker tradition.
I will argue that this pattern of development is not limited to computer science. Every field eaten by software experiences a migration of the creative part from visioning activities to hands-on activities, disrupting the social structure of all professions. Classical engineering fields like mechanical, civil and electrical engineering had already largely succumbed to hands-on pragmatic hacking by the nineties. Non-engineering fields like marketing are beginning to convert.
So the significance of pragmatic approaches prevailing over purist ones cannot be overstated: in the world of technology, it was the equivalent of the fall of the Berlin Wall.
While pragmatic hacking was on the rise, purist approaches entered a period of slow, painful and costly decline. Even as they grew in ambition, software projects based on purist architecture and teams of interchangeable programmers grew increasingly unmanageable. They began to exhibit the predictable failure patterns of industrial age models: massive cost-overruns, extended delays, failed launches, damaging unintended consequences, and broken, unusable systems.
These failure patterns are characteristic of what political scientist James Scott1 called authoritarian high modernism: a purist architectural aesthetic driven by authoritarian priorities. To authoritarian high-modernists, elements of the environment that do not conform to their purist design visions appear “illegible” and anxiety-provoking. As a result, they attempt to make the environment legible by forcibly removing illegible elements. Failures follow because important elements, critical to the functioning of the environment, get removed in the process.
Geometrically laid-out suburbs, for example, are legible and conform to platonic architectural visions, even if they are unlivable and economically stagnant. Slums on the other hand, appear illegible and are anxiety-provoking to planners, even when they are full of thriving economic life. As a result, authoritarian planners level slums and relocate residents into low-cost planned housing. In the process they often destroy economic and cultural vitality.
In software, what authoritarian architects find illegible and anxiety-provoking is the messy, unplanned tinkering hackers use to figure out creative solutions. When the pragmatic model first emerged in the sixties, authoritarian architects reacted like urban planners: by attempting to clear “code slums.” These attempts took the form of increasingly rigid documentation and control processes inherited from manufacturing. In the process, they often lost the hacker knowledge keeping the project alive.
In short, authoritarian high modernism is a kind of tunnel vision. Architects are prone to it in environments that are richer than one mind can comprehend. The urge to dictate and organize is destructive, because it leads architects to destroy the apparent chaos that is vital for success.
The flaws of authoritarian high modernism first became problematic in fields like forestry, urban planning and civil engineering. Failures of authoritarian development in these fields resulted in forests ravaged by disease, unlivable “planned” cities, crony capitalism and endemic corruption. By the 1960s, in the West, pioneering critics of authoritarian models, such as the urbanist Jane Jacobs and the environmentalist Rachel Carson, had begun to correctly diagnose the problem.
By the seventies, liberal democracies had begun to adopt the familiar, more democratic consultation processes of today. These processes were adopted in computing as well, just as the early mainframe era was giving way to the minicomputer era.
Unfortunately, while democratic processes did mitigate the problems, the result was often lowered development speed, increased cost and more invisible corruption. New stakeholders brought competing utopian visions and authoritarian tendencies to the party. The problem now became reconciling conflicting authoritarian visions. Worse, any remaining illegible realities, which were anxiety-provoking to all stakeholders, were now even more vulnerable to prejudice and elimination. As a result complex technology projects often slowed to a costly, gridlocked crawl. Tyranny of the majority — expressed through autocratic representatives of particular powerful constituencies — drove whatever progress did occur. The biggest casualty was innovation, which by definition is driven by ideas that are illegible to all but a few: what Peter Thiel calls secrets — things entrepreneurs believe that nobody else does, which leads them to unpredictable breakthroughs.
The process was most clearly evident in fields like defense. In major liberal democracies, different branches of the military competed to influence the design of new weaponry, and politicians competed to create jobs in their constituencies. As a result, major projects spiraled out of control and failed in predictable ways: delayed, too expensive and technologically compromised. In the non liberal-democratic world, the consequences were even worse. Authoritarian high modernism continued (and continues today in countries like Russia and North Korea), unchecked, wreaking avoidable havoc.
Software is no exception to this pathology. As high-profile failures like the launch of healthcare.gov2 show, “democratic” processes meant to mitigate risks tend to create stalled or gridlocked processes, compounding the problem.
Both in traditional engineering fields and in software, authoritarian high modernism leads to a Catch-22 situation: you either get a runaway train wreck due to too much unchecked authoritarianism, or a train that barely moves due to a gridlock of checks and balances.
Fortunately, agile software development manages to combine both decisive authority and pluralistic visions, and mitigate risks without slowing things to a crawl. The basic principles of agile development, articulated by a group of 17 programmers in 2001, in a document known as the Agile Manifesto, represented an evolution of the pragmatic philosophy first explicitly adopted by the IETF.
The cost of this agility is a seemingly anarchic pattern of progress. Agile development models catalyze illegible, collective patterns of creativity, weaken illusions of control, and resist being yoked to driving utopian visions. Adopting agile models leads individuals and organizations to gradually increase their tolerance for anxiety in the face of apparent chaos. As a result, agile models can get more agile over time.
Not only are agile models driving reform in software, they are also spreading to traditional domains where authoritarian high-modernism first emerged. Software is beginning to eat domains like forestry, urban planning and environment protection. Open Geographic Information Systems (GIS) in forestry, open data initiatives in urban governance, and monitoring technologies in agriculture, all increase information availability while eliminating cumbersome paperware processes. As we will see in upcoming essays, enhanced information availability and lowered friction can make any field hacker-friendly. Once a field becomes hacker-friendly, software begins to eat it. Development gathers momentum: the train can begin moving faster, without leading to train wrecks, resolving the Catch-22.
Today the shift from purist to pragmatist has progressed far enough that it is also reflected at the level of the economics of software development. In past decades, economic purists argued variously for idealized open-source, market-driven or government-led development of important projects. But all found themselves faced with an emerging reality that was too complex for any one economic ideology to govern. As a result, rough consensus and running economic mechanisms have prevailed over specific economic ideologies and gridlocked debates. Today, every available economic mechanism — market-based, governmental, nonprofit and even criminal — has been deployed at the software frontier. And the same economic pragmatism is spreading to software-eaten fields.
This is a natural consequence of the dramatic increase in both participation levels and ambitions in the software world. In 1943, only a small handful of people working on classified military projects had access to the earliest computers. Even in 1974, the year of Peak Centralization, only a small and privileged group had access to the early hacker-friendly minicomputers like the DEC PDP series. But by 1993, the PC revolution had nearly delivered on Bill Gates’ vision of a computer at every desk, at least in the developed world. And by 2000, laptops and Blackberries were already foreshadowing the world of today, with near-universal access to smartphones, and an exploding number of computers per person.
The IETF slogan of rough consensus and running code (RCRC) has emerged as the only workable doctrine for both technological development and associated economic models under these conditions.
As a result of pragmatism prevailing, a nearly ungovernable Promethean fire has been unleashed. Hundreds of thousands of software entrepreneurs are unleashing innovations on an unsuspecting world by the power vested in them by “nobody in particular,” and by any economic means necessary.
It is in the context of the anxiety-inducing chaos and complexity of a mass flourishing that we then ask: what exactly is software?
Software possesses an extremely strange property: it is possible to create high-value software products with effectively zero capital outlay. As Mozilla engineer Sam Penrose put it, software programming is labor that creates capital.
This characteristic makes software radically different from engineering materials like steel, and much closer to artistic media such as paint.1 As a consequence, engineer and engineering are somewhat inappropriate terms. It is something of a stretch to even think of software as a kind of engineering “material.” Though all computing requires a physical substrate, the trillions of tiny electrical charges within computer circuits, the physical embodiment of a running program, barely seem like matter.
The closest relative to this strange new medium is paper. But even paper is not as cheap or evanescent. Though we can appreciate the spirit of creative abundance with which industrial age poets tossed crumpled-up false starts into trash cans, a part of us registers the wastefulness. Paper almost qualifies as a medium for true creative abundance, but falls just short.
Software though, is a medium that not only can, but must be approached with an abundance mindset. Without a level of extensive trial-and-error and apparent waste that would bankrupt both traditional engineering and art, good software does not take shape. From the earliest days of interactive computing, when programmers chose to build games while more “serious” problems waited for computer time, to modern complaints about “trivial” apps (which often turn out to be revolutionary), scarcity-oriented thinkers have remained unable to grasp the essential nature of software for fifty years.
The difference has a simple cause: unsullied purist visions have value beyond anxiety-alleviation and planning. They are also a critical authoritarian marketing and signaling tool — like formal dinners featuring expensive china — for attracting and concentrating scarce resources in fields such as architecture. In an environment of abundance, there is much less need for visions to serve such a marketing purpose. They only need to provide a roughly correct sense of direction to those laboring at software development to create capital using whatever tools and ideas they bring to the party — like potluck participants improvising whatever resources are necessary to make dinner happen.
Translated to technical terms, the dinnerware analogy is at the heart of software engineering. Purist visions tend to arise when authoritarian architects attempt to concentrate and use scarce resources optimally, in ways they often sincerely believe are best for all. By contrast, tinkering is focused on steady progress rather than optimal end-states that realize a totalizing vision. It is usually driven by individual interests and not obsessively concerned with grand and paternalistic “best for all” objectives. The result is that purist visions seem more comforting and aesthetically appealing on the outside, while pragmatic hacking looks messy and unfocused. At the same time purist visions are much less open to new possibilities and bricolage, while pragmatic hacking is highly open to both.
Within the world of computing, the importance of abundance-oriented approaches was already recognized by the 1960s. With Moore’s Law kicking in, pioneering computer scientist Alan Kay codified the idea of abundance orientation with the observation that programmers ought to “waste transistors” in order to truly unleash the power of computing.
But even for young engineers starting out today, used to routinely renting cloudy container-loads2 of computers by the minute, the principle remains difficult to follow. Devoting skills and resources to playful tinkering still seems “wrong,” when there are obvious and serious problems desperately waiting for skilled attention. Like the protagonist in the movie Brewster’s Millions, who struggles to spend $30 million within thirty days in order to inherit $300 million, software engineers must unlearn habits born of scarcity before they can be productive in their medium.
The principle of rough consensus and running code is perhaps the essence of the abundance mindset in software.
If you are used to the collaboration processes of authoritarian organizations, the idea of rough consensus might conjure up the image of a somewhat informal committee meeting, but the similarity is superficial. Consensus in traditional organizations tends to be brokered by harmony-seeking individuals attuned to the needs of others, sensitive to constraints, and good at creating “alignment” among competing autocrats. This is a natural mode of operation when consensus is sought in order to deal with scarcity. Allocating limited resources is the typical purpose of such industrial-age consensus seeking. Under such conditions, compromise represents a spirit of sharing and civility. Unfortunately, it is also a recipe for gridlock when compromise is hard and creative breakthroughs become necessary.
By contrast, software development favors individuals with an autocratic streak, driven by an uncompromising sense of how things ought to be designed and built, which at first blush appears to contradict the idea of consensus.
Paradoxically, the IETF philosophy of eschewing “kings, presidents and voting” means that rough consensus evolves through strong-minded individuals either truly coming to an agreement, or splitting off to pursue their own dissenting ideas. Conflicts are not sorted out through compromises that leave everybody unhappy. Instead they are sorted out through the principle futurist Paul Saffo described as critical for navigating uncertainty: strong opinions, weakly held.
Pragmatists, unlike the authoritarian high-modernist architects studied by James Scott, hold strong views on the basis of having contributed running code rather than abstract visions. But they also recognize others as autonomous peers, rather than as unquestioning subordinates or rivals. Faced with conflict, they are willing to work hard to persuade others, be persuaded themselves, or leave.
Rough consensus favors people who, in traditional organizations, would be considered disruptive and stubborn: these are exactly the people prone to “breaking smart.” In its most powerful form, rough consensus is about finding the most fertile directions in which to proceed rather than uncovering constraints. Constraints in software tend to be relatively few and obvious. Possibilities, however, tend to be intimidatingly vast. Resisting limiting visions, finding the most fertile direction, and allying with the right people become the primary challenges.
In a process reminiscent of the “rule of agreement” in improv theater, ideas that unleash the strongest flood of follow-on builds tend to be recognized as the most fertile and adopted as the consensus. Collaborators who spark the most intense creative chemistry tend to be recognized as the right ones. The consensus is rough because it is left as a sense of direction, instead of being worked out into a detailed, purist vision.
This general principle of fertility-seeking has been repeatedly rediscovered and articulated in a bewildering variety of specific forms. The statements have names such as the principle of least commitment (planning software), the end-to-end principle (network design), the procrastination principle (architecture), optionality (investing), paving the cowpaths (interface design), lazy evaluation (language design) and late binding (code execution). While the details, assumptions and scope of applicability of these different statements vary, they all amount to leaving the future as free and unconstrained as possible, by making as few commitments as possible in any given local context.
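Two of these formulations, lazy evaluation and late binding, are concrete enough to sketch in a few lines of Python. The snippet below is only a minimal illustration of the “least commitment” idea, not drawn from any of the cited sources; the readings() stream is hypothetical, and the point is simply that no work is committed to until a consumer actually demands a value.

```python
def readings():
    """A pretend sensor stream; each value is produced only when asked for."""
    n = 0
    while True:
        yield n * n   # lazy evaluation: nothing is computed up front
        n += 1

def first_above(stream, threshold):
    # The stopping condition is bound late: nothing upstream needed to know
    # the threshold, or how many values would ever be needed.
    for value in stream:
        if value > threshold:
            return value

print(first_above(readings(), 50))  # prints 64; only nine values are ever computed
```

The generator commits to nothing about how many values will be consumed or why, which is precisely the kind of deferred decision the principles above recommend.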
The principle is in fact an expression of laissez-faire engineering ethics. Donald Knuth, another software pioneer, captured the ethical dimension with his version: premature optimization is the root of all evil. The principle is the deeper reason autonomy and creativity can migrate downstream to hands-on decision-making. Leaving more decisions for the future also leads to devolving authority to those who come later.
Such principles might seem dangerously playful and short-sighted, but under conditions of increasing abundance, with falling costs of failure, they turn out to be wise. It is generally smarter to assume that problems that seem difficult and important today might become trivial or be rendered moot in the future. Behaviors that would be short-sighted in the context of scarcity become far-sighted in the context of abundance.
The original design of the Mosaic browser, for instance, reflected the optimistic assumption that everybody would have high-bandwidth access to the Internet in the future, a statement that was not true at the time, but is now largely true in the developed world. Today, many financial technology entrepreneurs are building products based on the assumption that cryptocurrencies will be widely adopted and accepted. Underlying all such optimism about technology is an optimism about humans: a belief that those who come after us will be better informed and have more capabilities, and therefore able to make more creative decisions.
The consequences of this optimistic approach are radical. Traditional processes of consensus-seeking drive towards clarity in long-term visions but are usually fuzzy on immediate next steps. By contrast, rough consensus in software deliberately seeks ambiguity in long-term outcomes and extreme clarity in immediate next steps. It is a heuristic that helps correct the cognitive bias behind Amara’s Law. Clarity in next steps counteracts the tendency to overestimate what is possible in the short term, while comfort with ambiguity in visions counteracts the tendency to underestimate what is possible in the long term. At an ethical level, rough consensus is deeply anti-authoritarian, since it avoids constraining the freedoms of future stakeholders simply to allay present anxieties. The rejection of “voting” in the IETF model is a rejection of a false sense of egalitarianism, rather than a rejection of democratic principles.
In other words, true north in software is often the direction that combines ambiguity and evidence of fertility in the most alluring way: the direction of maximal interestingness.
The decade after the dot com crash of 2000 demonstrated the value of this principle clearly. Startups derided for prioritizing “growth in eyeballs” (an “interestingness” direction) rather than clear models of steady-state profitability (a self-limiting purist vision of an idealized business) were eventually proven right. Iconic “eyeball” based businesses, such as Google and Facebook, turned out to be highly profitable. Businesses which prematurely optimized their business model in response to revenue anxieties limited their own potential and choked off their own growth.
The great practical advantage of this heuristic is that the direction of maximal interestingness can be very rapidly updated to reflect new information, by evolving the rough consensus. The term pivot, introduced by Eric Ries as part of the Lean Startup framework, has recently gained popularity for such reorientation. A pivot allows the direction of development to change rapidly, without a detailed long-term plan. It is enough to figure out experimental next steps. This ability to reorient and adopt new mental models quickly (what military strategists call a fast transient4) is at the heart of agility.
The response to new information is exactly the reverse in authoritarian development models. Because such models are based on detailed purist visions that grow more complex over time, it becomes increasingly harder to incorporate new data. As a result, the typical response to new information is to label it as an irrelevant distraction, reaffirm commitment to the original vision, and keep going. This is the runaway-train-wreck scenario. On the other hand, if the new information helps ideological opposition cohere within a democratic process, a competing purist vision can emerge. This leads to the stalled-train scenario.
The reason rough consensus avoids both these outcomes is that it is much easier to agree roughly on the most interesting direction than to either update a complex, detailed vision or bring two or more conflicting complex visions into harmony.
For this to work, an equally pragmatic implementation philosophy is necessary. One that is very different from the authoritarian high-modernist way, or as it is known in software engineering, the waterfall model (named for the way high-level purist plans flow unidirectionally towards low-level implementation work).
Not only does such a pragmatic implementation philosophy exist, it works so well that running code actually tends to outrun even the most uninhibited rough consensus process without turning into a train wreck. One illustration of this dynamic is that successful software tends to get both used and extended in ways that the original creators never anticipated – and are often pleasantly surprised by, and sometimes alarmed by. This is of course the well-known agile model. We will not get into the engineering details,5 but what matters are the consequences of using it.
The biggest consequence is this: in the waterfall model, execution usually lags vision, leading to a deficit-driven process. By contrast, in working agile processes, running code races ahead, leaving vision to catch up, creating a surplus-driven process.
Both kinds of gaps contain lurking unknowns, but of very different sorts. The surplus in the case of working agile processes is the source of many pleasant surprises: serendipity. The deficit in the case of waterfall models is the source of what William Boyd called zemblanity: “unpleasant unsurprises.”
In software, waterfall processes fail in predictable ways, like classic Greek tragedies. Agile processes on the other hand, can lead to snowballing serendipity, getting luckier and luckier, and succeeding in unexpected ways. The reason is simple: waterfall plans constrain the freedom of future participants, leading them to resent and rebel against the grand plan in predictable ways. By contrast, agile models empower future participants in a project, catalyzing creativity and unpredictable new value.
The engineering term for the serendipitous, empowering gap between running code and governing vision has now made it into popular culture in the form of a much-misunderstood idea: perpetual beta.
When Google’s Gmail service finally exited beta status in July 2009, five years after it was launched, it already had over 30 million users. By then, it was the third largest free email provider after Yahoo and Hotmail, and was growing much faster than either.1 For most of its users, it had already become their primary personal email service.
The beta label on the logo, indicating experimental prototype status, had become such a running joke that when it was finally removed, the project team included a whimsical “back to beta” feature, which allowed users to revert to the old logo. That feature itself was part of a new section of the product called Gmail Labs: a collection of settings that allowed users to turn on experimental features. The idea of perpetual beta had morphed into permanent infrastructure within Gmail for continuous experimentation.
Today, this is standard practice: all modern web-based software includes scaffolding for extensive ongoing experimentation within the deployed production site or smartphone app backend (and beyond, through developer APIs2). Some of it is even visible to users. In addition to experimental features that allow users to stay ahead of the curve, many services also offer “classic” settings that allow them to stay behind the curve — for a while. The best products use perpetual beta as a way to lead their users towards richer, more empowered behaviors, instead of following them through customer-driven processes. Backward compatibility is limited to situations of pragmatic need, rather than being treated as a religious imperative.
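The scaffolding itself is mundane. The sketch below is a toy illustration, with hypothetical feature names and rollout fractions, of how a product might gate an experimental feature to a small, stable slice of users in the Gmail-Labs spirit; it is not a description of any particular product’s actual mechanism.

```python
import random

# Hypothetical experiment registry: feature name -> fraction of users enrolled.
EXPERIMENTS = {"undo_send": 0.10}   # roll the feature out to roughly 10% of users

def feature_enabled(name, user_id):
    # Seed per (feature, user) so each user keeps a consistent experience
    # across visits instead of flickering in and out of the experiment.
    rng = random.Random(f"{name}:{user_id}")
    return rng.random() < EXPERIMENTS.get(name, 0.0)

if feature_enabled("undo_send", "alice@example.com"):
    print("render the experimental Undo Send control")
else:
    print("render the standard compose window")
```

Because enrollment is just a lookup plus a deterministic coin flip, turning an experiment up, down, or off is a configuration change rather than a release, which is what makes continuous experimentation cheap.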
The Gmail story contains an answer to the obvious question about agile models you might ask if you have only experienced waterfall models: How does anything ambitious get finished by groups of stubborn individuals heading in the foggiest possible direction of “maximal interestingness” with neither purist visions nor “customer needs” guiding them?
The answer is that it doesn’t get finished. But unlike in waterfall models, this does not necessarily mean the product is incomplete. It means the vision is perpetually incomplete and growing in unbounded ways, due to ongoing evolutionary experiments. When this process works well, what engineers call technical debt can get transformed into what we might call technical surplus.3 The parts of the product that lack satisfying design justifications represent the areas of rapid innovation. The gaps in the vision are sources of serendipitous good luck. (If you are a Gmail user, browsing the “Labs” section might lead you to some serendipitous discoveries: features you did not know you wanted might already exist unofficially).
The deeper significance of perpetual beta culture in technology often goes unnoticed: in the industrial age, engineering labs were impressive, enduring buildings inside which experimental products were created. In the digital age, engineering labs are experimental sections inside impressive, enduring products. Those who bemoan the gradual decline of famous engineering labs like AT&T Bell Labs and Xerox PARC often miss the rise of even more impressive labs inside major modern products and their developer ecosystems.
Perpetual beta is now such an entrenched idea that users expect good products to evolve rapidly and serendipitously, continuously challenging their capacity to learn and adapt. They accept occasional non-critical failures as a price worth paying. Just as the ubiquitous under construction signs on the early static websites of the 1990s gave way to dynamic websites that were effectively always “under construction,” software products too have acquired an open-ended evolutionary character.
Just as rough consensus drives ideation towards “maximal interestingness”, agile processes drive evolution towards the regimes of greatest operational uncertainty, where failures are most likely to occur. In well-run modern software processes, not only is the resulting chaos tolerated, it is actively invited. Changes are often deliberately made at seemingly the worst possible times. Intuit, a maker of tax software, has a history of making large numbers of changes and updates at the height of tax season.
Conditions that cause failure, instead of being cordoned off for avoidance in the future, are deliberately and systematically recreated and explored further. There are even automated systems designed to deliberately cause failures in production systems, such as Chaos Monkey, a system developed by Netflix to randomly take production servers offline, forcing the system to heal itself or die trying.
The glimpses of perpetual beta that users can see are dwarfed by unseen backstage experimentation.
This is neither perverse, nor masochistic: it is necessary to uncover hidden risks in experimental ideas early, and to quickly resolve gridlocks with data.
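To make the Chaos Monkey idea concrete, here is a deliberately trivial sketch. It is not Netflix’s actual tool, and the worker names are made up; the point is only the logic of injecting failures into a healthy pool on purpose and then checking whether the recovery machinery brings the pool back.

```python
import random

# Toy model: a pool of workers, a step that kills one at random,
# and a recovery pass that must restore the pool to health.
workers = {f"worker-{i}": "healthy" for i in range(5)}

def chaos_step(pool):
    victim = random.choice(list(pool))
    pool[victim] = "killed"            # simulate an unplanned failure
    print(f"chaos: killed {victim}")

def heal(pool):
    for name, state in pool.items():
        if state == "killed":
            pool[name] = "healthy"     # stand-in for automated recovery
            print(f"recovery: restarted {name}")

chaos_step(workers)
heal(workers)
assert all(state == "healthy" for state in workers.values())
```

If the assertion ever fails, the experiment has surfaced a hidden risk while the stakes are still low, which is exactly the payoff the philosophy promises.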
The origins of this curious philosophy lie in what is known as the release early, release often (RERO) principle, usually attributed to Linus Torvalds, the primary architect of the Linux operating system. The idea is exactly what it sounds like: releasing code as early as possible, and as frequently as possible while it is actively evolving.
What makes this possible in software is that most software failures do not have life-threatening consequences.4 As a result, it is usually faster and cheaper to learn from failure than to attempt to anticipate and accommodate it via detailed planning (which is why the RERO principle is often restated in terms of failure as fail fast).
So crucial is the RERO mindset today that many companies, such as Facebook and Etsy, insist on new hires contributing and deploying a minor change to mission-critical systems on their very first day. Companies that rely on waterfall processes by contrast, often put new engineers through years of rotating assignments before trusting them with significant autonomy.
To appreciate just how counterintuitive the RERO principle is, and why it makes traditional engineers nervous, imagine a car manufacturer rushing to put every prototype into “experimental” mass production, with the intention of discovering issues through live car crashes. Or supervisors in a manufacturing plant randomly unplugging or even breaking machinery during peak demand periods. Even lean management models in manufacturing do not go this far. Due to their roots in scarcity, lean models at best mitigate the problems caused by waterfall thinking. Truly agile models on the other hand, do more: they catalyze abundance.
Perhaps the most counter-intuitive consequence of the RERO principle is this: where engineers in other disciplines attempt to minimize the number of releases, software engineers today strive to maximize the frequency of releases. The industrial-age analogy here is the stuff of comedy science fiction: an intern launching a space mission just to ferry a single paper-clip to the crew of a space station.
This tendency makes no sense within waterfall models, but is a necessary feature of agile models. The only way for execution to track the changing direction of the rough consensus as it pivots is to increase the frequency of releases. Failed experiments can be abandoned earlier, with lower sunk costs. Successful ones can migrate into the product as fast as hidden risks can be squeezed out. As a result, a lightweight sense of direction — rough consensus — is enough. There is no need to navigate by an increasingly unattainable utopian vision.
Which raises an interesting question: what happens when there are irreconcilable differences of opinion that break the rough consensus?
If creating great software takes very little capital, copying great software takes even less. This means dissent can be resolved in an interesting way that is impossible in the world of atoms. Under appropriately liberal intellectual property regimes, individuals can simply take a copy of the software and continue developing it independently. In software, this is called forking. Efforts can also combine forces, a process known as merging. Unlike the superficially similar process of spin-offs and mergers in business, forking and merging in software can be non-zero sum.
Where democratic processes would lead to gridlock and stalled development, conflicts under rough consensus and running code and release early, release often processes lead to competing, divergent paths of development that explore many possible worlds in parallel.
This approach to conflict resolution is so radically unfamiliar1 that it took nearly three decades even for pragmatic hackers to recognize forking as something to be encouraged. Twenty-five years passed between the first use of the term “fork” in this sense (by Unix hacker Eric Allman in 1980) and the development of a tool that encouraged rather than discouraged it: git, developed by Linus Torvalds in 2005. Git is now the most widely used code management system in the world, and the basis for GitHub, the leading online code repository.
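The mechanics are worth seeing once, because they are so cheap. The sketch below drives standard git commands from Python; the repository URL and branch names, including the assumption that the default branch is called main, are hypothetical.

```python
import subprocess

def run(*cmd, cwd=None):
    # Thin wrapper so the git workflow below reads like a transcript.
    subprocess.run(cmd, cwd=cwd, check=True)

# "Forking": take a full copy of the code and develop it independently.
run("git", "clone", "https://example.com/project.git", "my-fork")
run("git", "checkout", "-b", "dissenting-idea", cwd="my-fork")
# ...develop the dissenting idea freely, committing as you go...

# "Merging": if the divergent line of work proves out, the two histories
# can be recombined instead of one being thrown away.
run("git", "checkout", "main", cwd="my-fork")
run("git", "merge", "dissenting-idea", cwd="my-fork")
```

The copy costs essentially nothing, and the merge preserves both histories, which is why forking and merging can be non-zero sum in a way that corporate spin-offs and mergers cannot.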
In software development, the model works so well that a nearly two-century-old industrial model of work is being abandoned for one built around highly open collaboration, promiscuous forking and opt-in staffing of projects.
The dynamics of the model are most clearly visible in certain modern programming contests, such as the regular Matlab programming contests conducted by MathWorks.
Such events often allow contestants to frequently check their under-development code into a shared repository. In the early stages, such sharing allows for the rapid dissemination of the best design ideas through the contestant pool. Individuals effectively vote for the most promising ideas by appropriating them for their own designs, in effect forming temporary collaborations. Hoarding ideas or code tends to be counterproductive due to the likelihood that another contestant will stumble on the same idea, improve upon it in unexpected ways, or detect a flaw that allows it to “fail fast.” But in the later stages, the process creates tricky competitive conditions, where speed of execution beats quality of ideas. Not surprisingly, the winner is often a contestant who makes a minor, last-minute tweak to the best submitted solution, with seconds to spare.
Such contests — which exhibit in simplified forms the dynamics of the open-source community as well as practices inside leading companies — not only display the power of rough consensus and running code and of RERO, they demonstrate why promiscuous forking and open sharing lead to better overall outcomes.
Software that thrives in such environments has a peculiar characteristic: what computer scientist Richard Gabriel described as worse is better.2 Working code that prioritizes visible simplicity, catalyzing effective collaboration and rapid experimentation, tends to spread rapidly and unpredictably. Overwrought code that prioritizes authoritarian, purist concerns such as formal correctness, consistency, and completeness tends to die out.
In the real world, teams form through self-selection around great code written by one or two linchpin programmers rather than contest challenges. Team members typically know each other at least casually, which means product teams tend to grow to a few dozen at most. Programmers who fail to integrate well typically leave in short order. If they cannot or do not leave, they are often explicitly told to do nothing and stay out of the way, and actively shunned and cut out of the loop if they persist.
While the precise size of an optimal team is debatable, Jeff Bezos’ two-pizza rule suggests that the number is no more than about a dozen.3
In stark contrast to the quality code developed by “worse is better” processes, software developed by teams of anonymous, interchangeable programmers, with bureaucratic top-down staffing, tends to be of terrible quality. Turning Gabriel’s phrase around, such software represents a “better is worse” outcome: utopian visions that fail predictably in implementation, if they ever progress beyond vaporware at all.
The IBM OS/2 project of the early nineties,4 conceived as a replacement for the then-dominant operating system, MS-DOS, provides a perfect illustration of “better is worse.” Each of the thousands of programmers involved was expected to design, write, debug, document, and support just 10 lines of code per day. Writing more than 10 lines was considered a sign of irresponsibility. Project estimates were arrived at by first estimating the number of lines of code in the finished project, dividing by the number of days allocated to the project, and then dividing by 10 to get the number of programmers to assign to the project. Needless to say, programmers were considered completely interchangeable. The nominal “planning” time required to complete a project could be arbitrarily halved at any time, by doubling the number of assigned engineers.5 At the same time, dozens of managers across the company could withhold approval and hold back a release, a process ominously called “nonconcurrence.”
“Worse is better” can be a significant culture shock to those used to industrial-era work processes. The most common complaint is that a few rapidly growing startups and open-source projects typically corner a huge share of the talent supply in a region at any given time, making it hard for other projects to grow. To add insult to injury, the process can at times seem to over-feed the most capricious and silly projects while starving projects that seem more important. This process of the best talent unpredictably abandoning other efforts and swarming a few opportunities is a highly unforgiving one. It creates a few exceptional winning products and vast numbers of failed ones, leaving those with strong authoritarian opinions about “good” and “bad” technology deeply dissatisfied.
But not only does the model work, it creates vast amounts of new wealth through both technology startups and open-source projects. Today, its underlying concepts like rough consensus, pivot, fast failure, perpetual beta, promiscuous forking, opt-in and worse is better are carrying over to domains beyond software and regions beyond Silicon Valley. Wherever they spread, limiting authoritarian visions and purist ideologies retreat.
There are certainly risks with this approach, and it would be Pollyannaish to deny them. The state of the Internet today is the sum of millions of pragmatic, expedient decisions made by hundreds of thousands of individuals delivering running code, all of which made sense at the time. These decisions undoubtedly contributed to the serious problems facing us today, ranging from the poor security of Internet protocols to the issues being debated around Net Neutrality. But arguably, had the pragmatic approach not prevailed, the Internet would not have evolved significantly beyond the original ARPANET at all. Instead of a thriving Internet economy that promises to revitalize the old economy, the world at large might have followed the Japanese down the dead-end purist path of fifth-generation computing.
Today, moreover, several solutions to such serious legacy problems are being pursued, such as blockchain technology (the software basis for cryptocurrencies like Bitcoin). These are vastly more creative than solutions that were debated in the early days of the Internet, and reflect an understanding of problems that have actually been encountered, rather than the limiting anxieties of authoritarian high-modernist visions. More importantly, they validate early decisions to resist premature optimization and leave as much creative room for future innovators as possible. Of course, if emerging solutions succeed, more lurking problems will surface that will in turn need to be solved, in the continuing pragmatic tradition of perpetual beta.
Our account of the nature of software ought to suggest an obvious conclusion: it is a deeply subversive force. For those caught on the wrong side of this force, being on the receiving end of Blitzkrieg operations by a high-functioning agile software team can feel like mounting zemblanity: a sense of inevitable doom.
This process has by now occurred often enough that a general sense of zemblanity has overcome the traditional economy at large. Every aggressively growing startup seems like a special-forces team with an occupying army of job-eating machine-learning programs and robots following close behind.
Internally, the software-eaten economy is even more driven by disruption: the time it takes for a disruptor to become a disruptee has been radically shrinking in the last decade — and startups today are highly aware of that risk. That awareness helps explain the raw aggressiveness that they exhibit.
It is understandable that to people in the traditional economy, software eating the world sounds like a relentless war between technology and humanity.
But exactly the opposite is the case. Technological progress, unlike war or Wall Street-style high finance, is not a zero-sum game, and that makes all the difference. The Promethean force of technology is today, and always has been, the force that has rescued humanity from its worst problems just when it seemed impossible to avert civilizational collapse. With every single technological advance, from the invention of writing to the invention of television, those who have failed to appreciate the non-zero-sum nature of technological evolution have prophesied doom. Every time, they have made some version of the argument that this time it is different, and every time they have been proven wrong.
Instead of enduring civilizational collapse, humanity has instead ascended to a new level of well-being and prosperity each time.
Of course, this poor record of predicting collapses is not by itself proof that it is no different this time. There is no necessary reason the future has to be like the past. There is no fundamental reason our modern globalized society is uniquely immune to the sorts of game-ending catastrophes that led to the fall of the Roman empire or the Mayan civilization. The case for continued progress must be made anew with each technological advance, and new concerns, such as climate change today, must be seriously considered.
But concerns that the game might end should not lead us to limit ourselves to what philosopher James Carse6 called finite game views of the world, based on “winning” and arriving at a changeless, pure and utopian state as a prize. As we will argue in the next essay, the appropriate mindset is what Carse called an infinite game view, based on the desire to continue playing the game in increasingly generative ways. From an infinite game perspective, software eating the world is in fact the best thing that can happen to the world.
The unique characteristics of software as a technological medium have an impact beyond the profession itself. To understand the broader impact of software eating the world, we have to begin by examining the nature of technology adoption processes.
A basic divide in the world of technology is between those who believe humans are capable of significant change, and those who believe they are not. Prometheanism is the philosophy of technology that follows from the idea that humans can, do and should change. Pastoralism, on the other hand, is the philosophy that change is profane. The tension between these two philosophies leads to a technology diffusion process characterized by a colloquial phrase popular in the startup world: first they ignore you, then they laugh at you, then they fight you, then you win.1
Science fiction writer Douglas Adams reduced the phenomenon to a set of three sardonic rules from the point of view of users of technology:
Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
Anything invented after you’re thirty-five is against the natural order of things.
As both these folk formulations suggest, there is a certain inevitability to technological evolution, and a certain naivete to some patterns of resistance.
To understand why this is in fact the case, consider the proposition that technological evolution is path-dependent in the short term, but not in the long term.
Major technological possibilities, once uncovered, are invariably exploited in ways that maximally unleash their potential. While there is underutilized potential left, individuals compete and keep adapting in unpredictable ways to exploit that potential. All it takes is one thing: a thriving frontier of constant tinkering and diverse value systems must exist somewhere in the world.
Specific ideas may fail. Specific uses may not endure. Localized attempts to resist may succeed, as the existence of the Amish demonstrates. Some individuals may resist some aspects of the imperative to change successfully. Entire nations may collectively decide to not explore certain possibilities. But with major technologies, it usually becomes clear very early on that the global impact is going to be of a certain magnitude and cause a corresponding amount of disruptive societal change. This is the path-independent outcome and the reason there seems to be a “right side of history” during periods of rapid technological developments.
The specifics of how, when, where and through whom a technology achieves its maximal impact are path dependent. Competing to guess the right answers is the work of entrepreneurs and investors. But once the answers are figured out, the contingent path from “weird” to “normal” will be largely forgotten, and the maximally transformed society will seem inevitable with hindsight.
The ongoing evolution of ridesharing through conflict with the taxicab industry illustrates this phenomenon well. In January 2014 for instance, striking cabdrivers in Paris attacked vehicles hired through Uber. The rioting cabdrivers smashed windshields and slashed tires, leading to immediate comparisons in the media to the original pastoralists of industrialized modernity: the Luddites of the early 19th century.2
Like the Luddite movement, the reaction to ridesharing services such as Uber and Lyft is not resistance to innovative technology per se, but something larger and more complex: an attempt to limit the scope and scale of impact in order to prevent disruption of a particular way of life. As Richard Conniff notes in a 2011 essay in the Smithsonian magazine:
As the Industrial Revolution began, workers naturally worried about being displaced by increasingly efficient machines. But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”3
In his essay, Conniff argues that the original Luddites were simply fighting to preserve their idea of human values, and concludes that “standing up against technologies that put money or convenience above other human values” is necessary for a critical engagement of technology. Critics make similar arguments in every sector being eaten by software.
The apparent reasonableness of this view is deceptive: it is based on the wishful hope that entire societies can and should agree on what the term human values means, and use that consensus to decide which technologies to adopt. An unqualified appeal to “universal” human values is usually a call for an authoritarian imposition of decidedly non-universal values.
As the rideshare industry debates demonstrate, even consumers and producers within a single sector find it hard to achieve consensus on values. Protests by cab drivers in London in 2014 for instance, led to an increase in business4 for rideshare companies, clear evidence that consumers do not necessarily act in solidarity with incumbent producers based on shared “human values.”
It is tempting to analyze such conflicts in terms of classical capitalist or labor perspectives. The result is a predictable impasse: capitalists emphasize increased supply driving prices down, while progressives focus on loss of jobs in the taxicab industry. Both sides attempt to co-opt the political loyalties of rideshare drivers. Capitalists highlight increased entrepreneurial opportunities, while progressives highlight increased income precarity. Capitalists like to label rideshare drivers free agents or micro-entrepreneurs, while progressives prefer labels like precariat (by analogy to proletariat) or scab. Both sides attempt to make the future determinate by force-fitting it into preferred received narratives using loaded terms.
Both sides also operate by the same sense of proportions: they exaggerate the importance of the familiar and trivialize the new. Apps seem trivial, while automobiles loom large as a motif of an entire century-old way of life. Societies organized around cars seem timeless, normal, moral and self-evidently necessary to preserve and extend into the future. The smartphone at first seems to add no more than a minor element of customer convenience within a way of life that cannot possibly change. The value it adds to the picture is treated like a rounding error and ignored. As a result both sides see the conflict as a zero-sum redistribution of existing value: gains on one side, exactly offset by losses on the other side.
But as Marshall McLuhan observed, new technologies change our sense of proportions.
Even today’s foggy view of a smartphone-centric future suggests that ridesharing is evolving from convenience to necessity. By sustaining cheaper and more flexible patterns of local mobility, ridesharing enables new lifestyles in urban areas. Young professionals can better afford to work in opportunity-rich cities. Low-income service workers can expand their mobility beyond rigid public transit and the occasional expensive emergency taxi-ride. Small restaurants with limited working capital can use ridesharing-like services to offer delivery services. It is in fact getting hard to imagine how else transportation could work in a society with smartphones.
The impact is shifting from the path-dependent phase, when it wasn’t clear whether the idea was even workable, to the non-path-dependent phase, where it seems inevitable enough that other ideas can be built on top.
Such snowballing changes in patterns of life are due to what economists call consumer surplus5 (increased spending power elsewhere due to falling costs in one area of consumption) and positive spillover effects6 (unexpected benefits in unrelated industries or distant geographies). For technologies with a broad impact, these are like butterfly effects: deceptively tiny causes with huge, unpredictable effects. Due to the unpredictability of surplus and spillover, the bulk of the new wealth created by new technologies (on the order of 90% or more) eventually accrues to society at large,7 rather than the innovators who drove the early, path-dependent phase of evolution. This is the macroeconomic analog to perpetual beta: execution by many outrunning visioning by a few, driving more bottom-up experimentation and turning society itself into an innovation laboratory.
Far from the value of the smartphone app being a rounding error in the rideshare industry debate, it in fact represents the bulk of the value. It just does not accrue directly to any of the participants in the overt, visible conflict.
If adoption models were entirely dictated by the taxicab industry, this value would not exist, and the zero-sum framing would become a self-fulfilling prophecy. Similarly, when entrepreneurs try to capture all or even most of the value they set out to create, the results are counterproductive: minor evolutionary advances that again make zero-sum outcomes a self-fulfilling prophecy. Technology publishing pioneer Tim O’Reilly captured the essence of this phenomenon with the principle, “create more value than you capture.” For the highest-impact products, the societal value created dwarfs the value captured.
These largely invisible surplus and spillover effects do more than raise broad living standards. By redirecting newly freed creative energy and resources down indeterminate paths, consumer surpluses and spillover effects actually drive further technological evolution in a non-zero-sum way. The bulk of the energy leaks away to drive unexpected innovations in unrelated areas. A fraction courses through unexpected feedback paths and improves the original innovation itself, in ways the pioneers themselves do not anticipate. Similar unexpected feedback paths improve derivative inventions as well, vastly amplifying the impact beyond simple “technology diffusion.”
The story of the steam engine is a good illustration of both effects. It is widely recognized that spillover effects from James Watt’s steam engine, originally introduced in the Cornish mining industry, helped trigger the British industrial revolution. What is less well-known8 is that the steam engine itself was vastly improved by hundreds of unknown tinkerers adding “microinventions” in the decades immediately following the expiration of James Watt’s patents. Once an invention leaks into what Robert Allen calls “collective invention settings,” with a large number of individuals and firms freely sharing information and independently tinkering with an innovation, future evolution gathers unstoppable momentum and the innovation goes from “weird” to “new normal.” Besides the Cornish mining district in the early 1800s, the Connecticut Valley in the 1870s-1890s,9 Silicon Valley since 1950 and the Shenzhen region of China since the 1990s are examples of flourishing collective invention settings. Together, such active creative regions constitute the global technology frontier: the worldwide zone of bricolage.
The path-dependent phase of evolution of a technology can take centuries, as Joel Mokyr shows in his classic, The Lever of Riches. But once it enters a collective invention phase, surplus and spillover effects gather momentum and further evolution becomes simultaneously unpredictable and inevitable. Once the inevitability is recognized, it is possible to bet on follow-on ideas without waiting for details to become clear. Today, it is possible to bet on a future based on ridesharing and driverless cars without knowing precisely what those futures will look like.
As consumers, we experience this kind of evolution as what Buckminster Fuller called ephemeralization: the seemingly magical ability of technology to do more and more with less and less.
This is most visible today in the guise of Moore’s Law, but ephemeralization is in fact a feature of all technological evolution. Potable water was once so hard to come by that many societies suffered from endemic water-borne diseases and were forced to rely on expensive and inefficient procedures like boiling water at home. Today, only around 10% of the world lacks such access.10 Diamonds were once worth fighting wars over. Today artificial diamonds, indistinguishable from natural ones, are becoming widely available.
The result is a virtuous cycle of increasing serendipity, driven by widespread lifestyle adaptation and cascades of self-improving innovation. Surplus and spillover creating more surplus and spillover. Brad DeLong’s slouching towards utopia for consumers and Edmund Phelps’ mass flourishing for producers. And when the virtuous cycle is powered by a soft, world-eating technology, the steady, cumulative impact is immense.
Both critics and enthusiasts of innovation deeply misunderstand the nature of this virtuous cycle. Critics typically lament lifestyle adaptations as degeneracy and call for a return to traditional values. Many enthusiasts, instead of being inspired by a sense of unpredictable, flourishing potential, are repeatedly seduced by specific visions of the Next Big Thing, sometimes derived rather literally from popular science fiction. As a result, they lament the lack of collective attention directed towards their pet societal projects. The priorities of other enthusiasts seem degenerate.
The result in both cases is the same: calls for reining in the virtuous cycle. Both kinds of lament motivate efforts to concentrate and deploy surpluses in authoritarian ways (through retention of excessive monopolistic profits by large companies or government-led efforts funded through taxation) and contain spillover effects (by restricting access to new technological capabilities). Both are ultimately attempts to direct creative energies down a few determinate paths. Both are driven by a macroeconomic version of the Luddite hope: that it is possible to enjoy the benefits of non-zero-sum innovation without giving up predictability. For critics, it is the predictability of established patterns of life. For Next Big Thing enthusiasts, it is a specific aspirational pattern of life.
Both are varieties of pastoralism, the cultural cousin of purist approaches in engineering. Pastoralism suffers from precisely the same, predictable authoritarian high-modernist failure modes. Like purist software visions, pastoralist visions too are marked by an obsessive desire to permanently win a specific, zero-sum finite game rather than to keep playing the non-zero-sum infinite game.
When the allure of pastoralist visions is resisted, and the virtuous cycle is allowed to work, we get Promethean progress. This is unpredictable evolution in the direction of maximal societal impact, unencumbered by limiting deterministic visions. Just as the principle of rough consensus and running code creates great software, consumer surplus and spillover effects create great societies. Just as pragmatic and purist development models lead to serendipity and zemblanity in engineering respectively, Promethean and pastoral models lead to serendipity and zemblanity at the level of entire societies.
When pastoralist calls for actual retreat are heeded, the technological frontier migrates elsewhere, often causing centuries of stagnation. This was precisely what happened in China and the Islamic world around the fifteenth century, when the technological frontier shifted to Europe.
Heeding the other kind of pastoralist call, to pursue a determinate Next Big Thing at the expense of many indeterminate small things, leads to somewhat better results. Such models can deliver impressive initial gains, but invariably create a hardening landscape of authoritarian, corporatist institutions. This triggers a vicious cycle that predictably stifles innovation.
The Apollo program, for instance, fulfilled John F. Kennedy’s call to put humans on the moon within the decade. It also led to the inexorable rise of the military-industrial complex that his predecessor, Dwight D. Eisenhower, had warned against. The Soviets fared even worse: they made equally impressive strides in the space race, but the society they created collapsed on itself under the weight of authoritarianism. What prevented that outcome in the United States was the regional technological frontier migrating to the West Coast, and breaking smart from the military-industrial complex in the process. This allowed some of the creative energy being gradually stifled to escape to a more favorable environment.
With software eating the world, we are again witnessing predictable calls for pastoralist development models. Once again, the challenge is to resist the easy answers on offer.
In art, the term pastoral refers to a genre of painting and literature based on romanticized and idealized portrayals of a pastoral lifestyle, usually for urban audiences with no direct experience of the actual squalor and oppression of pre-industrial rural life.
Biblical Pastoralism: drawing inspiration for the 21st century from shepherds.
Within religious traditions, pastorals may also be associated with the motifs and symbols of uncorrupted states of being. In the West for instance, pastoral art and literature often evoke the Garden of Eden story. In Islamic societies, the first caliphate is often evoked in a similar way.
The notion of a pastoral is useful for understanding idealized understandings of any society, real or imagined, past, present or future. In Philip Roth’s American Pastoral for instance, the term is an allusion to the idealized American lifestyle enjoyed by the protagonist Seymour “Swede” Levov, before it is ruined by the social turmoil of the 1960s.
At the center of any pastoral we find essentialized notions of what it means to be human, like Adam and Eve or William Whyte’s Organization Man, arranged in a particular social order (patriarchal in this case). From these archetypes we get to pure and virtuous idealized lifestyles. Lifestyles that deviate from these understandings seem corrupt and vice-driven. The belief that “people don’t change” is at once an approximation and a prescription: people should not change except to better conform to the ideal they are assumed to already approximate. The belief justifies building technology to serve the predictable and changeless ideal and labeling unexpected uses of technology degenerate.
We owe our increasingly farcical yearning for jetpacks and flying cars, for instance, to what we might call the “World Fairs pastoral,” since the vision was strongly shaped by mid-twentieth-century World Fairs. Even at the height of its influence, it was already being satirized by television shows like The Flintstones and The Jetsons. The shows portrayed essentially the 1950s social order, full of Organization Families, transposed to past and future pastoral settings. The humor in the shows rested on audiences recognizing the escapist non-realism.
Not quite as clever as the Flintstones or Jetsons, but we try.
The World Fairs pastoral, inspired strongly by the aerospace technologies of the 1950s, represented a future imagined around flying cars, jetpacks and glamorous airlines like Pan Am. Flying cars merely updated a familiar nuclear-family lifestyle. Jetpacks appealed to the same individualist instincts as motorcycles. Airlines like Pan Am, besides being an integral part of the military-industrial complex, owed their “glamor” in part to their deliberate perpetuation of the sexist culture of the fifties. Within this vision, truly significant developments, like the rise of vastly more efficient low-cost airlines in the 70s, seemed like decline from a “Golden Age” of air travel.
Arguably, the aerospace future that actually unfolded was vastly more interesting than the one envisioned in the World Fairs pastoral. Low-cost, long-distance air travel opened up a globalized and multicultural future, broke down barriers between insular societies, and vastly increased global human mobility. Along the way, it helped dismantle much of the institutionalized sexism behind the glamour of the airline industry. These developments were enabled in large part by post-1970s software technologies,1 rather than improvements in core aerospace engineering technologies. These were precisely the technologies that were beginning to “break smart” out of the stifling influence of the military-industrial complex.
In 2012, thanks largely to these developments, for the first time in history there were over a billion international tourist arrivals worldwide.2 Software had eaten and democratized elitist air travel. Today, software is continuing to eat airplanes in deeper ways, driving the current explosion in drone technology. Again, those fixated on jetpacks and flying cars are missing the actual, much more interesting action because it is not what they predicted. When pastoralists pay attention to drones at all, they see them primarily as morally objectionable military weapons. The fact that they replace technologies of mass slaughter such as carpet bombing, and the growing number of non-military uses, are ignored.
In fact the entire World Fairs pastoral is really a case of privileged members of society, presuming to speak for all, demanding “faster horses” for all of society (in the sense of the likely apocryphal3 quote attributed to Henry Ford, “If I’d asked my customers what they wanted, they would have demanded faster horses.”)
Fortunately for the vitality of the United States and the world at large, the future proved wiser than any limiting pastoral vision of it. The aerospace story is just one among many that suddenly appear in a vastly more positive light once we drop pastoral obsessions and look at the actual unfolding action. Instead of the limited things we could imagine in the 1950s, we got much more impactful things. Software eating aerospace technology allowed it to continue progressing in the direction of maximum potential.
If pastoral visions are so limiting, why do we get so attached to them? Where do they even come from in the first place? Ironically, they arise from Promethean periods of evolution that are too successful.
The World Fairs pastoral, for instance, emerged out of a Promethean period in the United States, heralded by Alexander Hamilton in the 1790s. Hamilton recognized the enormous potential of industrial manufacturing, and in his influential 1791 Report on Manufactures,4 argued that the then-young United States ought to strive to become a manufacturing superpower. For much of the nineteenth century, Hamilton’s ideas competed for political influence5 with Thomas Jefferson’s pastoral vision of an agrarian, small-town way of life, a romanticized, sanitized version of the society that already existed.
For free Americans alive at the time, Jefferson’s vision must have seemed tangible, obviously valuable and just within reach. Hamilton’s must have seemed speculative, uncertain and profane, associated with the grime and smoke of early industrializing Britain. For almost 60 years, it was in fact Jefferson’s parochial sense of proportions that dominated American politics. It was not until the Civil War that the contradictions inherent in the Jeffersonian pastoral led to its collapse as a political force. Today, while it still supplies powerful symbolism to politicians’ speeches, all that remains of the Jeffersonian Pastoral is a nostalgic cultural memory of small-town agrarian life.
During the same period, Hamilton’s ideas, through their overwhelming success, evolved from a vague sense of direction in the 1790s into a rapidly maturing industrial social order by the 1890s. By the 1930s, this social order was already being pastoralized into an alluring vision of jetpacks and flying cars in a vast, industrialized, centralized society. A few decades later, this had turned into a sense of dead-end failure associated with the end of the Apollo program, and the reality of a massive, overbearing military-industrial complex straddling the technological world. The latter has now metastasized into an entire too-big-to-fail old economy. One indicator of the freezing of the sense of direction is that many contemporary American politicians still remain focused on physical manufacturing the way Alexander Hamilton was in 1791. What was a prescient sense of direction then has turned into nostalgia for an obsolete utopian vision today. But where we have lost our irrational attachment to the Jeffersonian Pastoral, the World Fairs pastoral is still too real to let go.
We get attached to pastorals because they offer a present condition of certainty and stability and a utopian future promise of absolutely perfected certainty and stability. Arrival at the utopia seems like a well-deserved reward for hard-won Promethean victories. Pastoral utopias are where the victors of particular historical finite games hope to secure their gains and rest indefinitely on their laurels. The dark side, of course, is that pastorals also represent fantasies of absolute and eternal power over the fate of society: absolute utopias for believers that necessarily represent dystopias for disbelievers. Totalitarian ideologies of the twentieth century, such as communism and fascism, are the product of pastoral mindsets in their most toxic forms. The Jeffersonian pastoral was a nightmare for black Americans.
When pastoral fantasies start to collapse under the weight of their own internal contradictions, long-repressed energies are unleashed. The result is a societal condition marked by widespread lifestyle experimentation based on previously repressed values. To those faced with a collapse of the World Fairs pastoral project today, this seems like an irreversible slide towards corruption and moral decay.
Because they serve as stewards of dominant pastoral visions, cultural elites are most prone to viewing unexpected developments as degeneracy. From the Greek philosopher Plato1 (who lamented the invention of writing in the 4th century BC) to the Chinese scholar Zhang Xian Wu2 (who lamented the invention of printing in the 12th century AD), alarmist commentary on technological change has been a constant in history. A contemporary example can be found in a 2014 article3 by Paul Verhaeghe in The Guardian:
There are constant laments about the so-called loss of norms and values in our culture. Yet our norms and values make up an integral and essential part of our identity. So they cannot be lost, only changed. And that is precisely what has happened: a changed economy reflects changed ethics and brings about changed identity. The current economic system is bringing out the worst in us.
Viewed through any given pastoral lens, any unplanned development is more likely to subtract rather than add value. In an imagined world where cars fly, but driving is still a central rather than peripheral function, ridesharing can only be seen as subtracting taxi drivers from a complete vision. Driverless cars — the name is revealing, like “horseless carriage” — can only be seen as subtracting all drivers from the vision. And with such apparent subtraction, values and humans can only be seen as degenerating (never mind that we still ride horses for fun, and will likely continue driving cars for fun).
This tendency to view adaptation as degeneracy is perhaps why cultural elites are startlingly prone to the Luddite fallacy. This is the idea that technology-driven unemployment is a real concern, an idea that arises from the more basic assumption that there is a fixed amount of work (“lump of labor”) to be done. By this logic, if a machine does more, then there is less for people to do.
Prometheans often attribute this fallacious argument to a lack of imagination, but the roots of its appeal lie much deeper. Pastoralists are perfectly willing and able to imagine many interesting things, so long as they bring reality closer to the pastoral vision. Flying cars — and there are very imaginative ways to conceive of them — seem better than land-bound ones because drivers predictably evolving into pilots conforms to the underlying notion of human perfectibility. Drivers unpredictably evolving into smartphone-wielding free agents, and breaking smart from the Organization Man archetype, does not. Within the Jeffersonian pastoral, faster horses (not exactly trivial to breed) made for more empowered small-town yeoman farmers. Drivers of early horseless carriages were degenerate dependents, beholden to big corporations, big cities and Standard Oil.
In other words, pastoralists can imagine sustaining changes to the prevailing social order, but disruptive changes seem profane. As a result, those who adapt to disruption in unexpected ways are seen as economic and cultural degenerates, rather than as evidence of employment rebounding in unexpected ways.
History of course, has shown that the idea of technological unemployment is not just wrong, it is wildly wrong. Contemporary fears of software eating jobs are just the latest version of the argument that “people cannot change” and that this time, the true limits of human adaptability have been discovered.
This argument is absolutely correct — within the pastoral vision in which it is made.
Once we remove pastoral blinders, it becomes obvious that the future of work lies in the unexpected and degenerate-seeming behaviors of today. Agriculture certainly suffered a devastating permanent loss of employment to machinery within the Jeffersonian pastoral by 1890. Fortunately, Hamilton’s profane ideas, and the degenerate citizens of the industrial world he foresaw, saved the day. The ideal Jeffersonian human, the noble small-town yeoman farmer, did in fact become practically extinct as the Jeffersonians feared. Today the pastoral-ideal human is a high-IQ credentialist Organization Man, headed for gradual extinction, unable to compete with higher-IQ machines. The degenerate, breaking-smart humans of the software-eaten world on the other hand, have no such fears. They are too busy tinkering with new possibilities to bemoan imaginary lost utopias.
John Maynard Keynes was too astute to succumb to the Luddite fallacy in this naive form. In his 1930 conception of the leisure society,4 he noted that the economy could arbitrarily expand to create and satisfy new needs, and with a lag, absorb labor as fast as automation freed it up. But Keynes too failed to recognize that with new lifestyles come new priorities, new lived values and new reasons to want to work. As a result, he saw the Promethean pattern of progress as a necessary evil on the path to a utopian leisure society based on traditional, universal religious values:
I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue – that avarice is a vice, that the exaction of usury is a misdemeanour, and the love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.
But beware! The time for all this is not yet. For at least another hundred years we must pretend to ourselves and to every one that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.
Perceptions of moral decline however, have no necessary relationship with actual moral decline. As Joseph Tainter observes in The Collapse of Complex Societies:
Values of course, vary culturally, socially and individually…What one individual, society, or culture values highly another does not…Most of us approve, in general, of that which culturally is most like or most pleasing, or at least most intelligible to us. The result is a global bedlam of idiosyncratic ideologies, each claiming exclusive possession of ‘truth.’…
The ‘decadence’ concept seems particularly detrimental [and is] notoriously difficult to define. Decadent behavior is that which differs from one’s own moral code, particularly if the offender at some former time behaved in a manner of which one approves. There is no clear causal link between the morality of behavior and political fortunes.
While there is no actual moral decline in any meaningful absolute sense, the anxiety experienced by pastoralists is real. For those who yearn for paternalistic authority, more lifestyle possibilities lead to a sense of anomie rather than freedom. It triggers what the philosopher George Steiner called nostalgia for the absolute.5 Calls for a retreat to tradition or a collectivist drive towards the Next Big Thing (often an Updated Old Thing, as in the case of President Obama’s call for a “new Sputnik moment” a few years ago) share a yearning for a simpler world. But, as Steiner notes:
I do not think it will work. On the most brutal, empirical level, we have no example in history…of a complex economic and technological system backtracking to a more simple, primitive level of survival. Yes, it can be done individually. We all, I think, in the universities now have a former colleague or student somewhere planting his own organic food, living in a cabin in the forest, trying to educate his family far from school. Individually it might work. Socially, I think, it is moonshine.
In 1974, the year of peak centralization, Steiner was presciently observing the beginnings of the transformation. Today, the angst he observed on university campuses has turned into a society-wide condition of pastoral longing, and a pervasive sense of moral decay.
For Prometheans, on the other hand, not only is there no decay, there is actual moral progress.
Prometheans understand technological evolution in terms of increasing diversity of lived values, in the form of more varied actual lifestyles. From any given pastoral perspective, such increasing pluralism is a sign of moral decline, but from a Promethean perspective, it is a sign of moral progress catalyzed by new technological capabilities.
Emerging lifestyles introduce new lived values into societies. Hamilton did not just suggest a way out of the rural squalor1 that was the reality of the Jeffersonian pastoral. His way also led to the dismantlement of slavery, the rise of modern feminism and the gradual retreat of colonial oppression and racism. Today, we are not just leaving the World Fairs pastoral behind for a richer technological future. We are also leaving behind its paternalistic institutions, narrow “resource” view of nature, narrow national identities and intolerance of non-normative sexual identities.
Promethean attitudes begin with an acknowledgment of the primacy of lived values over abstract doctrines. This does not mean that lived values must be uncritically accepted or left unexamined. It just means that lived values must be judged on their own merit, rather than through the lens of a prejudiced pastoral vision.
The shift from car-centric to smartphone-centric priorities in urban transportation is just one aspect of a broader shift from hardware-centric to software-centric lifestyles. Rideshare driver, carless urban professional and low-income-high-mobility are just the tip of an iceberg that includes many other emerging lifestyles, such as eBay or Etsy merchant, blogger, indie musician and search-engine marketer. Each new software-enabled lifestyle adds a new set of lived values and more apparent profanity to society. Some, like rent-over-own values, are shared across many emerging lifestyles and threaten pastorals like the “American Dream,” built around home ownership. Others, such as dietary preferences, are becoming increasingly individualized and weaken the very idea of a single “official food pyramid” pastoral script for all.
Such broad shifts have historically triggered change all the way up to the global political order. Whether or not emerging marginal ideologies2 achieve mainstream prominence, their sense of proportions and priorities, driven by emerging lifestyles and lived values, inevitably does.
These observations are not new among historians of technology, and have led to endless debates about whether societal values drive technological change (social determinism) or whether technological change drives societal values (technological determinism). In practice, the fact that people change and disrupt the dominant prevailing ideal of “human values” renders the question moot. New lived values and new technologies simultaneously irrupt into society in the form of new lifestyles. Old lifestyles do not necessarily vanish: there are still Jeffersonian small farmers and traditional blacksmiths around the world for instance. Rather, they occupy a gradually diminishing role in the social order. As a result, new and old technologies and an increasing number of value systems coexist.
In other words, human pluralism eventually expands to accommodate the full potential of technological capabilities.3
We call this the principle of generative pluralism. Generative pluralism is what allows the virtuous cycle of surplus and spillover to operate. Ephemeralization — the ability to gradually do more with less — creates room for the pluralistic expansion of lifestyle possibilities and individual values, without constraining the future to a specific path.
The inherent unpredictability in the principle implies that both technological and social determinism are incomplete models driven by zero-sum thinking. The past cannot “determine” the future at all, because the future is more complex and diverse. It embodies new knowledge about the world and new moral wisdom, in the form of a more pluralistic and technologically sophisticated society.
Thanks to a particularly fertile kind of generative pluralism that we know as network effects, soft technologies like language and money have historically caused the greatest broad increases in complexity and pluralism. When more people speak a language or accept a currency, the potential of that language or currency increases in a non-zero-sum way. Shared languages and currencies allow more people to harmoniously co-exist, despite conflicting values, by allowing disputes to be settled through words or trade4 rather than violence. We should therefore expect software eating the world to cause an explosion in the variety of possible lifestyles, and to make society as a whole vastly more pluralistic.
And this is in fact what we are experiencing today.
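The arithmetic behind network effects is easy to make concrete. The following sketch is not from the essay; it uses the count of possible pairwise connections among participants as a crude, Metcalfe's-law-style proxy for the non-zero-sum potential of a shared language or currency, with purely illustrative numbers:

```python
# Illustrative sketch: pairwise connections as a rough proxy for the
# non-zero-sum potential of a shared soft technology (language, currency).
# Doubling the number of participants roughly quadruples the possible links.

def pairwise_links(n: int) -> int:
    """Number of distinct pairs among n participants: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} participants -> {pairwise_links(users):>11,} possible links")
```

Whatever proxy one prefers, the point is the same: each new participant adds potential value for every existing participant, which is what makes the game non-zero-sum.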
The principle also resolves the apparent conflict between human agency and “what technology wants”: Far from limiting human agency, technological evolution in fact serves as the most complete expression of it. Technology evolution takes on its unstoppable and inevitable character only after it breaks smart from authoritarian control and becomes part of unpredictable and unscripted collective invention culture. The existence of thousands of individuals and firms working relatively independently on the same frontier means that every possibility will not only be uncovered, it will be uncovered by multiple individuals, operating with different value systems, at different times and places. Even if one inventor chooses not to pursue a possibility, chances are, others will. As a result, all pastoralist forms of resistance are eventually overwhelmed. But the process retains rational resistance to paths that carry risk of ending the infinite game for all, in proportion to their severity. As global success in limiting the spread of nuclear and biological weapons shows, generative pluralism is not the same as mad scientists and James Bond villains running amok.
Prometheans who discover high-leverage unexpected possibilities enter a zone of serendipity. The universe seems to conspire to magnify their agency to superhuman levels. Pastoralists who reject change altogether as profanity turn lack of agency into a self-fulfilling prophecy, and enter a zone of zemblanity. The universe seems to conspire to diminish whatever agency they do have, resulting in the perception that technology diminishes agency.
Power, unlike capability, is zero-sum, since it is defined in terms of control over other human beings. Generative pluralism implies that on average, pastoralists are constantly ceding power to Prometheans. In the long term, however, the loss of power is primarily a psychological rather than material loss. To the extent that ephemeralization frees us of the need for power, we have less use for a disproportionate share.
As a simple example, consider a common twentieth-century battleground: public signage. Today, different languages contend for signaling power in public spaces. In highly multilingual countries, this contention can turn violent. But automated translation and augmented reality technologies5 can make it unnecessary to decide, for instance, whether public signage in the United States ought to be in English, Spanish or both. An arbitrary number of languages can share the same public spaces, and there is much less need for linguistic authoritarianism. Like physical sports in an earlier era, soft technologies such as online communities, video games and augmented reality are all slowly sublimating our most violent tendencies. The 2014 protests in Ferguson, MO, are a powerful example. Compared to the very similar civil rights riots in the 1960s, information in the form of social media coverage, rather than violence, was the primary medium of influence.
The broader lesson of the principle of generative pluralism is this: through technology, societies become intellectually capable of handling progressively more complex value-based conflicts. As societies gradually awaken to resolution mechanisms that do not require authoritarian control over the lives of others, they gradually substitute intelligence and information for power and coercion.
So far, we have tried to convey a visceral sense of what is essentially an uneven global condition of explosive positive change. Change that is progressing at all levels, from individuals to businesses to communities to the global societal order. Perhaps the most important part of the change is that we are experiencing a systematic substitution of intelligence for brute authoritarian power in problem solving, allowing a condition of vastly increased pluralism to emerge.
Paradoxically, due to the roots of vocal elite discontent in pastoral sensibilities, this analysis is valid only to the extent that it feels viscerally wrong. And going by the headlines of the past few years, it certainly does.
Much of our collective sense of looming chaos and paradises being lost is in fact a clear and unambiguous sign of positive change in the world. By this model, if our current collective experience of the human condition felt utopian, with cultural elites extolling its virtues, we should be very worried indeed. Societies that present a facade of superficial pastoral harmony, as in the movie The Stepford Wives, tend to be sustained by authoritarian, non-pluralistic polities, hidden demons, and invisible violence.
Innovation can in fact be defined as ongoing moral progress achieved by driving directly towards the regimes of greatest moral ambiguity, where our collective demons lurk. These are also the regimes where technology finds its maximal expressions, and it is no accident that the two coincide. Genuine progress feels like onrushing obscenity and profanity, and also requires new technological capabilities to drive it.
The subjective psychological feel of this evolutionary process is what Marshall McLuhan described in terms of a rear-view mirror effect: “we see the world through a rear-view mirror. We march backwards into the future.”
Our aesthetic and moral sensibilities are oriented by default towards romanticized memories of paradises lost. Indeed, this is the only way we can enter the future. Our constantly pastoralizing view of the world, grounded in the past, is the only one we have. The future, glimpsed only through a small rear-view mirror, is necessarily framed by the past. To extend McLuhan’s metaphor, the great temptation is to slam on the brakes and shift from what seems like reverse gear into forward gear. The paradox of progress is that what seems like the path forward is in fact the reactionary path of retreat. What seems like the direction of decline is in fact the path forward.
Today, our collective rear-view mirror is packed with seeming profanity, in the form of multiple paths of descent into hell. Chief among those that occupy our minds are rising economic inequality and pervasive surveillance.
These are such complex and strongly coupled themes that conversations about any one of them quickly lead to a jumbled discussion of all of them, in the form of an ambiguous “inequality, surveillance and everything” non-question. Dickens’ memorable opening paragraph in A Tale of Two Cities captures this state of confused urgency and inchoate anxiety perfectly:
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
Such a state of confused urgency often leads to hasty and ill-conceived grand pastoralist schemes by way of the well-known politician’s syllogism:1
Something must be done
This is something
This must be done
Promethean sensibilities suggest that the right response to the sense of urgency is not the politician’s syllogism, but counter-intuitive courses of action: driving straight into the very uncertainties the ambiguous problem statements frame. Often, when only reactionary pastoralist paths are under consideration, this means doing nothing, and allowing events to follow a natural course.
In other words, our basic answer to the non-question of “inequality, surveillance and everything” is this: the best way through it is through it. It is an answer similar in spirit to the stoic principle that “the obstacle is the way” and the Finnish concept of sisu: meeting adversity head-on by cultivating a capacity for managing stress, rather than figuring out schemes to get around it. Seemingly easier paths, as the twentieth century’s utopian experiments showed, create a great deal more pain in the long run.
Broken though they might seem, the mechanisms we need for working through “inequality, surveillance and everything” are the generative, pluralist ones we have been refining over the last century: liberal democracy, innovation, entrepreneurship, functional markets and the most thoughtful and limited new institutions we can design.
This answer will strike many as deeply unsatisfactory and perhaps even callous. Yet, time and again, when the world has been faced with seemingly impossible problems, these mechanisms have delivered.
Beyond doing the utmost possible to shield those most exposed to, and least capable of enduring, the material pain of change, it is crucial to limit ourselves and avoid the temptation of reactionary paths suggested by utopian or dystopian visions, especially those that appear in futurist guises. The idea that forward is backward and sacred is profane will never feel natural or intuitive, but innovation and progress depend on acting by these ideas anyway.
In the remaining essays in this series, we will explore what it means to act by these ideas.
Part-way through Douglas Adams’ Hitchhiker’s Guide to the Galaxy, we learn that Earth is not a planet, but a giant supercomputer built by a race of hyperintelligent aliens. Earth was designed by a predecessor supercomputer called Deep Thought, which in turn had been built to figure out the answer to the ultimate question of “Life, the Universe and Everything.” Much to the annoyance of the aliens, the answer turns out to be a cryptic and unsatisfactory “42.”
[Illustration: What is 7 times 6?]
We concluded the previous essay with our own ultimate question of “Inequality, Surveillance and Everything.” The basic answer we offered — “the best way through it is through it” — must seem as annoying, cryptic and unsatisfactory as Deep Thought’s “42.”
In Adams’ tale, Deep Thought gently suggests to the frustrated aliens that perhaps the answer seemed cryptic because they never understood the question in the first place. Deep Thought then proceeds to design Earth to solve the much tougher problem of figuring out the actual question.
First performed as a radio show in 1978, Adams’ absurdist epic precisely portrayed the societal transformation that was gaining momentum at the time. Rapid technological progress due to computing was accompanied by cryptic and unsatisfactory answers to confused and urgent-seeming questions about the human condition. Our “Inequality, Surveillance and Everything” form of the non-question is not that different from the corresponding non-question of the late 1970s: “Cold War, Globalization and Everything.” Then, as now, the frustrating but correct answer was “the best way through it is through it.”
The Hitchhiker’s Guide can be read as a satirical anti-morality tale about pastoral sensibilities, utopian solutions and perfect answers. In their dissatisfaction with the real “Ultimate Answer,” the aliens failed to notice the truly remarkable development: they had built an astoundingly powerful computer, which had then proceeded to design an even more powerful successor.
Like the aliens, we may not be satisfied with the answers we find to timeless questions, but simply by asking the questions and attempting to answer them, we are bootstrapping our way to a more advanced society.
As we argued in the last essay, the advancement is both technological and moral, allowing for a more pluralistic society to emerge from the past.
Adams died in 2001, just as his satirical visions, which had inspired a generation of technologists, started to actually come true. Just as Deep Thought had given rise to a fictional “Earth” computer, centralized mainframe computing of the industrial era gave way to distributed, networked computing. In a rather perfect case of life imitating art, researchers at Carnegie Mellon named a powerful chess-playing computer Deep Thought in the late 1980s, in honor of Adams’ fictional computer; IBM later hired the team and continued the work. A later version, Deep Blue, became the first computer to beat the reigning human champion in 1997. But the true successor to the IBM era of computing was the planet-straddling distributed computer we call the Internet.
[Illustration: Manufactured in Taiwan]
Science fiction writer Neal Stephenson noted the resulting physical transformation as early as 1996, in his essay on the undersea cable-laying industry, Mother Earth Mother Board.1 By 2004, Kevin Kelly had coined a term and launched a new site to talk about the idea of digitally integrated technology as a single, all-subsuming social reality,2 emerging on this motherboard:
I’m calling this site The Technium. It’s a word I’ve reluctantly coined to designate the greater sphere of technology – one that goes beyond hardware to include culture, law, social institutions, and intellectual creations of all types. In short, the Technium is anything that springs from the human mind. It includes hard technology, but much else of human creation as well. I see this extended face of technology as a whole system with its own dynamics.
The metaphor of the world as a single interconnected entity that subsumes human existence is an old one, and in its modern form, can be traced at least to Hobbes’ Leviathan (1651), and Herbert Spencer’s The Social Organism (1853). What is new about this specific form is that it is much more than a metaphor. The view of the world as a single, connected, substrate for computation is not just a poetic way to appreciate the world: It is a way to shape it and act upon it. For many software projects, the idea that “the network is the computer” (due to John Gage, a computing pioneer at Sun Microsystems) is the only practical perspective.
While the pre-Internet world can also be viewed as a programmable planetary computer based on paperware, what makes today’s planetary computer unique in history is that almost anyone with an Internet connection can program it at a global scale, rather than just powerful leaders with the ability to shape organizations.
The kinds of programming possible on such a vast, democratic scale have been rapidly increasing in sophistication. In November 2014 for instance, within a few days of the Internet discovering and becoming outraged by a sexist 2013 Barbie comic-book titled Computer Engineer Barbie, hacker Kathleen Tuite had created a web app (using an inexpensive cloud service called Heroku) allowing anyone to rewrite the text of the book. The hashtag #FeministHackerBarbie immediately went viral. Coupled with the web app, the hashtag unleashed a flood of creative rewrites of the Barbie book. What would have been a short-lived flood of outrage only a few years ago had turned into a breaking-smart moment for the entire software industry.
To appreciate just how remarkable this episode was, consider this: a hashtag is effectively an instantly defined soft network within the Internet, with capabilities comparable to the entire planet’s telegraph system a century ago. By associating a hashtag with the right kind of app, Tuite effectively created an entire temporary publishing company, with its own distribution network, in a matter of hours rather than decades. In the process, reactive sentiment turned into creative agency.
These capabilities emerged in just 15 years: practically overnight by the normal standards of technological change.
In 1999, SETI@home,3 the first distributed computing project to capture the popular imagination, merely seemed like a weird way to donate spare personal computing power to science. By 2007, Facebook, Twitter, YouTube, Wikipedia and Amazon’s Mechanical Turk4 had added human creativity, communication and money into the mix, and the same engineering approaches had created the social web. By 2014, experimental mechanisms developed in the culture of cat memes5 were influencing elections. The penny-ante economy of Amazon’s Mechanical Turk had evolved into a world where bitcoin miners were making fortunes, car owners were making livable incomes through ridesharing on the side, and canny artists were launching lucrative new careers on Kickstarter.
Even as the old planet-scale computer declines, the new one it gave birth to is coming of age.
In our Tale of Two Computers, the parent is a four-century-old computer whose basic architecture was laid down in the zero-sum mercantile age. It runs on paperware, credentialism, and exhaustive territorial claims that completely carve up the world with strongly regulated boundaries. Its structure is based on hierarchically arranged container-like organizations, ranging from families to nations. In this order of things, there is no natural place for a free frontier. Ideally, there is a place for everything, and everything is in its place. It is a computer designed for stability, within which innovation is a bug rather than a feature.
We’ll call this planet-scale computer the geographic world.
The child is a young, half-century old computer whose basic architecture was laid down during the Cold War. It runs on software, the hacker ethos, and soft networks that wire up the planet in ever-richer, non-exclusive, non-zero-sum ways. Its structure is based on streams like Twitter: open, non-hierarchical flows of real-time information from multiple overlapping networks. In this order of things, everything from banal household gadgets to space probes becomes part of a frontier for ceaseless innovation through bricolage. It is a computer designed for rapid, disorderly and serendipitous evolution, within which innovation, far from being a bug, is the primary feature.
We’ll call this planet-scale computer the networked world.
The networked world is not new. It is at least as old as the oldest trade routes, which have been spreading subversive ideas alongside valuable commodities throughout history. What is new is its growing ability to dominate the geographic world. The story of software eating the world is also the story of networks eating geography.
There are two major subplots to this story. The first subplot is about bits dominating atoms. The second subplot is about the rise of a new culture of problem-solving.
In 2015, it is safe to say that the weird problem-solving mechanisms of SETI@home and kitten-picture sharing have become normal problem-solving mechanisms for all domains.
Today it seems strange to not apply networked distributed computing involving both neurons and silicon to any complex problem. The term social media is now unnecessary: Even when there are no humans involved, problem-solving on this planet-scale computer almost necessarily involves social mechanisms. Whatever the mix of humans, software and robots involved, solutions tend to involve the same “social” design elements: real-time information streams, dynamically evolving patterns of trust, fluid identities, rapidly negotiated collaborations, unexpected emergent problem decompositions, efficiently allocated intelligence, and frictionless financial transactions.
Each time a problem is solved using these elements, the networked world is strengthened.
As a result of this new and self-reinforcing normal in problem-solving, the technological foundation of our planet is evolving with extraordinary rapidity. The process is a branching, continuous one rather than the staged, sequential process suggested by labels like Web 2.0 and Web 3.0,1 which reflect an attempt to understand it in somewhat industrial terms. Some recently sprouted extensions and branches have already been identified and named: the Mobile Web, the Internet of Things (IoT), streaming media, Virtual Reality (VR), Augmented Reality (AR) and the blockchain. Others will no doubt emerge in profusion, further blurring the line between real and virtual.
Surprisingly, as a consequence of software eating the technology industry itself, the specifics of the hardware are not important in this evolution. Outside of the most demanding applications, data, code, and networking are all largely hardware-agnostic today.
The Internet Wayback Machine,2 developed by Brewster Kahle and Bruce Gilliat in 1996, has already preserved a history of the web across a few generations of hardware. While such efforts can sometimes seem woefully inadequate with respect to pastoralist visions of history preservation, it is important to recognize the magnitude of the advance they represent over paper-based collective memories.
Crashing storage costs and continuously upgraded datacenter hardware allow corporations to indefinitely save all the data they generate. This is turning out to be cheaper than deciding what to do with it3 in real time, resulting in the Big Data approach to business. At a personal level, cloud-based services like Dropbox make your personal data trivial to move across computers.
Most code today, unlike fifty years ago, is in hardware-independent high-level programming languages rather than hardware-specific machine code. As a result of virtualization (technology that allows one piece of hardware to emulate another, a fringe technology until around 20004), most cloud-based software runs within virtual machines and “code containers” rather than directly on hardware. Containerization in shipping drove nearly a seven-fold increase5 in trade among industrialized nations over 20 years. Containerization of code is shaping up to be even more impactful in the economics of software.
Networks too, are defined primarily in software today. It is not just extremely high-level networks, such as the transient, disposable ones defined by hashtags, that exist in software. Low-level networking software can also persist across generations of switching equipment and different kinds of physical links, such as telephone lines, optic fiber cables and satellite links. Thanks to the emerging technology of software-defined networking (SDN), functions that used to be performed by network hardware are increasingly performed by software.
In other words, we don’t just live on a networked planet. We live on a planet networked by software, a distinction that makes all the difference. The software-networked planet is an entity that can exist in a continuous and coherent way despite continuous hardware churn, just as we humans experience a persistent identity, even though almost every atom in our bodies gets swapped out every few years.
This is a profound development. We are used to thinking of atoms as enduring and bits as transient and ephemeral, but in fact the reverse is more true today.
[Illustration: Bits over atoms]
The emerging planetary computer has the capacity to retain an evolving identity and memory across evolutionary epochs in hardware, both silicon and neural. Like money and writing, software is only dependent on hardware in the short term, not in the long term. Like the US dollar or the plays of Shakespeare, software and software-enabled networks can persist through changes in physical technology.
By contrast it is challenging to preserve old hard technologies even in museums, let alone in working order as functional elements of society. When software eats hardware, however, we can physically or virtually recreate hardware as necessary, imbuing transient atoms with the permanence of bits.
For example, the Reuleaux collection of 19th century engineering mechanisms, a priceless part of mechanical engineering heritage, is now available as a set of 3d printable models from Cornell University6 for students anywhere in the world to download, print and study. A higher-end example is NASA’s reverse engineering of 1970s-vintage Saturn V rocket engines.7 The complex project used structured light 3d scanning to reconstruct accurate computer models, which were then used to inform a modernized design. Such resurrection capabilities even extend to computing hardware itself. In 1997, using modern software tools, researchers at the University of Pennsylvania led by Jan Van Der Spiegel recreated ENIAC, the first modern electronic computer, in the form of an 8mm by 8mm chip.8
As a result of such capabilities, the very idea of hardware obsolescence is becoming obsolete. Rapid evolution does not preclude the persistence of the past in a world of digital abundance.
The potential in virtual and augmented reality is perhaps even higher, and it goes far beyond consumption devices like the Oculus VR, Magic Leap, Microsoft Hololens and the Leap 3d motion sensor. The more exciting story is that production capabilities are being democratized. In the early decades of prohibitively expensive CGI and motion capture technology, only big-budget Hollywood movies and video games could afford to create artificial realities. Today, with technologies like Microsoft’s Photosynth (which allows you to capture 3d imagery with smartphones), SketchUp (a powerful and free 3d modeling tool), 3d Warehouse (a public repository of 3d virtual objects), Unity (a powerful game-design tool) and 3d scanning apps such as Trimensional, it is becoming possible for anyone to create living historical records and inhabitable fictions in the form of virtual environments. The Star Trek “holodeck” is almost here: our realities can stay digitally alive long after they are gone in the physical world.
These are more than cool toys. They are soft technological capabilities of enormous political significance. Software can preserve the past in the form of detailed, relivable memories that go far beyond the written word. In 1964, only the “Big 3” network television crews had the ability to film the civil rights riots in America, making the establishment record of events the only one. A song inspired by the movement was appropriately titled The Revolution Will Not Be Televised. In 1991, a lone witness with a personal camcorder videotaped the tragic beating of Rodney King, setting off the chain of events that led to the 1992 Los Angeles riots.
Fast-forwarding more than two decades, in 2014, smartphones were capturing at least fragments of nearly every important development surrounding the death of Michael Brown in Ferguson, and thousands of video cameras were being deployed to challenge the perspectives offered by the major television channels. In a rare display of consensus, civil libertarians on both the right and left began demanding that all police officers and cars be equipped with cameras that cannot be turned off. Around the same time, the director of the FBI was reduced to conducting a media roadshow to attempt to stall the spread of cryptographic technologies capable of limiting government surveillance.
In just a year after the revelations of widespread surveillance by the NSA, the tables were already being turned.
It is only a matter of time before all participants in every event of importance will be able to record and share their experiences from their perspective as comprehensively as they want. These can then turn into collective, relivable, 3d memories that are much harder for any one party to manipulate in bad faith. History need no longer be written by past victors.
Even authoritarian states are finding that surveillance capabilities cut both ways in the networked world. During the 2014 #Occupy protests in Hong Kong for instance, drone imagery allowed news agencies to make independent estimates of crowd sizes,9 limiting the ability of the government to spin the story as a minor protest. Software was being used to record history from the air, even as it was being used to drive the action on the ground.
When software eats history this way, as it is happening, the ability to forget10 becomes a more important political, economic and cultural concern than the ability to remember.
When bits begin to dominate atoms, it no longer makes sense to think of virtual and physical worlds as separate, detached spheres of human existence. It no longer makes sense to think of machine and human spheres as distinct non-social and social spaces. When software eats the world, “social media,” including both human and machine elements, becomes the entire Internet. “The Internet” in turn becomes the entire world. And in this fusion of digital and physical, it is the digital that dominates.
The fallacious idea that the online world is separate from and subservient to the offline world (an idea called digital dualism, the basis for entertaining but deeply misleading movies such as Tron and The Matrix) yields to an understanding of the Internet as an alternative basis for experiencing all reality, including the old basis: geography.
Science fiction writer Bruce Sterling captured the idea of bits dominating atoms with his notion of “spimes” — enduring digital master objects that can be flexibly realized in different physical forms as the need arises. A book, for instance, is a spime rather than a paper object today, existing as a master digital copy that can evolve indefinitely, and persist beyond specific physical copies.
At a more abstract level, the idea of a “journey” becomes a spime that can be flexibly realized in many ways, through specific physical vehicles or telepresence technologies. A “television news show” becomes an abstract spime that might be realized through the medium of a regular television crew filming on location, an ordinary citizen livestreaming events she is witnessing, drone footage, or official surveillance footage obtained by activist hackers.
Spimes in fact capture the essential spirit of bricolage: turning ideas into reality using whatever is freely or cheaply available, instead of through dedicated resources controlled by authoritarian entities. This capability highlights the economic significance of bits dominating atoms. When the value of a physical resource is a function of how openly and intelligently it can be shared and used in conjunction with software, it becomes less contentious. In a world organized by atoms-over-bits logic, most resources are by definition what economists call rivalrous: if I have it, you don’t. Such captive resources are limited by the imagination and goals of one party. An example is a slice of the electromagnetic spectrum reserved for a television channel. Resources made intelligently open to all on the other hand, such as Twitter, are limited only by collective technical ingenuity. The rivalrousness of goods becomes a function of the amount of software and imagination used to leverage them, individually or collectively.
When software eats the economy, the so-called “sharing economy” becomes the entire economy, and renting, rather than ownership, becomes the default logic driving consumption.
The fact that all this follows from “social” problem-solving mechanisms suggests that the very meaning of the word has changed. As sociologist Bruno Latour has argued, “social” is now about more than the human. It includes ideas and objects flexibly networked through software. Instead of being an externally injected alien element, technology and innovation become part of the definition of what it means to be social.
What we are living through today is a hardware and software upgrade for all of civilization. It is, in principle, no different from buying a new smartphone and moving music, photos, files and contacts to it. And like a new smartphone, our new planet-scale hardware comes with powerful but disorienting new capabilities. Capabilities that test our ability to adapt.
And of all the ways we are adapting, the single most important one is the adaptation in our problem-solving behaviors.
This is the second major subplot in our Tale of Two Computers. Wherever bits begin to dominate atoms, we solve problems differently. Instead of defining and pursuing goals we create and exploit luck.
Upgrading a planet-scale computer is, of course, a more complex matter than trading in an old smartphone for a new one, so it is not surprising that it has already taken us nearly half a century, and we’re still not done.
Since 1974, the year of peak centralization, we have been trading in a world whose functioning is driven by atoms in geography for one whose functioning is driven by bits on networks. The process has been something like vines growing all over an aging building, creeping in through the smallest cracks in the masonry to establish a new architectural logic.
The difference between the two is simple: the geographic world solves problems in goal-driven ways, through literal or metaphoric zero-sum territorial conflict. The networked world solves them in serendipitous ways, through innovations that break assumptions about how resources can be used, typically making them less rivalrous and unexpectedly abundant.
Goal-driven problem-solving follows naturally from the politician’s syllogism: we must do something; this is something; we must do this. Such goals usually follow from gaps between reality and utopian visions. Solutions are driven by the deterministic form-follows-function1 principle, which emerged with authoritarian high-modernism in the early twentieth century. At its simplest, the process looks roughly like this:
Problem selection: Choose a clear and important problem
Resourcing: Capture resources by promising to solve it
Solution: Solve the problem within promised constraints
This model is so familiar that it seems tautologically equivalent to “problem solving”. It is hard to see how problem-solving could work any other way. This model is also an authoritarian territorial claim in disguise. A problem scope defines a boundary of claimed authority. Acquiring resources means engaging in zero-sum competition to bring them into your boundary, as captive resources. Solving the problem generally means achieving promised effects within the boundary without regard to what happens outside. This means that unpleasant unintended consequences — what economists call social costs — are typically ignored, especially those which impact the least powerful.
We have already explored the limitations of this approach in previous essays, so we can just summarize them here. Choosing a problem based on “importance” means uncritically accepting pastoral problem frames and priorities. Constraining the solution with an alluring “vision” of success means limiting creative possibilities for those who come later. Innovation is severely limited: You cannot act on unexpected ideas that solve different problems with the given resources, let alone pursue the direction of maximal interestingness indefinitely. This means unseen opportunity costs can be higher than visible benefits. You also cannot easily pursue solutions that require different (and possibly much cheaper) resources than the ones you competed for: problems must be solved in pre-approved ways.
This is not a process that tolerates uncertainty or ambiguity well, let alone thrives on it. Even positive uncertainty becomes a problem: an unexpected budget surplus must be hurriedly used up, often in wasteful ways, otherwise the budget might shrink next year. Unexpected new information and ideas, especially from novel perspectives — the fuel of innovation — are by definition a negative, to be dealt with like unwanted interruptions. A new smartphone app not anticipated by prior regulations must be banned.
In the last century, the most common outcome of goal-directed problem solving in complex cases has been failure.
The networked world approach is based on a very different idea. It does not begin with utopian goals or resources captured through specific promises or threats. Instead it begins with open-ended, pragmatic tinkering that thrives on the unexpected. The process is not even recognizable as a problem-solving mechanism at first glance:
Immersion in relevant streams of ideas, people and free capabilities
Experimentation to uncover new possibilities through trial and error
Leverage to double down on whatever works unexpectedly well
Where the politician’s syllogism focuses on repairing things that look broken in relation to an ideal of changeless perfection, the tinkerer’s way focuses on possibilities for deliberate change. As Dilbert creator Scott Adams observed, “Normal people don’t understand this concept; they believe that if it ain’t broke, don’t fix it. Engineers believe that if it ain’t broke, it doesn’t have enough features yet.”2
What would be seemingly pointless disruption in an unchanging utopia becomes a way to stay one step ahead in a changing environment. This is the key difference between the two problem-solving processes: in goal-driven problem-solving, open-ended ideation is fundamentally viewed as a negative. In tinkering, it is a positive.
The first phase — inhabiting relevant streams — can look like idle procrastination on Facebook and Twitter, or idle play with cool new tools discovered on Github. But it is really about staying sensitized to developing opportunities and threats. The perpetual experimentation, as we saw in previous essays, feeds via bricolage on whatever is available. Often these are resources considered “waste” by neighboring goal-directed processes: a case of social costs being turned into assets. A great deal of modern data science for instance, begins with “data exhaust”: data of no immediate goal-directed use to an organization that would normally get discarded in an environment of high storage costs. Since the process begins with low-stakes experimentation, the cost of failures is naturally bounded. The upside, however, is unbounded: there is no necessary limit to what unexpected leveraged uses you might discover for new capabilities.
Tinkerers — be they individuals or organizations — in possession of valuable but under-utilized resources tend to do something counter-intuitive. Instead of keeping idle resources captive, they open up access to as many people as possible, with as few strings attached as possible, in the hope of catalyzing spillover tinkering. Where it works, thriving ecosystems of open-ended innovation form, and steady streams of new wealth begin to flow. Those who share interesting and unique resources in such open ways gain a kind of priceless goodwill money cannot buy. The open-source movement, Google’s Android operating system, Big Data technology, the Arduino hardware experimentation kit and the OpenROV underwater robot all began this way. Most recently, Tesla voluntarily opened up access to its electric vehicle technology patents under highly liberal terms compared to automobile industry norms.
Tinkering is a process of serendipity-seeking that does not just tolerate uncertainty and ambiguity, it requires them. When conditions for it are right, the result is a snowballing effect where pleasant surprises lead to more pleasant surprises.
What makes this a problem-solving mechanism is diversity of individual perspectives coupled with the law of large numbers (the statistical idea that rare events can become highly probable if there are enough trials going on). If an increasing number of highly diverse individuals operate this way, the chances of any given problem getting solved via a serendipitous new idea slowly rises. This is the luck of networks.
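To see why diversity plus sheer volume amounts to a problem-solving mechanism, consider a back-of-the-envelope calculation. This sketch is not from the essay and uses purely hypothetical numbers: if each independent tinkering attempt has only a tiny chance of producing a serendipitous solution, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n, which climbs rapidly as the number of attempts grows.

```python
# Illustrative sketch: the "luck of networks" as a numbers game.
# Assume each independent tinkering attempt succeeds with a tiny
# (hypothetical) probability p; the chance that at least one of n
# attempts succeeds is 1 - (1 - p)**n.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.001  # hypothetical 0.1% chance of serendipity per attempt
for attempts in (100, 1_000, 10_000):
    chance = p_at_least_one(p, attempts)
    print(f"{attempts:>6} attempts -> {chance:.3f} chance of at least one success")
```

Real attempts are neither independent nor equally likely to succeed, but the qualitative point survives: with enough diverse trials underway, rare serendipity becomes nearly inevitable.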
Serendipitous solutions are not just cheaper than goal-directed ones. They are typically more creative and elegant, and require much less conflict. Sometimes they are so creative, the fact that they even solve a particular problem becomes hard to recognize. For example, telecommuting and video-conferencing do more to “solve” the problem of fossil-fuel dependence than many alternative energy technologies, but are usually understood as technologies for flex-work rather than energy savings.
Ideas born of tinkering are not targeted solutions aimed at specific problems, such as “climate change” or “save the middle class,” so they can be applied more broadly. As a result, not only do current problems get solved in unexpected ways, but new value is created through surplus and spillover. The clearest early sign of such serendipity at work is unexpectedly rapid growth in the adoption of a new capability. This indicates that it is being used in many unanticipated ways, solving both seen and unseen problems, by both design and “luck”.
Venture capital is ultimately the business of detecting such signs of serendipity early and investing to accelerate it. This makes Silicon Valley the first economic culture to fully and consciously embrace the natural logic of networks. When the process works well, resources flow naturally towards whatever effort is growing and generating serendipity the fastest. The better this works, the more resources flow in ways that minimize opportunity costs.
From the inside, serendipitous problem solving feels like the most natural thing in the world. From the perspective of goal-driven problem solvers, however, it can look indistinguishable from waste and immoral priorities.
This perception exists primarily because access to the luck of sufficiently weak networks can be slowed down by sufficiently strong geographic world boundaries (what is sometimes called bahramdipity: serendipity thwarted by powerful forces). Where resources cannot stream freely to accelerate serendipity, they cannot solve problems through engineered luck, or create surplus wealth. The result is growing inequality between networked and geographic worlds.
This inequality superficially resembles the inequality within the geographic world created by malfunctioning financial markets, crony capitalism and rent-seeking behaviors. As a result, it can be hard for non-technologists to tell Wall Street and Silicon Valley apart, even though they represent two radically different moral perspectives and approaches to problem-solving. When the two collide on highly unequal terms, as they did in the cleantech sector in the late aughts, the overwhelming advantage enjoyed by geographic-world incumbents can prove too much for the networked world to conquer. In the case of cleantech, software was unable to eat the sector and solve its problems in large part due to massive subsidies and protections available to incumbents.
But this is just a temporary state. As the networked world continues to strengthen, we can expect very different outcomes the next time it takes on problems in the cleantech sector.
As a result of failures and limits that naturally accompany young and growing capabilities, the networked world can seem “unresponsive” to “real” problems.
So while both Wall Street and Silicon Valley can often seem tone-deaf and unresponsive to pressing and urgent pains while minting new billionaires with boring frequency, the causes are different. The problems of Wall Street are real, and symptomatic of a true crisis of social and economic mobility in the geographic world. Those of Silicon Valley on the other hand, exist because not everybody is sufficiently plugged into the networked world yet, limiting its power. The best response we have come up with for the former is periodic bailouts for “too big to fail” organizations in both the public and private sector. The problem of connectivity on the other hand, is slowly and serendipitously solving itself as smartphones proliferate.
This difference between the two problem-solving cultures carries over to macroeconomic phenomena as well.
Unlike booms and busts in the financial markets, which are often artificially created, technological booms and busts are an intrinsic feature of wealth creation itself. As Carlota Perez notes, technology busts in fact typically open up vast new capabilities that were overbuilt during booms. They radically expand access to the luck of networks to larger populations. The technology bust of 2000 for instance, radically expanded access to the tools of entrepreneurship and began fueling the next wave of innovation almost immediately.
The 2007 subprime mortgage bust, born of deceit and fraud, had no such serendipitous impact. It destroyed wealth overall, rather than creating it. The global financial crisis that followed is representative of a broader systematic crisis in the geographic world.
Structure, as the management theorist Alfred Chandler noted in his study of early industrial age corporations, follows strategy. Where a goal-driven strategy succeeds, the temporary scope of the original problem hardens into an enduring and policed organizational boundary. Temporary and specific claims on societal resources transform into indefinite and general captive property rights for the victors of specific political, cultural or military wars.
[Illustration: Containers]
As a result we get containers with eternally privileged insiders and eternally excluded outsiders: geographic-world organizations. By their very design, such organizations are what Daron Acemoglu and James Robinson call extractive institutions. They are designed not just to solve a specific problem and secure the gains, but to continue extracting wealth indefinitely. Whatever the broader environmental conditions, ideally wealth, harmony and order accumulate inside the victor’s boundaries, while waste, social costs, and strife accumulate outside, to be dealt with by the losers of resource conflicts.
This description does not apply just to large banks or crony capitalist corporations. Even an organization that seems unquestionably like a universal good, such as the industrial age traditional family, comes with a societal cost. In the United States for example, laws designed to encourage marriage and home-ownership systematically disadvantage single adults and non-traditional families (who now collectively form more than half the population). Even the traditional family, as defined and subsidized by politics, is an extractive institution.
Where extractive institutions start to form, it becomes progressively harder to solve future problems in goal-driven ways. Each new problem-solving effort has more entrenched boundaries to deal with. Solving new problems usually means taking on increasingly expensive conflict to redraw boundaries as a first step. In the developed world, energy, healthcare and education are examples of sectors where problem-solving has slowed to a crawl due to a maze of regulatory and other boundaries. The result has been escalating costs and declining innovation — what economist William Baumol has labeled the “cost disease.”
The cost disease is an example of how, in their terminal state, goal-driven problem solving cultures exhaust themselves. Without open-ended innovation, the growing complexity of boundary redrawing makes most problems seem impossible. The planetary computer that is the geographic world effectively seizes up.
On the cusp of the first Internet boom, the landscape of organizations that defines the geographic world was already in deep trouble. As Gilles Deleuze noted around 1992:1
We are in a generalized crisis in relation to all environments of enclosure — prison, hospital, factory, school, family… The administrations in charge never cease announcing supposedly necessary reforms… But everyone knows these environments are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of new forces knocking at the door.
The “crisis in environments of enclosure” is a natural terminal state for the geographic world. When every shared societal resource has been claimed by a few as an eternal and inalienable right, and secured behind regulated boundaries, the only way to gain something is to deprive somebody else of it through ideology-driven conflict.
This is the zero-sum logic of mercantile economic organization, and dates to the sixteenth century. In fact, because some value is lost through conflict, in the absence of open-ended innovation, it can be worse than zero-sum: what decision theorists call negative-sum (the ultimate example of which is of course war).
By the early twentieth century, mercantilist economic logic had led to the world being completely carved up in terms of inflexible land, water, air, mineral and — perhaps most relevant today — spectrum rights. Rights that could not be freely traded or renegotiated in light of changing circumstances.
This is a grim reality we have a tendency to romanticize. As the etymology of words like organization and corporation suggests, we tend to view our social containers through anthropomorphic metaphors. We extend metaphoric and legal fictions of identity, personality, birth and death far beyond the point of diminishing marginal utility. We assume the “life” of these entities to be self-evidently worth extending into immortality. We even mourn them when they do occasionally enter irreversible decline. Companies like Kodak and Radio Shack, for example, evoke such strong positive memories for many Americans that their decline seems truly tragic, despite the obvious irrelevance of the business models that originally fueled their rise. We assume that the fates of actual living humans are irreversibly tied to the fates of the artificial organisms they inhabit.
In fact, in the late crisis-ridden state of the geographic world, the “goal” of a typical problem-solving effort is often to “save” some anthropomorphically conceived part of society, without any critical attention devoted to whether it is still necessary, or whether better alternatives are already serendipitously emerging. If innovation is considered a necessary ingredient in the solution at all, only sustaining innovations — those that help preserve and perfect the organization in question — are considered.
Whether the intent is to “save” the traditional family, a failing corporation, a city in decline, or an entire societal class like the “American middle class,” the idea that the continued existence of any organization might be both unnecessary and unjustifiable is rejected as unthinkable. The persistence of geographic world organizations is prized for its own sake, whatever the changes in the environment.
The dark side of such anthropomorphic romanticization is what we might call geographic dualism: a stable planet-wide separation of local utopian zones secured for a privileged few and increasingly dystopian zones for many, maintained through policed boundaries. The greater the degree of geographic dualism, the clearer the divides between slums and high-rises, home owners and home renters, developing and developed nations, wrong and right sides of the tracks, regions with landfills and regions with rent-controlled housing. And perhaps the most glaring divide: secure jobs in regulated sectors with guaranteed lifelong benefits for some, at the cost of needlessly heightened precarity in a rapidly changing world for others.
In a changing environment, organizational stability valued for its own sake becomes a kind of immorality. Seeking such stability means allowing the winners of historic conflicts to enjoy the steady, fixed benefits of stability by imposing increasing adaptation costs on the losers.
In the late eighteenth century, two important developments planted the seeds of a new morality, which sparked the industrial revolution. As a result new wealth began to be created despite the extractive, stability-seeking nature of the geographic world.
With the benefit of a century of hindsight, the authoritarian high-modernist idea that form can follow function in a planned way, via coercive control, seems like wishful thinking beyond a certain scale and complexity. Two phrases popularized by the open-source movement, free as in beer and free as in speech, get at the essence of problem solving through serendipity, an approach that does work1 in large-scale and complex systems.
The way complex systems — such as planet-scale computing capabilities — evolve is perhaps best described by a statement known as Gall’s Law:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
Gall’s Law is in fact much too optimistic. It is not just non-working complex systems designed from scratch that cannot be patched up. Even naturally evolved complex systems that used to work, but have now stopped working, generally cannot be patched into working order again.
The idea that a new, simpler system can revitalize a complex system in a state of terminal crisis is the essence of Promethean thinking. Though the geographic world has reached a state of terminal crisis only recently, the seeds of a simpler working system to replace it were actually planted in the eighteenth century, nearly 200 years before software joined the party. The industrial revolution itself was driven by two elements of our world being partially freed from geographic world logic: people and ideas.
The first was people. In the eighteenth century, the world gradually rejected the idea that people could be property, to be exclusively claimed by other people or organizations as a problem-solving “resource,” and held captive within specific boundaries. Individual rights and at-will employment models emerged in liberal democracies, in place of institutions like slavery, serfdom and caste-based hereditary professions.
The second was ideas. Again, in the late eighteenth century, modern intellectual property rights, in the form of patents with expiration dates, became the norm. In ancient China, those who revealed the secrets of silk-making were put to death by the state. In late eighteenth century Britain, the expiration of James Watt’s patents sparked the industrial revolution.
Thanks to these two enlightened ideas, a small trickle of individual inventions turned into a steady stream of non-zero-sum intellectual and capitalist progress within an otherwise mercantilist, zero-sum world. In the process, the stability-seeking logic of mercantilism was gradually replaced by the adaptive logic of creative destruction.
People and ideas became increasingly free in two distinct ways. As Richard Stallman, the pioneer of the free software movement, famously expressed it: The two kinds of freedom are free as in beer and free as in speech.
First, people and ideas were increasingly free in the sense of no longer being considered “property” to be bought and sold like beer by others.
Second, people and ideas became increasingly free in the sense of not being restricted to a single purpose. They could potentially play any role they were capable of fulfilling. For people, this second kind of freedom is usually understood in terms of specific rights such as freedom of speech, freedom of association and assembly, and freedom of religion. What is common to all these specific freedoms is that they represent freedom from the constraints imposed by authoritarian goals. This second kind of freedom is so new, it can be alarming to those used to being told what to do by authority figures.
Where both kinds of freedom exist, networks begin to form. Freedom of speech, for instance, tends to create a thriving literary and journalistic culture, which exists primarily as a network of individual creatives rather than specific organizations. Freedom of association and assembly creates new political movements, in the form of grassroots political networks.
Free people and ideas can associate in arbitrary ways, creating interesting new combinations and exploring open-ended possibilities. They can make up their own minds about whether problems declared urgent by authoritarian leaders are actually the right focus for their talents. Free ideas are even more powerful, since unlike the talents of free individuals, they are not restricted to one use at a time.
Free people and free ideas formed the “working simple system” that drove two centuries of disruptive industrial age innovation.
Tinkering — the steady operation of this working simple system — is a much more subversive force than we usually recognize, since it poses an implicit challenge to authoritarian priorities.
This is what makes tinkering an undesirable but tolerable bug in the geographic world. So long as material constraints limited the amount of tinkering going on, the threat to authority was also limited. Since the “means of production” were not free, either as in beer or as in speech, the anti-authoritarian threat of tinkering could be contained by restricting access to them.
With software eating the world, this is changing. Tinkering is becoming much more than a minority activity pursued by the lucky few with access to well-stocked garages and junkyards. It is becoming the driver of a global mass flourishing.
As Karl Marx himself realized, the end-state of industrial capitalism is in fact the condition where the means of production become increasingly available to all. Of course, it is already becoming clear that the result is neither the utopian collectivist workers’ paradise he hoped for, nor the utopian leisure society that John Maynard Keynes hoped for. Instead, it is a world where increasingly free people, working with increasingly free ideas and means of production, operate by their own priorities. Authoritarian leaders, used to relying on coercion and policed boundaries, find it increasingly hard to enforce their priorities on others in such a world.
Chandler’s principle of structure following strategy allows us to understand what is happening as a result. If non-free people, ideas and means of production result in a world of container-like organizations, free people, ideas and means of production result in a world of streams.
A stream is simply a life context formed by all the information flowing towards you via a set of trusted connections — to free people, ideas and resources — from multiple networks. If in a traditional organization nothing is free and everything has a defined role in some grand scheme, in a stream, everything tends steadily towards free as in both beer and speech. “Social” streams enabled by computing power in the cloud and on smartphones are not a compartmentalized location for a particular kind of activity. They provide an information and connection-rich context for all activity.
[Figure: streams]
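To make the structural claim concrete, here is a minimal sketch, in Python, of a stream as nothing more than a time-ordered merge of items arriving from several unrelated networks. The names (Item, Network, build_stream) and fields are hypothetical, invented purely for illustration; no real platform is this simple, and this is not how Facebook or Twitter actually work.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical types for illustration only; not any real platform's API.
@dataclass
class Item:
    network: str        # e.g. "family", "coworkers", "news-import"
    author: str
    content: str
    timestamp: datetime

@dataclass
class Network:
    name: str
    items: List[Item] = field(default_factory=list)

def build_stream(networks: List[Network]) -> List[Item]:
    """Merge several independent networks into one time-ordered stream.

    No single network owns the result: every item is viewed against a
    backdrop of items from unrelated contexts, which is where the
    unexpected juxtapositions described below come from.
    """
    merged = [item for net in networks for item in net.items]
    return sorted(merged, key=lambda item: item.timestamp, reverse=True)
```

A real social stream adds ranking, filtering and imports from structurally distinct networks, but the essential property is already visible in the merge step: connections are voluntary, nothing has a single official role, and no one container defines the context.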
Unlike organizations defined by boundaries, streams are what Acemoglu and Robinson call pluralist institutions. These are the opposite of extractive: they are open, inclusive and capable of creating wealth in non-zero-sum ways.
On Facebook for example, connections are made voluntarily (unlike reporting relationships on an org chart) and pictures or notes are usually shared freely (unlike copyrighted photos in a newspaper archive), with few restrictions on further sharing. Most of the capabilities of the platform are free-as-in-beer. What is less obvious is that they are also free-as-in-speech. Except at the extremes, Facebook does not attempt to dictate what kinds of groups you are allowed to form on the platform.
If the three most desirable things in a world defined by organizations are location, location and location,1 in the networked world they are connections, connections and connections.
Streams are not new in human culture. Before the Silk Road was a Darknet site, it was a stream of trade connecting Asia, Africa and Europe. Before there were lifestyle-designing free agents, hackers and modern tinkerers, there were the itinerant tinkers of early modernity. The collective invention settings we discussed in the last essay, such as the Cornish mining district in James Watt’s time and Silicon Valley today, are examples of early, restricted streams. The main streets of thriving major cities are also streams, where you might run into friends unexpectedly, learn about new events through posted flyers, and discover new restaurants or bars.
What is new is the idea of a digital stream created by software. While geography dominates physical streams, digital streams can dominate geography. Access to the stream of innovation that is Silicon Valley is limited by geographic factors such as cost of living and immigration barriers. Access to the stream of innovation that is Github is not. On a busy main street, you can only run into friends who also happen to be out that evening, but with Augmented Reality glasses on, you might also “run into” friends from around the world and share your physical experiences with them.
What makes streams ideal contexts for open-ended innovation through tinkering is that they constantly present unrelated people, ideas and resources in unexpected juxtapositions. This happens because streams emerge as the intersection of multiple networks. On Facebook, or even your personal email, you might be receiving updates from both family and coworkers. You might also be receiving imported updates from structurally distinct networks, such as Twitter or the distribution network of a news source. This means each new piece of information in a stream is viewed against a backdrop of overlapping, non-exclusive contexts, and a plurality of unrelated goals. At the same time, your own actions are being viewed by others in multiple unrelated ways.
As a result of such unexpected juxtapositions, you might “solve” problems you didn’t realize existed and do things that nobody realized were worth doing. For example, seeing a particular college friend and a particular coworker in the same stream might suggest a possibility for a high-value introduction: a small act of social bricolage. Because you are seen by many others from different perspectives, you might find people solving problems for you without any effort on your part. A common experience on Twitter, for example, is a Twitter-only friend tweeting an obscure but important news item, which you might otherwise have missed, just for your benefit.
When a stream is strengthened through such behaviors, every participating network is strengthened.
While Twitter and Facebook are the largest global digital streams today, there are thousands more across the Internet. Specialized ones such as Github and Stack Overflow cater to specific populations, but are open to anyone willing to learn. Newer ones such as Instagram and Whatsapp tap into the culture of younger populations. Reddit has emerged as an unusual venue for keeping up with science by interacting with actual working scientists. The developers of every agile software product in perpetual beta inhabit a stream of unexpected uses discovered by tinkering users. Slack turns the internal life of a corporation into a stream.
Streams are not restricted to humans. Twitter already has a vast population of interesting bots, ranging from House of Coates (an account that is updated by a smart house) to space probes and even sharks tagged with transmitters by researchers.2 Facebook offers pages that allow you to ‘like’ and follow movies and books.
By contrast, when you are sitting in a traditional office, working with a laptop configured exclusively for work use by an IT department, you receive updates only from one context, and can only view them against the backdrop of a single, exclusive and totalizing context. Despite the modernity of the tools deployed, the architecture of information is not very different from the paperware world. If information from other contexts leaks in, it is generally treated as a containment breach: a cause for disciplinary action in the most old-fashioned businesses. People you meet have pre-determined relationships with you, as defined by the organization chart. If you relate to a coworker in more than one way (as both a team member and a tennis buddy), that weakens the authority of the organization. The same is true of resources and ideas. Every resource is committed to a specific “official” function, and every idea is viewed from a fixed default perspective and has a fixed “official” interpretation: the organization’s “party line” or “policy.”
This has a radical consequence. When organizations work well and there are no streams, we view reality in what behavioral psychologists call functionally fixed3 ways: people, ideas and things have fixed, single meanings. This makes them less capable of solving new problems in creative ways. In a dystopian stream-free world, the most valuable places are the innermost sanctums: these are typically the oldest organizations, most insulated from new information. But they are also the locus of the most wealth, and offer the most freedom for occupants. In China, for instance, the innermost recesses of the Communist Party are still the best place to be. In a Fortune 500 company, the best place to be is still the senior executive floor.
When streams work well on the other hand, reality becomes increasingly intertwingled (a portmanteau of intertwined and tangled), as Ted Nelson evocatively labeled the phenomenon. People, ideas and things can have multiple, fluid meanings depending on what else appears in juxtaposition with them. Creative possibilities rapidly multiply, with every new network feeding into the stream. The most interesting place to be is usually the very edge, rather than the innermost sanctums. In the United States, being a young and talented person in Silicon Valley can be more valuable and interesting than being a senior staffer in the White House. Being the founder of the fastest growing startup may offer more actual leverage than being President of the United States.
We instinctively understand the difference between the two kinds of context. In an organization, if conflicting realities leak in, we view them as distractions or interruptions, and react by trying to seal them out better. In a stream, if things get too homogeneous and non-pluralistic, we complain that things are getting boring, predictable, and turning into an echo chamber. We react by trying to open things up, so that more unexpected things can happen.
What we do not understand as instinctively is that streams are problem-solving and wealth-creation engines. We view streams as zones of play and entertainment, through the lens of the geographic-dualist assumption that play cannot also be work.
In our Tale of Two Computers, the networked world will become firmly established as the dominant planetary computer when this idea becomes instinctive, and work and play become impossible to tell apart.
The first sustainable socioeconomic order of the networked world is just beginning to emerge, and the experience of being part of a system that is growing smarter at an exponential rate is deeply unsettling to pastoralists and immensely exciting to Prometheans.
[Figure: trashed planet]
Our geographic-world intuitions and our experience of the authoritarian institutions of the twentieth century lead us to expect that any larger system we are part of will either plateau into some sort of impersonal, bureaucratic stupidity, or turn “evil” somehow and oppress us.
The first kind of apocalyptic expectation is at the heart of movies like Idiocracy and Wall-E, set in trashed futures inhabited by a degenerate humanity that has irreversibly destroyed nature.
The second kind is the fear behind the idea of the Singularity: the rise of a self-improving systemic intelligence that might oppress us. Popular literal-minded misunderstandings of the concept, rooted in digital dualism, result in movies such as Terminator. These replace the fundamental humans-against-nature conflict of the geographic world with an imagined humans-against-machines conflict of the future. As a result, believers in such dualist singularities, rather ironically for extreme technologists, are reduced to fearfully awaiting the arrival of a God-like intelligence with fingers crossed, hoping it will be benevolent.
Both fears are little more than technological obscurantism. They are motivated by a yearning for the comforting certainties of the geographic world, with its clear boundaries, cohesive identities, and idealized heavens and hells.
Neither is a meaningful fear. The networked world blurs the distinction between wealth and waste. This undermines the first fear. The serendipity of the networked world depends on free people, ideas and capabilities combining in unexpected ways: “Skynet” cannot be smarter than humans unless the humans within it are free. This undermines the second fear.
To the extent that these fears are justified at all, they reflect the terminal trajectory of the geographic world, not the early trajectory of the networked world.
An observation due to Arthur C. Clarke offers a way to understand this second trajectory: any sufficiently advanced technology is indistinguishable from magic. The networked world evolves so rapidly through innovation, it seems like a frontier of endless magic.
Clarke’s observation has inspired a number of snowclones that shed further light on where we might be headed. The first, due to Bruce Sterling, is that any sufficiently advanced civilization is indistinguishable from its own garbage. The second, due to futurist Karl Schroeder,1 is that any sufficiently advanced civilization is indistinguishable from nature.
To these we can add one from social media theorist Seb Paquet, which captures the moral we drew from our Tale of Two Computers: any sufficiently advanced kind of work is indistinguishable from play.
Putting these ideas together, we are messily slouching towards a non-pastoral utopia on an asymptotic trajectory where reality gradually blurs into magic, waste into wealth, technology into nature and work into play.
This is a world that is breaking smart, with Promethean vigor, from its own past, like the precocious teenagers who are leading the charge. In broad strokes, this is what we mean by software eating the world.
For Prometheans, the challenge is to explore how to navigate and live in this world. A growing non-geographic-dualist understanding of it is leading to a network culture view of the human condition. If the networked world is a planet-sized distributed computer, network culture is its operating system.
Our task is like Deep Thought’s task when it began constructing its own successor: to develop an appreciation for the “merest operational parameters” of the new planet-sized computer to which we are migrating all our civilizational software and data.