Dreams applied to machine learning (one model that "hallucinates" and predicts the future, another that plays inside that predicted world and learns from it).
A single neuron in an LSTM that captures overall sentiment, even though that was not at all one of the original objectives.
In 2007, right before the first iPhone launched, I asked Steve Jobs the obvious question: The design of the iPhone was based on discarding every physical interface element except for a touchscreen. Would users be willing to give up the then-dominant physical keypads for a soft keyboard?
His answer was brusque: “They’ll learn.”
Steve turned out to be right. Today, touchscreens are ubiquitous and seem normal, and other interfaces are emerging. An entire generation is now coming of age with a completely different tactile relationship to information, validating all over again Marshall McLuhan’s observation that “the medium is the message”.
A great deal of product development is based on the assumption that products must adapt to unchanging human needs or risk being rejected. Yet, time and again, people adapt in unpredictable ways to get the most out of new tech. Creative people tinker to figure out the most interesting applications, others build on those, and entire industries are reshaped.
People change, then forget that they changed, and act as though they always behaved a certain way and could never change again. Because of this, unexpected changes in human behavior are often dismissed as regressive rather than as potentially intelligent adaptations.
But change happens anyway. “Software is eating the world” is the most recent historic transformation of this sort.
In 2014, a few of us invited Venkatesh Rao to spend the year at Andreessen Horowitz as a consultant to explore the nature of such historic tech transformations. In particular, we set out to answer the question: Between both the breathless and despairing extremes of viewing the future, could an intellectually rigorous case be made for pragmatic optimism?
As this set of essays argues — many of them inspired by a series of intensive conversations Venkat and I had — there is indeed such a case, and it follows naturally from the basic premise that people can and do change. To “break smart” is to adapt intelligently to new technological possibilities.
With his technological background, satirical eye, and gift for deep and different takes (as anyone who follows his Ribbonfarm blog knows!), there is perhaps nobody better suited than Venkat for telling a story of the future as it breaks smart from the past.
Whether you’re a high school kid figuring out a career or a CEO attempting to navigate the new economy, Breaking Smart should be on your go-to list of resources for thinking about the future, even as you are busy trying to shape it.
Something momentous happened around the year 2000: a major new soft technology came of age. After written language and money, software is only the third major soft technology to appear in human civilization. Fifteen years into the age of software, we are still struggling to understand exactly what has happened. Marc Andreessen’s now-familiar line, “software is eating the world,” hints at the significance, but we are only just beginning to figure out how to think about the world in which we find ourselves.
Only a handful of general-purpose technologies – electricity, steam power, precision clocks, written language, token currencies, iron metallurgy and agriculture among them – have impacted our world in the sort of deeply transformative way that deserves the description eating. And only two of these, written language and money, were soft technologies: seemingly ephemeral, but capable of being embodied in a variety of specific physical forms. Software has the same relationship to any specific sort of computing hardware as money does to coins or credit cards or writing to clay tablets and paper books.
But only since about 2000 has software acquired the sort of unbridled power, independent of hardware specifics, that it possesses today. For the first half century of modern computing after World War II, hardware was the driving force. The industrial world mostly consumed software to meet existing needs, such as tracking inventory and payroll, rather than being consumed by it. Serious technologists largely focused on solving the clear and present problems of the industrial age rather than exploring the possibilities of computing proper.
Sometime around the dot-com crash of 2000, though, the nature of software, and its relationship with hardware, underwent a shift. It was a shift marked by accelerating growth in the software economy and a peaking in the relative prominence of hardware. The shift happened within the information technology industry first, and then began to spread across the rest of the economy.
But the economic numbers only hint at the profundity of the resulting societal impact. As a simple example, a 14-year-old today (too young to show up in labor statistics) can learn programming, contribute significantly to open-source projects, and become a talented professional-grade programmer before age 18. This is breaking smart: an economic actor using early mastery of emerging technological leverage — in this case a young individual using software leverage — to wield disproportionate influence on the emerging future.
Only a tiny fraction of this enormously valuable activity — the cost of a laptop and an Internet connection — would show up in standard economic metrics. Based on visible economic impact alone, the effects of such activity might even show up as a negative, in the form of technology-driven deflation. But the hidden economic significance of such an invisible story is at least comparable to that of an 18-year-old paying $100,000 over four years to acquire a traditional college degree. In the most dramatic cases, it can be as high as the value of an entire industry. The music industry is an example: a product created by a teenager, Shawn Fanning’s Napster, triggered a cascade of innovation whose primary visible impact has been the vertiginous decline of big record labels, but whose hidden impact includes an explosion in independent music production and rapid growth in the live-music sector.
Software eating the world is a story of the seen and the unseen: small, measurable effects that seem underwhelming or even negative, and large invisible and positive effects that are easy to miss, unless you know where to look.
Today, the significance of the unseen story is beginning to be widely appreciated. But as recently as fifteen years ago, when the main act was getting underway, even veteran technologists were being blindsided by the subtlety of the transition to software-first computing.
Perhaps the subtlest element had to do with Moore’s Law, the famous observation by Intel co-founder Gordon Moore, first made in 1965 and later refined, that the density with which transistors can be packed into a silicon chip doubles roughly every two years. By 2000, even as semiconductor manufacturing firms began running into the fundamental limits of Moore’s Law, chip designers and device manufacturers began to figure out how to use Moore’s Law to drive down the cost and power consumption of processors rather than driving up raw performance. The results were dramatic: low-cost, low-power mobile devices, such as smartphones, began to proliferate, vastly expanding the range of what we think of as computers. Coupled with reliable and cheap cloud computing infrastructure and mobile broadband, the result was a radical increase in technological potential. Computing could, and did, become vastly more accessible, to many more people in every country on the planet, at radically lower cost and expertise levels.
One result of this increased potential was that technologists began to grope towards a collective vision commonly called the Internet of Things. It is a vision based on the prospect of processors becoming so cheap, miniaturized and low-powered that they can be embedded, along with power sources, sensors and actuators, in just about anything, from cars and light bulbs to clothing and pills. Estimates of the economic potential of the Internet of Things – of putting a chip and software into every physical item on Earth – vary from $2.7 trillion to over $14 trillion: comparable to the entire GDP of the United States today.
By 2010, it had become clear that given connectivity to nearly limitless cloud computing power and advances in battery technologies, programming was no longer something only a trained engineer could do to a box connected to a screen and a keyboard. It was something even a teenager could do, to almost anything.
The rise of ridesharing illustrates the process particularly well.
Only a few years ago, services like Uber and Lyft seemed like minor enhancements to the process of procuring and paying for cab rides. Slowly, it became obvious that ridesharing was eliminating the role of human dispatchers and lowering the level of expertise required of drivers. As data accumulated through GPS tracking and ratings mechanisms, it further became clear that trust and safety could increasingly be underwritten by data instead of brand promises and regulation. This made it possible to dramatically expand driver supply and lower ride costs by using underutilized vehicles already on the roads.
As the ridesharing sector took root and grew in city after city, second-order effects began to kick in. The increased convenience enabled many more urban dwellers to adopt carless lifestyles. Increasing supply lowered costs and increased accessibility for people previously limited to inconvenient public transportation. And as the idea of the carless lifestyle began to spread, urban planners began to realize that century-old trends like suburbanization, driven in part by car ownership, could no longer be taken for granted.
The ridesharing future we are seeing emerge now is even more dramatic: the higher utilization of cars leads to lower demand for cars, and frees up resources for other kinds of consumption. Individual lifestyle costs are being lowered and insurance models are being reimagined. The future of road networks must now be reconsidered in light of greener and more efficient use of both vehicles and roads.
Meanwhile, the emerging software infrastructure created by ridesharing is starting to have a cascading impact on businesses, such as delivery services, that rely on urban transportation and logistics systems. And finally, by proving many key component technologies, the rideshare industry is paving the way for the next major development: driverless cars.
These developments herald a major change in our relationship to cars.
To traditionalists, particularly in the United States, the car is a motif for an entire way of life, and the smartphone just an accessory. To early adopters who have integrated ridesharing deeply into their lives, the smartphone is the lifestyle motif, and the car is the accessory. To generations of Americans, owning a car represented freedom. To the next generation, not owning a car will represent freedom.
And this dramatic reversal in our relationships to two important technologies – cars and smartphones – is being catalyzed by what was initially dismissed as “yet another trivial app.”
Similar impact patterns are unfolding in sector after sector. Prominent early examples include the publishing, education, cable television, aviation, postal mail and hotel sectors. The impact is more than economic. Every aspect of the global industrial social order is being transformed by the impact of software.
This has happened before of course: money and written language both transformed the world in similarly profound ways. Software, however, is more flexible and powerful than either.
Writing is very flexible: we can write with a finger on sand or with an electron beam on a pinhead. Money is even more flexible: anything from cigarettes in a prison to pepper and salt in the ancient world to modern fiat currencies can work. But software can increasingly go wherever writing and money can go, and beyond. Software can also eat both, and take them to places they cannot go on their own.
Partly as a consequence of how rarely soft, world-eating technologies erupt into human life, we have been systematically underestimating the magnitude of the forces being unleashed by software. While it might seem like software is constantly in the news, what we have already seen is dwarfed by what still remains unseen.
The effects of this widespread underestimation are dramatic. The opportunities presented by software are expanding, and the risks of being caught on the wrong side of the transformation are dramatically increasing. Those who have correctly calibrated the impact of software are winning. Those who have miscalibrated it are losing.
And the winners are not winning by small margins or temporarily either. Software-fueled victories in the past decade have tended to be overwhelming and irreversible faits accompli. And this appears to be true at all levels from individuals to businesses to nations. Even totalitarian dictatorships seem unable to resist the transformation indefinitely.
So to understand how software is eating the world, we have to ask why we have been systematically underestimating its impact, and how we can recalibrate our expectations for the future.
There are four major reasons we underestimate the increasing power of software. Three of these reasons drove similar patterns of miscalibration in previous technological revolutions, but one is unique to software.
First, as futurist Roy Amara noted, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Technological change unfolds exponentially, like compound interest, and we humans seem wired to think about exponential phenomena in flawed ways. In the case of software, we expected too much too soon from 1995 to 2000, leading to the crash. Now in 2015, many apparently silly ideas from 2000, such as home delivery of groceries ordered on the Internet, have become a mundane part of everyday life in many cities. But the element of surprise has dissipated, so we tend to expect too little, too far out, and are blindsided by revolutionary change in sector after sector: change that often looks trivial or banal on the surface, but turns out to have been profound once the dust settles.
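A little arithmetic makes Amara’s pattern concrete. The sketch below (an illustration, not from the essay; the doubling period and starting value are arbitrary assumptions) compares a compounding process against a linear extrapolation of its first few years of gains: the straight line overshoots in the short run and falls absurdly short in the long run.

```python
# Illustrative sketch: linear intuition vs. exponential (compound) growth.
# Assumes a capability that starts at 1.0 and doubles every 2 years.

def exponential(t, doubling_period=2.0):
    """Actual capability after t years of compound doubling."""
    return 2.0 ** (t / doubling_period)

def linear_guess(t, doubling_period=2.0):
    """Naive forecast: extrapolate the first period's absolute gain in a straight line."""
    first_gain = exponential(doubling_period) - 1.0  # gain over the first period (= 1.0)
    return 1.0 + first_gain * (t / doubling_period)

for years in (1, 2, 10, 20):
    print(f"{years:>2} yr: actual {exponential(years):7.1f}x, linear guess {linear_guess(years):5.1f}x")
```

At one year the linear guess (1.5x) actually exceeds the real curve (about 1.4x), which is the short-run overestimate; at twenty years the real curve reaches 1024x against a linear guess of 11x, which is the long-run underestimate.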
Second, we have shifted gears from what economic historian Carlota Perez calls the installation phase of the software revolution, focused on basic infrastructure such as operating systems and networking protocols, to a deployment phase focused on consumer applications such as social networks, ridesharing and ebooks. In her landmark study of the history of technology, Perez demonstrates that the shift from installation to deployment phase for every major technology is marked by a chaotic transitional phase of wars, financial scandals and deep anxieties about civilizational collapse. One consequence of the chaos is that attention is absorbed by transient crises in economic, political and military affairs, and the apocalyptic fears and utopian dreams they provoke. As a result, momentous but quiet change passes unnoticed.
Third, a great deal of the impact of software today appears in a disguised form. The genomics and nanotechnology sectors appear to be rooted in biology and materials science. The “maker” movement around 3D printing and drones appears to be about manufacturing and hardware. Dig a little deeper though, and you invariably find that the action is being driven by possibilities opened up by software more than fundamental new discoveries in those physical fields. The crashing cost of genome sequencing is primarily due to computing, with innovations in wet chemistry playing a secondary role. Financial innovations leading to cheaper insurance and credit are software innovations in disguise. The Nest thermostat achieves energy savings not by exploiting new discoveries in thermodynamics, but by using machine learning algorithms in a creative way. The potential of this software-driven model is what prompted Google, a software company, to pay $3.2B to acquire Nest: a company that on the surface appeared to have merely invented a slightly better mousetrap.
These three reasons for underestimating the power of software had counterparts in previous technology revolutions. The railroad revolution of the nineteenth century also saw a transitional period marked by systematically flawed expectations, a bloody civil war in the United States, and extensive patterns of disguised change (such as the rise of urban living, grocery store chains, and meat consumption) whose root cause was cheap rail transport.
The fourth reason we underestimate software, however, is a unique one: it is a revolution that is being led, in large measure, by brash young kids rather than sober adults.
This is perhaps the single most important thing to understand about the revolution that we have labeled software eating the world: it is being led by young people, and proceeding largely without adult supervision (though with many adults participating). This has unexpected consequences.
As in most periods in history, older generations today run or control all key institutions worldwide. They are better organized and politically more powerful. In the United States for example, the AARP is perhaps the single most influential organization in politics. Within the current structure of the global economy, older generations can, and do, borrow unconditionally from the future at the expense of the young and the yet-to-be-born.
But unlike most periods in history, young people today do not have to either “wait their turn” or directly confront a social order that is systematically stacked against them. Operating at the margins, guided by a hacker ethos — a problem-solving sensibility based on rapid trial-and-error and creative improvisation — they are able to use software leverage and loose digital forms of organization to create new economic, social and political wealth. In the process, young people are indirectly disrupting politics and economics and creating a new parallel social order. Instead of vying for control of venerable institutions that have already weathered several generational wars, young people are creating new institutions based on the new software and new wealth. These improvised but highly effective institutions repeatedly emerge out of nowhere, and begin accumulating political and economic power. Most importantly, they are relatively invisible. Compared to the visible power of youth counterculture in the 1960s for instance, today’s youth culture, built around messaging apps and photo-sharing, does not seem like a political force to reckon with. This culture also has a decidedly commercial rather than ideological character, as a New York Times writer (rather wistfully) noted in a 2011 piece appropriately titled Generation Sell. Yet, today’s youth culture is arguably more powerful as a result, representing as it does what Jane Jacobs called the “commerce syndrome” of values, rooted in pluralistic economic pragmatism, rather than the opposed “guardian syndrome” of values, rooted in exclusionary and authoritarian political ideologies.
Chris Dixon captured this guerrilla pattern of the ongoing shift in political power with a succinct observation: what the smartest people do on the weekend is what everyone else will do during the week in ten years.
The result is strange: what in past eras would have been a classic situation of generational conflict based on political confrontation, is instead playing out as an economy-wide technological disruption involving surprisingly little direct political confrontation. Movements such as #Occupy pale in comparison to their 1960s counterparts, and more importantly, in comparison to contemporary youth-driven economic activity.
This does not mean, of course, that there are no political consequences. Software-driven transformations directly disrupt the middle-class life script, upon which the entire industrial social order is based. In its typical aspirational form, the traditional script is based on 12 years of regimented industrial schooling, an additional 4 years devoted to economic specialization, lifetime employment with predictable seniority-based promotions, and middle-class lifestyles. Though this script began to unravel as early as the 1970s, even for the minority (white, male, straight, abled, native-born) who actually enjoyed it, the social order of our world is still based on it. Instead of software, the traditional script runs on what we might call paperware: bureaucratic processes constructed from the older soft technologies of writing and money. Instead of the hacker ethos of flexible and creative improvisation, it is based on the credentialist ethos of degrees, certifications, licenses and regulations. Instead of being based on achieving financial autonomy early, it is based on taking on significant debt (for college and home ownership) early.
It is important to note though, that this social order based on credentialism and paperware worked reasonably well for almost a century between approximately 1870 and 1970, and created a great deal of new wealth and prosperity. Despite its stifling effects on individualism, creativity and risk-taking, it offered its members a broader range of opportunities and more security than the narrow agrarian provincialism it supplanted. For all its shortcomings, lifetime employment in a large corporation like General Motors, with significantly higher standards of living, was a great improvement over pre-industrial rural life.
But by the 1970s, industrialization had succeeded so wildly that it had begun to undermine its own fundamental premises of interchangeability in products, parts and humans. As economists Jeremy Greenwood and Mehmet Yorukoglu argue in a provocative paper titled 1974, that year arguably marked the end of the industrial age and the beginning of the information age. Computer-aided industrial automation was making ever-greater scale possible at ever-lower costs. At the same time, variety and uniqueness in products and services were becoming increasingly valuable to consumers in the developed world. Global competition, especially from Japan and Germany, began to directly threaten American industrial leadership. This began to drive product differentiation, a challenge that demanded originality rather than conformity from workers. Industry structures that had taken shape in the era of mass-produced products, such as Ford’s famous black Model T, were redefined to serve the demand for increasing variety. The result was arguably a peaking in all aspects of the industrial social order based on mass production and interchangeable workers roughly around 1974, a phenomenon Balaji Srinivasan has dubbed peak centralization.
One way to understand the shift from credentialist to hacker modes of social organization, via young people acquiring technological leverage, is through the mythological tale of Prometheus stealing fire from the heavens for human use.
The legend of Prometheus has been used as a metaphor for technological progress at least since Mary Shelley’s Frankenstein: A Modern Prometheus. Technologies capable of eating the world typically have a Promethean character: they emerge within a mature social order (a metaphoric “heaven” that is the preserve of older elites), but their true potential is unleashed by an emerging one (a metaphoric “earth” comprising creative marginal cultures, in particular youth cultures), which gains relative power as a result. Software as a Promethean technology emerged in the heart of the industrial social order, at companies such as AT&T, IBM and Xerox, universities such as MIT and Stanford, and government agencies such as DARPA and CERN. But its Promethean character was unleashed, starting with the early hacker movement, on the open Internet and through Silicon-Valley style startups.
As a result of a Promethean technology being unleashed, younger and older face a similar dilemma: should I abandon some of my investments in the industrial social order and join the dynamic new social order, or hold on to the status quo as long as possible?
The decision is obviously easier if you are younger, with much less to lose. But many who are young still choose the apparent safety of the credentialist scripts of their parents. These are what David Brooks called Organization Kids (after William Whyte’s 1956 classic, The Organization Man): those who bet (or allow their “Tiger” parents to bet on their behalf) on the industrial social order. If you are an adult over 30, especially one encumbered with significant family obligations or debt, the decision is harder.
Those with a Promethean mindset and an aggressive approach to pursuing a new path can break out of the credentialist life script at any age. Those who are unwilling or unable to do so are holding on to it more tenaciously than ever.
Young or old, those who are unable to adopt the Promethean mindset end up defaulting to what we call a pastoral mindset: one marked by yearning for lost or unattained utopias. Today many still yearn, for instance, for an updated version of a romanticized 1950s American middle-class life, featuring flying cars and jetpacks.
How and why you should choose the Promethean option, despite its disorienting uncertainties and challenges, is the overarching theme of Season 1. It is a choice we call breaking smart, and it is available to almost everybody in the developed world, and a rapidly growing number of people in the newly-connected developing world.
These individual choices matter.
As scholars such as Daron Acemoglu, James Robinson and Joseph Tainter have argued, it is the nature of human problem-solving institutions, rather than the nature of the problems themselves, that determines whether societies fail or succeed. Breaking smart at the level of individuals is what leads to organizations and nations breaking smart, which in turn leads to societies succeeding or failing.
Today, the future depends on increasing numbers of people choosing the Promethean option. Fortunately, that is precisely what is happening.
In this season of Breaking Smart, I will not attempt to predict the what and when of the future. In fact, a core element of the hacker ethos is the belief that being open to possibilities and embracing uncertainty is necessary for the actual future to unfold in positive ways. Or as computing pioneer Alan Kay put it, the best way to predict the future is to invent it.
And this is precisely what tens of thousands of small teams — small enough to be fed by no more than two pizzas, by a rule of thumb made famous by Amazon founder Jeff Bezos — are doing across the world today.
Prediction as a foundation for facing the future involves risks that go beyond simply getting it wrong. The bigger risk is getting attached to a particular what and when, a specific vision of a paradise to be sought, preserved or reclaimed. This is often a serious philosophical error — to which pastoralist mindsets are particularly prone — that seeks to limit the future.
But while I will avoid dwelling too much on the what and when, I will unabashedly advocate for a particular answer to how. Thanks to virtuous cycles already gaining in power, I believe almost all effective responses to the problems and opportunities of the coming decades will emerge out of the hacker ethos, despite its apparent peripheral role today. The credentialist ethos of extensive planning and scripting towards deterministic futures will play a minor supporting role at best. Those who adopt a Promethean mindset and break smart will play an expanding role in shaping the future. Those who adopt a pastoral mindset and retreat towards tradition will play a diminishing role, in the shrinking number of economic sectors where credentialism is still the more appropriate model.
The nature of problem-solving in the hacker mode, based on trial-and-error, iterative improvement, testing and adaptation (both automated and human-driven) allows us to identify four characteristics of how the future will emerge.
First, despite current pessimism about the continued global leadership of the United States, the US remains the single largest culture that embodies the pragmatic hacker ethos, nowhere more so than in Silicon Valley. The United States in general, and Silicon Valley in particular, will therefore continue to serve as the global exemplar of Promethean technology-driven change. And as virtual collaboration technologies improve, the Silicon Valley economic culture will increasingly become the global economic culture.
Second, the future will unfold through very small groups having very large impacts. One piece of wisdom in Silicon Valley today is that the core of the best software is nearly always written by teams of fewer than a dozen people, not by huge committee-driven development teams. This means increasing well-being for all will be achieved through small two-pizza teams beating large ones. Scale will increasingly be achieved via loosely governed ecosystems of additional participants creating wealth in ways that are hard to track using traditional economic measures. Instead of armies of Organization Men and Women employed within large corporations, and Organization Kids marching in at one end and retirees marching out at the other, the world of work will be far more diverse.
Third, the future will unfold through a gradual and continuous improvement of well-being and quality of life across the world, not through sudden emergence of a utopian software-enabled world (or sudden collapse into a dystopian world). The process will be one of fits and starts, toys and experiments, bugginess and brokenness. But the overall trend will be upwards, towards increasing prosperity for all.
Fourth, the future will unfold through rapid declines in the costs of solutions to problems, including in heavily regulated sectors historically resistant to cost-saving innovations, such as healthcare and higher education. In improvements wrought by software, poor and expensive solutions have generally been replaced by superior and cheaper (often free) solutions, and these substitution effects will accelerate.
Putting these four characteristics together, we get a picture of messy, emergent progress that economist Bradford DeLong calls “slouching towards utopia”: a condition of gradually increasing quality of life, available at gradually declining cost, to a gradually expanding portion of the global population.
A big implication is immediately clear: the asymptotic condition represents a consumer utopia. As consumers, we will enjoy far more for far less. This means that the biggest unknown today is our future as producers, which brings us to what many view as the central question today: the future of work.
The gist of a robust answer, which we will explore in Understanding Elite Discontent, was anticipated by John Maynard Keynes as far back as 1930, though he did not like the implications: the majority of the population will be engaged in creating and satisfying each other’s new needs in ways that even the most prescient of today’s visionaries will fail to anticipate.
While we cannot predict precisely what workers of the future will be doing — what future wants and needs workers will be satisfying — we can predict some things about how they will be doing it. Work will take on an experimental, trial-and-error character, and will take place in an environment of rich feedback, self-correction, adaptation, ongoing improvement, and continuous learning. The social order surrounding work will be a much more fluid descendant of today’s secure but stifling paycheck world on the one hand, and liberating but precarious world of free agency and contingent labor on the other.
In other words, the hacker ethos will go global and the workforce at large will break smart. As the hacker ethos spreads, we will witness what economist Edmund Phelps calls a mass flourishing2 — a state of the economy where work will be challenging and therefore fulfilling. Unchallenging, predictable work will become the preserve of machines.
Previous historical periods of mass flourishing, such as the early industrial revolution, were short-lived, and gave way, after a few decades, to societies based on a new middle class majority built around predictable patterns of work and life. This time around, the state of mass flourishing will be a sustained one: a slouching towards a consumer and producer utopia.
If this vision seems overly dramatic, consider once again the comparison to other soft technologies: software is perhaps the most imagination-expanding technology humans have invented since writing and money, and possibly more powerful than either. To operate on the assumption that it will transform the world at least as dramatically, far from being wild-eyed optimism, is sober realism.
At the heart of the historical development of computing is the age-old philosophical impasse between purist and pragmatist approaches to technology, which is particularly pronounced in software due to its seeming near-Platonic ineffability. One way to understand the distinction is through a dinnerware analogy.
Purist approaches, which rely on alluring visions, are like precious “good” china: mostly for display, and reserved exclusively for narrow uses like formal dinners. Damage through careless use can drastically lower the value of a piece. Broken or missing pieces must be replaced for the collection to retain its full display value. To purists, mixing and matching, either with humbler everyday tableware, or with different china patterns, is a kind of sacrilege.
The pragmatic approach, on the other hand, is like the unrestricted and frequent use of hardier everyday dinnerware. Damage through careless play does not affect value as much. Broken pieces may still be useful, and missing pieces need not be replaced if they are not actually needed. For pragmatists, mixing and matching available resources, far from being sacrilege, is something to be encouraged, especially for collaborations such as neighborhood potlucks.
In software, the difference between the two approaches is clearly illustrated by the history of the web browser.
On January 23, 1993, Marc Andreessen sent out a short email, announcing the release of Mosaic, the first graphical web browser:
07:21:17-0800 by firstname.lastname@example.org:
By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released:
Andreessen, along with Eric Bina, had quickly hacked the prototype together after becoming enthralled by his first view of the World Wide Web, which Tim Berners-Lee had unleashed from CERN in Geneva in 1991. Over the next year, several other colleagues joined the project, equally excited by the possibilities of the web. All were eager to share the excitement they had experienced, and to open up the future of the web to more people, especially non-technologists.
Over the course of the next few years, the graphical browser escaped the confines of the government-funded lab (the National Center for Supercomputing Applications at the University of Illinois) where it was born. As it matured at Netscape and later at Microsoft, Mozilla and Google, it steered the web in unexpected (and to some, undesirable) directions. The rapid evolution triggered both the legendary browser wars and passionate debates about the future of the Internet. Those late-nineties conflicts shaped the Internet of today.
To some visionary pioneers, such as Ted Nelson, who had been developing a purist hypertext paradigm called Xanadu for decades, the browser represented an undesirably messy direction for the evolution of the Internet. To pragmatists, the browser represented important software evolving as it should: in a pluralistic way, embodying many contending ideas, through what the Internet Engineering Task Force (IETF) calls “rough consensus and running code.”
While every major software project has drawn inspiration from both purists and pragmatists, the browser, like other pieces of software that became a mission critical part of the Internet, was primarily derived from the work and ideas of pragmatists: pioneers like Jon Postel, David Clark, Bob Kahn and Vint Cerf, who were instrumental in shaping the early development of the Internet through highly inclusive institutions like the IETF.
Today, the then-minority tradition of pragmatic hacking has matured into agile development, the dominant modern approach for making software. But the significance of this bit of history goes beyond the Internet. Increasingly, the pragmatic, agile approach to building things has spread to other kinds of engineering and beyond, to business and politics.
The nature of software has come to matter far beyond software. Agile philosophies are eating all kinds of building philosophies. To understand the nature of the world today, whether or not you are a technologist, it is crucial to understand agility and its roots in the conflict between pragmatic and purist approaches to computing.
The story of the browser was not exceptional. Until the early 1990s, almost all important software began life as purist architectural visions rather than pragmatic hands-on tinkering.
This was because early programming with punch-card mainframes was a highly constrained process. Iterative refinement was slow and expensive. Agility was a distant dream: programmers often had to wait weeks between runs. If your program didn’t work the first time, you might not have gotten another chance. Purist architectures, worked out on paper, helped minimize risk and maximize results under these conditions.
As a result, early programming was led by creative architects (often mathematicians and, with rare exceptions like Klari von Neumann and Grace Hopper, usually male) who worked out the structure of complex programs upfront, as completely as possible. The actual coding onto punch cards was done by large teams of hands-on programmers (mostly women1) with much lower autonomy, responsible for working out implementation details.
In short, purist architecture led the way and pragmatic hands-on hacking was effectively impossible. Trial-and-error was simply too risky and slow, which meant significant hands-on creativity had to be given up in favor of productivity.
With the development of smaller computers capable of interactive input, hands-on hacking became possible. At early hacker hubs, like MIT through the sixties, a high-autonomy culture of hands-on programming began to take root. Though the shift would not be widely recognized until after 2000, the creative part of programming was migrating from visioning to hands-on coding. Already by 1970, important and high-quality software, such as the Unix operating system, had emerged from the hacker culture growing at the minicomputer margins of industrial programming.
Through the seventies, a tenuous balance of power prevailed between purist architects and pragmatic hackers. With the introduction of networked personal computing in the eighties, however, hands-on hacking became the defining activity in programming. The culture of early hacker hubs like MIT and Bell Labs began to diffuse broadly through the programming world. The archetypal programmer had evolved: from interchangeable member of a large team, to the uniquely creative hacker, tinkering away at a personal computer, interacting with peers over networks. Instead of dutifully obeying an architect, the best programmers were devoting increasing amounts of creative energy to scratching personal itches.
The balance shifted decisively in favor of pragmatists with the founding of the IETF in 1986. In January of that year, a group of 21 researchers met in San Diego and planted the seeds of what would become the modern “government” of the Internet.
Despite its deceptively bureaucratic-sounding name, the IETF is like no standards organization in history, starting with the fact that it has no membership requirements and is open to all who want to participate. Its core philosophy can be found in an obscure document, The Tao of the IETF, little known outside the world of technology. It is a document that combines the informality and self-awareness of good blogs, the gravitas of a declaration of independence, and the aphoristic wisdom of Zen koans. This oft-quoted section illustrates its basic spirit:
In many ways, the IETF runs on the beliefs of its members. One of the “founding beliefs” is embodied in an early quote about the IETF from David Clark: “We reject kings, presidents and voting. We believe in rough consensus and running code”. Another early quote that has become a commonly-held belief in the IETF comes from Jon Postel: “Be conservative in what you send and liberal in what you accept”.
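Postel’s dictum is easiest to appreciate in running code. The following is a minimal, illustrative sketch (the function names and the header format are hypothetical, not drawn from any real protocol library): a receiver tolerates sloppy input by normalizing it, while the sender emits only one strict canonical form.

```python
def parse_header(line: str) -> tuple[str, str]:
    """Liberal in what we accept: tolerate odd casing and stray whitespace."""
    key, _, value = line.partition(":")
    return key.strip().lower(), value.strip()

def emit_header(key: str, value: str) -> str:
    """Conservative in what we send: exactly one canonical form."""
    return f"{key.strip().lower()}: {value.strip()}\r\n"

# A sloppy incoming line still parses cleanly...
assert parse_header("  Content-TYPE :  text/html ") == ("content-type", "text/html")
# ...but outgoing headers are always emitted in strict canonical form.
assert emit_header("Content-TYPE", " text/html") == "content-type: text/html\r\n"
```

The asymmetry is the point: interoperability survives messy peers because each side cleans up what it receives, while never adding to the mess itself.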
Though the IETF began as a gathering of government-funded researchers, it also represented a shift in the center of programming gravity from government labs to the commercial and open-source worlds. Over the nearly three decades since, it has evolved into the primary steward2 of the inclusive, pluralistic and egalitarian spirit of the Internet. In invisible ways, the IETF has shaped the broader economic and political dimensions of software eating the world.
The difference between purist and pragmatic approaches becomes clear when we compare the evolution of programming in the United States and Japan since the early eighties. Around 1982, Japan chose the purist path over the pragmatic path, by embarking on the ambitious “fifth-generation computing” effort. The highly corporatist government-led program, which caused much anxiety in America at the time, proved to be largely a dead-end. The American tradition on the other hand, outgrew its government-funded roots and gave rise to the modern Internet. Japan’s contemporary contributions to software, such as the hugely popular Ruby language designed by Yukihiro Matsumoto, belong squarely within the pragmatic hacker tradition.
I will argue that this pattern of development is not limited to computer science. Every field eaten by software experiences a migration of the creative part from visioning activities to hands-on activities, disrupting the social structure of all professions. Classical engineering fields like mechanical, civil and electrical engineering had already largely succumbed to hands-on pragmatic hacking by the nineties. Non-engineering fields like marketing are beginning to convert.
So the significance of pragmatic approaches prevailing over purist ones cannot be overstated: in the world of technology, it was the equivalent of the fall of the Berlin Wall.
While pragmatic hacking was on the rise, purist approaches entered a period of slow, painful and costly decline. Even as they grew in ambition, software projects based on purist architecture and teams of interchangeable programmers grew increasingly unmanageable. They began to exhibit the predictable failure patterns of industrial age models: massive cost-overruns, extended delays, failed launches, damaging unintended consequences, and broken, unusable systems.
These failure patterns are characteristic of what political scientist James Scott1 called authoritarian high modernism: a purist architectural aesthetic driven by authoritarian priorities. To authoritarian high-modernists, elements of the environment that do not conform to their purist design visions appear “illegible” and anxiety-provoking. As a result, they attempt to make the environment legible by forcibly removing illegible elements. Failures follow because important elements, critical to the functioning of the environment, get removed in the process.
Geometrically laid-out suburbs, for example, are legible and conform to platonic architectural visions, even if they are unlivable and economically stagnant. Slums on the other hand, appear illegible and are anxiety-provoking to planners, even when they are full of thriving economic life. As a result, authoritarian planners level slums and relocate residents into low-cost planned housing. In the process they often destroy economic and cultural vitality.
In software, what authoritarian architects find illegible and anxiety-provoking is the messy, unplanned tinkering hackers use to figure out creative solutions. When the pragmatic model first emerged in the sixties, authoritarian architects reacted like urban planners: by attempting to clear “code slums.” These attempts took the form of increasingly rigid documentation and control processes inherited from manufacturing. In the process, they often lost the hacker knowledge keeping the project alive.
In short, authoritarian high modernism is a kind of tunnel vision. Architects are prone to it in environments that are richer than one mind can comprehend. The urge to dictate and organize is destructive, because it leads architects to destroy the apparent chaos that is vital for success.
The flaws of authoritarian high modernism first became problematic in fields like forestry, urban planning and civil engineering. Failures of authoritarian development in these fields resulted in forests ravaged by disease, unlivable “planned” cities, crony capitalism and endemic corruption. By the 1960s, in the West, pioneering critics of authoritarian models, such as the urbanist Jane Jacobs and the environmentalist Rachel Carson, had begun to correctly diagnose the problem.
By the seventies, liberal democracies had begun to adopt the familiar, more democratic consultation processes of today. These processes were adopted in computing as well, just as the early mainframe era was giving way to the minicomputer era.
Unfortunately, while democratic processes did mitigate the problems, the result was often lowered development speed, increased cost and more invisible corruption. New stakeholders brought competing utopian visions and authoritarian tendencies to the party. The problem now became reconciling conflicting authoritarian visions. Worse, any remaining illegible realities, which were anxiety-provoking to all stakeholders, were now even more vulnerable to prejudice and elimination. As a result, complex technology projects often slowed to a costly, gridlocked crawl. Tyranny of the majority — expressed through autocratic representatives of particular powerful constituencies — drove whatever progress did occur. The biggest casualty was innovation, which by definition is driven by ideas that are illegible to all but a few: what Peter Thiel calls secrets — things entrepreneurs believe that nobody else does, which leads them to unpredictable breakthroughs.
The process was most clearly evident in fields like defense. In major liberal democracies, different branches of the military competed to influence the design of new weaponry, and politicians competed to create jobs in their constituencies. As a result, major projects spiraled out of control and failed in predictable ways: delayed, too expensive and technologically compromised. In the non-liberal-democratic world, the consequences were even worse. Authoritarian high modernism continued (and continues today in countries like Russia and North Korea), unchecked, wreaking avoidable havoc.
Software is no exception to this pathology. As high-profile failures like the launch of healthcare.gov2 show, “democratic” processes meant to mitigate risks tend to create stalled or gridlocked processes, compounding the problem.
Both in traditional engineering fields and in software, authoritarian high modernism leads to a Catch-22 situation: you either get a runaway train wreck due to too much unchecked authoritarianism, or a train that barely moves due to a gridlock of checks and balances.
Fortunately, agile software development manages to combine both decisive authority and pluralistic visions, and mitigate risks without slowing things to a crawl. The basic principles of agile development, articulated by a group of 17 programmers in 2001, in a document known as the Agile Manifesto, represented an evolution of the pragmatic philosophy first explicitly adopted by the IETF.
The cost of this agility is a seemingly anarchic pattern of progress. Agile development models catalyze illegible, collective patterns of creativity, weaken illusions of control, and resist being yoked to driving utopian visions. Adopting agile models leads individuals and organizations to gradually increase their tolerance for anxiety in the face of apparent chaos. As a result, agile models can get more agile over time.
Not only are agile models driving reform in software, they are also spreading to traditional domains where authoritarian high-modernism first emerged. Software is beginning to eat domains like forestry, urban planning and environmental protection. Open Geographic Information Systems (GIS) in forestry, open data initiatives in urban governance, and monitoring technologies in agriculture, all increase information availability while eliminating cumbersome paperwork processes. As we will see in upcoming essays, enhanced information availability and lowered friction can make any field hacker-friendly. Once a field becomes hacker-friendly, software begins to eat it. Development gathers momentum: the train can begin moving faster, without leading to train wrecks, resolving the Catch-22.
Today the shift from purist to pragmatist has progressed far enough that it is also reflected at the level of the economics of software development. In past decades, economic purists argued variously for idealized open-source, market-driven or government-led development of important projects. But all found themselves faced with an emerging reality that was too complex for any one economic ideology to govern. As a result, rough consensus and running economic mechanisms have prevailed over specific economic ideologies and gridlocked debates. Today, every available economic mechanism — market-based, governmental, nonprofit and even criminal — has been deployed at the software frontier. And the same economic pragmatism is spreading to software-eaten fields.
This is a natural consequence of the dramatic increase in both participation levels and ambitions in the software world. In 1943, only a small handful of people working on classified military projects had access to the earliest computers. Even in 1974, the year of Peak Centralization, only a small and privileged group had access to the early hacker-friendly minicomputers like the DEC PDP series. But by 1993, the PC revolution had nearly delivered on Bill Gates’ vision of a computer at every desk, at least in the developed world. And by 2000, laptops and Blackberries were already foreshadowing the world of today, with near-universal access to smartphones, and an exploding number of computers per person.
The IETF slogan of rough consensus and running code (RCRC) has emerged as the only workable doctrine for both technological development and associated economic models under these conditions.
As a result of pragmatism prevailing, a nearly ungovernable Promethean fire has been unleashed. Hundreds of thousands of software entrepreneurs are unleashing innovations on an unsuspecting world by the power vested in them by “nobody in particular,” and by any economic means necessary.
It is in the context of the anxiety-inducing chaos and complexity of a mass flourishing that we then ask: what exactly is software?
Software possesses an extremely strange property: it is possible to create high-value software products with effectively zero capital outlay. As Mozilla engineer Sam Penrose put it, software programming is labor that creates capital.
This characteristic makes software radically different from engineering materials like steel, and much closer to artistic media such as paint.1 As a consequence, engineer and engineering are somewhat inappropriate terms. It is something of a stretch to even think of software as a kind of engineering “material.” Though all computing requires a physical substrate, the trillions of tiny electrical charges within computer circuits, the physical embodiment of a running program, barely seem like matter.
The closest relative to this strange new medium is paper. But even paper is not as cheap or evanescent. Though we can appreciate the spirit of creative abundance with which industrial age poets tossed crumpled-up false starts into trash cans, a part of us registers the wastefulness. Paper almost qualifies as a medium for true creative abundance, but falls just short.
Software though, is a medium that not only can, but must be approached with an abundance mindset. Without a level of extensive trial-and-error and apparent waste that would bankrupt both traditional engineering and art, good software does not take shape. From the earliest days of interactive computing, when programmers chose to build games while more “serious” problems waited for computer time, to modern complaints about “trivial” apps (which often turn out to be revolutionary), scarcity-oriented thinkers have remained unable to grasp the essential nature of software for fifty years.
The difference has a simple cause: unsullied purist visions have value beyond anxiety-alleviation and planning. They are also a critical authoritarian marketing and signaling tool — like formal dinners featuring expensive china — for attracting and concentrating scarce resources in fields such as architecture. In an environment of abundance, there is much less need for visions to serve such a marketing purpose. They only need to provide a roughly correct sense of direction to those laboring at software development to create capital using whatever tools and ideas they bring to the party — like potluck participants improvising whatever resources are necessary to make dinner happen.
Translated to technical terms, the dinnerware analogy is at the heart of software engineering. Purist visions tend to arise when authoritarian architects attempt to concentrate and use scarce resources optimally, in ways they often sincerely believe to be best for all. By contrast, tinkering is focused on steady progress rather than optimal end-states that realize a totalizing vision. It is usually driven by individual interests and not obsessively concerned with grand and paternalistic “best for all” objectives. The result is that purist visions seem more comforting and aesthetically appealing on the outside, while pragmatic hacking looks messy and unfocused. At the same time, purist visions are much less open to new possibilities and bricolage, while pragmatic hacking is highly open to both.
Within the world of computing, the importance of abundance-oriented approaches was already recognized by the 1960s. With Moore’s Law kicking in, pioneering computer scientist Alan Kay codified the idea of abundance orientation with the observation that programmers ought to “waste transistors” in order to truly unleash the power of computing.
But even for young engineers starting out today, used to routinely renting cloudy container-loads2 of computers by the minute, the principle remains difficult to follow. Devoting skills and resources to playful tinkering still seems “wrong,” when there are obvious and serious problems desperately waiting for skilled attention. Like the protagonist in the movie Brewster’s Millions, who struggles to spend $30 million within thirty days in order to inherit $300 million, software engineers must unlearn habits born of scarcity before they can be productive in their medium.
The principle of rough consensus and running code is perhaps the essence of the abundance mindset in software.
If you are used to the collaboration processes of authoritarian organizations, the idea of rough consensus might conjure up the image of a somewhat informal committee meeting, but the similarity is superficial. Consensus in traditional organizations tends to be brokered by harmony-seeking individuals attuned to the needs of others, sensitive to constraints, and good at creating “alignment” among competing autocrats. This is a natural mode of operation when consensus is sought in order to deal with scarcity. Allocating limited resources is the typical purpose of such industrial-age consensus seeking. Under such conditions, compromise represents a spirit of sharing and civility. Unfortunately, it is also a recipe for gridlock when compromise is hard and creative breakthroughs become necessary.
By contrast, software development favors individuals with an autocratic streak, driven by an uncompromising sense of how things ought to be designed and built, which at first blush appears to contradict the idea of consensus.
Paradoxically, the IETF philosophy of eschewing “kings, presidents and voting” means that rough consensus evolves through strong-minded individuals either truly coming to an agreement, or splitting off to pursue their own dissenting ideas. Conflicts are not sorted out through compromises that leave everybody unhappy. Instead they are sorted out through the principle futurist Bob Sutton identified as critical for navigating uncertainty: strong views, weakly held.
Pragmatists, unlike the authoritarian high-modernist architects studied by James Scott, hold strong views on the basis of having contributed running code rather than abstract visions. But they also recognize others as autonomous peers, rather than as unquestioning subordinates or rivals. Faced with conflict, they are willing to work hard to persuade others, be persuaded themselves, or leave.
Rough consensus favors people who, in traditional organizations, would be considered disruptive and stubborn: these are exactly the people prone to “breaking smart.” In its most powerful form, rough consensus is about finding the most fertile directions in which to proceed rather than uncovering constraints. Constraints in software tend to be relatively few and obvious. Possibilities, however, tend to be intimidatingly vast. Resisting limiting visions, finding the most fertile direction, and allying with the right people become the primary challenges.
In a process reminiscent of the “rule of agreement” in improv theater, ideas that unleash the strongest flood of follow-on builds tend to be recognized as the most fertile and adopted as the consensus. Collaborators who spark the most intense creative chemistry tend to be recognized as the right ones. The consensus is rough because it is left as a sense of direction, instead of being worked out into a detailed, purist vision.
This general principle of fertility-seeking has been repeatedly rediscovered and articulated in a bewildering variety of specific forms. The statements have names such as the principle of least commitment (planning software), the end-to-end principle (network design), the procrastination principle (architecture), optionality (investing), paving the cowpaths (interface design), lazy evaluation (language design) and late binding (code execution). While the details, assumptions and scope of applicability of these different statements vary, they all amount to leaving the future as free and unconstrained as possible, by making as few commitments as possible in any given local context.
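Of the principles listed above, lazy evaluation is the easiest to see in actual running code. A minimal sketch in Python, using the language’s standard generator machinery (nothing here is hypothetical except the example itself): a generator commits to no computation until a consumer actually demands a value, leaving decisions about how much work to do, and when, to whoever comes later.

```python
import itertools

def naturals():
    """An unbounded stream: no upfront commitment to how many values
    will ever be needed, or what they will be used for."""
    n = 0
    while True:
        yield n
        n += 1

# Defining the stream does no work at all; computation happens
# only when a consumer pulls values, and only as many as it pulls.
first_five_squares = [n * n for n in itertools.islice(naturals(), 5)]
assert first_five_squares == [0, 1, 4, 9, 16]
```

The design choice mirrors the essay’s point: the producer makes as few commitments as possible in its local context, and authority over the eventual shape of the computation devolves to downstream consumers.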
The principle is in fact an expression of laissez-faire engineering ethics. Donald Knuth, another software pioneer, captured the ethical dimension with his version: premature optimization is the root of all evil. The principle is the deeper reason autonomy and creativity can migrate downstream to hands-on decision-making. Leaving more decisions for the future also leads to devolving authority to those who come later.
Such principles might seem dangerously playful and short-sighted, but under conditions of increasing abundance, with falling costs of failure, they turn out to be wise. It is generally smarter to assume that problems that seem difficult and important today might become trivial or be rendered moot in the future. Behaviors that would be short-sighted in the context of scarcity become far-sighted in the context of abundance.
The original design of the Mosaic browser, for instance, reflected the optimistic assumption that everybody would have high-bandwidth access to the Internet in the future, a statement that was not true at the time, but is now largely true in the developed world. Today, many financial technology entrepreneurs are building products based on the assumption that cryptocurrencies will be widely adopted and accepted. Underlying all such optimism about technology is an optimism about humans: a belief that those who come after us will be better informed and have more capabilities, and therefore able to make more creative decisions.
The consequences of this optimistic approach are radical. Traditional processes of consensus-seeking drive towards clarity in long-term visions but are usually fuzzy on immediate next steps. By contrast, rough consensus in software deliberately seeks ambiguity in long-term outcomes and extreme clarity in immediate next steps. It is a heuristic that helps correct the cognitive bias behind Amara’s Law. Clarity in next steps counteracts the tendency to overestimate what is possible in the short term, while comfort with ambiguity in visions counteracts the tendency to underestimate what is possible in the long term. At an ethical level, rough consensus is deeply anti-authoritarian, since it avoids constraining the freedoms of future stakeholders simply to allay present anxieties. The rejection of “voting” in the IETF model is a rejection of a false sense of egalitarianism, rather than a rejection of democratic principles.
In other words, true north in software is often the direction that combines ambiguity and evidence of fertility in the most alluring way: the direction of maximal interestingness.
The decade after the dot com crash of 2000 demonstrated the value of this principle clearly. Startups derided for prioritizing “growth in eyeballs” (an “interestingness” direction) rather than clear models of steady-state profitability (a self-limiting purist vision of an idealized business) were eventually proven right. Iconic “eyeball” based businesses, such as Google and Facebook, turned out to be highly profitable. Businesses which prematurely optimized their business model in response to revenue anxieties limited their own potential and choked off their own growth.
The great practical advantage of this heuristic is that the direction of maximal interestingness can be very rapidly updated to reflect new information, by evolving the rough consensus. The term pivot, introduced by Eric Ries as part of the Lean Startup framework, has recently gained popularity for such reorientation. A pivot allows the direction of development to change rapidly, without a detailed long-term plan. It is enough to figure out experimental next steps. This ability to reorient and adopt new mental models quickly (what military strategists call a fast transient4) is at the heart of agility.
The response to new information is exactly the reverse in authoritarian development models. Because such models are based on detailed purist visions that grow more complex over time, it becomes increasingly hard to incorporate new data. As a result, the typical response to new information is to label it an irrelevant distraction, reaffirm commitment to the original vision, and keep going. This is the runaway-train-wreck scenario. On the other hand, if the new information helps ideological opposition cohere within a democratic process, a competing purist vision can emerge. This leads to the stalled-train scenario.
The reason rough consensus avoids both these outcomes is that it is much easier to agree roughly on the most interesting direction than to either update a complex, detailed vision or bring two or more conflicting complex visions into harmony.
For this to work, an equally pragmatic implementation philosophy is necessary, one that is very different from the authoritarian high-modernist way, or as it is known in software engineering, the waterfall model (named for the way high-level purist plans flow unidirectionally towards low-level implementation work).
Not only does such a pragmatic implementation philosophy exist, it works so well that running code actually tends to outrun even the most uninhibited rough consensus process without turning into a train wreck. One illustration of this dynamic is that successful software tends to get both used and extended in ways that the original creators never anticipated – and are often pleasantly surprised by, and sometimes alarmed by. This is of course the well-known agile model. We will not get into the engineering details,5 but what matters are the consequences of using it.
The biggest consequence is this: in the waterfall model, execution usually lags vision, leading to a deficit-driven process. By contrast, in working agile processes, running code races ahead, leaving vision to catch up, creating a surplus-driven process.
Both kinds of gaps contain lurking unknowns, but of very different sorts. The surplus in the case of working agile processes is the source of many pleasant surprises: serendipity. The deficit in the case of waterfall models is the source of what William Boyd called zemblanity: “unpleasant unsurprises.”
In software, waterfall processes fail in predictable ways, like classic Greek tragedies. Agile processes, on the other hand, can lead to snowballing serendipity, getting luckier and luckier, and succeeding in unexpected ways. The reason is simple: waterfall plans constrain the freedom of future participants, leading them to resent and rebel against the grand plan in predictable ways. By contrast, agile models empower future participants in a project, catalyzing creativity and unpredictable new value.
The engineering term for the serendipitous, empowering gap between running code and governing vision has now made it into popular culture in the form of a much-misunderstood idea: perpetual beta.
When Google’s Gmail service finally exited beta status in July 2009, five years after it was launched, it already had over 30 million users. By then, it was the third largest free email provider after Yahoo and Hotmail, and was growing much faster than either.1 For most of its users, it had already become their primary personal email service.
The beta label on the logo, indicating experimental prototype status, had become such a running joke that when it was finally removed, the project team included a whimsical “back to beta” feature, which allowed users to revert to the old logo. That feature itself was part of a new section of the product called Gmail Labs: a collection of settings that allowed users to turn on experimental features. The idea of perpetual beta had morphed into permanent infrastructure within Gmail for continuous experimentation.
Today, this is standard practice: all modern web-based software includes scaffolding for extensive ongoing experimentation within the deployed production site or smartphone app backend (and beyond, through developer APIs2). Some of it is even visible to users. In addition to experimental features that allow users to stay ahead of the curve, many services also offer “classic” settings that allow them to stay behind the curve — for a while. The best products use perpetual beta as a way to lead their users towards richer, more empowered behaviors, instead of following them through customer-driven processes. Backward compatibility is limited to situations of pragmatic need, rather than being treated as a religious imperative.
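The experimentation scaffolding described above usually rests on a simple mechanism: deterministic user bucketing behind feature flags. The sketch below is a minimal, hypothetical illustration (the function names, experiment name, and rollout scheme are invented for this example, not taken from any real product):

```python
import hashlib

# A minimal feature-flag sketch: each user is deterministically bucketed
# into a cohort, so "experimental" and "classic" variants of a feature
# can coexist inside the same deployed product.

def bucket(user_id: str, experiment: str, cohorts: int = 100) -> int:
    """Deterministically map a user to one of `cohorts` buckets."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % cohorts

def is_enabled(user_id: str, experiment: str, rollout_percent: int) -> bool:
    """Enable the experimental variant for a fixed slice of users."""
    return bucket(user_id, experiment) < rollout_percent

# Usage: roll an experimental feature out to 10% of users, while
# everyone else keeps the "classic" behavior.
if is_enabled("user-42", "new-inbox-layout", rollout_percent=10):
    variant = "experimental"
else:
    variant = "classic"
```

Because the bucketing is deterministic, a given user sees a consistent variant across visits, and the rollout percentage can be raised gradually as hidden risks are squeezed out.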
The Gmail story contains an answer to the obvious question about agile models you might ask if you have only experienced waterfall models: How does anything ambitious get finished by groups of stubborn individuals heading in the foggiest possible direction of “maximal interestingness” with neither purist visions nor “customer needs” guiding them?
The answer is that it doesn’t get finished. But unlike in waterfall models, this does not necessarily mean the product is incomplete. It means the vision is perpetually incomplete and growing in unbounded ways, due to ongoing evolutionary experiments. When this process works well, what engineers call technical debt can get transformed into what we might call technical surplus.3 The parts of the product that lack satisfying design justifications represent the areas of rapid innovation. The gaps in the vision are sources of serendipitous good luck. (If you are a Gmail user, browsing the “Labs” section might lead you to some serendipitous discoveries: features you did not know you wanted might already exist unofficially).
The deeper significance of perpetual beta culture in technology often goes unnoticed: in the industrial age, engineering labs were impressive, enduring buildings inside which experimental products were created. In the digital age, engineering labs are experimental sections inside impressive, enduring products. Those who bemoan the gradual decline of famous engineering labs like AT&T Bell Labs and Xerox PARC often miss the rise of even more impressive labs inside major modern products and their developer ecosystems.
Perpetual beta is now such an entrenched idea that users expect good products to evolve rapidly and serendipitously, continuously challenging their capacity to learn and adapt. They accept occasional non-critical failures as a price worth paying. Just as the ubiquitous under construction signs on the early static websites of the 1990s gave way to dynamic websites that were effectively always “under construction,” software products too have acquired an open-ended evolutionary character.
Just as rough consensus drives ideation towards “maximal interestingness”, agile processes drive evolution towards the regimes of greatest operational uncertainty, where failures are most likely to occur. In well-run modern software processes, not only is the resulting chaos tolerated, it is actively invited. Changes are often deliberately made at seemingly the worst possible times. Intuit, a maker of tax software, has a history of making large numbers of changes and updates at the height of tax season.
Conditions that cause failure, instead of being cordoned off for avoidance in the future, are deliberately and systematically recreated and explored further. There are even automated systems designed to deliberately cause failures in production systems, such as Chaos Monkey, a system developed by Netflix to randomly take production servers offline, forcing the system to heal itself or die trying.
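The principle behind such tools can be illustrated with a toy sketch (this is not Netflix's actual implementation; the `Pool` class and its methods are hypothetical): randomly terminate servers in a pool, then verify that the system's self-healing logic restores full capacity.

```python
import random

# Toy sketch of chaos-style failure injection: randomly kill servers
# in a pool, then check that self-healing restores desired capacity.

class Pool:
    def __init__(self, size: int):
        self.servers = {f"srv-{i}": "up" for i in range(size)}

    def kill_random(self, rng: random.Random) -> str:
        """Take one live server offline at random."""
        victim = rng.choice([s for s, st in self.servers.items() if st == "up"])
        self.servers[victim] = "down"
        return victim

    def heal(self) -> None:
        """Self-healing: restart anything that is down."""
        for s, st in self.servers.items():
            if st == "down":
                self.servers[s] = "up"

    def healthy(self) -> int:
        return sum(1 for st in self.servers.values() if st == "up")

# Usage: inject failures continuously and check the recovery invariant.
rng = random.Random(0)
pool = Pool(size=5)
for _ in range(10):
    pool.kill_random(rng)
    pool.heal()
    assert pool.healthy() == 5  # capacity restored after each round
```

The point of running such loops against real infrastructure is that the recovery invariant is tested continuously, under live conditions, rather than assumed to hold.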
The glimpses of perpetual beta that users can see are dwarfed by unseen backstage experimentation.
This is neither perverse, nor masochistic: it is necessary to uncover hidden risks in experimental ideas early, and to quickly resolve gridlocks with data.
The origins of this curious philosophy lie in what is known as the release early, release often (RERO) principle, usually attributed to Linus Torvalds, the primary architect of the Linux operating system. The idea is exactly what it sounds like: releasing code as early as possible, and as frequently as possible while it is actively evolving.
What makes this possible in software is that most software failures do not have life-threatening consequences.4 As a result, it is usually faster and cheaper to learn from failure than to attempt to anticipate and accommodate it via detailed planning (which is why the RERO principle is often restated in terms of failure as fail fast).
So crucial is the RERO mindset today that many companies, such as Facebook and Etsy, insist on new hires contributing and deploying a minor change to mission-critical systems on their very first day. Companies that rely on waterfall processes, by contrast, often put new engineers through years of rotating assignments before trusting them with significant autonomy.
To appreciate just how counterintuitive the RERO principle is, and why it makes traditional engineers nervous, imagine a car manufacturer rushing to put every prototype into “experimental” mass production, with the intention of discovering issues through live car crashes. Or supervisors in a manufacturing plant randomly unplugging or even breaking machinery during peak demand periods. Even lean management models in manufacturing do not go this far. Due to their roots in scarcity, lean models at best mitigate the problems caused by waterfall thinking. Truly agile models, on the other hand, do more: they catalyze abundance.
Perhaps the most counter-intuitive consequence of the RERO principle is this: where engineers in other disciplines attempt to minimize the number of releases, software engineers today strive to maximize the frequency of releases. The industrial-age analogy here is the stuff of comedy science fiction: an intern launching a space mission just to ferry a single paper-clip to the crew of a space station.
This tendency makes no sense within waterfall models, but is a necessary feature of agile models. The only way for execution to track the changing direction of the rough consensus as it pivots is to increase the frequency of releases. Failed experiments can be abandoned earlier, with lower sunk costs. Successful ones can migrate into the product as fast as hidden risks can be squeezed out. As a result, a lightweight sense of direction — rough consensus — is enough. There is no need to navigate by an increasingly unattainable utopian vision.
Which raises an interesting question: what happens when there are irreconcilable differences of opinion that break the rough consensus?
If creating great software takes very little capital, copying great software takes even less. This means dissent can be resolved in an interesting way that is impossible in the world of atoms. Under appropriately liberal intellectual property regimes, individuals can simply take a copy of the software and continue developing it independently. In software, this is called forking. Efforts can also combine forces, a process known as merging. Unlike the superficially similar process of spin-offs and mergers in business, forking and merging in software can be non-zero sum.
Where democratic processes would lead to gridlock and stalled development, conflicts under rough consensus and running code and release early, release often processes lead to competing, divergent paths of development that explore many possible worlds in parallel.
This approach to conflict resolution is so radically unfamiliar1 that it took nearly three decades even for pragmatic hackers to recognize forking as something to be encouraged. Twenty-five years passed between the first use of the term “fork” in this sense (by Unix hacker Eric Allman in 1980) and the development of a tool that encouraged rather than discouraged it: git, developed by Linus Torvalds in 2005. Git is now the most widely used code management system in the world, and the basis for GitHub, the leading online code repository.
In software development, the model works so well that a nearly two-century-old industrial model of work is being abandoned for one built around highly open collaboration, promiscuous forking and opt-in staffing of projects.
The dynamics of the model are most clearly visible in certain modern programming contests, such as the regular MATLAB programming contests conducted by MathWorks.
Such events often allow contestants to frequently check their under-development code into a shared repository. In the early stages, such sharing allows for the rapid dissemination of the best design ideas through the contestant pool. Individuals effectively vote for the most promising ideas by appropriating them for their own designs, in effect forming temporary collaborations. Hoarding ideas or code tends to be counterproductive due to the likelihood that another contestant will stumble on the same idea, improve upon it in unexpected ways, or detect a flaw that allows it to “fail fast.” But in the later stages, the process creates tricky competitive conditions, where speed of execution beats quality of ideas. Not surprisingly, the winner is often a contestant who makes a minor, last-minute tweak to the best submitted solution, with seconds to spare.
Such contests — which exhibit in simplified forms the dynamics of the open-source community as well as practices inside leading companies — not only display the power of rough consensus and running code (RCRC) and RERO, they demonstrate why promiscuous forking and open sharing lead to better overall outcomes.
Software that thrives in such environments has a peculiar characteristic: what computer scientist Richard Gabriel described as worse is better.2 Working code that prioritizes visible simplicity, catalyzing effective collaboration and rapid experimentation, tends to spread rapidly and unpredictably. Overwrought code that prioritizes authoritarian, purist concerns such as formal correctness, consistency, and completeness tends to die out.
In the real world, teams form through self-selection around great code written by one or two linchpin programmers rather than contest challenges. Team members typically know each other at least casually, which means product teams tend to grow to a few dozen at most. Programmers who fail to integrate well typically leave in short order. If they cannot or do not leave, they are often explicitly told to do nothing and stay out of the way, and actively shunned and cut out of the loop if they persist.
While the precise size of an optimal team is debatable, Jeff Bezos’ two-pizza rule suggests that the number is no more than about a dozen.3
In stark contrast to the quality code developed by “worse is better” processes, software developed by teams of anonymous, interchangeable programmers, with bureaucratic top-down staffing, tends to be of terrible quality. Turning Gabriel’s phrase around, such software represents a “better is worse” outcome: utopian visions that fail predictably in implementation, if they ever progress beyond vaporware at all.
The IBM OS/2 project of the early nineties,4 conceived as a replacement for the then-dominant operating system, MS-DOS, provides a perfect illustration of “better is worse.” Each of the thousands of programmers involved was expected to design, write, debug, document, and support just 10 lines of code per day. Writing more than 10 lines was considered a sign of irresponsibility. Project estimates were arrived at by first estimating the number of lines of code in the finished project, dividing by the number of days allocated to the project, and then dividing by 10 to get the number of programmers to assign to the project. Needless to say, programmers were considered completely interchangeable. The nominal “planning” time required to complete a project could be arbitrarily halved at any time, by doubling the number of assigned engineers.5 At the same time, dozens of managers across the company could withhold approval and hold back a release, a process ominously called “nonconcurrence.”
“Worse is better” can be a significant culture shock to those used to industrial-era work processes. The most common complaint is that a few rapidly growing startups and open-source projects typically corner a huge share of the talent supply in a region at any given time, making it hard for other projects to grow. To add insult to injury, the process can at times seem to over-feed the most capricious and silly projects while starving projects that seem more important. This process of the best talent unpredictably abandoning other efforts and swarming a few opportunities is a highly unforgiving one. It creates a few exceptional winning products and vast numbers of failed ones, leaving those with strong authoritarian opinions about “good” and “bad” technology deeply dissatisfied.
But not only does the model work, it creates vast amounts of new wealth through both technology startups and open-source projects. Today, its underlying concepts like rough consensus, pivot, fast failure, perpetual beta, promiscuous forking, opt-in and worse is better are carrying over to domains beyond software and regions beyond Silicon Valley. Wherever they spread, limiting authoritarian visions and purist ideologies retreat.
There are certainly risks with this approach, and it would be pollyannaish to deny them. The state of the Internet today is the sum of millions of pragmatic, expedient decisions made by hundreds of thousands of individuals delivering running code, all of which made sense at the time. These decisions undoubtedly contributed to the serious problems facing us today, ranging from the poor security of Internet protocols to the ones being debated around Net Neutrality. But arguably, had the pragmatic approach not prevailed, the Internet would not have evolved significantly beyond the original ARPANET at all. Instead of a thriving Internet economy that promises to revitalize the old economy, the world at large might have followed the Japanese down the dead-end purist path of fifth-generation mainframe computing.
Today, moreover, several solutions to such serious legacy problems are being pursued, such as blockchain technology (the software basis for cryptocurrencies like Bitcoin). These are vastly more creative than solutions that were debated in the early days of the Internet, and reflect an understanding of problems that have actually been encountered, rather than the limiting anxieties of authoritarian high-modernist visions. More importantly, they validate early decisions to resist premature optimization and leave as much creative room for future innovators as possible. Of course, if emerging solutions succeed, more lurking problems will surface that will in turn need to be solved, in the continuing pragmatic tradition of perpetual beta.
Our account of the nature of software ought to suggest an obvious conclusion: it is a deeply subversive force. For those caught on the wrong side of this force, being on the receiving end of Blitzkrieg operations by a high-functioning agile software team can feel like mounting zemblanity: a sense of inevitable doom.
This process has by now occurred often enough that a general sense of zemblanity has overcome the traditional economy at large. Every aggressively growing startup seems like a special-forces team with an occupying army of job-eating machine-learning programs and robots following close behind.
Internally, the software-eaten economy is even more driven by disruption: the time it takes for a disruptor to become a disruptee has been radically shrinking in the last decade — and startups today are highly aware of that risk. That awareness helps explain the raw aggressiveness that they exhibit.
It is understandable that to people in the traditional economy, software eating the world sounds like a relentless war between technology and humanity.
But exactly the opposite is the case. Technological progress, unlike war or Wall Street-style high finance, is not a zero-sum game, and that makes all the difference. The Promethean force of technology is today, and always has been, the force that has rescued humanity from its worst problems just when it seemed impossible to avert civilizational collapse. With every single technological advance, from the invention of writing to the invention of television, those who have failed to appreciate the non-zero-sum nature of technological evolution have prophesied doom. Every time, they have made some version of the “this time it is different” argument, and every time they have been proven wrong.
Instead of enduring civilizational collapse, humanity has instead ascended to a new level of well-being and prosperity each time.
Of course, this poor record of predicting collapses is not by itself proof that it is no different this time. There is no necessary reason the future has to be like the past. There is no fundamental reason our modern globalized society is uniquely immune to the sorts of game-ending catastrophes that led to the fall of the Roman empire or the Mayan civilization. The case for continued progress must be made anew with each technological advance, and new concerns, such as climate change today, must be seriously considered.
But concerns that the game might end should not lead us to limit ourselves to what philosopher James Carse6 called finite game views of the world, based on “winning” and arriving at a changeless, pure and utopian state as a prize. As we will argue in the next essay, the appropriate mindset is what Carse called an infinite game view, based on the desire to continue playing the game in increasingly generative ways. From an infinite game perspective, software eating the world is in fact the best thing that can happen to the world.
The unique characteristics of software as a technological medium have an impact beyond the profession itself. To understand the broader impact of software eating the world, we have to begin by examining the nature of technology adoption processes.
A basic divide in the world of technology is between those who believe humans are capable of significant change, and those who believe they are not. Prometheanism is the philosophy of technology that follows from the idea that humans can, do and should change. Pastoralism, on the other hand, is the philosophy that change is profane. The tension between these two philosophies leads to a technology diffusion process characterized by a colloquial phrase popular in the startup world: first they ignore you, then they laugh at you, then they fight you, then you win.1
Science fiction writer Douglas Adams reduced the phenomenon to a set of three sardonic rules from the point of view of users of technology:
Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
Anything invented after you’re thirty-five is against the natural order of things.
As both these folk formulations suggest, there is a certain inevitability to technological evolution, and a certain naivete to certain patterns of resistance.
To understand why this is in fact the case, consider the proposition that technological evolution is path-dependent in the short term, but not in the long term.
Major technological possibilities, once uncovered, are invariably exploited in ways that maximally unleash their potential. While there is underutilized potential left, individuals compete and keep adapting in unpredictable ways to exploit that potential. All it takes is one thing: a thriving frontier of constant tinkering and diverse value systems must exist somewhere in the world.
Specific ideas may fail. Specific uses may not endure. Localized attempts to resist may succeed, as the existence of the Amish demonstrates. Some individuals may successfully resist some aspects of the imperative to change. Entire nations may collectively decide to not explore certain possibilities. But with major technologies, it usually becomes clear very early on that the global impact is going to be of a certain magnitude and cause a corresponding amount of disruptive societal change. This is the path-independent outcome and the reason there seems to be a “right side of history” during periods of rapid technological developments.
The specifics of how, when, where and through whom a technology achieves its maximal impact are path dependent. Competing to guess the right answers is the work of entrepreneurs and investors. But once the answers are figured out, the contingent path from “weird” to “normal” will be largely forgotten, and the maximally transformed society will seem inevitable with hindsight.
The ongoing evolution of ridesharing through conflict with the taxicab industry illustrates this phenomenon well. In January 2014 for instance, striking cabdrivers in Paris attacked vehicles hired through Uber. The rioting cabdrivers smashed windshields and slashed tires, leading to immediate comparisons in the media to the original pastoralists of industrialized modernity: the Luddites of the early 19th century.2
Like the Luddite movement, the reaction to ridesharing services such as Uber and Lyft is not resistance to innovative technology per se, but something larger and more complex: an attempt to limit the scope and scale of impact in order to prevent disruption of a particular way of life. As Richard Conniff notes in a 2011 essay in the Smithsonian magazine:
As the Industrial Revolution began, workers naturally worried about being displaced by increasingly efficient machines. But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”3
In his essay, Conniff argues that the original Luddites were simply fighting to preserve their idea of human values, and concludes that “standing up against technologies that put money or convenience above other human values” is necessary for a critical engagement of technology. Critics make similar arguments in every sector being eaten by software.
The apparent reasonableness of this view is deceptive: it is based on the wishful hope that entire societies can and should agree on what the term human values means, and use that consensus to decide which technologies to adopt. An unqualified appeal to “universal” human values is usually a call for an authoritarian imposition of decidedly non-universal values.
As the rideshare industry debates demonstrate, even consumers and producers within a single sector find it hard to achieve consensus on values. Protests by cab drivers in London in 2014 for instance, led to an increase in business4 for rideshare companies, clear evidence that consumers do not necessarily act in solidarity with incumbent producers based on shared “human values.”
It is tempting to analyze such conflicts in terms of classical capitalist or labor perspectives. The result is a predictable impasse: capitalists emphasize increased supply driving prices down, while progressives focus on loss of jobs in the taxicab industry. Both sides attempt to co-opt the political loyalties of rideshare drivers. Capitalists highlight increased entrepreneurial opportunities, while progressives highlight increased income precarity. Capitalists like to label rideshare drivers free agents or micro-entrepreneurs, while progressives prefer labels like precariat (by analogy to proletariat) or scab. Both sides attempt to make the future determinate by force-fitting it into preferred received narratives using loaded terms.
Both sides also operate by the same sense of proportions: they exaggerate the importance of the familiar and trivialize the new. Apps seem trivial, while automobiles loom large as a motif of an entire century-old way of life. Societies organized around cars seem timeless, normal, moral and self-evidently necessary to preserve and extend into the future. The smartphone at first seems to add no more than a minor element of customer convenience within a way of life that cannot possibly change. The value it adds to the picture is treated like a rounding error and ignored. As a result, both sides see the conflict as a zero-sum redistribution of existing value: gains on one side, exactly offset by losses on the other side.
But as Marshall McLuhan observed, new technologies change our sense of proportions.
Even today’s foggy view of a smartphone-centric future suggests that ridesharing is evolving from convenience to necessity. By sustaining cheaper and more flexible patterns of local mobility, ridesharing enables new lifestyles in urban areas. Young professionals can better afford to work in opportunity-rich cities. Low-income service workers can expand their mobility beyond rigid public transit and the occasional expensive emergency taxi-ride. Small restaurants with limited working capital can use ridesharing-like services to offer delivery services. It is in fact getting hard to imagine how else transportation could work in a society with smartphones.
The impact is shifting from the path-dependent phase, when it wasn’t clear whether the idea was even workable, to the non-path-dependent phase, where it seems inevitable enough that other ideas can be built on top.
Such snowballing changes in patterns of life are due to what economists call consumer surplus5 (increased spending power elsewhere due to falling costs in one area of consumption) and positive spillover effects6 (unexpected benefits in unrelated industries or distant geographies). For technologies with a broad impact, these are like butterfly effects: deceptively tiny causes with huge, unpredictable effects. Due to the unpredictability of surplus and spillover, the bulk of the new wealth created by new technologies (on the order of 90% or more) eventually accrues to society at large,7 rather than the innovators who drove the early, path-dependent phase of evolution. This is the macroeconomic analog to perpetual beta: execution by many outrunning visioning by a few, driving more bottom-up experimentation and turning society itself into an innovation laboratory.
Far from the value of the smartphone app being a rounding error in the rideshare industry debate, it in fact represents the bulk of the value. It just does not accrue directly to any of the participants in the overt, visible conflict.
If adoption models were entirely dictated by the taxicab industry, this value would not exist, and the zero-sum framing would become a self-fulfilling prophecy. Similarly, when entrepreneurs try to capture all or even most of the value they set out to create, the results are counterproductive: minor evolutionary advances that again make zero-sum outcomes a self-fulfilling prophecy. Technology publishing pioneer Tim O’Reilly captured the essence of this phenomenon with the principle, “create more value than you capture.” For the highest-impact products, the societal value created dwarfs the value captured.
These largely invisible surplus and spillover effects do more than raise broad living standards. By redirecting newly freed creative energy and resources down indeterminate paths, consumer surpluses and spillover effects actually drive further technological evolution in a non-zero-sum way. The bulk of the energy leaks away to drive unexpected innovations in unrelated areas. A fraction courses through unexpected feedback paths and improves the original innovation itself, in ways the pioneers themselves do not anticipate. Similar unexpected feedback paths improve derivative inventions as well, vastly amplifying the impact beyond simple “technology diffusion.”
The story of the steam engine is a good illustration of both effects. It is widely recognized that spillover effects from James Watt’s steam engine, originally introduced in the Cornish mining industry, helped trigger the British industrial revolution. What is less well-known8 is that the steam engine itself was vastly improved by hundreds of unknown tinkerers adding “microinventions” in the decades immediately following the expiration of James Watt’s patents. Once an invention leaks into what Robert Allen calls “collective invention settings,” with a large number of individuals and firms freely sharing information and independently tinkering with an innovation, future evolution gathers unstoppable momentum and the innovation goes from “weird” to “new normal.” Besides the Cornish mining district in the early 1800s, the Connecticut Valley in the 1870s-1890s,9 Silicon Valley since 1950 and the Shenzhen region of China since the 1990s are examples of flourishing collective invention settings. Together, such active creative regions constitute the global technology frontier: the worldwide zone of bricolage.
The path-dependent phase of evolution of a technology can take centuries, as Joel Mokyr shows in his classic, Lever of Riches. But once it enters a collective invention phase, surplus and spillover effects gather momentum and further evolution becomes simultaneously unpredictable and inevitable. Once the inevitability is recognized, it is possible to bet on follow-on ideas without waiting for details to become clear. Today, it is possible to bet on a future based on ridesharing and driverless cars without knowing precisely what those futures will look like.
As consumers, we experience this kind of evolution as what Buckminster Fuller called ephemeralization: the seemingly magical ability of technology to do more and more with less and less.
This is most visible today in the guise of Moore’s Law, but ephemeralization is in fact a feature of all technological evolution. Potable water was once so hard to come by that many societies suffered from endemic water-borne diseases and were forced to rely on expensive and inefficient procedures, like boiling water at home. Today, only around 10% of the world lacks access to safe drinking water.10 Diamonds were once worth fighting wars over. Today, artificial diamonds, indistinguishable from natural ones, are becoming widely available.
The result is a virtuous cycle of increasing serendipity, driven by widespread lifestyle adaptation and cascades of self-improving innovation. Surplus and spillover creating more surplus and spillover. Brad DeLong’s slouching towards utopia for consumers and Edmund Phelps’ mass flourishing for producers. And when the virtuous cycle is powered by a soft, world-eating technology, the steady, cumulative impact is immense.
Both critics and enthusiasts of innovation deeply misunderstand the nature of this virtuous cycle. Critics typically lament lifestyle adaptations as degeneracy and call for a return to traditional values. Many enthusiasts, instead of being inspired by a sense of unpredictable, flourishing potential, are repeatedly seduced by specific visions of the Next Big Thing, sometimes derived rather literally from popular science fiction. As a result, they lament the lack of collective attention directed towards their pet societal projects. To each, the priorities of other enthusiasts seem degenerate.
The result in both cases is the same: calls for reining in the virtuous cycle. Both kinds of lament motivate efforts to concentrate and deploy surpluses in authoritarian ways (through retention of excessive monopolistic profits by large companies or government-led efforts funded through taxation) and contain spillover effects (by restricting access to new technological capabilities). Both are ultimately attempts to direct creative energies down a few determinate paths. Both are driven by a macroeconomic version of the Luddite hope: that it is possible to enjoy the benefits of non-zero-sum innovation without giving up predictability. For critics, it is the predictability of established patterns of life. For Next Big Thing enthusiasts, it is a specific aspirational pattern of life.
Both are varieties of pastoralism, the cultural cousin of purist approaches in engineering. Pastoralism suffers from precisely the same, predictable authoritarian high-modernist failure modes. Like purist software visions, pastoralist visions too are marked by an obsessive desire to permanently win a specific, zero-sum finite game rather than to keep playing the non-zero-sum infinite game.
When the allure of pastoralist visions is resisted, and the virtuous cycle is allowed to work, we get Promethean progress. This is unpredictable evolution in the direction of maximal societal impact, unencumbered by limiting deterministic visions. Just as the principle of rough consensus and running code creates great software, consumer surplus and spillover effects create great societies. Just as pragmatic and purist development models lead to serendipity and zemblanity in engineering respectively, Promethean and pastoral models lead to serendipity and zemblanity at the level of entire societies.
When pastoralist calls for actual retreat are heeded, the technological frontier migrates elsewhere, often causing centuries of stagnation. This was precisely what happened in China and the Islamic world around the fifteenth century, when the technological frontier shifted to Europe.
Heeding the other kind of pastoralist call, to pursue a determinate Next Big Thing at the expense of many indeterminate small things, leads to somewhat better results. Such models can deliver impressive initial gains, but invariably create a hardening landscape of authoritarian, corporatist institutions. This triggers a vicious cycle that predictably stifles innovation.
The Apollo program, for instance, fulfilled John F. Kennedy’s call to put humans on the moon within the decade. It also led to the inexorable rise of the military-industrial complex that his predecessor, Dwight D. Eisenhower, had warned against. The Soviets fared even worse: they made equally impressive strides in the space race, but the society they created collapsed on itself under the weight of authoritarianism. What prevented that outcome in the United States was the regional technological frontier migrating to the West Coast, and breaking smart from the military-industrial complex in the process. This allowed some of the creative energy being gradually stifled to escape to a more favorable environment.
With software eating the world, we are again witnessing predictable calls for pastoralist development models. Once again, the challenge is to resist the easy answers on offer.
In art, the term pastoral refers to a genre of painting and literature based on romanticized and idealized portrayals of a pastoral lifestyle, usually for urban audiences with no direct experience of the actual squalor and oppression of pre-industrial rural life.
Biblical Pastoralism: drawing inspiration for the 21st century from shepherds.
Within religious traditions, pastorals may also be associated with the motifs and symbols of uncorrupted states of being. In the West for instance, pastoral art and literature often evoke the Garden of Eden story. In Islamic societies, the first caliphate is often evoked in a similar way.
The notion of a pastoral is useful for understanding idealized understandings of any society, real or imagined, past, present or future. In Philip Roth’s American Pastoral for instance, the term is an allusion to the idealized American lifestyle enjoyed by the protagonist Seymour “Swede” Levov, before it is ruined by the social turmoil of the 1960s.
At the center of any pastoral we find essentialized notions of what it means to be human, like Adam and Eve or William Whyte’s Organization Man, arranged in a particular social order (patriarchal in this case). From these archetypes we get to pure and virtuous idealized lifestyles. Lifestyles that deviate from these understandings seem corrupt and vice-driven. The belief that “people don’t change” is at once an approximation and a prescription: people should not change except to better conform to the ideal they are assumed to already approximate. The belief justifies building technology to serve the predictable and changeless ideal and labeling unexpected uses of technology degenerate.
We owe our increasingly farcical yearning for jetpacks and flying cars, for instance, to what we might call the “World Fairs pastoral,” since the vision was strongly shaped by mid-twentieth-century World Fairs. Even at the height of its influence, it was already being satirized by television shows like The Flintstones and The Jetsons. The shows portrayed essentially the 1950s social order, full of Organization Families, transposed to past and future pastoral settings. The humor in the shows rested on audiences recognizing the escapist non-realism.
Not quite as clever as the Flintstones or Jetsons, but we try.
The World Fairs pastoral, inspired strongly by the aerospace technologies of the 1950s, represented a future imagined around flying cars, jetpacks and glamorous airlines like Pan Am. Flying cars merely updated a familiar nuclear-family lifestyle. Jetpacks appealed to the same individualist instincts as motorcycles. Airlines like Pan Am, besides being an integral part of the military-industrial complex, owed their “glamour” in part to their deliberate perpetuation of the sexist culture of the fifties. Within this vision, truly significant developments, like the rise of vastly more efficient low-cost airlines in the 70s, seemed like decline from a “Golden Age” of air travel.
Arguably, the aerospace future that actually unfolded was vastly more interesting than the one envisioned in the World Fairs pastoral. Low-cost, long-distance air travel opened up a globalized and multicultural future, broke down barriers between insular societies, and vastly increased global human mobility. Along the way, it helped dismantle much of the institutionalized sexism behind the glamour of the airline industry. These developments were enabled in large part by post-1970s software technologies,1 rather than improvements in core aerospace engineering technologies. These were precisely the technologies that were beginning to “break smart” out of the stifling influence of the military-industrial complex.
In 2012, thanks largely to these developments, for the first time in history there were over a billion international tourist arrivals worldwide.2 Software had eaten and democratized elitist air travel. Today, software is continuing to eat airplanes in deeper ways, driving the current explosion in drone technology. Again, those fixated on jetpacks and flying cars are missing the actual, much more interesting action because it is not what they predicted. When pastoralists pay attention to drones at all, they see them primarily as morally objectionable military weapons, ignoring both the fact that drones replace technologies of mass slaughter such as carpet bombing, and their growing number of non-military uses.
In fact the entire World Fairs pastoral is really a case of privileged members of society, presuming to speak for all, demanding “faster horses” for all of society (in the sense of the likely apocryphal3 quote attributed to Henry Ford, “If I’d asked my customers what they wanted, they would have demanded faster horses.”)
Fortunately for the vitality of the United States and the world at large, the future proved wiser than any limiting pastoral vision of it. The aerospace story is just one among many that suddenly appear in a vastly more positive light once we drop pastoral obsessions and look at the actual unfolding action. Instead of the limited things we could imagine in the 1950s, we got much more impactful things. Software eating aerospace technology allowed it to continue progressing in the direction of maximum potential.
If pastoral visions are so limiting, why do we get so attached to them? Where do they even come from in the first place? Ironically, they arise from Promethean periods of evolution that are too successful.
The World Fairs pastoral, for instance, emerged out of a Promethean period in the United States, heralded by Alexander Hamilton in the 1790s. Hamilton recognized the enormous potential of industrial manufacturing, and in his influential 1791 Report on Manufactures,4 argued that the then-young United States ought to strive to become a manufacturing superpower. For much of the nineteenth century, Hamilton’s ideas competed for political influence5 with Thomas Jefferson’s pastoral vision of an agrarian, small-town way of life, a romanticized, sanitized version of the society that already existed.
For free Americans alive at the time, Jefferson’s vision must have seemed tangible, obviously valuable and just within reach. Hamilton’s must have seemed speculative, uncertain and profane, associated with the grime and smoke of early industrializing Britain. For almost 60 years, it was in fact Jefferson’s parochial sense of proportions that dominated American politics. It was not until the Civil War that the contradictions inherent in the Jeffersonian pastoral led to its collapse as a political force. Today, while it still supplies powerful symbolism to politicians’ speeches, all that remains of the Jeffersonian Pastoral is a nostalgic cultural memory of small-town agrarian life.
During the same period, Hamilton’s ideas, through their overwhelming success, evolved from a vague sense of direction in the 1790s into a rapidly maturing industrial social order by the 1890s. By the 1930s, this social order was already being pastoralized into an alluring vision of jetpacks and flying cars in a vast, industrialized, centralized society. A few decades later, this had turned into a sense of dead-end failure associated with the end of the Apollo program, and the reality of a massive, overbearing military-industrial complex straddling the technological world. The latter has now metastasized into an entire too-big-to-fail old economy. One indicator of the freezing of the sense of direction is that many contemporary American politicians still remain focused on physical manufacturing the way Alexander Hamilton was in 1791. What was a prescient sense of direction then has turned into nostalgia for an obsolete utopian vision today. But where we have lost our irrational attachment to the Jeffersonian Pastoral, the World Fairs pastoral is still too real to let go.
We get attached to pastorals because they offer a present condition of certainty and stability and a utopian future promise of absolutely perfected certainty and stability. Arrival at the utopia seems like a well-deserved reward for hard-won Promethean victories. Pastoral utopias are where the victors of particular historical finite games hope to secure their gains and rest indefinitely on their laurels. The dark side, of course, is that pastorals also represent fantasies of absolute and eternal power over the fate of society: absolute utopias for believers that necessarily represent dystopias for disbelievers. Totalitarian ideologies of the twentieth century, such as communism and fascism, are the product of pastoral mindsets in their most toxic forms. The Jeffersonian pastoral was a nightmare for black Americans.
When pastoral fantasies start to collapse under the weight of their own internal contradictions, long-repressed energies are unleashed. The result is a societal condition marked by widespread lifestyle experimentation based on previously repressed values. To those faced with a collapse of the World Fairs pastoral project today, this seems like an irreversible slide towards corruption and moral decay.
Because they serve as stewards of dominant pastoral visions, cultural elites are most prone to viewing unexpected developments as degeneracy. From the Greek philosopher Plato1 (who lamented the invention of writing in the 4th century BC) to the Chinese scholar Zhang Xian Wu2 (who lamented the invention of printing in the 12th century AD), alarmist commentary on technological change has been a constant in history. A contemporary example can be found in a 2014 article3 by Paul Verhaeghe in The Guardian:
There are constant laments about the so-called loss of norms and values in our culture. Yet our norms and values make up an integral and essential part of our identity. So they cannot be lost, only changed. And that is precisely what has happened: a changed economy reflects changed ethics and brings about changed identity. The current economic system is bringing out the worst in us.
Viewed through any given pastoral lens, any unplanned development is more likely to subtract rather than add value. In an imagined world where cars fly, but driving is still a central rather than peripheral function, ridesharing can only be seen as subtracting taxi drivers from a complete vision. Driverless cars — the name is revealing, like “horseless carriage” — can only be seen as subtracting all drivers from the vision. And with such apparent subtraction, values and humans can only be seen as degenerating (never mind that we still ride horses for fun, and will likely continue driving cars for fun).
This tendency to view adaptation as degeneracy is perhaps why cultural elites are startlingly prone to the Luddite fallacy: the idea that technology-driven unemployment is a real concern, which arises from the more basic assumption that there is a fixed amount of work (“lump of labor”) to be done. By this logic, if a machine does more, then there is less for people to do.
Prometheans often attribute this fallacious argument to a lack of imagination, but the roots of its appeal lie much deeper. Pastoralists are perfectly willing and able to imagine many interesting things, so long as they bring reality closer to the pastoral vision. Flying cars — and there are very imaginative ways to conceive of them — seem better than land-bound ones because drivers predictably evolving into pilots conforms to the underlying notion of human perfectibility. Drivers unpredictably evolving into smartphone-wielding free agents, and breaking smart from the Organization Man archetype, does not. Within the Jeffersonian pastoral, faster horses (not exactly trivial to breed) made for more empowered small-town yeoman farmers. Drivers of early horseless carriages were degenerate dependents, beholden to big corporations, big cities and Standard Oil.
In other words, pastoralists can imagine sustaining changes to the prevailing social order, but disruptive changes seem profane. As a result, those who adapt to disruption in unexpected ways seem like economic and cultural degenerates, rather than representing employment rebounding in unexpected ways.
History, of course, has shown that the idea of technological unemployment is not just wrong, it is wildly wrong. Contemporary fears of software eating jobs are just the latest version of the argument that “people cannot change” and that this time, the true limits of human adaptability have been discovered.
This argument is absolutely correct, but only within the pastoral vision in which it is made.
Once we remove pastoral blinders, it becomes obvious that the future of work lies in the unexpected and degenerate-seeming behaviors of today. Agriculture certainly suffered a devastating permanent loss of employment to machinery within the Jeffersonian pastoral by 1890. Fortunately, Hamilton’s profane ideas, and the degenerate citizens of the industrial world he foresaw, saved the day. The ideal Jeffersonian human, the noble small-town yeoman farmer, did in fact become practically extinct as the Jeffersonians feared. Today the pastoral-ideal human is a high-IQ credentialist Organization Man, headed for gradual extinction, unable to compete with higher-IQ machines. The degenerate, breaking-smart humans of the software-eaten world on the other hand, have no such fears. They are too busy tinkering with new possibilities to bemoan imaginary lost utopias.
John Maynard Keynes was too astute to succumb to the Luddite fallacy in this naive form. In his 1930 conception of the leisure society,4 he noted that the economy could arbitrarily expand to create and satisfy new needs, and with a lag, absorb labor as fast as automation freed it up. But Keynes too failed to recognize that with new lifestyles come new priorities, new lived values and new reasons to want to work. As a result, he saw the Promethean pattern of progress as a necessary evil on the path to a utopian leisure society based on traditional, universal religious values:
I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue-that avarice is a vice, that the exaction of usury is a misdemeanour, and the love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin. But beware! The time for all this is not yet. For at least another hundred years we must pretend to ourselves and to every one that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.
Perceptions of moral decline, however, have no necessary relationship with actual moral decline. As Joseph Tainter observes in The Collapse of Complex Societies:
Values of course, vary culturally, socially and individually…What one individual, society, or culture values highly another does not…Most of us approve, in general, of that which culturally is most like or most pleasing, or at least most intelligible to us. The result is a global bedlam of idiosyncratic ideologies, each claiming exclusive possession of ‘truth.’…
The ‘decadence’ concept seems particularly detrimental [and is] notoriously difficult to define. Decadent behavior is that which differs from one’s own moral code, particularly if the offender at some former time behaved in a manner of which one approves. There is no clear causal link between the morality of behavior and political fortunes.
While there is no actual moral decline in any meaningful absolute sense, the anxiety experienced by pastoralists is real. For those who yearn for paternalistic authority, more lifestyle possibilities lead to a sense of anomie rather than freedom. It triggers what the philosopher George Steiner called nostalgia for the absolute.5 Calls for a retreat to tradition or a collectivist drive towards the Next Big Thing (often an Updated Old Thing, as in the case of President Obama’s call for a “new Sputnik moment” a few years ago) share a yearning for a simpler world. But, as Steiner notes:
I do not think it will work. On the most brutal, empirical level, we have no example in history…of a complex economic and technological system backtracking to a more simple, primitive level of survival. Yes, it can be done individually. We all, I think, in the universities now have a former colleague or student somewhere planting his own organic food, living in a cabin in the forest, trying to educate his family far from school. Individually it might work. Socially, I think, it is moonshine.
In 1974, the year of peak centralization, Steiner was presciently observing the beginnings of the transformation. Today, the angst he observed on university campuses has turned into a society-wide condition of pastoral longing, and a pervasive sense of moral decay.
For Prometheans, on the other hand, not only is there no decay, there is actual moral progress.
Prometheans understand technological evolution in terms of increasing diversity of lived values, in the form of more varied actual lifestyles. From any given pastoral perspective, such increasing pluralism is a sign of moral decline, but from a Promethean perspective, it is a sign of moral progress catalyzed by new technological capabilities.
Emerging lifestyles introduce new lived values into societies. Hamilton did not just suggest a way out of the rural squalor1 that was the reality of the Jeffersonian pastoral. His way also led to the dismantlement of slavery, the rise of modern feminism and the gradual retreat of colonial oppression and racism. Today, we are not just leaving the World Fairs pastoral behind for a richer technological future. We are also leaving behind its paternalistic institutions, narrow “resource” view of nature, narrow national identities and intolerance of non-normative sexual identities.
Promethean attitudes begin with an acknowledgment of the primacy of lived values over abstract doctrines. This does not mean that lived values must be uncritically accepted or left unexamined. It just means that lived values must be judged on their own merit, rather than through the lens of a prejudiced pastoral vision.
The shift from car-centric to smartphone-centric priorities in urban transportation is just one aspect of a broader shift from hardware-centric to software-centric lifestyles. Rideshare driver, carless urban professional and low-income-high-mobility are just the tip of an iceberg that includes many other emerging lifestyles, such as eBay or Etsy merchant, blogger, indie musician and search-engine marketer. Each new software-enabled lifestyle adds a new set of lived values and more apparent profanity to society. Some, like rent-over-own values, are shared across many emerging lifestyles and threaten pastorals like the “American Dream,” built around home ownership. Others, such as dietary preferences, are becoming increasingly individualized and weaken the very idea of a single “official food pyramid” pastoral script for all.
Such broad shifts have historically triggered change all the way up to the global political order. Whether or not emerging marginal ideologies2 achieve mainstream prominence, their sense of proportions and priorities, driven by emerging lifestyles and lived values, inevitably does.
These observations are not new among historians of technology, and have led to endless debates about whether societal values drive technological change (social determinism) or whether technological change drives societal values (technological determinism). In practice, the fact that people change and disrupt the dominant prevailing ideal of “human values” renders the question moot. New lived values and new technologies simultaneously irrupt into society in the form of new lifestyles. Old lifestyles do not necessarily vanish: there are still Jeffersonian small farmers and traditional blacksmiths around the world for instance. Rather, they occupy a gradually diminishing role in the social order. As a result, new and old technologies and an increasing number of value systems coexist.
In other words, human pluralism eventually expands to accommodate the full potential of technological capabilities.3
We call this the principle of generative pluralism. Generative pluralism is what allows the virtuous cycle of surplus and spillover to operate. Ephemeralization — the ability to gradually do more with less — creates room for the pluralistic expansion of lifestyle possibilities and individual values, without constraining the future to a specific path.
The inherent unpredictability in the principle implies that both technological and social determinism are incomplete models driven by zero-sum thinking. The past cannot “determine” the future at all, because the future is more complex and diverse. It embodies new knowledge about the world and new moral wisdom, in the form of a more pluralistic and technologically sophisticated society.
Thanks to a particularly fertile kind of generative pluralism that we know as network effects, soft technologies like language and money have historically caused the greatest broad increases in complexity and pluralism. When more people speak a language or accept a currency, the potential of that language or currency increases in a non-zero-sum way. Shared languages and currencies allow more people to harmoniously co-exist, despite conflicting values, by allowing disputes to be settled through words or trade4 rather than violence. We should therefore expect software eating the world to cause an explosion in the variety of possible lifestyles, and society as a whole becoming vastly more pluralistic.
And this is in fact what we are experiencing today.
The principle also resolves the apparent conflict between human agency and “what technology wants”: Far from limiting human agency, technological evolution in fact serves as the most complete expression of it. Technological evolution takes on its unstoppable and inevitable character only after it breaks smart from authoritarian control and becomes part of an unpredictable and unscripted collective invention culture. The existence of thousands of individuals and firms working relatively independently on the same frontier means that every possibility will not only be uncovered, it will be uncovered by multiple individuals, operating with different value systems, at different times and places. Even if one inventor chooses not to pursue a possibility, chances are, others will. As a result, all pastoralist forms of resistance are eventually overwhelmed. But the process retains rational resistance to paths that carry risk of ending the infinite game for all, in proportion to their severity. As global success in limiting the spread of nuclear and biological weapons shows, generative pluralism is not the same as mad scientists and James Bond villains running amok.
Prometheans who discover high-leverage unexpected possibilities enter a zone of serendipity. The universe seems to conspire to magnify their agency to superhuman levels. Pastoralists who reject change altogether as profanity turn lack of agency into a self-fulfilling prophecy, and enter a zone of zemblanity. The universe seems to conspire to diminish whatever agency they do have, resulting in the perception that technology diminishes agency.
Power, unlike capability, is zero-sum, since it is defined in terms of control over other human beings. Generative pluralism implies that on average, pastoralists are constantly ceding power to Prometheans. In the long term, however, the loss of power is primarily a psychological rather than material loss. To the extent that ephemeralization frees us of the need for power, we have less use for a disproportionate share.
As a simple example, consider a common twentieth-century battleground: public signage. Today, different languages contend for signaling power in public spaces. In highly multilingual countries, this contention can turn violent. But automated translation and augmented reality technologies5 can make it unnecessary to decide, for instance, whether public signage in the United States ought to be in English, Spanish or both. An arbitrary number of languages can share the same public spaces, and there is much less need for linguistic authoritarianism. Like physical sports in an earlier era, soft technologies such as online communities, video games and augmented reality are all slowly sublimating our most violent tendencies. The 2014 protests in Ferguson, MO, are a powerful example: unlike the very similar civil rights riots of the 1960s, their primary medium of influence was information, in the form of social media coverage, rather than violence.
The broader lesson of the principle of generative pluralism is this: through technology, societies become intellectually capable of handling progressively more complex value-based conflicts. As societies gradually awaken to resolution mechanisms that do not require authoritarian control over the lives of others, they gradually substitute intelligence and information for power and coercion.
So far, we have tried to convey a visceral sense of what is essentially an uneven global condition of explosive positive change: change that is progressing at all levels, from individuals to businesses to communities to the global societal order. Perhaps the most important part of this change is that we are experiencing a systematic substitution of intelligence for brute authoritarian power in problem solving, allowing a condition of vastly increased pluralism to emerge.
Paradoxically, due to the roots of vocal elite discontent in pastoral sensibilities, this analysis is valid only to the extent that it feels viscerally wrong. And going by the headlines of the past few years, it certainly does.
Much of our collective sense of looming chaos and paradises being lost is in fact a clear and unambiguous sign of positive change in the world. By this model, if our current collective experience of the human condition felt utopian, with cultural elites extolling its virtues, we should be very worried indeed. Societies that present a facade of superficial pastoral harmony, as in the movie The Stepford Wives, tend to be sustained by authoritarian, non-pluralistic polities, hidden demons, and invisible violence.
Innovation can in fact be defined as ongoing moral progress achieved by driving directly towards the regimes of greatest moral ambiguity, where our collective demons lurk. These are also the regimes where technology finds its maximal expressions, and it is no accident that the two coincide. Genuine progress feels like onrushing obscenity and profanity, and also requires new technological capabilities to drive it.
The subjective psychological feel of this evolutionary process is what Marshall McLuhan described in terms of a rear-view mirror effect: “We look at the present through a rear-view mirror. We march backwards into the future.”
Our aesthetic and moral sensibilities are oriented by default towards romanticized memories of paradises lost. Indeed, this is the only way we can enter the future. Our constantly pastoralizing view of the world, grounded in the past, is the only one we have. The future, glimpsed only through a small rear-view mirror, is necessarily framed by the past. To extend McLuhan’s metaphor, the great temptation is to slam on the brakes and shift from what seems like reverse gear into forward gear. The paradox of progress is that what seems like the path forward is in fact the reactionary path of retreat. What seems like the direction of decline is in fact the path forward.
Today, our collective rear-view mirror is packed with seeming profanity, in the form of multiple paths of descent into hell. Chief among the ones that occupy our minds are themes of rising inequality and pervasive surveillance.
These are such complex and strongly coupled themes that conversations about any one of them quickly lead to a jumbled discussion of all of them, in the form of an ambiguous “inequality, surveillance and everything” non-question. Dickens’ memorable opening paragraph in A Tale of Two Cities captures this state of confused urgency and inchoate anxiety perfectly:
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
Such a state of confused urgency often leads to hasty and ill-conceived grand pastoralist schemes by way of the well-known politician’s syllogism:1
Something must be done.
This is something.
This must be done.
Promethean sensibilities suggest that the right response to the sense of urgency is not the politician’s syllogism, but counter-intuitive courses of action: driving straight into the very uncertainties the ambiguous problem statements frame. Often, when only reactionary pastoralist paths are under consideration, this means doing nothing, and allowing events to follow a natural course.
In other words, our basic answer to the non-question of “inequality, surveillance and everything” is this: the best way through it is through it. It is an answer similar in spirit to the stoic principle that “the obstacle is the way” and the Finnish concept of sisu: meeting adversity head-on by cultivating a capacity for managing stress, rather than figuring out schemes to get around it. Seemingly easier paths, as the twentieth century’s utopian experiments showed, create a great deal more pain in the long run.
Broken though they might seem, the mechanisms we need for working through “inequality, surveillance and everything” are the generative, pluralist ones we have been refining over the last century: liberal democracy, innovation, entrepreneurship, functional markets and the most thoughtful and limited new institutions we can design.
This answer will strike many as deeply unsatisfactory and perhaps even callous. Yet, time and again, when the world has been faced with seemingly impossible problems, these mechanisms have delivered.
Beyond doing the utmost possible to shield those most exposed to, and least capable of enduring, the material pain of change, it is crucial to limit ourselves and avoid the temptation of reactionary paths suggested by utopian or dystopian visions, especially those that appear in futurist guises. The idea that forward is backward and sacred is profane will never feel natural or intuitive, but innovation and progress depend on acting by these ideas anyway.
In the remaining essays in this series, we will explore what it means to act by these ideas.
Part-way through Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, we learn that Earth is not a planet, but a giant supercomputer built by a race of hyperintelligent aliens. Earth was designed by a predecessor supercomputer called Deep Thought, which in turn had been built to figure out the answer to the ultimate question of “Life, the Universe and Everything.” Much to the annoyance of the aliens, the answer turns out to be a cryptic and unsatisfactory “42.”
[Image: What is 7 times 6?]
We concluded the previous essay with our own ultimate question of “Inequality, Surveillance and Everything.” The basic answer we offered — “the best way through it is through it” — must seem as annoying, cryptic and unsatisfactory as Deep Thought’s “42.”
In Adams’ tale, Deep Thought gently suggests to the frustrated aliens that perhaps the answer seemed cryptic because they never understood the question in the first place. Deep Thought then proceeds to design Earth to solve the much tougher problem of figuring out the actual question.
First performed as a radio show in 1978, Adams’ absurdist epic precisely portrayed the societal transformation that was gaining momentum at the time. Rapid technological progress due to computing was accompanied by cryptic and unsatisfactory answers to confused and urgent-seeming questions about the human condition. Our “Inequality, Surveillance and Everything” form of the non-question is not that different from the corresponding non-question of the late 1970s: “Cold War, Globalization and Everything.” Then, as now, the frustrating but correct answer was “the best way through it is through it.”
The Hitchhiker’s Guide can be read as a satirical anti-morality tale about pastoral sensibilities, utopian solutions and perfect answers. In their dissatisfaction with the real “Ultimate Answer,” the aliens failed to notice the truly remarkable development: they had built an astoundingly powerful computer, which had then proceeded to design an even more powerful successor.
Like the aliens, we may not be satisfied with the answers we find to timeless questions, but simply by asking the questions and attempting to answer them, we are bootstrapping our way to a more advanced society.
As we argued in the last essay, the advancement is both technological and moral, allowing for a more pluralistic society to emerge from the past.
Adams died in 2001, just as his satirical visions, which had inspired a generation of technologists, started to actually come true. Just as Deep Thought had given rise to a fictional “Earth” computer, centralized mainframe computing of the industrial era gave way to distributed, networked computing. In a rather perfect case of life imitating art, IBM researchers named a powerful chess-playing supercomputer Deep Thought in the 1990s, in honor of Adams’ fictional computer. A later version, Deep Blue, became the first computer to beat a reigning world chess champion, Garry Kasparov, in 1997. But the true successor to the IBM era of computing was the planet-straddling distributed computer we call the Internet.
[Image: Manufactured in Taiwan]
Science fiction writer Neal Stephenson noted the resulting physical transformation as early as 1996, in his essay on the undersea cable-laying industry, Mother Earth Mother Board.1 By 2004, Kevin Kelly had coined a term and launched a new site to talk about the idea of digitally integrated technology as a single, all-subsuming social reality,2 emerging on this motherboard:
I’m calling this site The Technium. It’s a word I’ve reluctantly coined to designate the greater sphere of technology – one that goes beyond hardware to include culture, law, social institutions, and intellectual creations of all types. In short, the Technium is anything that springs from the human mind. It includes hard technology, but much else of human creation as well. I see this extended face of technology as a whole system with its own dynamics.
The metaphor of the world as a single interconnected entity that subsumes human existence is an old one, and in its modern form can be traced at least to Hobbes’ Leviathan (1651) and Herbert Spencer’s The Social Organism (1860). What is new about this specific form is that it is much more than a metaphor. The view of the world as a single, connected substrate for computation is not just a poetic way to appreciate the world: it is a way to shape it and act upon it. For many software projects, the idea that “the network is the computer” (due to John Gage, a computing pioneer at Sun Microsystems) is the only practical perspective.
While the pre-Internet world can also be viewed as a programmable planetary computer based on paperware, what makes today’s planetary computer unique in history is that almost anyone with an Internet connection can program it at a global scale, rather than just powerful leaders with the ability to shape organizations.
The kinds of programming possible on such a vast, democratic scale have been rapidly increasing in sophistication. In November 2014, for instance, within a few days of the Internet discovering and becoming outraged by a sexist 2013 Barbie book titled Barbie: I Can Be a Computer Engineer, hacker Kathleen Tuite had created a web app (using an inexpensive cloud service called Heroku) allowing anyone to rewrite the text of the book. The hashtag #FeministHackerBarbie immediately went viral. Coupled with the web app, the hashtag unleashed a flood of creative rewrites of the Barbie book. What would have been a short-lived flood of outrage only a few years ago had turned into a breaking-smart moment for the entire software industry.
To appreciate just how remarkable this episode was, consider this: a hashtag is effectively an instantly defined soft network within the Internet, with capabilities comparable to the entire planet’s telegraph system a century ago. By associating a hashtag with the right kind of app, Tuite effectively created an entire temporary publishing company, with its own distribution network, in a matter of hours rather than decades. In the process, reactive sentiment turned into creative agency.
These capabilities emerged in just 15 years: practically overnight by the normal standards of technological change.
In 1999, SETI@home,3 the first distributed computing project to capture the popular imagination, merely seemed like a weird way to donate spare personal computing power to science. By 2007, Facebook, Twitter, YouTube, Wikipedia and Amazon’s Mechanical Turk4 had added human creativity, communication and money into the mix, and the same engineering approaches had created the social web. By 2014, experimental mechanisms developed in the culture of cat memes5 were influencing elections. The penny-ante economy of Amazon’s Mechanical Turk had evolved into a world where bitcoin miners were making fortunes, car owners were making livable incomes through ridesharing on the side, and canny artists were launching lucrative new careers on Kickstarter.
Even as the old planet-scale computer declines, the new one it gave birth to is coming of age.
In our Tale of Two Computers, the parent is a four-century-old computer whose basic architecture was laid down in the zero-sum mercantile age. It runs on paperware, credentialism, and exhaustive territorial claims that completely carve up the world with strongly regulated boundaries. Its structure is based on hierarchically arranged container-like organizations, ranging from families to nations. In this order of things, there is no natural place for a free frontier. Ideally, there is a place for everything, and everything is in its place. It is a computer designed for stability, within which innovation is a bug rather than a feature.
We’ll call this planet-scale computer the geographic world.
The child is a young, half-century-old computer whose basic architecture was laid down during the Cold War. It runs on software, the hacker ethos, and soft networks that wire up the planet in ever-richer, non-exclusive, non-zero-sum ways. Its structure is based on streams like Twitter: open, non-hierarchical flows of real-time information from multiple overlapping networks. In this order of things, everything from banal household gadgets to space probes becomes part of a frontier for ceaseless innovation through bricolage. It is a computer designed for rapid, disorderly and serendipitous evolution, within which innovation, far from being a bug, is the primary feature.
We’ll call this planet-scale computer the networked world.
The networked world is not new. It is at least as old as the oldest trade routes, which have been spreading subversive ideas alongside valuable commodities throughout history. What is new is its growing ability to dominate the geographic world. The story of software eating the world is also the story of networks eating geography.
There are two major subplots to this story. The first subplot is about bits dominating atoms. The second subplot is about the rise of a new culture of problem-solving.
In 2015, it is safe to say that the weird problem-solving mechanisms of SETI@home and kitten-picture sharing have become normal problem-solving mechanisms for all domains.
Today it seems strange to not apply networked distributed computing involving both neurons and silicon to any complex problem. The term social media is now unnecessary: Even when there are no humans involved, problem-solving on this planet-scale computer almost necessarily involves social mechanisms. Whatever the mix of humans, software and robots involved, solutions tend to involve the same “social” design elements: real-time information streams, dynamically evolving patterns of trust, fluid identities, rapidly negotiated collaborations, unexpected emergent problem decompositions, efficiently allocated intelligence, and frictionless financial transactions.
Each time a problem is solved using these elements, the networked world is strengthened.
As a result of this new and self-reinforcing normal in problem-solving, the technological foundation of our planet is evolving with extraordinary rapidity. The process is a branching, continuous one rather than the staged, sequential process suggested by labels like Web 2.0 and Web 3.0,1 which reflect an attempt to understand it in somewhat industrial terms. Some recently sprouted extensions and branches have already been identified and named: the Mobile Web, the Internet of Things (IoT), streaming media, Virtual Reality (VR), Augmented Reality (AR) and the blockchain. Others will no doubt emerge in profusion, further blurring the line between real and virtual.
Surprisingly, as a consequence of software eating the technology industry itself, the specifics of the hardware are not important in this evolution. Outside of the most demanding applications, data, code, and networking are all largely hardware-agnostic today.
The Internet Archive’s Wayback Machine,2 developed by Brewster Kahle and Bruce Gilliat in 1996, has already preserved a history of the web across a few generations of hardware. While such efforts can sometimes seem woefully inadequate with respect to pastoralist visions of history preservation, it is important to recognize the magnitude of the advance they represent over paper-based collective memories.
Crashing storage costs and continuously upgraded datacenter hardware allow corporations to indefinitely save all the data they generate. This is turning out to be cheaper than deciding what to do with it3 in real time, resulting in the Big Data approach to business. At a personal level, cloud-based services like Dropbox make your personal data trivial to move across computers.
Most code today, unlike fifty years ago, is written in hardware-independent high-level programming languages rather than hardware-specific machine code. As a result of virtualization4 (technology that allows one piece of hardware to emulate another, a fringe technology until around 2000), most cloud-based software runs within virtual machines and “code containers” rather than directly on hardware. Containerization in shipping drove nearly a seven-fold increase5 in trade among industrialized nations over 20 years. Containerization of code is shaping up to be even more impactful in the economics of software.
Networks, too, are defined primarily in software today. It is not just extremely high-level networks, such as the transient, disposable ones defined by hashtags, that exist in software. Low-level networking software can also persist across generations of switching equipment and different kinds of physical links, such as telephone lines, optic fiber cables and satellite links. Thanks to the emerging technology of software-defined networking (SDN), functions that used to be performed by network hardware are increasingly performed by software.
In other words, we don’t just live on a networked planet. We live on a planet networked by software, a distinction that makes all the difference. The software-networked planet is an entity that can exist in a continuous and coherent way despite continuous hardware churn, just as we humans experience a persistent identity, even though almost every atom in our bodies gets swapped out every few years.
This is a profound development. We are used to thinking of atoms as enduring and bits as transient and ephemeral, but in fact the reverse is more true today.
The emerging planetary computer has the capacity to retain an evolving identity and memory across evolutionary epochs in hardware, both silicon and neural. Like money and writing, software is only dependent on hardware in the short term, not in the long term. Like the US dollar or the plays of Shakespeare, software and software-enabled networks can persist through changes in physical technology.
By contrast, it is challenging to preserve old hard technologies even in museums, let alone in working order as functional elements of society. When software eats hardware, however, we can physically or virtually recreate hardware as necessary, imbuing transient atoms with the permanence of bits.
For example, the Reuleaux collection of 19th-century engineering mechanisms, a priceless part of mechanical engineering heritage, is now available as a set of 3d printable models from Cornell University6 for students anywhere in the world to download, print and study. A higher-end example is NASA’s reverse engineering of 1970s-vintage Saturn V rocket engines.7 The complex project used structured light 3d scanning to reconstruct accurate computer models, which were then used to inform a modernized design. Such resurrection capabilities even extend to computing hardware itself. In 1997, using modern software tools, researchers at the University of Pennsylvania led by Jan Van der Spiegel recreated ENIAC, the first modern electronic computer — in the form of an 8mm by 8mm chip.8
As a result of such capabilities, the very idea of hardware obsolescence is becoming obsolete. Rapid evolution does not preclude the persistence of the past in a world of digital abundance.
The potential in virtual and augmented reality is perhaps even higher, and goes far beyond consumption devices like the Oculus VR, Magic Leap, Microsoft Hololens and the Leap 3d motion sensor. The more exciting story is that production capabilities are being democratized. In the early decades of prohibitively expensive CGI and motion capture technology, only big-budget Hollywood movies and video games could afford to create artificial realities. Today, with technologies like Microsoft’s Photosynth (which allows you to capture 3d imagery with smartphones), SketchUp (a powerful and free 3d modeling tool), 3d Warehouse (a public repository of 3d virtual objects), Unity (a powerful game-design tool) and 3d scanning apps such as Trimensional, it is becoming possible for anyone to create living historical records and inhabitable fictions in the form of virtual environments. The Star Trek “holodeck” is almost here: our realities can stay digitally alive long after they are gone in the physical world.
These are more than cool toys. They are soft technological capabilities of enormous political significance. Software can preserve the past in the form of detailed, relivable memories that go far beyond the written word. In 1964, only the “Big 3” network television crews had the ability to film the civil rights riots in America, making the establishment record of events the only one. A song inspired by the movement was appropriately titled The Revolution Will Not Be Televised. In 1991, a lone witness with a personal camcorder videotaped the beating of Rodney King; the footage, and the subsequent acquittal of the officers involved, triggered the 1992 Los Angeles riots.
Fast-forward more than two decades: in 2014, smartphones were capturing at least fragments of nearly every important development surrounding the death of Michael Brown in Ferguson, and thousands of video cameras were being deployed to challenge the perspectives offered by the major television channels. In a rare display of consensus, civil libertarians on both the right and left began demanding that all police officers and cars be equipped with cameras that cannot be turned off. Around the same time, the director of the FBI was reduced to conducting a media roadshow to attempt to stall the spread of cryptographic technologies capable of limiting government surveillance.
In just a year after the revelations of widespread surveillance by the NSA, the tables were already being turned.
It is only a matter of time before all participants in every event of importance will be able to record and share their experiences from their perspective as comprehensively as they want. These can then turn into collective, relivable, 3d memories that are much harder for any one party to manipulate in bad faith. History need no longer be written by past victors.
Even authoritarian states are finding that surveillance capabilities cut both ways in the networked world. During the 2014 #Occupy protests in Hong Kong for instance, drone imagery allowed news agencies to make independent estimates of crowd sizes,9 limiting the ability of the government to spin the story as a minor protest. Software was being used to record history from the air, even as it was being used to drive the action on the ground.
When software eats history this way, as it is happening, the ability to forget10 becomes a more important political, economic and cultural concern than the ability to remember.
When bits begin to dominate atoms, it no longer makes sense to think of virtual and physical worlds as separate, detached spheres of human existence. It no longer makes sense to think of machine and human spheres as distinct non-social and social spaces. When software eats the world, “social media,” including both human and machine elements, becomes the entire Internet. “The Internet” in turn becomes the entire world. And in this fusion of digital and physical, it is the digital that dominates.
The fallacious idea that the online world is separate from and subservient to the offline world (an idea called digital dualism, the basis for entertaining but deeply misleading movies such as Tron and The Matrix) yields to an understanding of the Internet as an alternative basis for experiencing all reality, including the old basis: geography.
Science fiction writer Bruce Sterling captured the idea of bits dominating atoms with his notion of “spimes” — enduring digital master objects that can be flexibly realized in different physical forms as the need arises. A book, for instance, is a spime rather than a paper object today, existing as a master digital copy that can evolve indefinitely, and persist beyond specific physical copies.
At a more abstract level, the idea of a “journey” becomes a spime that can be flexibly realized in many ways, through specific physical vehicles or telepresence technologies. A “television news show” becomes an abstract spime that might be realized through the medium of a regular television crew filming on location, an ordinary citizen livestreaming events she is witnessing, drone footage, or official surveillance footage obtained by activist hackers.
Spimes in fact capture the essential spirit of bricolage: turning ideas into reality using whatever is freely or cheaply available, instead of through dedicated resources controlled by authoritarian entities. This capability highlights the economic significance of bits dominating atoms. When the value of a physical resource is a function of how openly and intelligently it can be shared and used in conjunction with software, it becomes less contentious. In a world organized by atoms-over-bits logic, most resources are by definition what economists call rivalrous: if I have it, you don’t. Such captive resources are limited by the imagination and goals of one party. An example is a slice of the electromagnetic spectrum reserved for a television channel. Resources made intelligently open to all, on the other hand, such as Twitter, are limited only by collective technical ingenuity. The rivalrousness of goods becomes a function of the amount of software and imagination used to leverage them, individually or collectively.
When software eats the economy, the so-called “sharing economy” becomes the entire economy, and renting, rather than ownership, becomes the default logic driving consumption.
The fact that all this follows from “social” problem-solving mechanisms suggests that the very meaning of the word has changed. As sociologist Bruno Latour has argued, “social” is now about more than the human. It includes ideas and objects flexibly networked through software. Instead of being an externally injected alien element, technology and innovation become part of the definition of what it means to be social.
What we are living through today is a hardware and software upgrade for all of civilization. It is, in principle, no different from buying a new smartphone and moving music, photos, files and contacts to it. And like a new smartphone, our new planet-scale hardware comes with powerful but disorienting new capabilities that test our ability to adapt.
And of all the ways we are adapting, the single most important one is the adaptation in our problem-solving behaviors.
This is the second major subplot in our Tale of Two Computers. Wherever bits begin to dominate atoms, we solve problems differently. Instead of defining and pursuing goals, we create and exploit luck.
Upgrading a planet-scale computer is, of course, a more complex matter than trading in an old smartphone for a new one, so it is not surprising that it has already taken us nearly half a century, and we’re still not done.
Since 1974, the year of peak centralization, we have been trading in a world whose functioning is driven by atoms in geography for one whose functioning is driven by bits on networks. The process has been something like vines growing all over an aging building, creeping in through the smallest cracks in the masonry to establish a new architectural logic.
The difference between the two is simple: the geographic world solves problems in goal-driven ways, through literal or metaphoric zero-sum territorial conflict. The networked world solves them in serendipitous ways, through innovations that break assumptions about how resources can be used, typically making them less rivalrous and unexpectedly abundant.
Goal-driven problem-solving follows naturally from the politician’s syllogism: we must do something; this is something; we must do this. Such goals usually follow from gaps between reality and utopian visions. Solutions are driven by the deterministic form-follows-function1 principle, which emerged with authoritarian high-modernism in the early twentieth century. At its simplest, the process looks roughly like this:
Problem selection: Choose a clear and important problem
Resourcing: Capture resources by promising to solve it
Solution: Solve the problem within promised constraints
This model is so familiar that it seems tautologically equivalent to “problem solving”. It is hard to see how problem-solving could work any other way. This model is also an authoritarian territorial claim in disguise. A problem scope defines a boundary of claimed authority. Acquiring resources means engaging in zero-sum competition to bring them into your boundary, as captive resources. Solving the problem generally means achieving promised effects within the boundary without regard to what happens outside. This means that unpleasant unintended consequences — what economists call social costs — are typically ignored, especially those which impact the least powerful.
We have already explored the limitations of this approach in previous essays, so we can just summarize them here. Choosing a problem based on “importance” means uncritically accepting pastoral problem frames and priorities. Constraining the solution with an alluring “vision” of success means limiting creative possibilities for those who come later. Innovation is severely limited: You cannot act on unexpected ideas that solve different problems with the given resources, let alone pursue the direction of maximal interestingness indefinitely. This means unseen opportunity costs can be higher than visible benefits. You also cannot easily pursue solutions that require different (and possibly much cheaper) resources than the ones you competed for: problems must be solved in pre-approved ways.
This is not a process that tolerates uncertainty or ambiguity well, let alone thrives on it. Even positive uncertainty becomes a problem: an unexpected budget surplus must be hurriedly used up, often in wasteful ways, otherwise the budget might shrink next year. Unexpected new information and ideas, especially from novel perspectives — the fuel of innovation — are by definition a negative, to be dealt with like unwanted interruptions. A new smartphone app not anticipated by prior regulations must be banned.
In the last century, the most common outcome of goal-directed problem solving in complex cases has been failure.
The networked world approach is based on a very different idea. It does not begin with utopian goals or resources captured through specific promises or threats. Instead it begins with open-ended, pragmatic tinkering that thrives on the unexpected. The process is not even recognizable as a problem-solving mechanism at first glance:
Immersion in relevant streams of ideas, people and free capabilities
Experimentation to uncover new possibilities through trial and error
Leverage to double down on whatever works unexpectedly well
Where the politician’s syllogism focuses on repairing things that look broken in relation to an ideal of changeless perfection, the tinkerer’s way focuses on possibilities for deliberate change. As Dilbert creator Scott Adams observed, “Normal people don’t understand this concept; they believe that if it ain’t broke, don’t fix it. Engineers believe that if it ain’t broke, it doesn’t have enough features yet.”2
What would be seemingly pointless disruption in an unchanging utopia becomes a way to stay one step ahead in a changing environment. This is the key difference between the two problem-solving processes: in goal-driven problem-solving, open-ended ideation is fundamentally viewed as a negative. In tinkering, it is a positive.
The first phase — inhabiting relevant streams — can look like idle procrastination on Facebook and Twitter, or idle play with cool new tools discovered on Github. But it is really about staying sensitized to developing opportunities and threats. The perpetual experimentation, as we saw in previous essays, feeds via bricolage on whatever is available. Often these are resources considered “waste” by neighboring goal-directed processes: a case of social costs being turned into assets. A great deal of modern data science, for instance, begins with “data exhaust”: data of no immediate goal-directed use to an organization that would normally get discarded in an environment of high storage costs. Since the process begins with low-stakes experimentation, the cost of failures is naturally bounded. The upside, however, is unbounded: there is no necessary limit to what unexpected leveraged uses you might discover for new capabilities.
Tinkerers — be they individuals or organizations — in possession of valuable but under-utilized resources tend to do something counter-intuitive. Instead of keeping idle resources captive, they open up access to as many people as possible, with as few strings attached as possible, in the hope of catalyzing spillover tinkering. Where it works, thriving ecosystems of open-ended innovation form, and steady streams of new wealth begin to flow. Those who share interesting and unique resources in such open ways gain a kind of priceless goodwill money cannot buy. The open-source movement, Google’s Android operating system, Big Data technology, the Arduino hardware experimentation kit and the OpenROV underwater robot all began this way. Most recently, Tesla voluntarily opened up access to its electric vehicle technology patents under highly liberal terms compared to automobile industry norms.
Tinkering is a process of serendipity-seeking that does not just tolerate uncertainty and ambiguity, it requires it. When conditions for it are right, the result is a snowballing effect where pleasant surprises lead to more pleasant surprises.
What makes this a problem-solving mechanism is diversity of individual perspectives coupled with the law of large numbers (the statistical idea that rare events can become highly probable if there are enough trials going on). If an increasing number of highly diverse individuals operate this way, the chances of any given problem getting solved via a serendipitous new idea slowly rise. This is the luck of networks.
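The statistical intuition here is easy to make concrete: even if any single tinkering experiment has only a tiny chance of producing a breakthrough, the probability that at least one of many independent experiments succeeds climbs rapidly toward certainty. A minimal sketch, where the 0.1% per-trial success rate is an arbitrary assumption chosen for illustration:

```python
# If each independent trial succeeds with small probability p,
# the chance that at least one of n trials succeeds is
#   P(at least one success) = 1 - (1 - p)**n

p = 0.001  # assumed chance that any one experiment "works"

for n in [100, 1_000, 10_000]:
    prob = 1 - (1 - p) ** n
    print(f"{n:>6} trials -> P(at least one success) = {prob:.3f}")
# A hundred trials leave success unlikely (~0.095); ten thousand
# trials make it a near-certainty (~1.000).
```

This is the sense in which "the luck of networks" is engineered luck: no individual trial is reliable, but the aggregate is.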
Serendipitous solutions are not just cheaper than goal-directed ones. They are typically more creative and elegant, and require much less conflict. Sometimes they are so creative, the fact that they even solve a particular problem becomes hard to recognize. For example, telecommuting and video-conferencing do more to “solve” the problem of fossil-fuel dependence than many alternative energy technologies, but are usually understood as technologies for flex-work rather than energy savings.
Ideas born of tinkering are not targeted solutions aimed at specific problems, such as “climate change” or “save the middle class,” so they can be applied more broadly. As a result, not only do current problems get solved in unexpected ways, but new value is created through surplus and spillover. The clearest early sign of such serendipity at work is unexpectedly rapid growth in the adoption of a new capability. This indicates that it is being used in many unanticipated ways, solving both seen and unseen problems, by both design and “luck”.
Venture capital is ultimately the business of detecting such signs of serendipity early and investing to accelerate it. This makes Silicon Valley the first economic culture to fully and consciously embrace the natural logic of networks. When the process works well, resources flow naturally towards whatever effort is growing and generating serendipity the fastest. The better this works, the more resources flow in ways that minimize opportunity costs.
From the inside, serendipitous problem solving feels like the most natural thing in the world. From the perspective of goal-driven problem solvers, however, it can look indistinguishable from waste and immoral priorities.
This perception exists primarily because access to the luck of sufficiently weak networks can be slowed down by sufficiently strong geographic world boundaries (what is sometimes called bahramdipity: serendipity thwarted by powerful forces). Where resources cannot stream freely to accelerate serendipity, they cannot solve problems through engineered luck, or create surplus wealth. The result is growing inequality between networked and geographic worlds.
This inequality superficially resembles the inequality within the geographic world created by malfunctioning financial markets, crony capitalism and rent-seeking behaviors. As a result, it can be hard for non-technologists to tell Wall Street and Silicon Valley apart, even though they represent two radically different moral perspectives and approaches to problem-solving. When the two collide on highly unequal terms, as they did in the cleantech sector in the late aughts, the overwhelming advantage enjoyed by geographic-world incumbents can prove too much for the networked world to conquer. In the case of cleantech, software was unable to eat the sector and solve its problems in large part due to massive subsidies and protections available to incumbents.
But this is just a temporary state. As the networked world continues to strengthen, we can expect very different outcomes the next time it takes on problems in the cleantech sector.
As a result of failures and limits that naturally accompany young and growing capabilities, the networked world can seem “unresponsive” to “real” problems.
So while both Wall Street and Silicon Valley can often seem tone-deaf and unresponsive to pressing and urgent pains while minting new billionaires with boring frequency, the causes are different. The problems of Wall Street are real, and symptomatic of a true crisis of social and economic mobility in the geographic world. Those of Silicon Valley on the other hand, exist because not everybody is sufficiently plugged into the networked world yet, limiting its power. The best response we have come up with for the former is periodic bailouts for “too big to fail” organizations in both the public and private sector. The problem of connectivity on the other hand, is slowly and serendipitously solving itself as smartphones proliferate.
This difference between the two problem-solving cultures carries over to macroeconomic phenomena as well.
Unlike booms and busts in the financial markets, which are often artificially created, technological booms and busts are an intrinsic feature of wealth creation itself. As Carlota Perez notes, technology busts in fact typically open up vast new capabilities that were overbuilt during booms. They radically expand access to the luck of networks to larger populations. The technology bust of 2000 for instance, radically expanded access to the tools of entrepreneurship and began fueling the next wave of innovation almost immediately.
The 2007 subprime mortgage bust, born of deceit and fraud, had no such serendipitous impact. It destroyed wealth overall, rather than creating it. The global financial crisis that followed is representative of a broader systemic crisis in the geographic world.
Structure, as the management theorist Alfred Chandler noted in his study of early industrial age corporations, follows strategy. Where a goal-driven strategy succeeds, the temporary scope of the original problem hardens into an enduring and policed organizational boundary. Temporary and specific claims on societal resources transform into indefinite and general captive property rights for the victors of specific political, cultural or military wars.
As a result we get containers with eternally privileged insiders and eternally excluded outsiders: geographic-world organizations. By their very design, such organizations are what Daron Acemoglu and James Robinson call extractive institutions. They are designed not just to solve a specific problem and secure the gains, but to continue extracting wealth indefinitely. Whatever the broader environmental conditions, ideally wealth, harmony and order accumulate inside the victor’s boundaries, while waste, social costs, and strife accumulate outside, to be dealt with by the losers of resource conflicts.
This description does not apply just to large banks or crony capitalist corporations. Even an organization that seems unquestionably like a universal good, such as the industrial age traditional family, comes with a societal cost. In the United States for example, laws designed to encourage marriage and home-ownership systematically disadvantage single adults and non-traditional families (who now collectively form more than half the population). Even the traditional family, as defined and subsidized by politics, is an extractive institution.
Where extractive institutions start to form, it becomes progressively harder to solve future problems in goal-driven ways. Each new problem-solving effort has more entrenched boundaries to deal with. Solving new problems usually means taking on increasingly expensive conflict to redraw boundaries as a first step. In the developed world, energy, healthcare and education are examples of sectors where problem-solving has slowed to a crawl due to a maze of regulatory and other boundaries. The result has been escalating costs and declining innovation — what economist William Baumol has labeled the “cost disease.”
The cost disease is an example of how, in their terminal state, goal-driven problem solving cultures exhaust themselves. Without open-ended innovation, the growing complexity of boundary redrawing makes most problems seem impossible. The planetary computer that is the geographic world effectively seizes up.
On the cusp of the first Internet boom, the landscape of organizations that defines the geographic world was already in deep trouble. As Gilles Deleuze noted around 1992:1
We are in a generalized crisis in relation to all environments of enclosure — prison, hospital, factory, school, family…The administrations in charge never cease announcing supposedly necessary reforms…But everyone knows these environments are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of new forces knocking at the door.
The “crisis in environments of enclosure” is a natural terminal state for the geographic world. When every shared societal resource has been claimed by a few as an eternal and inalienable right, and secured behind regulated boundaries, the only way to gain something is to deprive somebody else of it through ideology-driven conflict.
This is the zero-sum logic of mercantile economic organization, and dates to the sixteenth century. In fact, because some value is lost through conflict, in the absence of open-ended innovation, it can be worse than zero-sum: what decision theorists call negative-sum (the ultimate example of which is of course war).
By the early twentieth century, mercantilist economic logic had led to the world being completely carved up in terms of inflexible land, water, air, mineral and — perhaps most relevant today — spectrum rights. Rights that could not be freely traded or renegotiated in light of changing circumstances.
This is a grim reality we have a tendency to romanticize. As the etymology of words like organization and corporation suggests, we tend to view our social containers through anthropomorphic metaphors. We extend metaphoric and legal fictions of identity, personality, birth and death far beyond the point of diminishing marginal utility. We assume the “life” of these entities to be self-evidently worth extending into immortality. We even mourn them when they do occasionally enter irreversible decline. Companies like Kodak and Radio Shack, for example, evoke such strong positive memories for many Americans that their decline seems truly tragic to many, despite the obvious irrelevance of the business models that originally fueled their rise. We assume that the fates of actual living humans are irreversibly tied to the fates of the artificial organisms they inhabit.
In fact, in the late crisis-ridden state of the geographic world, the “goal” of a typical problem-solving effort is often to “save” some anthropomorphically conceived part of society, without any critical attention devoted to whether it is still necessary, or whether better alternatives are already serendipitously emerging. If innovation is considered a necessary ingredient in the solution at all, only sustaining innovations — those that help preserve and perfect the organization in question — are considered.
Whether the intent is to “save” the traditional family, a failing corporation, a city in decline, or an entire societal class like the “American middle class,” the idea that the continued existence of any organization might be both unnecessary and unjustifiable is rejected as unthinkable. The persistence of geographic world organizations is prized for its own sake, whatever the changes in the environment.
The dark side of such anthropomorphic romanticization is what we might call geographic dualism: a stable planet-wide separation of local utopian zones secured for a privileged few and increasingly dystopian zones for many, maintained through policed boundaries. The greater the degree of geographic dualism, the clearer the divides between slums and high-rises, home owners and home renters, developing and developed nations, wrong and right sides of the tracks, regions with landfills and regions with rent-controlled housing. And perhaps the most glaring divide: secure jobs in regulated sectors with guaranteed lifelong benefits for some, at the cost of needlessly heightened precarity in a rapidly changing world for others.
In a changing environment, organizational stability valued for its own sake becomes a kind of immorality. Seeking such stability means allowing the winners of historic conflicts to enjoy the steady, fixed benefits of stability by imposing increasing adaptation costs on the losers.
In the late eighteenth century, two important developments planted the seeds of a new morality, which sparked the industrial revolution. As a result new wealth began to be created despite the extractive, stability-seeking nature of the geographic world.
With the benefit of a century of hindsight, the authoritarian high-modernist idea that form can follow function in a planned way, via coercive control, seems like wishful thinking beyond a certain scale and complexity. Two phrases popularized by the open-source movement, free as in beer and free as in speech, get at the essence of problem solving through serendipity, an approach that does work1 in large-scale and complex systems.
The way complex systems — such as planet-scale computing capabilities — evolve is perhaps best described by a statement known as Gall’s Law:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
Gall’s Law is in fact much too optimistic. It is not just non-working complex systems designed from scratch that cannot be patched up. Even naturally evolved complex systems that used to work, but have now stopped working, generally cannot be patched into working order again.
The idea that a new, simpler system can revitalize a complex system in a state of terminal crisis is the essence of Promethean thinking. Though the geographic world has reached a state of terminal crisis only recently, the seeds of a simpler working system to replace it were actually planted in the eighteenth century, nearly 200 years before software joined the party. The industrial revolution itself was driven by two elements of our world being partially freed from geographic world logic: people and ideas.
The first was people. In the eighteenth century, the world gradually rejected the idea that people could be property, to be exclusively claimed by other people or organizations as a problem-solving “resource,” and held captive within specific boundaries. Individual rights and at-will employment models emerged in liberal democracies, in place of institutions like slavery, serfdom and caste-based hereditary professions.
The second was ideas. Again, in the late eighteenth century, modern intellectual property rights, in the form of patents with expiration dates, became the norm. In ancient China, those who revealed the secrets of silk-making were put to death by the state. In late eighteenth century Britain, the expiration of James Watt’s patents sparked the industrial revolution.
Thanks to these two enlightened ideas, a small trickle of individual inventions turned into a steady stream of non-zero sum intellectual and capitalist progress within an otherwise mercantilist, zero-sum world. In the process, the stability-seeking logic of mercantilism was gradually replaced by the adaptive logic of creative destruction.
People and ideas became increasingly free in two distinct ways. As Richard Stallman, the pioneer of the free software movement, famously expressed it: The two kinds of freedom are free as in beer and free as in speech.
First, people and ideas were increasingly free in the sense of no longer being considered “property” to be bought and sold like beer by others.
Second, people and ideas became increasingly free in the sense of not being restricted to a single purpose. They could potentially play any role they were capable of fulfilling. For people, this second kind of freedom is usually understood in terms of specific rights such as freedom of speech, freedom of association and assembly, and freedom of religion. What is common to all these specific freedoms is that they represent freedom from the constraints imposed by authoritarian goals. This second kind of freedom is so new, it can be alarming to those used to being told what to do by authority figures.
Where both kinds of freedom exist, networks begin to form. Freedom of speech, for instance, tends to create a thriving literary and journalistic culture, which exists primarily as a network of individual creatives rather than specific organizations. Freedom of association and assembly creates new political movements, in the form of grassroots political networks.
Free people and ideas can associate in arbitrary ways, creating interesting new combinations and exploring open-ended possibilities. They can make up their own minds about whether problems declared urgent by authoritarian leaders are actually the right focus for their talents. Free ideas are even more powerful, since unlike the talents of free individuals, they are not restricted to one use at a time.
Free people and free ideas formed the “working simple system” that drove two centuries of disruptive industrial age innovation.
Tinkering — the steady operation of this working simple system — is a much more subversive force than we usually recognize, since it poses an implicit challenge to authoritarian priorities.
This is what makes tinkering an undesirable, but tolerable bug in the geographic world. So long as material constraints limited the amount of tinkering going on, the threat to authority was also limited. Since the “means of production” were not free, either as in beer or as in speech, the anti-authoritarian threat of tinkering could be contained by restricting access to them.
With software eating the world, this is changing. Tinkering is becoming much more than a minority activity pursued by the lucky few with access to well-stocked garages and junkyards. It is becoming the driver of a global mass flourishing.
As Karl Marx himself realized, the end-state of industrial capitalism is in fact the condition where the means of production become increasingly available to all. Of course, it is already becoming clear that the result is neither the utopian collectivist workers’ paradise he hoped for, nor the utopian leisure society that John Maynard Keynes hoped for. Instead, it is a world where increasingly free people, working with increasingly free ideas and means of production, operate by their own priorities. Authoritarian leaders, used to relying on coercion and policed boundaries, find it increasingly hard to enforce their priorities on others in such a world.
Chandler’s principle of structure following strategy allows us to understand what is happening as a result. If non-free people, ideas and means of production result in a world of container-like organizations, free people, ideas and means of production result in a world of streams.
A stream is simply a life context formed by all the information flowing towards you via a set of trusted connections — to free people, ideas and resources — from multiple networks. If in a traditional organization nothing is free and everything has a defined role in some grand scheme, in a stream, everything tends steadily towards free as in both beer and speech. “Social” streams enabled by computing power in the cloud and on smartphones are not a compartmentalized location for a particular kind of activity. They provide an information and connection-rich context for all activity.
Unlike organizations defined by boundaries, streams are what Acemoglu and Robinson call pluralist institutions. These are the opposite of extractive: they are open, inclusive and capable of creating wealth in non-zero-sum ways.
On Facebook for example, connections are made voluntarily (unlike reporting relationships on an org chart) and pictures or notes are usually shared freely (unlike copyrighted photos in a newspaper archive), with few restrictions on further sharing. Most of the capabilities of the platform are free-as-in-beer. What is less obvious is that they are also free-as-in-speech. Except at the extremes, Facebook does not attempt to dictate what kinds of groups you are allowed to form on the platform.
If the three most desirable things in a world defined by organizations are location, location and location,1 in the networked world they are connections, connections and connections.
Streams are not new in human culture. Before the Silk Road was a Darknet site, it was a stream of trade connecting Asia, Africa and Europe. Before there were lifestyle-designing free agents, hackers and modern tinkerers, there were the itinerant tinkers of early modernity. The collective invention settings we discussed in the last essay, such as the Cornish mining district in James Watt’s time and Silicon Valley today, are examples of early, restricted streams. The main streets of thriving major cities are also streams, where you might run into friends unexpectedly, learn about new events through posted flyers, and discover new restaurants or bars.
What is new is the idea of a digital stream created by software. While geography dominates physical streams, digital streams can dominate geography. Access to the stream of innovation that is Silicon Valley is limited by geographic factors such as cost of living and immigration barriers. Access to the stream of innovation that is Github is not. On a busy main street, you can only run into friends who also happen to be out that evening, but with Augmented Reality glasses on, you might also “run into” friends from around the world and share your physical experiences with them.
What makes streams ideal contexts for open-ended innovation through tinkering is that they constantly present unrelated people, ideas and resources in unexpected juxtapositions. This happens because streams emerge as the intersection of multiple networks. On Facebook, or even your personal email, you might be receiving updates from both family and coworkers. You might also be receiving imported updates from structurally distinct networks, such as Twitter or the distribution network of a news source. This means each new piece of information in a stream is viewed against a backdrop of overlapping, non-exclusive contexts, and a plurality of unrelated goals. At the same time, your own actions are being viewed by others in multiple unrelated ways.
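The mechanics of such juxtaposition are simple to sketch: a personal stream is essentially a timestamp-ordered merge of several independent feeds, so items from structurally unrelated networks land next to each other in one timeline. A toy illustration, with invented feed names and items:

```python
import heapq

# Hypothetical feeds from structurally distinct networks,
# each already sorted by timestamp.
family  = [(1, "family",  "cousin posts graduation photos")]
work    = [(2, "work",    "coworker announces a new release")]
twitter = [(3, "twitter", "friend tweets an obscure news item")]

# A "stream" is just the time-ordered merge of all feeds:
# unrelated contexts end up juxtaposed in a single timeline.
stream = list(heapq.merge(family, work, twitter))

for timestamp, network, item in stream:
    print(timestamp, network, item)
```

The merge itself is trivial; the value lies in the fact that each item is now read against the backdrop of every other network feeding the stream.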
As a result of such unexpected juxtapositions, you might “solve” problems you didn’t realize existed and do things that nobody realized were worth doing. For example, seeing a particular college friend and a particular coworker in the same stream might suggest a possibility for a high-value introduction: a small act of social bricolage. Because you are seen by many others from different perspectives, you might find people solving problems for you without any effort on your part. A common experience on Twitter, for example, is a Twitter-only friend tweeting an obscure but important news item, which you might otherwise have missed, just for your benefit.
When a stream is strengthened through such behaviors, every participating network is strengthened.
While Twitter and Facebook are the largest global digital streams today, there are thousands more across the Internet. Specialized ones such as Github and Stack Overflow cater to specific populations, but are open to anyone willing to learn. Newer ones such as Instagram and Whatsapp tap into the culture of younger populations. Reddit has emerged as an unusual venue for keeping up with science by interacting with actual working scientists. The developers of every agile software product in perpetual beta inhabit a stream of unexpected uses discovered by tinkering users. Slack turns the internal life of a corporation into a stream.
Streams are not restricted to humans. Twitter already has a vast population of interesting bots, ranging from House of Coates (an account that is updated by a smart house) to space probes and even sharks tagged with transmitters by researchers.2 Facebook offers pages that allow you to ‘like’ and follow movies and books.
By contrast, when you are sitting in a traditional office, working with a laptop configured exclusively for work use by an IT department, you receive updates only from one context, and can only view them against the backdrop of a single, exclusive and totalizing context. Despite the modernity of the tools deployed, the architecture of information is not very different from the paperware world. If information from other contexts leaks in, it is generally treated as a containment breach: a cause for disciplinary action in the most old-fashioned businesses. People you meet have pre-determined relationships with you, as defined by the organization chart. If you relate to a coworker in more than one way (as both a team member and a tennis buddy), that weakens the authority of the organization. The same is true of resources and ideas. Every resource is committed to a specific “official” function, and every idea is viewed from a fixed default perspective and has a fixed “official” interpretation: the organization’s “party line” or “policy.”
This has a radical consequence. When organizations work well and there are no streams, we view reality in what behavioral psychologists call functionally fixed 3 ways: people, ideas and things have fixed, single meanings. This makes them less capable of solving new problems in creative ways. In a dystopian stream-free world, the most valuable places are the innermost sanctums: these are typically the oldest organizations, most insulated from new information. But they are also the locus of the most wealth, and offer the most freedom for occupants. In China, for instance, the innermost recesses of the Communist Party are still the best place to be. In a Fortune 500 company, the best place to be is still the senior executive floor.
When streams work well on the other hand, reality becomes increasingly intertwingled (a portmanteau of intertwined and tangled), as Ted Nelson evocatively labeled the phenomenon. People, ideas and things can have multiple, fluid meanings depending on what else appears in juxtaposition with them. Creative possibilities rapidly multiply, with every new network feeding into the stream. The most interesting place to be is usually the very edge, rather than the innermost sanctums. In the United States, being a young and talented person in Silicon Valley can be more valuable and interesting than being a senior staffer in the White House. Being the founder of the fastest growing startup may offer more actual leverage than being President of the United States.
We instinctively understand the difference between the two kinds of context. In an organization, if conflicting realities leak in, we view them as distractions or interruptions, and react by trying to seal them out better. In a stream, if things get too homogeneous and non-pluralistic, we complain that things are getting boring, predictable, and turning into an echo chamber. We react by trying to open things up, so that more unexpected things can happen.
What we do not understand as instinctively is that streams are problem-solving and wealth-creation engines. We view streams as zones of play and entertainment, through the lens of the geographic-dualist assumption that play cannot also be work.
In our Tale of Two Computers, the networked world will become firmly established as the dominant planetary computer when this idea becomes instinctive, and work and play become impossible to tell apart.
The first sustainable socioeconomic order of the networked world is just beginning to emerge, and the experience of being part of a system that is growing smarter at an exponential rate is deeply unsettling to pastoralists and immensely exciting to Prometheans.
Our geographic-world intuitions and our experience of the authoritarian institutions of the twentieth century lead us to expect that any larger system we are part of will either plateau into some sort of impersonal, bureaucratic stupidity, or turn “evil” somehow and oppress us.
The first kind of apocalyptic expectation is at the heart of movies like Idiocracy and Wall-E, set in trashed futures inhabited by a degenerate humanity that has irreversibly destroyed nature.
The second kind is the fear behind the idea of the Singularity: the rise of a self-improving systemic intelligence that might oppress us. Popular literal-minded misunderstandings of the concept, rooted in digital dualism, result in movies such as Terminator. These replace the fundamental humans-against-nature conflict of the geographic world with an imagined humans-against-machines conflict of the future. As a result, believers in such dualist singularities, rather ironically for extreme technologists, are reduced to fearfully awaiting the arrival of a God-like intelligence with fingers crossed, hoping it will be benevolent.
Both fears are little more than technological obscurantism. They are motivated by a yearning for the comforting certainties of the geographic world, with its clear boundaries, cohesive identities, and idealized heavens and hells.
Neither is a meaningful fear. The networked world blurs the distinction between wealth and waste. This undermines the first fear. The serendipity of the networked world depends on free people, ideas and capabilities combining in unexpected ways: “Skynet” cannot be smarter than humans unless the humans within it are free. This undermines the second fear.
To the extent that these fears are justified at all, they reflect the terminal trajectory of the geographic world, not the early trajectory of the networked world.
An observation due to Arthur C. Clarke offers a way to understand this second trajectory: any sufficiently advanced technology is indistinguishable from magic. The networked world evolves so rapidly through innovation, it seems like a frontier of endless magic.
Clarke’s observation has inspired a number of snowclones that shed further light on where we might be headed. The first, due to Bruce Sterling, is that any sufficiently advanced civilization is indistinguishable from its own garbage. The second, due to futurist Karl Schroeder,1 is that any sufficiently advanced civilization is indistinguishable from nature.
To these we can add one from social media theorist Seb Paquet, which captures the moral we drew from our Tale of Two Computers: any sufficiently advanced kind of work is indistinguishable from play.
Putting these ideas together, we are messily slouching towards a non-pastoral utopia on an asymptotic trajectory where reality gradually blurs into magic, waste into wealth, technology into nature and work into play.
This is a world that is breaking smart, with Promethean vigor, from its own past, like the precocious teenagers who are leading the charge. In broad strokes, this is what we mean by software eating the world.
For Prometheans, the challenge is to explore how to navigate and live in this world. A growing non-geographic-dualist understanding of it is leading to a network culture view of the human condition. If the networked world is a planet-sized distributed computer, network culture is its operating system.
Our task is like Deep Thought’s task when it began constructing its own successor: to develop an appreciation for the “merest operational parameters” of the new planet-sized computer to which we are migrating all our civilizational software and data.
Communication via algorithms instead of natural language, to enable communication between machines, but also between humans.
Develops ideas of contracts readable by both machines and humans ==> Ethereum.
Not many years ago, the idea of having a computer broadly answer questions asked in plain English seemed like science fiction. But when we released Wolfram|Alpha in 2009 one of the big surprises (not least to me!) was that we’d managed to make this actually work. And by now people routinely ask personal assistant systems—many powered by Wolfram|Alpha—zillions of questions in ordinary language every day.
Ask questions in ordinary language, get answers from Wolfram|Alpha
It all works fairly well for quick questions, or short commands (though we’re always trying to make it better!). But what about more sophisticated things? What’s the best way to communicate more seriously with AIs?
I’ve been thinking about this for quite a while, trying to fit together clues from philosophy, linguistics, neuroscience, computer science and other areas. And somewhat to my surprise, what I’ve realized recently is that a big part of the answer may actually be sitting right in front of me, in the form of what I’ve been building towards for the past 30 years: the Wolfram Language.
Maybe this is a case of having a hammer and then seeing everything as a nail. But I’m pretty sure there’s more to it. And at the very least, thinking through the issue is a way to understand more about AIs and their relation to humans.
Computation Is Powerful
The first key point—that I came to understand clearly only after a series of discoveries I made in basic science—is that computation is a very powerful thing, that lets even tiny programs (like cellular automata, or neural networks) behave in incredibly complicated ways. And it’s this kind of thing that an AI can harness.
A cellular automaton with a very simple rule set (shown in the lower left corner) that produces highly complex behavior
Looking at pictures like this we might be pessimistic: how are we humans going to communicate usefully about all that complexity? Ultimately, what we have to hope is that we can build some kind of bridge between what our brains can handle and what computation can do. And although I didn’t look at it quite this way, this turns out to be essentially just what I’ve been trying to do all these years in designing the Wolfram Language.
Language of Computational Thinking
I have seen my role as being to identify lumps of computation that people will understand and want to use, like FindShortestTour, ImageIdentify or Predict. Traditional computer languages have concentrated on low-level constructs close to the actual hardware of computers. But in the Wolfram Language I’ve instead started from what we humans understand, and then tried to capture as much of it as possible in the language.
In the early years, we were mostly dealing with fairly abstract concepts, about, say, mathematics or logic or abstract networks. But one of the big achievements of recent years—closely related to Wolfram|Alpha—has been that we’ve been able to extend the structure we built to cover countless real kinds of things in the world—like cities or movies or animals.
One might wonder: why invent a language for all this; why not just use, say, English? Well, for specific things, like “hot pink”, “new york city” or “moons of pluto”, English is good—and actually for such things the Wolfram Language lets people just use English. But when one’s trying to describe more complex things, plain English pretty quickly gets unwieldy.
Imagine for example trying to describe even a fairly simple algorithmic program. A back-and-forth dialog—“Turing-test style”—would rapidly get frustrating. And a straight piece of English would almost certainly end up with incredibly convoluted prose like one finds in complex legal documents.
The Wolfram Language specifies clearly and succinctly how to create this image. The equivalent natural-language specification is complicated and subject to misinterpretation.
But the Wolfram Language is built precisely to solve such problems. It’s set up to be readily understandable to humans, capturing the way humans describe and think about things. Yet it also has a structure that allows arbitrary complexity to be assembled and communicated. And, of course, it’s readily understandable not just by humans, but also by machines.
I realize I’ve actually been thinking and communicating in a mixture of English and Wolfram Language for years. When I give talks, for example, I’ll say something in English, then I’ll just start typing to communicate my next thought with a piece of Wolfram Language code that executes right there.
The Wolfram Language mixes well with English in documents and thought streams
But let’s get back to AI. For most of the history of computing, we’ve built programs by having human programmers explicitly write lines of code, understanding (apart from bugs!) what each line does. But achieving what can reasonably be called AI requires harnessing more of the power of computation. And to do this one has to go beyond programs that humans can directly write—and somehow automatically sample a broader swath of possible programs.
We can do this through the kind of algorithm automation we’ve long used in Mathematica and the Wolfram Language, or we can do it through explicit machine learning, or through searching the computational universe of possible programs. But however we do it, one feature of the programs that come out is that they have no reason to be understandable by humans.
Engineered programs are written to be human-readable. Automatically created or discovered programs are not necessarily human-readable.
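One way to picture "sampling a broader swath of possible programs" is a brute-force search over a small program space. The sketch below (an illustration, under the assumption that the number of distinct rows a cellular automaton produces is a crude proxy for behavioral richness) enumerates all 256 elementary cellular automaton rules and ranks them, so the highest-scoring programs are found rather than written:

```python
# A toy "program search": enumerate the 256 elementary cellular automaton
# rules, run each from a single black cell, and score it by how many
# distinct rows it produces. Rules that score highest were not designed
# by a human line by line -- they were discovered by the search.

def run_rule(rule, width=32, steps=32):
    """Evolve an elementary CA from a single black cell; return all rows."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [tuple(cells)]
    for _ in range(steps):
        cells = [
            (rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2
                      + cells[(i + 1) % width])) & 1
            for i in range(width)
        ]
        rows.append(tuple(cells))
    return rows

def complexity(rule):
    """Crude complexity proxy: number of distinct rows in the evolution."""
    return len(set(run_rule(rule)))

# Rank every possible rule by this measure.
ranked = sorted(range(256), key=complexity, reverse=True)
print(ranked[:5])  # the richest-behaving rules under this crude measure
```

The point is not the specific scoring function, but that nothing in the winning programs was ever written to be human-readable.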
At some level it’s unsettling. We don’t know how the programs work inside, or what they might be capable of. But we know they’re doing elaborate computation that’s in a sense irreducibly complex to analyze.
There’s another, very familiar place where the same kind of thing happens: the natural world. Whether we look at fluid dynamics, or biology, or whatever, we see all sorts of complexity. And in fact the Principle of Computational Equivalence that emerged from the basic science I did implies that this complexity is in a sense exactly the same as the complexity that can occur in computational systems.
Over the centuries we’ve been able to identify aspects of the natural world that we can understand, and then harness them to create technology that’s useful to us. And our traditional engineering approach to programming works more or less the same way.
But for AI, we have to venture out into the broader computational universe, where—as in the natural world—we’re inevitably dealing with things we cannot readily understand.
What Will AIs Do?
Let’s imagine we have a perfect, complete AI, that’s able to do anything we might reasonably associate with intelligence. Maybe it’ll get input from lots of IoT sensors. And it has all sorts of computation going on inside. But what is it ultimately going to try to do? What is its purpose going to be?
Answering this means diving into some fairly deep philosophy, involving issues that have been batted around for thousands of years—but which are finally going to really matter in dealing with AIs.
One might think that as an AI becomes more sophisticated, so would its purposes, and that eventually the AI would end up with some sort of ultimate abstract purpose. But this doesn’t make sense. Because there is really no such thing as abstractly defined absolute purpose, derivable in some purely formal mathematical or computational way. Purpose is something that’s defined only with respect to humans, and their particular history and culture.
An “abstract AI”, not connected to human purposes, will just go along doing computation. And as with most cellular automata and most systems in nature, we won’t be able to identify—or attribute—any particular “purpose” to that computation, or to the system that does it.
Giving Goals for an AI
Technology has always been about automating things so humans can define goals, and then those goals can automatically be achieved by the technology.
For most kinds of technology, those goals have been tightly constrained, and not too hard to describe. But for a general computational system they can be completely arbitrary. So then the challenge is how to describe them.
What do you say to an AI to tell it what you want it to do for you? You’re not going to be able to tell it exactly what to do in each and every circumstance. You’d only be able to do that if the computations the AI could do were tightly constrained, like in traditional software engineering. But for the AI to work properly, it’s going to have to make use of broader parts of the computational universe. And it’s then a consequence of a phenomenon I call computational irreducibility that you’ll never be able to determine everything it’ll do.
So what’s the best way to define goals for an AI? It’s complicated. If the AI can experience your life alongside you—seeing what you see, reading your email, and so on—then, just like with a person you know well, you might be able to tell the AI at least simple goals just by saying them in natural language.
But what if you want to define more complex goals, or goals that aren’t closely associated with what the AI has already experienced? Then small amounts of natural language wouldn’t be enough. Perhaps the AI could go through a whole education. But a better idea would be to leverage what we have in the Wolfram Language, which in effect already has lots of knowledge of the world built into it, in a way that both the human and the AI can use.
AIs Talking to AIs
Thinking about how humans communicate with AIs is one thing. But how will AIs communicate with one another? One might imagine they could do literal transfers of their underlying representations of knowledge. But that wouldn’t work, because as soon as two AIs have had different “experiences”, the representations they use will inevitably be at least somewhat different.
And so, just like humans, the AIs are going to end up needing to use some form of symbolic language that represents concepts abstractly, without specific reference to the underlying representations of those concepts.
One might then think the AIs should just communicate in English; at least that way we’d be able to understand them! But it wouldn’t work out. Because the AIs would inevitably need to progressively extend their language—so even if it started as English, it wouldn’t stay that way.
In human natural languages, new words get added when there are new concepts that are widespread enough to make representing them in the language useful. Sometimes a new concept is associated with something new in the world (“blog”, “emoji”, “smartphone”, “clickbait”, etc.); sometimes it’s associated with a new distinction among existing things (“road” vs. “freeway”, “pattern” vs. “fractal”).
Often it’s science that gives us new distinctions between things, by identifying distinct clusters of behavior or structure. But the point is that AIs can do that on a much larger scale than humans. For example, our Image Identification Project is set up to recognize the 10,000 or so kinds of objects that we humans have everyday names for. But internally, as it’s trained on images from the world, it’s discovering all sorts of other distinctions that we don’t have names for, but that are successful at robustly separating things.
I’ve called these “post-linguistic emergent concepts” (or PLECs). And I think it’s inevitable that in a population of AIs, an ever-expanding hierarchy of PLECs will appear, forcing the language of the AIs to progressively expand.
But how could the framework of English support that? I suppose each new concept could be assigned a word formed from some hash-code-like collection of letters. But a structured symbolic language—as the Wolfram Language is—provides a much better framework. Because it doesn’t require the units of the language to be simple “words”, but allows them to be arbitrary lumps of symbolic information, such as collections of examples (so that, for example, a word can be represented by a symbolic structure that carries around its definitions).
So should AIs talk to each other in Wolfram Language? It seems to make a lot of sense—because it effectively starts from the understanding of the world that’s been developed through human knowledge, but then provides a framework for going further. It doesn’t matter how the syntax is encoded (input form, XML, JSON, binary, whatever). What matters is the structure and content that are built into the language.
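The point that the encoding doesn't matter can be sketched in a few lines. The fragment below (an illustration in Python, not actual Wolfram Language) represents an expression as a head plus arguments, serializes it to JSON as one possible wire format, and shows that the receiver recovers the same structure; `FindShortestTour` here stands in for any symbolic head:

```python
# A symbolic expression as a head plus arguments, in the spirit of the
# Wolfram Language's internal form. Two agents can exchange such
# expressions in any encoding -- JSON here -- because what matters is
# the structure and content, not the surface syntax.

import json

def expr(head, *args):
    """Build a symbolic expression as a plain serializable structure."""
    return {"head": head, "args": list(args)}

# A hypothetical message one AI might send another (coordinates invented):
message = expr("FindShortestTour",
               expr("List", [40.7, -74.0], [48.9, 2.4], [35.7, 139.7]))

encoded = json.dumps(message)   # wire format: JSON (could be XML, binary, ...)
decoded = json.loads(encoded)   # the receiver recovers the identical structure
print(decoded["head"])          # -> FindShortestTour
```

Because the units of the language are arbitrary symbolic lumps rather than words, a "concept" in such a message can carry its own defining examples along with it.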
Information Acquisition: The Billion-Year View
Over the course of the billions of years that life has existed on Earth, there’ve been a few different ways of transferring information. The most basic is genomics: passing information at the hardware level. But then there are neural systems, like brains. And these get information—like our Image Identification Project—by accumulating it from experiencing the world. This is the mechanism that organisms use to see, and to do many other “AI-ish” things.
But in a sense this mechanism is fundamentally limited, because every different organism—and every different brain—has to go through the whole process of learning for itself: none of the information obtained in one generation can readily be passed to the next.
But this is where our species made its great invention: natural language. Because with natural language it’s possible to take information that’s been learned, and communicate it in abstract form, say from one generation to the next. There’s still a problem however, because when natural language is received, it still has to be interpreted, in a separate way in each brain.
Information transfer: Level 0: genomics; Level 1: individual brains; Level 2: natural language; Level 3: computational knowledge language
And this is where the idea of a computational-knowledge language—like the Wolfram Language—is important: because it gives a way to communicate concepts and facts about the world, in a way that can immediately and reproducibly be executed, without requiring separate interpretation on the part of whatever receives it.
It’s probably not a stretch to say that the invention of human natural language was what led to civilization and our modern world. So then what are the implications of going to another level: of having a precise computational-knowledge language, that carries not just abstract concepts, but also a way to execute them?
One possibility is that it may define the civilization of the AIs, whatever that may turn out to be. And perhaps this may be far from what we humans—at least in our present state—can understand. But the good news is that at least in the case of the Wolfram Language, precise computational-knowledge language isn’t incomprehensible to humans; in fact, it was specifically constructed to be a bridge between what humans can understand, and what machines can readily deal with.
What If Everyone Could Code?
So let’s imagine a world in which in addition to natural language, it’s also common for communication to occur through a computational-knowledge language like the Wolfram Language. Certainly, a lot of the computational-knowledge-language communication will be between machines. But some of it will be between humans and machines, and quite possibly it would be the dominant form of communication here.
In today’s world, only a small fraction of people can write computer code—just as, 500 or so years ago, only a small fraction of people could write natural language. But what if a wave of computer literacy swept through, and the result was that most people could write knowledge-based code?
Natural language literacy enabled many features of modern society. What would knowledge-based code literacy enable? There are plenty of simple things. Today you might get a menu of choices at a restaurant. But if people could read code, there could be code for each choice, that you could readily modify to your liking. (And actually, something very much like this is soon going to be possible—with Wolfram Language code—for biology and chemistry lab experiments.) Another implication of people being able to read code is for rules and contracts: instead of just writing prose to be interpreted, one can have code to be read by humans and machines alike.
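A menu choice "written as code" might look something like the following sketch (all the names and the allergy constraint here are invented for illustration; the document does not specify an implementation). A human can read the rule, a machine can execute it, and a customer could edit it before ordering:

```python
# A restaurant choice expressed as code rather than prose: readable by
# humans, executable by machines, and modifiable by the customer.
# Every name and constraint below is invented for illustration.

def salad_order(base="greens", dressing="vinaigrette", extras=()):
    """One menu choice, written so it can be read -- and edited -- as code."""
    disallowed = {"peanuts"}  # e.g. a stated allergy constraint
    if set(extras) & disallowed:
        raise ValueError("order violates a stated constraint")
    return {"base": base, "dressing": dressing, "extras": sorted(extras)}

# A customer tweaks the choice before submitting it:
order = salad_order(extras=("feta", "olives"))
print(order)
```

The same pattern scales up to rules and contracts: the constraint is enforced by execution rather than left to later interpretation.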
But I suspect the implications of widespread knowledge-based code literacy will be much deeper—because it will not only give a wide range of people a new way to express things, but will also give them a new way to think about them.
Will It Actually Work?
So, OK, let’s say we want to use the Wolfram Language to communicate with AIs. Will it actually work? To some extent we know it already does. Because inside Wolfram|Alpha and the systems based on it, what’s happening is that natural language questions are being converted to Wolfram Language code.
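The shape of that conversion can be sketched with a deliberately tiny toy (this is not how Wolfram|Alpha works internally, just an illustration of the natural-language-to-symbolic-code step; the grammar covers exactly two question forms):

```python
# A toy converter from a narrow class of English questions into symbolic
# expressions that a computational engine could then evaluate.

import re

# The entire "grammar" of this toy: two question templates.
PATTERNS = [
    (r"what is (\d+) plus (\d+)", lambda a, b: ("Plus", int(a), int(b))),
    (r"what is (\d+) times (\d+)", lambda a, b: ("Times", int(a), int(b))),
]

def to_symbolic(question):
    """Convert a question to a symbolic expression, or None if unrecognized."""
    q = question.lower().strip("?! ")
    for pattern, build in PATTERNS:
        m = re.fullmatch(pattern, q)
        if m:
            return build(*m.groups())
    return None  # outside the toy grammar

def evaluate(expr_):
    """Evaluate one of the toy symbolic expressions."""
    head, a, b = expr_
    return a + b if head == "Plus" else a * b

print(to_symbolic("What is 2 plus 3?"))             # -> ('Plus', 2, 3)
print(evaluate(to_symbolic("What is 4 times 5?")))  # -> 20
```

The hard part of the real system is, of course, that the "grammar" must cover ordinary language and the symbolic target must cover the world.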
But what about more elaborate applications of AI? Many places where the Wolfram Language is used are examples of AI, whether they’re computing with images or text or data or symbolic structures. Sometimes the computations involve algorithms whose goals we can precisely define, like FindShortestTour; sometimes they involve algorithms whose goals are less precise, like ImageIdentify. Sometimes the computations are couched in the form of “things to do”, sometimes as “things to look for” or “things to aim for”.
We’ve come a long way in representing the world in the Wolfram Language. But there’s still more to do. Back in the 1600s it was quite popular to try to create “philosophical languages” that would somehow symbolically capture the essence of everything one could think about. Now we need to really do this. And, for example, to capture in a symbolic way all the kinds of actions and processes that can happen, as well as things like peoples’ beliefs and mental states. As our AIs become more sophisticated and more integrated into our lives, representing these kinds of things will become more important.
For some tasks and activities we’ll no doubt be able to use pure machine learning, and never have to build up any kind of intermediate structure or language. But much as natural language was crucial in enabling our species to get where we have, so also having an abstract language will be important for the progress of AI.
I’m not sure what it would look like, but we could perhaps imagine using some kind of pure emergent language produced by the AIs. But if we do that, then we humans can expect to be left behind, and to have no chance of understanding what the AIs are doing. But with the Wolfram Language we have a bridge, because we have a language that’s suitable for both humans and AIs.
More to Say
There’s much to be said about the interplay between language and computation, humans and AIs. Perhaps I need to write a book about it. But my purpose here has been to describe a little of my current thinking, particularly my realizations about the Wolfram Language as a bridge between human understanding and AI.
With pure natural language or traditional computer language, we’ll be hard pressed to communicate much to our AIs. But what I’ve been realizing is that with Wolfram Language there’s a much richer alternative, readily extensible by the AIs, but built on a base that leverages human natural language and human knowledge to maintain a connection with what we humans can understand. We’re seeing early examples already… but there’s a lot further to go, and I’m looking forward to actually building what’s needed, as well as writing about it…
A fork of Caffe with a better Python interface and LSTM RNNs.
An illustrated talk on neural networks!
It looks very interesting, with regard to neural networks.