  • How to C (as of 2016)
    January 13, 2016 at 6:55:54 AM GMT+1 - permalink - https://matt.sh/howto-c
    c
  • How software is eating the world

    Introduction (by Marc Andreessen)

    In 2007, right before the first iPhone launched, I asked Steve Jobs the obvious question: The design of the iPhone was based on discarding every physical interface element except for a touchscreen. Would users be willing to give up the then-dominant physical keypads for a soft keyboard?

    His answer was brusque: “They’ll learn.”

    Steve turned out to be right. Today, touchscreens are ubiquitous and seem normal, and other interfaces are emerging. An entire generation is now coming of age with a completely different tactile relationship to information, validating all over again Marshall McLuhan’s observation that “the medium is the message”.

    A great deal of product development is based on the assumption that products must adapt to unchanging human needs or risk being rejected. Yet, time and again, people adapt in unpredictable ways to get the most out of new tech. Creative people tinker to figure out the most interesting applications, others build on those, and entire industries are reshaped.

    People change, then forget that they changed, and act as though they always behaved a certain way and could never change again. Because of this, unexpected changes in human behavior are often dismissed as regressive rather than as potentially intelligent adaptations.

    But change happens anyway. “Software is eating the world” is the most recent historic transformation of this sort.

    In 2014, a few of us invited Venkatesh Rao to spend the year at Andreessen Horowitz as a consultant to explore the nature of such historic tech transformations. In particular, we set out to answer the question: between the breathless and the despairing extremes of viewing the future, could an intellectually rigorous case be made for pragmatic optimism?

    As this set of essays argues — many of them inspired by a series of intensive conversations Venkat and I had — there is indeed such a case, and it follows naturally from the basic premise that people can and do change. To “break smart” is to adapt intelligently to new technological possibilities.

    With his technological background, satirical eye, and gift for deep and different takes (as anyone who follows his Ribbonfarm blog knows!), there is perhaps nobody better suited than Venkat for telling a story of the future as it breaks smart from the past.

    Whether you’re a high school kid figuring out a career or a CEO attempting to navigate the new economy, Breaking Smart should be on your go-to list of resources for thinking about the future, even as you are busy trying to shape it.

    A New Soft Technology

    Something momentous happened around the year 2000: a major new soft technology came of age. After written language and money, software is only the third major soft technology to appear in human civilization. Fifteen years into the age of software, we are still struggling to understand exactly what has happened. Marc Andreessen’s now-familiar line, software is eating the world, hints at the significance, but we are only just beginning to figure out how to think about the world in which we find ourselves.

    Only a handful of general-purpose technologies1 – electricity, steam power, precision clocks, written language, token currencies, iron metallurgy and agriculture among them – have impacted our world in the sort of deeply transformative way that deserves the description eating. And only two of these, written language and money, were soft technologies: seemingly ephemeral, but capable of being embodied in a variety of specific physical forms. Software has the same relationship to any specific sort of computing hardware as money does to coins or credit cards, or as writing does to clay tablets and paper books.

    But only since about 2000 has software acquired the sort of unbridled power, independent of hardware specifics, that it possesses today. For the first half century of modern computing after World War II, hardware was the driving force. The industrial world mostly consumed software to meet existing needs, such as tracking inventory and payroll, rather than being consumed by it. Serious technologists largely focused on solving the clear and present problems of the industrial age rather than exploring the possibilities of computing proper.

    Sometime around the dot com crash of 2000, though, the nature of software, and its relationship with hardware, underwent a shift. It was a shift marked by accelerating growth in the software economy and a peaking in the relative prominence of hardware.2 The shift happened within the information technology industry first, and then began to spread across the rest of the economy.

    But the economic numbers only hint at3 the profundity of the resulting societal impact. As a simple example, a 14-year-old teenager today (too young to show up in labor statistics) can learn programming, contribute significantly to open-source projects, and become a talented professional-grade programmer before age 18. This is breaking smart: an economic actor using early mastery of emerging technological leverage — in this case a young individual using software leverage — to wield disproportionate influence on the emerging future.

    Only a tiny fraction of this enormously valuable activity — the cost of a laptop and an Internet connection — would show up in standard economic metrics. Based on visible economic impact alone, the effects of such activity might even show up as a negative, in the form of technology-driven deflation. But the hidden economic significance of such an invisible story is at least comparable to that of an 18-year-old paying $100,000 over four years to acquire a traditional college degree. In the most dramatic cases, it can be as high as the value of an entire industry. The music industry is an example: a product created by a teenager, Shawn Fanning’s Napster, triggered a cascade of innovation whose primary visible impact has been the vertiginous decline of big record labels, but whose hidden impact includes an explosion in independent music production and rapid growth in the live-music sector.4

    Software eating the world is a story of the seen and the unseen: small, measurable effects that seem underwhelming or even negative, and large invisible and positive effects that are easy to miss, unless you know where to look.5

    Today, the significance of the unseen story is beginning to be widely appreciated. But as recently as fifteen years ago, when the main act was getting underway, even veteran technologists were being blindsided by the subtlety of the transition to software-first computing.

    Perhaps the subtlest element had to do with Moore’s Law, the famous 1965 observation by Intel co-founder Gordon Moore that the density with which transistors can be packed into a silicon chip doubles every 18 months. By 2000, even as semiconductor manufacturing firms began running into the fundamental limits of Moore’s Law, chip designers and device manufacturers began to figure out how to use Moore’s Law to drive down the cost and power consumption of processors rather than driving up raw performance. The results were dramatic: low-cost, low-power mobile devices, such as smartphones, began to proliferate, vastly expanding the range of what we think of as computers. Coupled with reliable and cheap cloud computing infrastructure and mobile broadband, the result was a radical increase in technological potential. Computing could, and did, become vastly more accessible, to many more people in every country on the planet, at radically lower cost and expertise levels.
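
    To get a sense of how quickly that compounding adds up, here is a back-of-the-envelope Python sketch using the 18-month doubling period quoted above; the real cadence of semiconductor scaling has varied, so the numbers are purely illustrative:

    # Illustrative only: cumulative growth factor under an assumed
    # 18-month doubling period.
    def density_multiplier(years, doubling_period_years=1.5):
        return 2 ** (years / doubling_period_years)

    for years in (5, 10, 15):
        print(f"after {years} years: ~{density_multiplier(years):.0f}x")
    # Roughly 10x in 5 years, 100x in a decade, 1000x in 15 years.
    # That headroom could be spent on lower cost and power instead of raw speed.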

    One result of this increased potential was that technologists began to grope towards a collective vision commonly called the Internet of Things. It is a vision based on the prospect of processors becoming so cheap, miniaturized and low-powered that they can be embedded, along with power sources, sensors and actuators, in just about anything, from cars and light bulbs to clothing and pills. Estimates of the economic potential of the Internet of Things – of putting a chip and software into every physical item on Earth – vary from $2.7 trillion to over $14 trillion: comparable to the entire GDP of the United States today.6

    By 2010, it had become clear that given connectivity to nearly limitless cloud computing power and advances in battery technologies, programming was no longer something only a trained engineer could do to a box connected to a screen and a keyboard. It was something even a teenager could do, to almost anything.

    The rise of ridesharing illustrates the process particularly well.

    Only a few years ago services like Uber and Lyft seemed like minor enhancements to the process of procuring and paying for cab rides. Slowly, it became obvious that ridesharing was eliminating the role of human dispatchers and lowering the level of expertise required of drivers. As data accumulated through GPS tracking and ratings mechanisms, it further became clear that trust and safety could increasingly be underwritten by data instead of brand promises and regulation. This made it possible to dramatically expand driver supply, and lower ride costs by using underutilized vehicles already on the roads.

    As the ridesharing sector took root and grew in city after city, second-order effects began to kick in. The increased convenience enabled many more urban dwellers to adopt carless lifestyles. Increasing supply lowered costs and increased accessibility for people previously limited to inconvenient public transportation. And as the idea of the carless lifestyle began to spread, urban planners began to realize that century-old trends like suburbanization, driven in part by car ownership, could no longer be taken for granted.

    The ridesharing future we are seeing emerge now is even more dramatic: the higher utilization of cars leads to lower demand for cars, and frees up resources for other kinds of consumption. Individual lifestyle costs are being lowered and insurance models are being reimagined. The future of road networks must now be reconsidered in light of greener and more efficient use of both vehicles and roads.

    Meanwhile, the emerging software infrastructure created by ridesharing is starting to have a cascading impact on businesses, such as delivery services, that rely on urban transportation and logistics systems. And finally, by proving many key component technologies, the rideshare industry is paving the way for the next major development: driverless cars.

    These developments herald a major change in our relationship to cars.

    To traditionalists, particularly in the United States, the car is a motif for an entire way of life, and the smartphone just an accessory. To early adopters who have integrated ridesharing deeply into their lives, the smartphone is the lifestyle motif, and the car is the accessory. To generations of Americans, owning a car represented freedom. To the next generation, not owning a car will represent freedom.

    And this dramatic reversal in our relationships to two important technologies – cars and smartphones – is being catalyzed by what was initially dismissed as “yet another trivial app.”

    Similar impact patterns are unfolding in sector after sector. Prominent early examples include the publishing, education, cable television, aviation, postal mail and hotel sectors. The impact is more than economic. Every aspect of the global industrial social order is being transformed by the impact of software.

    This has happened before of course: money and written language both transformed the world in similarly profound ways. Software, however, is more flexible and powerful than either.

    Writing is very flexible: we can write with a finger on sand or with an electron beam on a pinhead. Money is even more flexible: anything from cigarettes in a prison to pepper and salt in the ancient world to modern fiat currencies can work. But software can increasingly go wherever writing and money can go, and beyond. Software can also eat both, and take them to places they cannot go on their own.

    Partly as a consequence of how rarely soft, world-eating technologies erupt into human life, we have been systematically underestimating the magnitude of the forces being unleashed by software. While it might seem like software is constantly in the news, what we have already seen is dwarfed by what still remains unseen.

    The effects of this widespread underestimation are dramatic. The opportunities presented by software are expanding, and the risks of being caught on the wrong side of the transformation are dramatically increasing. Those who have correctly calibrated the impact of software are winning. Those who have miscalibrated it are losing.

    And the winners are not winning by small margins or temporarily either. Software-fueled victories in the past decade have tended to be overwhelming and irreversible faits accompli. And this appears to be true at all levels from individuals to businesses to nations. Even totalitarian dictatorships seem unable to resist the transformation indefinitely.

    So to understand how software is eating the world, we have to ask why we have been systematically underestimating its impact, and how we can recalibrate our expectations for the future.

    Getting Reoriented

    There are four major reasons we underestimate the increasing power of software. Three of these reasons drove similar patterns of miscalibration in previous technological revolutions, but one is unique to software.

    First, as futurist Roy Amara noted, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Technological change unfolds exponentially, like compound interest, and we humans seem wired to think about exponential phenomena in flawed ways.1 In the case of software, we expected too much too soon from 1995 to 2000, leading to the dot com crash. Now in 2015, many apparently silly ideas from 2000, such as home-delivery of groceries ordered on the Internet, have become a mundane part of everyday life in many cities. But the element of surprise has dissipated, so we tend to expect too little, too far out, and are blindsided by revolutionary change in sector after sector: change that often looks trivial or banal on the surface, but turns out to have been profound once the dust settles.

    Second, we have shifted gears from what economic historian Carlota Perez calls the installation phase of the software revolution, focused on basic infrastructure such as operating systems and networking protocols, to a deployment phase focused on consumer applications such as social networks, ridesharing and ebooks. In her landmark study of the history of technology,2 Perez demonstrates that the shift from installation to deployment phase for every major technology is marked by a chaotic transitional phase of wars, financial scandals and deep anxieties about civilizational collapse. One consequence of the chaos is that attention is absorbed by transient crises in economic, political and military affairs, and the apocalyptic fears and utopian dreams they provoke. As a result, momentous but quiet change passes unnoticed.

    Third, a great deal of the impact of software today appears in a disguised form. The genomics and nanotechnology sectors appear to be rooted in biology and materials science. The “maker” movement around 3d printing and drones appears to be about manufacturing and hardware. Dig a little deeper though, and you invariably find that the action is being driven by possibilities opened up by software more than fundamental new discoveries in those physical fields. The crashing cost of genome-sequencing is primarily due to computing, with innovations in wet chemistry playing a secondary role. Financial innovations leading to cheaper insurance and credit are software innovations in disguise. The Nest thermostat achieves energy savings not by exploiting new discoveries in thermodynamics, but by using machine learning algorithms in a creative way. The potential of this software-driven model is what prompted Google, a software company, to pay $3B to acquire Nest: a company that on the surface appeared to have merely invented a slightly better mousetrap.

    These three reasons for under-estimating the power of software had counterparts in previous technology revolutions. The railroad revolution of the nineteenth century also saw a transitional period marked by systematically flawed expectations, a bloody civil war in the United States, and extensive patterns of disguised change — such as the rise of urban living, grocery store chains, and meat consumption — whose root cause was cheap rail transport.

    The fourth reason we underestimate software, however, is a unique one: it is a revolution that is being led, in large measure, by brash young kids rather than sober adults.3

    This is perhaps the single most important thing to understand about the revolution that we have labeled software eating the world: it is being led by young people, and proceeding largely without adult supervision (though with many adults participating). This has unexpected consequences.

    As in most periods in history, older generations today run or control all key institutions worldwide. They are better organized and politically more powerful. In the United States for example, the AARP is perhaps the single most influential organization in politics. Within the current structure of the global economy, older generations can, and do, borrow unconditionally from the future at the expense of the young and the yet-to-be-born.

    But unlike most periods in history, young people today do not have to either “wait their turn” or directly confront a social order that is systematically stacked against them. Operating in the margins, guided by a hacker ethos — a problem-solving sensibility based on rapid trial-and-error and creative improvisation — they are able to use software leverage and loose digital forms of organization to create new economic, social and political wealth. In the process, young people are indirectly disrupting politics and economics and creating a new parallel social order. Instead of vying for control of venerable institutions that have already weathered several generational wars, young people are creating new institutions based on the new software and new wealth. These improvised but highly effective institutions repeatedly emerge out of nowhere, and begin accumulating political and economic power. Most importantly, they are relatively invisible. Compared to the visible power of youth counterculture in the 1960s for instance, today’s youth culture, built around messaging apps and photo-sharing, does not seem like a political force to reckon with. This culture also has a decidedly commercial rather than ideological character, as a New York Times writer (rather wistfully) noted in a 2011 piece appropriately titled Generation Sell.4 Yet, today’s youth culture is arguably more powerful as a result, representing as it does what Jane Jacobs called the “commerce syndrome” of values, rooted in pluralistic economic pragmatism, rather than the opposed “guardian syndrome” of values, rooted in exclusionary and authoritarian political ideologies.

    Chris Dixon captured this guerrilla pattern of the ongoing shift in political power with a succinct observation: what the smartest people do on the weekend is what everyone else will do during the week in ten years.

    The result is strange: what in past eras would have been a classic situation of generational conflict based on political confrontation, is instead playing out as an economy-wide technological disruption involving surprisingly little direct political confrontation. Movements such as #Occupy pale in comparison to their 1960s counterparts, and more importantly, in comparison to contemporary youth-driven economic activity.

    This does not mean, of course, that there are no political consequences. Software-driven transformations directly disrupt the middle-class life script, upon which the entire industrial social order is based. In its typical aspirational form, the traditional script is based on 12 years of regimented industrial schooling, an additional 4 years devoted to economic specialization, lifetime employment with predictable seniority-based promotions, and middle-class lifestyles. Though this script began to unravel as early as the 1970s, even for the minority (white, male, straight, abled, native-born) who actually enjoyed it, the social order of our world is still based on it. Instead of software, the traditional script runs on what we might call paperware: bureaucratic processes constructed from the older soft technologies of writing and money. Instead of the hacker ethos of flexible and creative improvisation, it is based on the credentialist ethos of degrees, certifications, licenses and regulations. Instead of being based on achieving financial autonomy early, it is based on taking on significant debt (for college and home ownership) early.

    It is important to note though, that this social order based on credentialism and paperware worked reasonably well for almost a century between approximately 1870 and 1970, and created a great deal of new wealth and prosperity. Despite its stifling effects on individualism, creativity and risk-taking, it offered its members a broader range of opportunities and more security than the narrow agrarian provincialism it supplanted. For all its shortcomings, lifetime employment in a large corporation like General Motors, with significantly higher standards of living, was a great improvement over pre-industrial rural life.

    But by the 1970s, industrialization had succeeded so wildly that it had undermined its own fundamental premises of interchangeability in products, parts and humans. As economists Jeremy Greenwood and Mehmet Yorukoglu5 argue in a provocative paper titled 1974, that year arguably marked the end of the industrial age and the beginning of the information age. Computer-aided industrial automation was making ever-greater scale possible at ever-lower costs. At the same time, variety and uniqueness in products and services were becoming increasingly valuable to consumers in the developed world. Global competition, especially from Japan and Germany, began to directly threaten American industrial leadership. This began to drive product differentiation, a challenge that demanded originality rather than conformity from workers. Industry structures that had taken shape in the era of mass-produced products, such as Ford’s famous black Model T, were redefined to serve the demand for increasing variety. The result was arguably a peaking in all aspects of the industrial social order based on mass production and interchangeable workers roughly around 1974, a phenomenon Balaji Srinivasan has dubbed peak centralization.6

    One way to understand the shift from credentialist to hacker modes of social organization, via young people acquiring technological leverage, is through the mythological tale of Prometheus stealing fire from the heavens for human use.

    The legend of Prometheus has been used as a metaphor for technological progress at least since Mary Shelley’s Frankenstein: A Modern Prometheus. Technologies capable of eating the world typically have a Promethean character: they emerge within a mature social order (a metaphoric “heaven” that is the preserve of older elites), but their true potential is unleashed by an emerging one (a metaphoric “earth” comprising creative marginal cultures, in particular youth cultures), which gains relative power as a result. Software as a Promethean technology emerged in the heart of the industrial social order, at companies such as AT&T, IBM and Xerox, universities such as MIT and Stanford, and government agencies such as DARPA and CERN. But its Promethean character was unleashed, starting with the early hacker movement, on the open Internet and through Silicon-Valley style startups.

    As a result of a Promethean technology being unleashed, younger and older face a similar dilemma: should I abandon some of my investments in the industrial social order and join the dynamic new social order, or hold on to the status quo as long as possible?

    The decision is obviously easier if you are younger, with much less to lose. But many who are young still choose the apparent safety of the credentialist scripts of their parents. These are what David Brooks called Organization Kids (after William Whyte’s 1956 classic, The Organization Man7): those who bet (or allow their “Tiger” parents8 to bet on their behalf) on the industrial social order. If you are an adult over 30, especially one encumbered with significant family obligations or debt, the decision is harder.

    Those with a Promethean mindset and an aggressive approach to pursuing a new path can break out of the credentialist life script at any age. Those who are unwilling or unable to do so are holding on to it more tenaciously than ever.

    Young or old, those who are unable to adopt the Promethean mindset end up defaulting to what we call a pastoral mindset: one marked by yearning for lost or unattained utopias. Today many still yearn for an updated version of romanticized9 1950s American middle-class life for instance, featuring flying cars and jetpacks.

    How and why you should choose the Promethean option, despite its disorienting uncertainties and challenges, is the overarching theme of Season 1. It is a choice we call breaking smart, and it is available to almost everybody in the developed world, and a rapidly growing number of people in the newly-connected developing world.

    These individual choices matter.

    As historians such as Daron Acemoglu and James Robinson10 and Joseph Tainter11 have argued, it is the nature of human problem-solving institutions, rather than the nature of the problems themselves, that determines whether societies fail or succeed. Breaking smart at the level of individuals is what leads to organizations and nations breaking smart, which in turn leads to societies succeeding or failing.

    Today, the future depends on increasing numbers of people choosing the Promethean option. Fortunately, that is precisely what is happening.

    Towards a Mass Flourishing

    In this season of Breaking Smart, I will not attempt to predict the what and when of the future. In fact, a core element of the hacker ethos is the belief that being open to possibilities and embracing uncertainty is necessary for the actual future to unfold in positive ways. Or as computing pioneer Alan Kay put it, inventing the future is easier than predicting it.

    And this is precisely what tens of thousands of small teams — small enough to be fed by no more than two pizzas, by a rule of thumb made famous by Amazon founder Jeff Bezos — are doing across the world today.

    Prediction as a foundation for facing the future involves risks that go beyond simply getting it wrong. The bigger risk is getting attached to a particular what and when, a specific vision of a paradise to be sought, preserved or reclaimed. This is often a serious philosophical error — to which pastoralist mindsets are particularly prone — that seeks to limit the future.

    But while I will avoid dwelling too much on the what and when, I will unabashedly advocate for a particular answer to how. Thanks to virtuous cycles already gaining in power, I believe almost all effective responses to the problems and opportunities of the coming decades will emerge out of the hacker ethos, despite its apparent peripheral role today. The credentialist ethos of extensive planning and scripting towards deterministic futures will play a minor supporting role at best. Those who adopt a Promethean mindset and break smart will play an expanding role in shaping the future. Those who adopt a pastoral mindset and retreat towards tradition will play a diminishing role, in the shrinking number of economic sectors where credentialism is still the more appropriate model.

    The nature of problem-solving in the hacker mode, based on trial-and-error, iterative improvement, testing and adaptation (both automated and human-driven), allows us to identify four characteristics of how the future will emerge.

    First, despite current pessimism about the continued global leadership of the United States, the US remains the single largest culture that embodies the pragmatic hacker ethos, nowhere more so than in Silicon Valley. The United States in general, and Silicon Valley in particular, will therefore continue to serve as the global exemplar of Promethean technology-driven change. And as virtual collaboration technologies improve, the Silicon Valley economic culture will increasingly become the global economic culture.

    Second, the future will unfold through very small groups having very large impacts. One piece of wisdom in Silicon Valley today is that the core of the best software is nearly always written by teams of fewer than a dozen people, not by huge committee-driven development teams. This means increasing well-being for all will be achieved through small two-pizza teams beating large ones. Scale will increasingly be achieved via loosely governed ecosystems of additional participants creating wealth in ways that are hard to track using traditional economic measures. Instead of armies of Organization Men and Women employed within large corporations, and Organization Kids marching in at one end and retirees marching out at the other, the world of work will be far more diverse.

    Third, the future will unfold through a gradual and continuous improvement of well-being and quality of life across the world, not through sudden emergence of a utopian software-enabled world (or sudden collapse into a dystopian world). The process will be one of fits and starts, toys and experiments, bugginess and brokenness. But the overall trend will be upwards, towards increasing prosperity for all.

    Fourth, the future will unfold through rapid declines in the costs of solutions to problems, including in heavily regulated sectors historically resistant to cost-saving innovations, such as healthcare and higher education. In improvements wrought by software, poor and expensive solutions have generally been replaced by superior and cheaper (often free) solutions, and these substitution effects will accelerate.

    Putting these four characteristics together, we get a picture of messy, emergent progress that economist Bradford DeLong calls “slouching towards utopia”: a condition of gradually increasing quality of life, available at gradually declining cost to a gradually expanding portion of the global population.

    A big implication is immediately clear: the asymptotic condition represents a consumer utopia. As consumers, we will enjoy far more for far less. This means that the biggest unknown today is our future as producers, which brings us to what many view as the central question today: the future of work.

    The gist of a robust answer, which we will explore in Understanding Elite Discontent, was anticipated by John Maynard Keynes as far back as 1930,1 though he did not like the implications: the majority of the population will be engaged in creating and satisfying each other’s new needs in ways that even the most prescient of today’s visionaries will fail to anticipate.

    While we cannot predict precisely what workers of the future will be doing — what future wants and needs workers will be satisfying — we can predict some things about how they will be doing it. Work will take on an experimental, trial-and-error character, and will take place in an environment of rich feedback, self-correction, adaptation, ongoing improvement, and continuous learning. The social order surrounding work will be a much more fluid descendant of today’s secure but stifling paycheck world on the one hand, and the liberating but precarious world of free agency and contingent labor on the other.

    In other words, the hacker ethos will go global and the workforce at large will break smart. As the hacker ethos spreads, we will witness what economist Edmund Phelps calls a mass flourishing2 — a state of the economy where work will be challenging and therefore fulfilling. Unchallenging, predictable work will become the preserve of machines.

    Previous historical periods of mass flourishing, such as the early industrial revolution, were short-lived, and gave way, after a few decades, to societies based on a new middle class majority built around predictable patterns of work and life. This time around, the state of mass flourishing will be a sustained one: a slouching towards a consumer and producer utopia.

    If this vision seems overly dramatic, consider once again the comparison to other soft technologies: software is perhaps the most imagination-expanding technology humans have invented since writing and money, and possibly more powerful than either. To operate on the assumption that it will transform the world at least as dramatically, far from being wild-eyed optimism, is sober realism.

    Purists versus Pragmatists

    At the heart of the historical development of computing is the age-old philosophical impasse between purist and pragmatist approaches to technology, which is particularly pronounced in software due to its seeming near-Platonic ineffability. One way to understand the distinction is through a dinnerware analogy.

    Purist approaches, which rely on alluring visions, are like precious “good” china: mostly for display, and reserved exclusively for narrow uses like formal dinners. Damage through careless use can drastically lower the value of a piece. Broken or missing pieces must be replaced for the collection to retain its full display value. To purists, mixing and matching, either with humbler everyday tableware, or with different china patterns, is a kind of sacrilege.

    The pragmatic approach on the other hand, is like unrestricted and frequent use of hardier everyday dinnerware. Damage through careless play does not affect value as much. Broken pieces may still be useful, and missing pieces need not be replaced if they are not actually needed. For pragmatists, mixing and matching available resources, far from being sacrilege, is something to be encouraged, especially for collaborations such as neighborhood potlucks.

    In software, the difference between the two approaches is clearly illustrated by the history of the web browser.

    On January 23, 1993, Marc Andreessen sent out a short email, announcing the release of Mosaic, the first graphical web browser:

    07:21:17-0800 by marca@ncsa.uiuc.edu:

    By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released:

    file://ftp.ncsa.uiuc.edu/Web/xmosaic/xmosaic-0.5.tar.Z

    Along with Eric Bina he had quickly hacked the prototype together after becoming enthralled by his first view of the World Wide Web, which Tim Berners-Lee had unleashed from CERN in Geneva in 1991. Over the next year, several other colleagues joined the project, equally excited by the possibilities of the web. All were eager to share the excitement they had experienced, and to open up the future of the web to more people, especially non-technologists.

    Over the course of the next few years, the graphical browser escaped the confines of the government-funded lab (the National Center for Supercomputing Applications at the University of Illinois) where it was born. As it matured at Netscape and later at Microsoft, Mozilla and Google, it steered the web in unexpected (and to some, undesirable) directions. The rapid evolution triggered both the legendary browser wars and passionate debates about the future of the Internet. Those late-nineties conflicts shaped the Internet of today.

    To some visionary pioneers, such as Ted Nelson, who had been developing a purist hypertext paradigm called Xanadu for decades, the browser represented an undesirably messy direction for the evolution of the Internet. To pragmatists, the browser represented important software evolving as it should: in a pluralistic way, embodying many contending ideas, through what the Internet Engineering Task Force (IETF) calls “rough consensus and running code.”

    While every major software project has drawn inspiration from both purists and pragmatists, the browser, like other pieces of software that became a mission critical part of the Internet, was primarily derived from the work and ideas of pragmatists: pioneers like Jon Postel, David Clark, Bob Kahn and Vint Cerf, who were instrumental in shaping the early development of the Internet through highly inclusive institutions like the IETF.

    Today, the then-minority tradition of pragmatic hacking has matured into agile development, the dominant modern approach for making software. But the significance of this bit of history goes beyond the Internet. Increasingly, the pragmatic, agile approach to building things has spread to other kinds of engineering and beyond, to business and politics.

    The nature of software has come to matter far beyond software. Agile philosophies are eating all kinds of building philosophies. To understand the nature of the world today, whether or not you are a technologist, it is crucial to understand agility and its roots in the conflict between pragmatic and purist approaches to computing.

    The story of the browser was not exceptional. Until the early 1990s, almost all important software began life as purist architectural visions rather than pragmatic hands-on tinkering.

    This was because early programming with punch-card mainframes was a highly constrained process. Iterative refinement was slow and expensive. Agility was a distant dream: programmers often had to wait weeks between runs. If your program didn’t work the first time, you might not have gotten another chance. Purist architectures, worked out on paper, helped minimize risk and maximize results under these conditions.

    As a result, early programming was led by creative architects (often mathematicians and, with rare exceptions like Klari von Neumann and Grace Hopper, usually male) who worked out the structure of complex programs upfront, as completely as possible. The actual coding onto punch cards was done by large teams of hands-on programmers (mostly women1) with much lower autonomy, responsible for working out implementation details.

    In short, purist architecture led the way and pragmatic hands-on hacking was effectively impossible. Trial-and-error was simply too risky and slow, which meant significant hands-on creativity had to be given up in favor of productivity.

    With the development of smaller computers capable of interactive input, hands-on hacking became possible. At early hacker hubs, like MIT through the sixties, a high-autonomy culture of hands-on programming began to take root. Though the shift would not be widely recognized until after 2000, the creative part of programming was migrating from visioning to hands-on coding. Already by 1970, important and high-quality software, such as the Unix operating system, had emerged from the hacker culture growing at the minicomputer margins of industrial programming.

    Through the seventies, a tenuous balance of power prevailed between purist architects and pragmatic hackers. With the introduction of networked personal computing in the eighties, however, hands-on hacking became the defining activity in programming. The culture of early hacker hubs like MIT and Bell Labs began to diffuse broadly through the programming world. The archetypal programmer had evolved: from interchangeable member of a large team, to the uniquely creative hacker, tinkering away at a personal computer, interacting with peers over networks. Instead of dutifully obeying an architect, the best programmers were devoting increasing amounts of creative energy to scratching personal itches.

    The balance shifted decisively in favor of pragmatists with the founding of the IETF in 1986. In January of that year, a group of 21 researchers met in San Diego and planted the seeds of what would become the modern “government” of the Internet.

    Despite its deceptively bureaucratic-sounding name, the IETF is like no standards organization in history, starting with the fact that it has no membership requirements and is open to all who want to participate. Its core philosophy can be found in an obscure document, The Tao of the IETF, little known outside the world of technology. It is a document that combines the informality and self-awareness of good blogs, the gravitas of a declaration of independence, and the aphoristic wisdom of Zen koans. This oft-quoted section illustrates its basic spirit:

    In many ways, the IETF runs on the beliefs of its members. One of the “founding beliefs” is embodied in an early quote about the IETF from David Clark: “We reject kings, presidents and voting. We believe in rough consensus and running code”. Another early quote that has become a commonly-held belief in the IETF comes from Jon Postel: “Be conservative in what you send and liberal in what you accept”.
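
    Postel’s dictum, often called the robustness principle, is easy to see in code. The following is a minimal, hypothetical Python sketch of my own (the function names and accepted formats are illustrative, not drawn from any standard): a reader that tolerates several common date spellings, paired with a writer that only ever emits strict ISO 8601.

    from datetime import datetime, date

    # "Liberal in what you accept": tolerate a few common input spellings.
    ACCEPTED_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y")

    def parse_date(text: str) -> date:
        for fmt in ACCEPTED_FORMATS:
            try:
                return datetime.strptime(text.strip(), fmt).date()
            except ValueError:
                continue
        raise ValueError(f"unrecognized date: {text!r}")

    # "Conservative in what you send": emit exactly one strict format.
    def format_date(d: date) -> str:
        return d.isoformat()  # always YYYY-MM-DD

    print(format_date(parse_date(" 23/01/1993 ")))  # prints 1993-01-23

    The asymmetry is the point: strict output combined with tolerant input is what lets independently written implementations interoperate.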

    Though the IETF began as a gathering of government-funded researchers, it also represented a shift in the center of programming gravity from government labs to the commercial and open-source worlds. Over the nearly three decades since, it has evolved into the primary steward2 of the inclusive, pluralistic and egalitarian spirit of the Internet. In invisible ways, the IETF has shaped the broader economic and political dimensions of software eating the world.

    The difference between purist and pragmatic approaches becomes clear when we compare the evolution of programming in the United States and Japan since the early eighties. Around 1982, Japan chose the purist path over the pragmatic path, by embarking on the ambitious “fifth-generation computing” effort. The highly corporatist government-led program, which caused much anxiety in America at the time, proved to be largely a dead-end. The American tradition on the other hand, outgrew its government-funded roots and gave rise to the modern Internet. Japan’s contemporary contributions to software, such as the hugely popular Ruby language designed by Yukihiro Matsumoto, belong squarely within the pragmatic hacker tradition.

    I will argue that this pattern of development is not limited to computer science. Every field eaten by software experiences a migration of the creative part from visioning activities to hands-on activities, disrupting the social structure of all professions. Classical engineering fields like mechanical, civil and electrical engineering had already largely succumbed to hands-on pragmatic hacking by the nineties. Non-engineering fields like marketing are beginning to convert.

    So the significance of pragmatic approaches prevailing over purist ones cannot be overstated: in the world of technology, it was the equivalent of the fall of the Berlin Wall.

    Agility and Illegibility

    While pragmatic hacking was on the rise, purist approaches entered a period of slow, painful and costly decline. Even as they grew in ambition, software projects based on purist architecture and teams of interchangeable programmers grew increasingly unmanageable. They began to exhibit the predictable failure patterns of industrial age models: massive cost-overruns, extended delays, failed launches, damaging unintended consequences, and broken, unusable systems.

    These failure patterns are characteristic of what political scientist James Scott1 called authoritarian high modernism: a purist architectural aesthetic driven by authoritarian priorities. To authoritarian high-modernists, elements of the environment that do not conform to their purist design visions appear “illegible” and anxiety-provoking. As a result, they attempt to make the environment legible by forcibly removing illegible elements. Failures follow because important elements, critical to the functioning of the environment, get removed in the process.

    Geometrically laid-out suburbs, for example, are legible and conform to platonic architectural visions, even if they are unlivable and economically stagnant. Slums on the other hand, appear illegible and are anxiety-provoking to planners, even when they are full of thriving economic life. As a result, authoritarian planners level slums and relocate residents into low-cost planned housing. In the process they often destroy economic and cultural vitality.

    In software, what authoritarian architects find illegible and anxiety-provoking is the messy, unplanned tinkering hackers use to figure out creative solutions. When the pragmatic model first emerged in the sixties, authoritarian architects reacted like urban planners: by attempting to clear “code slums.” These attempts took the form of increasingly rigid documentation and control processes inherited from manufacturing. In the process, they often lost the hacker knowledge keeping the project alive.

    In short, authoritarian high modernism is a kind of tunnel vision. Architects are prone to it in environments that are richer than one mind can comprehend. The urge to dictate and organize is destructive, because it leads architects to destroy the apparent chaos that is vital for success.

    The flaws of authoritarian high modernism first became problematic in fields like forestry, urban planning and civil engineering. Failures of authoritarian development in these fields resulted in forests ravaged by disease, unlivable “planned” cities, crony capitalism and endemic corruption. By the 1960s, in the West, pioneering critics of authoritarian models, such as the urbanist Jane Jacobs and the environmentalist Rachel Carson, had begun to correctly diagnose the problem.

    By the seventies, liberal democracies had begun to adopt the familiar, more democratic consultation processes of today. These processes were adopted in computing as well, just as the early mainframe era was giving way to the minicomputer era.

    Unfortunately, while democratic processes did mitigate the problems, the result was often lowered development speed, increased cost and more invisible corruption. New stakeholders brought competing utopian visions and authoritarian tendencies to the party. The problem now became reconciling conflicting authoritarian visions. Worse, any remaining illegible realities, which were anxiety-provoking to all stakeholders, were now even more vulnerable to prejudice and elimination. As a result complex technology projects often slowed to a costly, gridlocked crawl. Tyranny of the majority — expressed through autocratic representatives of particular powerful constituencies — drove whatever progress did occur. The biggest casualty was innovation, which by definition is driven by ideas that are illegible to all but a few: what Peter Thiel calls secrets — things entrepreneurs believe that nobody else does, which leads them to unpredictable breakthroughs.

    The process was most clearly evident in fields like defense. In major liberal democracies, different branches of the military competed to influence the design of new weaponry, and politicians competed to create jobs in their constituencies. As a result, major projects spiraled out of control and failed in predictable ways: delayed, too expensive and technologically compromised. In the non liberal-democratic world, the consequences were even worse. Authoritarian high modernism continued (and continues today in countries like Russia and North Korea), unchecked, wreaking avoidable havoc.

    Software is no exception to this pathology. As high-profile failures like the launch of healthcare.gov2 show, “democratic” processes meant to mitigate risks tend to create stalled or gridlocked processes, compounding the problem.

    Both in traditional engineering fields and in software, authoritarian high modernism leads to a Catch-22 situation: you either get a runaway train wreck due to too much unchecked authoritarianism, or a train that barely moves due to a gridlock of checks and balances.

    Fortunately, agile software development manages to combine both decisive authority and pluralistic visions, and mitigate risks without slowing things to a crawl. The basic principles of agile development, articulated by a group of 17 programmers in 2001, in a document known as the Agile Manifesto, represented an evolution of the pragmatic philosophy first explicitly adopted by the IETF.

    The cost of this agility is a seemingly anarchic pattern of progress. Agile development models catalyze illegible, collective patterns of creativity, weaken illusions of control, and resist being yoked to driving utopian visions. Adopting agile models leads individuals and organizations to gradually increase their tolerance for anxiety in the face of apparent chaos. As a result, agile models can get more agile over time.

    Not only are agile models driving reform in software, they are also spreading to traditional domains where authoritarian high-modernism first emerged. Software is beginning to eat domains like forestry, urban planning and environment protection. Open Geographic Information Systems (GIS) in forestry, open data initiatives in urban governance, and monitoring technologies in agriculture, all increase information availability while eliminating cumbersome paperware processes. As we will see in upcoming essays, enhanced information availability and lowered friction can make any field hacker-friendly. Once a field becomes hacker-friendly, software begins to eat it. Development gathers momentum: the train can begin moving faster, without leading to train wrecks, resolving the Catch-22.

    Today the shift from purist to pragmatist has progressed far enough that it is also reflected at the level of the economics of software development. In past decades, economic purists argued variously for idealized open-source, market-driven or government-led development of important projects. But all found themselves faced with an emerging reality that was too complex for any one economic ideology to govern. As a result, rough consensus and running economic mechanisms have prevailed over specific economic ideologies and gridlocked debates. Today, every available economic mechanism — market-based, governmental, nonprofit and even criminal — has been deployed at the software frontier. And the same economic pragmatism is spreading to software-eaten fields.

    This is a natural consequence of the dramatic increase in both participation levels and ambitions in the software world. In 1943, only a small handful of people working on classified military projects had access to the earliest computers. Even in 1974, the year of Peak Centralization, only a small and privileged group had access to the early hacker-friendly minicomputers like the DEC PDP series. But by 1993, the PC revolution had nearly delivered on Bill Gates’ vision of a computer at every desk, at least in the developed world. And by 2000, laptops and Blackberries were already foreshadowing the world of today, with near-universal access to smartphones, and an exploding number of computers per person.

    The IETF slogan of rough consensus and running code (RCRC) has emerged as the only workable doctrine for both technological development and associated economic models under these conditions.

    As a result of pragmatism prevailing, a nearly ungovernable Promethean fire has been unleashed. Hundreds of thousands of software entrepreneurs are unleashing innovations on an unsuspecting world by the power vested in them by “nobody in particular,” and by any economic means necessary.

    It is in the context of the anxiety-inducing chaos and complexity of a mass flourishing that we then ask: what exactly is software?

    Rough Consensus and Maximal Interestingness

    Software possesses an extremely strange property: it is possible to create high-value software products with effectively zero capital outlay. As Mozilla engineer Sam Penrose put it, software programming is labor that creates capital.

    This characteristic makes software radically different from engineering materials like steel, and much closer to artistic media such as paint.1 As a consequence, engineer and engineering are somewhat inappropriate terms. It is something of a stretch to even think of software as a kind of engineering “material.” Though all computing requires a physical substrate, the trillions of tiny electrical charges within computer circuits, the physical embodiment of a running program, barely seem like matter.

    The closest relative to this strange new medium is paper. But even paper is not as cheap or evanescent. Though we can appreciate the spirit of creative abundance with which industrial age poets tossed crumpled-up false starts into trash cans, a part of us registers the wastefulness. Paper almost qualifies as a medium for true creative abundance, but falls just short.

    Software though, is a medium that not only can, but must be approached with an abundance mindset. Without a level of extensive trial-and-error and apparent waste that would bankrupt both traditional engineering and art, good software does not take shape. From the earliest days of interactive computing, when programmers chose to build games while more “serious” problems waited for computer time, to modern complaints about “trivial” apps (which often turn out to be revolutionary), scarcity-oriented thinkers have remained unable to grasp the essential nature of software for fifty years.

    The difference has a simple cause: unsullied purist visions have value beyond anxiety-alleviation and planning. They are also a critical authoritarian marketing and signaling tool — like formal dinners featuring expensive china — for attracting and concentrating scarce resources in fields such as architecture. In an environment of abundance, there is much less need for visions to serve such a marketing purpose. They only need to provide a roughly correct sense of direction to those laboring at software development to create capital using whatever tools and ideas they bring to the party — like potluck participants improvising whatever resources are necessary to make dinner happen.

    Translated to technical terms, the dinnerware analogy is at the heart of software engineering. Purist visions tend to arise when authoritarian architects attempt to concentrate and use scarce resources optimally, in ways they often sincerely believe are best for all. By contrast, tinkering is focused on steady progress rather than optimal end-states that realize a totalizing vision. It is usually driven by individual interests and not obsessively concerned with grand and paternalistic “best for all” objectives. The result is that purist visions seem more comforting and aesthetically appealing on the outside, while pragmatic hacking looks messy and unfocused. At the same time purist visions are much less open to new possibilities and bricolage, while pragmatic hacking is highly open to both.

    Within the world of computing, the importance of abundance-oriented approaches was already recognized by the 1960s. With Moore’s Law kicking in, pioneering computer scientist Alan Kay codified the idea of abundance orientation with the observation that programmers ought to “waste transistors” in order to truly unleash the power of computing.

    But even for young engineers starting out today, used to routinely renting cloudy container-loads2 of computers by the minute, the principle remains difficult to follow. Devoting skills and resources to playful tinkering still seems “wrong,” when there are obvious and serious problems desperately waiting for skilled attention. Like the protagonist in the movie Brewster’s Millions, who struggles to spend $30 million within thirty days in order to inherit $300 million, software engineers must unlearn habits born of scarcity before they can be productive in their medium.

    The principle of rough consensus and running code is perhaps the essence of the abundance mindset in software.

    If you are used to the collaboration processes of authoritarian organizations, the idea of rough consensus might conjure up the image of a somewhat informal committee meeting, but the similarity is superficial. Consensus in traditional organizations tends to be brokered by harmony-seeking individuals attuned to the needs of others, sensitive to constraints, and good at creating “alignment” among competing autocrats. This is a natural mode of operation when consensus is sought in order to deal with scarcity. Allocating limited resources is the typical purpose of such industrial-age consensus seeking. Under such conditions, compromise represents a spirit of sharing and civility. Unfortunately, it is also a recipe for gridlock when compromise is hard and creative breakthroughs become necessary.

    By contrast, software development favors individuals with an autocratic streak, driven by an uncompromising sense of how things ought to be designed and built, which at first blush appears to contradict the idea of consensus.

    Paradoxically, the IETF philosophy of eschewing “kings, presidents and voting” means that rough consensus evolves through strong-minded individuals either truly coming to an agreement, or splitting off to pursue their own dissenting ideas. Conflicts are not sorted out through compromises that leave everybody unhappy. Instead they are sorted out through the principle organizational researcher Bob Sutton identified as critical for navigating uncertainty: strong views, weakly held.

    Pragmatists, unlike the authoritarian high-modernist architects studied by James Scott, hold strong views on the basis of having contributed running code rather than abstract visions. But they also recognize others as autonomous peers, rather than as unquestioning subordinates or rivals. Faced with conflict, they are willing to work hard to persuade others, be persuaded themselves, or leave.

    Rough consensus favors people who, in traditional organizations, would be considered disruptive and stubborn: these are exactly the people prone to “breaking smart.” In its most powerful form, rough consensus is about finding the most fertile directions in which to proceed rather than uncovering constraints. Constraints in software tend to be relatively few and obvious. Possibilities, however, tend to be intimidatingly vast. Resisting limiting visions, finding the most fertile direction, and allying with the right people become the primary challenges.

    In a process reminiscent of the “rule of agreement” in improv theater, ideas that unleash the strongest flood of follow-on builds tend to be recognized as the most fertile and adopted as the consensus. Collaborators who spark the most intense creative chemistry tend to be recognized as the right ones. The consensus is rough because it is left as a sense of direction, instead of being worked out into a detailed, purist vision.

    This general principle of fertility-seeking has been repeatedly rediscovered and articulated in a bewildering variety of specific forms. The statements have names such as the principle of least commitment (planning software), the end-to-end principle (network design), the procrastination principle (architecture), optionality (investing), paving the cowpaths (interface design), lazy evaluation (language design) and late binding (code execution). While the details, assumptions and scope of applicability of these different statements vary, they all amount to leaving the future as free and unconstrained as possible, by making as few commitments as possible in any given local context.
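
    To make the family resemblance concrete, here is a minimal sketch (in Python, and not drawn from any of the sources above) of the lazy-evaluation flavor of the principle: computation, and therefore commitment, is deferred until a result is actually needed.

```python
# A minimal, hypothetical sketch of lazy evaluation: defer work, and hence
# commitment, until a value is actually demanded.

def squares():
    """Yield square numbers on demand; nothing is computed up front."""
    n = 0
    while True:
        yield n * n
        n += 1

lazy = squares()                         # no squares computed yet, no commitment made
needed = [next(lazy) for _ in range(3)]  # compute only what is actually used
print(needed)                            # [0, 1, 4]
```

    The eager alternative would have to decide up front how many squares to compute and why; the lazy version leaves that decision to whoever consumes the values later.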

    The principle is in fact an expression of laissez-faire engineering ethics. Donald Knuth, another software pioneer, captured the ethical dimension with his version: premature optimization is the root of all evil. The principle is the deeper reason autonomy and creativity can migrate downstream to hands-on decision-making. Leaving more decisions for the future also leads to devolving authority to those who come later.

    Such principles might seem dangerously playful and short-sighted, but under conditions of increasing abundance, with falling costs of failure, they turn out to be wise. It is generally smarter to assume that problems that seem difficult and important today might become trivial or be rendered moot in the future. Behaviors that would be short-sighted in the context of scarcity become far-sighted in the context of abundance.

    The original design of the Mosaic browser, for instance, reflected the optimistic assumption that everybody would have high-bandwidth access to the Internet in the future, a statement that was not true at the time, but is now largely true in the developed world. Today, many financial technology entrepreneurs are building products based on the assumption that cryptocurrencies will be widely adopted and accepted. Underlying all such optimism about technology is an optimism about humans: a belief that those who come after us will be better informed and have more capabilities, and therefore able to make more creative decisions.

    The consequences of this optimistic approach are radical. Traditional processes of consensus-seeking drive towards clarity in long-term visions but are usually fuzzy on immediate next steps. By contrast, rough consensus in software deliberately seeks ambiguity in long-term outcomes and extreme clarity in immediate next steps. It is a heuristic that helps correct the cognitive bias behind Amara’s Law. Clarity in next steps counteracts the tendency to overestimate what is possible in the short term, while comfort with ambiguity in visions counteracts the tendency to underestimate what is possible in the long term. At an ethical level, rough consensus is deeply anti-authoritarian, since it avoids constraining the freedoms of future stakeholders simply to allay present anxieties. The rejection of “voting” in the IETF model is a rejection of a false sense of egalitarianism, rather than a rejection of democratic principles.

    In other words, true north in software is often the direction that combines ambiguity and evidence of fertility in the most alluring way: the direction of maximal interestingness.

    The decade after the dot-com crash of 2000 demonstrated the value of this principle clearly. Startups derided for prioritizing “growth in eyeballs” (an “interestingness” direction) rather than clear models of steady-state profitability (a self-limiting purist vision of an idealized business) were eventually proven right. Iconic “eyeball” based businesses, such as Google and Facebook, turned out to be highly profitable. Businesses that prematurely optimized their business model in response to revenue anxieties limited their own potential and choked off their own growth.

    The great practical advantage of this heuristic is that the direction of maximal interestingness can be very rapidly updated to reflect new information, by evolving the rough consensus. The term pivot, introduced by Eric Ries as part of the Lean Startup framework, has recently gained popularity for such reorientation. A pivot allows the direction of development to change rapidly, without a detailed long-term plan. It is enough to figure out experimental next steps. This ability to reorient and adopt new mental models quickly (what military strategists call a fast transient4) is at the heart of agility.

    The response to new information is exactly the reverse in authoritarian development models. Because such models are based on detailed purist visions that grow more complex over time, it becomes increasingly harder to incorporate new data. As a result, the typical response to new information is to label it as an irrelevant distraction, reaffirm commitment to the original vision, and keep going. This is the runaway-train-wreck scenario. On the other hand, if the new information helps ideological opposition cohere within a democratic process, a competing purist vision can emerge. This leads to the stalled-train scenario.

    The reason rough consensus avoids both these outcomes is that it is much easier to agree roughly on the most interesting direction than to either update a complex, detailed vision or bring two or more conflicting complex visions into harmony.

    For this to work, an equally pragmatic implementation philosophy is necessary: one that is very different from the authoritarian high-modernist way, or as it is known in software engineering, the waterfall model (named for the way high-level purist plans flow unidirectionally towards low-level implementation work).

    Not only does such a pragmatic implementation philosophy exist, it works so well that running code actually tends to outrun even the most uninhibited rough consensus process without turning into a train wreck. One illustration of this dynamic is that successful software tends to get both used and extended in ways that the original creators never anticipated – and are often pleasantly surprised, and sometimes alarmed, by. This is of course the well-known agile model. We will not get into the engineering details,5 but what matters are the consequences of using it.

    The biggest consequence is this: in the waterfall model, execution usually lags vision, leading to a deficit-driven process. By contrast, in working agile processes, running code races ahead, leaving vision to catch up, creating a surplus-driven process.

    Both kinds of gaps contain lurking unknowns, but of very different sorts. The surplus in the case of working agile processes is the source of many pleasant surprises: serendipity. The deficit in the case of waterfall models is the source of what William Boyd called zemblanity: “unpleasant unsurprises.”

    In software, waterfall processes fail in predictable ways, like classic Greek tragedies. Agile processes on the other hand, can lead to snowballing serendipity, getting luckier and luckier, and succeeding in unexpected ways. The reason is simple: waterfall plans constrain the freedom of future participants, leading them to resent and rebel against the grand plan in predictable ways. By contrast, agile models empower future participants in a project, catalyzing creativity and unpredictable new value.

    The engineering term for the serendipitous, empowering gap between running code and governing vision has now made it into popular culture in the form of a much-misunderstood idea: perpetual beta.

    Running Code and Perpetual Beta

    When Google’s Gmail service finally exited beta status in July 2009, five years after it was launched, it already had over 30 million users. By then, it was the third largest free email provider after Yahoo and Hotmail, and was growing much faster than either.1 For most of its users, it had already become their primary personal email service.

    The beta label on the logo, indicating experimental prototype status, had become such a running joke that when it was finally removed, the project team included a whimsical “back to beta” feature, which allowed users to revert to the old logo. That feature itself was part of a new section of the product called Gmail Labs: a collection of settings that allowed users to turn on experimental features. The idea of perpetual beta had morphed into permanent infrastructure within Gmail for continuous experimentation.

    Today, this is standard practice: all modern web-based software includes scaffolding for extensive ongoing experimentation within the deployed production site or smartphone app backend (and beyond, through developer APIs2). Some of it is even visible to users. In addition to experimental features that allow users to stay ahead of the curve, many services also offer “classic” settings that allow them to stay behind the curve — for a while. The best products use perpetual beta as a way to lead their users towards richer, more empowered behaviors, instead of following them through customer-driven processes. Backward compatibility is limited to situations of pragmatic need, rather than being treated as a religious imperative.
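
    As a rough illustration of what such scaffolding can look like in practice, here is a hypothetical sketch (the feature name, rollout fraction and hashing scheme are all invented for illustration, not taken from Gmail or any real product) of a feature-flag check that exposes an experimental feature to a small, deterministic slice of users while everyone else stays on the stable path.

```python
# A hypothetical sketch of perpetual-beta scaffolding: gate experimental
# features behind flags, rolled out to a deterministic fraction of users.

import hashlib

EXPERIMENTS = {"new_compose_ui": 0.05}  # invented example: 5% rollout

def is_enabled(feature: str, user_id: str) -> bool:
    """Bucket a user deterministically so they see a consistent experience."""
    rollout = EXPERIMENTS.get(feature, 0.0)
    digest = hashlib.sha1(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout * 100

def compose_page(user_id: str) -> str:
    """Serve the experimental path to flagged users, the stable path to the rest."""
    return "experimental UI" if is_enabled("new_compose_ui", user_id) else "stable UI"
```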

    The Gmail story contains an answer to the obvious question about agile models you might ask if you have only experienced waterfall models: How does anything ambitious get finished by groups of stubborn individuals heading in the foggiest possible direction of “maximal interestingness” with neither purist visions nor “customer needs” guiding them?

    The answer is that it doesn’t get finished. But unlike in waterfall models, this does not necessarily mean the product is incomplete. It means the vision is perpetually incomplete and growing in unbounded ways, due to ongoing evolutionary experiments. When this process works well, what engineers call technical debt can get transformed into what we might call technical surplus.3 The parts of the product that lack satisfying design justifications represent the areas of rapid innovation. The gaps in the vision are sources of serendipitous good luck. (If you are a Gmail user, browsing the “Labs” section might lead you to some serendipitous discoveries: features you did not know you wanted might already exist unofficially).

    The deeper significance of perpetual beta culture in technology often goes unnoticed: in the industrial age, engineering labs were impressive, enduring buildings inside which experimental products were created. In the digital age, engineering labs are experimental sections inside impressive, enduring products. Those who bemoan the gradual decline of famous engineering labs like AT&T Bell Labs and Xerox PARC often miss the rise of even more impressive labs inside major modern products and their developer ecosystems.

    Perpetual beta is now such an entrenched idea that users expect good products to evolve rapidly and serendipitously, continuously challenging their capacity to learn and adapt. They accept occasional non-critical failures as a price worth paying. Just as the ubiquitous under construction signs on the early static websites of the 1990s gave way to dynamic websites that were effectively always “under construction,” software products too have acquired an open-ended evolutionary character.

    Just as rough consensus drives ideation towards “maximal interestingness”, agile processes drive evolution towards the regimes of greatest operational uncertainty, where failures are most likely to occur. In well-run modern software processes, not only is the resulting chaos tolerated, it is actively invited. Changes are often deliberately made at seemingly the worst possible times. Intuit, a maker of tax software, has a history of making large numbers of changes and updates at the height of tax season.

    Conditions that cause failure, instead of being cordoned off for avoidance in the future, are deliberately and systematically recreated and explored further. There are even automated systems designed to deliberately cause failures in production systems, such as Chaos Monkey, a system developed by Netflix to randomly take production servers offline, forcing the system to heal itself or die trying.
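
    The idea is simple enough to sketch. What follows is a hypothetical toy version (not Netflix’s actual tool; the fleet names and the terminate call are stand-ins) of a chaos-monkey-style failure injector that periodically takes a random instance out of a fleet so that recovery paths are exercised continuously rather than only during real outages.

```python
# A hypothetical, minimal chaos-monkey-style failure injector. FLEET and
# terminate() are stand-ins for real infrastructure and cloud API calls.

import random
import time

FLEET = ["web-1", "web-2", "web-3", "worker-1", "worker-2"]  # invented names

def terminate(instance: str) -> None:
    """Stand-in for an API call that stops the instance abruptly."""
    print(f"terminating {instance}; the system must now heal itself")

def chaos_loop(interval_seconds: int = 3600) -> None:
    """Every interval, kill one randomly chosen instance in production."""
    while True:
        terminate(random.choice(FLEET))
        time.sleep(interval_seconds)
```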

    The glimpses of perpetual beta that users can see are dwarfed by unseen backstage experimentation.

    This is neither perverse, nor masochistic: it is necessary to uncover hidden risks in experimental ideas early, and to quickly resolve gridlocks with data.

    The origins of this curious philosophy lie in what is known as the release early, release often (RERO) principle, usually attributed to Linus Torvalds, the primary architect of the Linux operating system. The idea is exactly what it sounds like: releasing code as early as possible, and as frequently as possible while it is actively evolving.

    What makes this possible in software is that most software failures do not have life-threatening consequences.4 As a result, it is usually faster and cheaper to learn from failure than to attempt to anticipate and accommodate it via detailed planning (which is why the RERO principle is often restated in terms of failure as fail fast).

    So crucial is the RERO mindset today that many companies, such as Facebook and Etsy, insist on new hires contributing and deploying a minor change to mission-critical systems on their very first day. Companies that rely on waterfall processes, by contrast, often put new engineers through years of rotating assignments before trusting them with significant autonomy.

    To appreciate just how counterintuitive the RERO principle is, and why it makes traditional engineers nervous, imagine a car manufacturer rushing to put every prototype into “experimental” mass production, with the intention of discovering issues through live car crashes. Or supervisors in a manufacturing plant randomly unplugging or even breaking machinery during peak demand periods. Even lean management models in manufacturing do not go this far. Due to their roots in scarcity, lean models at best mitigate the problems caused by waterfall thinking. Truly agile models on the other hand, do more: they catalyze abundance.

    Perhaps the most counter-intuitive consequence of the RERO principle is this: where engineers in other disciplines attempt to minimize the number of releases, software engineers today strive to maximize the frequency of releases. The industrial-age analogy here is the stuff of comedy science fiction: an intern launching a space mission just to ferry a single paper-clip to the crew of a space station.

    This tendency makes no sense within waterfall models, but is a necessary feature of agile models. The only way for execution to track the changing direction of the rough consensus as it pivots is to increase the frequency of releases. Failed experiments can be abandoned earlier, with lower sunk costs. Successful ones can migrate into the product as fast as hidden risks can be squeezed out. As a result, a lightweight sense of direction — rough consensus — is enough. There is no need to navigate by an increasingly unattainable utopian vision.

    Which raises an interesting question: what happens when there are irreconcilable differences of opinion that break the rough consensus?

    Software as Subversion

    If creating great software takes very little capital, copying great software takes even less. This means dissent can be resolved in an interesting way that is impossible in the world of atoms. Under appropriately liberal intellectual property regimes, individuals can simply take a copy of the software and continue developing it independently. In software, this is called forking. Efforts can also combine forces, a process known as merging. Unlike the superficially similar process of spin-offs and mergers in business, forking and merging in software can be non-zero sum.

    Where democratic processes would lead to gridlock and stalled development, conflicts under rough consensus and running code and release early, release often processes lead to competing, divergent paths of development that explore many possible worlds in parallel.

    This approach to conflict resolution is so radically unfamiliar1 that it took nearly three decades even for pragmatic hackers to recognize forking as something to be encouraged. Twenty-five years passed between the first use of the term “fork” in this sense (by Unix hacker Eric Allman in 1980) and the development of a tool that encouraged rather than discouraged it: git, developed by Linus Torvalds in 2005. Git is now the most widely used code management system in the world, and the basis for GitHub, the leading online code repository.

    In software development, the model works so well that a nearly two-century-old industrial model of work is being abandoned for one built around highly open collaboration, promiscuous forking and opt-in staffing of projects.

    The dynamics of the model are most clearly visible in certain modern programming contests, such as the regular Matlab programming contests conducted by MathWorks.

    Such events often allow contestants to frequently check their under-development code into a shared repository. In the early stages, such sharing allows for the rapid dissemination of the best design ideas through the contestant pool. Individuals effectively vote for the most promising ideas by appropriating them for their own designs, in effect forming temporary collaborations. Hoarding ideas or code tends to be counterproductive due to the likelihood that another contestant will stumble on the same idea, improve upon it in unexpected ways, or detect a flaw that allows it to “fail fast.” But in the later stages, the process creates tricky competitive conditions, where speed of execution beats quality of ideas. Not surprisingly, the winner is often a contestant who makes a minor, last-minute tweak to the best submitted solution, with seconds to spare.

    Such contests — which exhibit in simplified forms the dynamics of the open-source community as well as practices inside leading companies — not only display the power of rough consensus and running code (RCRC) and RERO, they demonstrate why promiscuous forking and open sharing lead to better overall outcomes.

    Software that thrives in such environments has a peculiar characteristic: what computer scientist Richard Gabriel described as worse is better.2 Working code that prioritizes visible simplicity, catalyzing effective collaboration and rapid experimentation, tends to spread rapidly and unpredictably. Overwrought code that prioritizes authoritarian, purist concerns such as formal correctness, consistency, and completeness tends to die out.

    In the real world, teams form through self-selection around great code written by one or two linchpin programmers rather than contest challenges. Team members typically know each other at least casually, which means product teams tend to grow to a few dozen at most. Programmers who fail to integrate well typically leave in short order. If they cannot or do not leave, they are often explicitly told to do nothing and stay out of the way, and actively shunned and cut out of the loop if they persist.

    While the precise size of an optimal team is debatable, Jeff Bezos’ two-pizza rule suggests that the number is no more than about a dozen.3

    In stark contrast to the quality code developed by “worse is better” processes, software developed by teams of anonymous, interchangeable programmers, with bureaucratic top-down staffing, tends to be of terrible quality. Turning Gabriel’s phrase around, such software represents a “better is worse” outcome: utopian visions that fail predictably in implementation, if they ever progress beyond vaporware at all.

    The IBM OS/2 project of the early nineties,4 conceived as a replacement for the then-dominant operating system, MS-DOS, provides a perfect illustration of “better is worse.” Each of the thousands of programmers involved was expected to design, write, debug, document, and support just 10 lines of code per day. Writing more than 10 lines was considered a sign of irresponsibility. Project estimates were arrived at by first estimating the number of lines of code in the finished project, dividing by the number of days allocated to the project, and then dividing by 10 to get the number of programmers to assign to the project. Needless to say, programmers were considered completely interchangeable. The nominal “planning” time required to complete a project could be arbitrarily halved at any time, by doubling the number of assigned engineers.5 At the same time, dozens of managers across the company could withhold approval and hold back a release, a process ominously called “nonconcurrence.”
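
    To see how that arithmetic plays out, consider a hypothetical project (the numbers here are illustrative, not from the OS/2 history) estimated at one million lines of code and allotted 500 days: 1,000,000 ÷ 500 = 2,000 lines per day, which at 10 lines per programmer per day means 200 interchangeable programmers assigned on day one.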

    “Worse is better” can be a significant culture shock to those used to industrial-era work processes. The most common complaint is that a few rapidly growing startups and open-source projects typically corner a huge share of the talent supply in a region at any given time, making it hard for other projects to grow. To add insult to injury, the process can at times seem to over-feed the most capricious and silly projects while starving projects that seem more important. This process of the best talent unpredictably abandoning other efforts and swarming a few opportunities is a highly unforgiving one. It creates a few exceptional winning products and vast numbers of failed ones, leaving those with strong authoritarian opinions about “good” and “bad” technology deeply dissatisfied.

    But not only does the model work, it creates vast amounts of new wealth through both technology startups and open-source projects. Today, its underlying concepts like rough consensus, pivot, fast failure, perpetual beta, promiscuous forking, opt-in and worse is better are carrying over to domains beyond software and regions beyond Silicon Valley. Wherever they spread, limiting authoritarian visions and purist ideologies retreat.

    There are certainly risks with this approach, and it would be Pollyannaish to deny them. The state of the Internet today is the sum of millions of pragmatic, expedient decisions made by hundreds of thousands of individuals delivering running code, all of which made sense at the time. These decisions undoubtedly contributed to the serious problems facing us today, ranging from the poor security of Internet protocols to the issues being debated around Net Neutrality. But arguably, had the pragmatic approach not prevailed, the Internet would not have evolved significantly beyond the original ARPANET at all. Instead of a thriving Internet economy that promises to revitalize the old economy, the world at large might have followed the Japanese down the dead-end purist path of fifth-generation mainframe computing.

    Today, moreover, several solutions to such serious legacy problems are being pursued, such as blockchain technology (the software basis for cryptocurrencies like Bitcoin). These are vastly more creative than solutions that were debated in the early days of the Internet, and reflect an understanding of problems that have actually been encountered, rather than the limiting anxieties of authoritarian high-modernist visions. More importantly, they validate early decisions to resist premature optimization and leave as much creative room for future innovators as possible. Of course, if emerging solutions succeed, more lurking problems will surface that will in turn need to be solved, in the continuing pragmatic tradition of perpetual beta.

    Our account of the nature of software ought to suggest an obvious conclusion: it is a deeply subversive force. For those caught on the wrong side of this force, being on the receiving end of Blitzkrieg operations by a high-functioning agile software team can feel like mounting zemblanity: a sense of inevitable doom.

    This process has by now occurred often enough that a general sense of zemblanity has overcome the traditional economy at large. Every aggressively growing startup seems like a special-forces team with an occupying army of job-eating machine-learning programs and robots following close behind.

    Internally, the software-eaten economy is even more driven by disruption: the time it takes for a disruptor to become a disruptee has been radically shrinking in the last decade — and startups today are highly aware of that risk. That awareness helps explain the raw aggressiveness that they exhibit.

    It is understandable that to people in the traditional economy, software eating the world sounds like a relentless war between technology and humanity.

    But exactly the opposite is the case. Technological progress, unlike war or Wall Street-style high finance, is not a zero-sum game, and that makes all the difference. The Promethean force of technology is today, and always has been, the force that has rescued humanity from its worst problems just when it seemed impossible to avert civilizational collapse. With every single technological advance, from the invention of writing to the invention of television, those who have failed to appreciate the non-zero-sum nature of technological evolution have prophesied doom. Every time, they have made some version of the argument that “this time it is different,” and every time they have been proven wrong.

    Instead of enduring civilizational collapse, humanity has instead ascended to a new level of well-being and prosperity each time.

    Of course, this poor record of predicting collapses is not by itself proof that it is no different this time. There is no necessary reason the future has to be like the past. There is no fundamental reason our modern globalized society is uniquely immune to the sorts of game-ending catastrophes that led to the fall of the Roman Empire or the Mayan civilization. The case for continued progress must be made anew with each technological advance, and new concerns, such as climate change today, must be seriously considered.

    But concerns that the game might end should not lead us to limit ourselves to what philosopher James Carse6 called finite game views of the world, based on “winning” and arriving at a changeless, pure and utopian state as a prize. As we will argue in the next essay, the appropriate mindset is what Carse called an infinite game view, based on the desire to continue playing the game in increasingly generative ways. From an infinite game perspective, software eating the world is in fact the best thing that can happen to the world.

    Prometheans and Pastoralists

    The unique characteristics of software as a technological medium have an impact beyond the profession itself. To understand the broader impact of software eating the world, we have to begin by examining the nature of technology adoption processes.

    A basic divide in the world of technology is between those who believe humans are capable of significant change, and those who believe they are not. Prometheanism is the philosophy of technology that follows from the idea that humans can, do and should change. Pastoralism, on the other hand, is the philosophy that change is profane. The tension between these two philosophies leads to a technology diffusion process characterized by a colloquial phrase popular in the startup world: first they ignore you, then they laugh at you, then they fight you, then you win.1

    Science fiction writer Douglas Adams reduced the phenomenon to a set of three sardonic rules from the point of view of users of technology:

    Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
    Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
    Anything invented after you’re thirty-five is against the natural order of things.
    As both these folk formulations suggest, there is a certain inevitability to technological evolution, and a certain naivete to many patterns of resistance.

    To understand why this is in fact the case, consider the proposition that technological evolution is path-dependent in the short term, but not in the long term.

    Major technological possibilities, once uncovered, are invariably exploited in ways that maximally unleash their potential. While there is underutilized potential left, individuals compete and keep adapting in unpredictable ways to exploit that potential. All it takes is one thing: a thriving frontier of constant tinkering and diverse value systems must exist somewhere in the world.

    Specific ideas may fail. Specific uses may not endure. Localized attempts to resist may succeed, as the existence of the Amish demonstrates. Some individuals may resist some aspects of the imperative to change successfully. Entire nations may collectively decide to not explore certain possibilities. But with major technologies, it usually becomes clear very early on that the global impact is going to be of a certain magnitude and cause a corresponding amount of disruptive societal change. This is the path-independent outcome and the reason there seems to be a “right side of history” during periods of rapid technological developments.

    The specifics of how, when, where and through whom a technology achieves its maximal impact are path dependent. Competing to guess the right answers is the work of entrepreneurs and investors. But once the answers are figured out, the contingent path from “weird” to “normal” will be largely forgotten, and the maximally transformed society will seem inevitable with hindsight.

    The ongoing evolution of ridesharing through conflict with the taxicab industry illustrates this phenomenon well. In January 2014, for instance, striking cabdrivers in Paris attacked vehicles hired through Uber. The rioting cabdrivers smashed windshields and slashed tires, leading to immediate comparisons in the media to the original pastoralists of industrialized modernity: the Luddites of the early 19th century.2

    Like the Luddite movement, the reaction to ridesharing services such as Uber and Lyft is not resistance to innovative technology per se, but something larger and more complex: an attempt to limit the scope and scale of impact in order to prevent disruption of a particular way of life. As Richard Conniff notes in a 2011 essay in the Smithsonian magazine:

    As the Industrial Revolution began, workers naturally worried about being displaced by increasingly efficient machines. But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”3

    In his essay, Conniff argues that the original Luddites were simply fighting to preserve their idea of human values, and concludes that “standing up against technologies that put money or convenience above other human values” is necessary for a critical engagement of technology. Critics make similar arguments in every sector being eaten by software.

    The apparent reasonableness of this view is deceptive: it is based on the wishful hope that entire societies can and should agree on what the term human values means, and use that consensus to decide which technologies to adopt. An unqualified appeal to “universal” human values is usually a call for an authoritarian imposition of decidedly non-universal values.

    As the rideshare industry debates demonstrate, even consumers and producers within a single sector find it hard to achieve consensus on values. Protests by cab drivers in London in 2014, for instance, led to an increase in business4 for rideshare companies, clear evidence that consumers do not necessarily act in solidarity with incumbent producers based on shared “human values.”

    It is tempting to analyze such conflicts in terms of classical capitalist or labor perspectives. The result is a predictable impasse: capitalists emphasize increased supply driving prices down, while progressives focus on loss of jobs in the taxicab industry. Both sides attempt to co-opt the political loyalties of rideshare drivers. Capitalists highlight increased entrepreneurial opportunities, while progressives highlight increased income precarity. Capitalists like to label rideshare drivers free agents or micro-entrepreneurs, while progressives prefer labels like precariat (by analogy to proletariat) or scab. Both sides attempt to make the future determinate by force-fitting it into preferred received narratives using loaded terms.

    Both sides also operate by the same sense of proportions: they exaggerate the importance of the familiar and trivialize the new. Apps seem trivial, while automobiles loom large as a motif of an entire century-old way of life. Societies organized around cars seem timeless, normal, moral and self-evidently necessary to preserve and extend into the future. The smartphone at first seems to add no more than a minor element of customer convenience within a way of life that cannot possibly change. The value it adds to the picture is treated like a rounding error and ignored. As a result both sides see the conflict as a zero-sum redistribution of existing value: gains on one side, exactly offset by losses on the other side.

    But as Marshall McLuhan observed, new technologies change our sense of proportions.

    Even today’s foggy view of a smartphone-centric future suggests that ridesharing is evolving from convenience to necessity. By sustaining cheaper and more flexible patterns of local mobility, ridesharing enables new lifestyles in urban areas. Young professionals can better afford to work in opportunity-rich cities. Low-income service workers can expand their mobility beyond rigid public transit and the occasional expensive emergency taxi-ride. Small restaurants with limited working capital can use ridesharing-like services to offer delivery services. It is in fact getting hard to imagine how else transportation could work in a society with smartphones.

    The impact is shifting from the path-dependent phase, when it wasn’t clear whether the idea was even workable, to the non-path-dependent phase, where it seems inevitable enough that other ideas can be built on top.

    Such snowballing changes in patterns of life are due to what economists call consumer surplus5 (increased spending power elsewhere due to falling costs in one area of consumption) and positive spillover effects6 (unexpected benefits in unrelated industries or distant geographies). For technologies with a broad impact, these are like butterfly effects: deceptively tiny causes with huge, unpredictable effects. Due to the unpredictability of surplus and spillover, the bulk of the new wealth created by new technologies (on the order of 90% or more) eventually accrues to society at large,7 rather than the innovators who drove the early, path-dependent phase of evolution. This is the macroeconomic analog to perpetual beta: execution by many outrunning visioning by a few, driving more bottom-up experimentation and turning society itself into an innovation laboratory.

    Far from the value of the smartphone app being a rounding error in the rideshare industry debate, it in fact represents the bulk of the value. It just does not accrue directly to any of the participants in the overt, visible conflict.

    If adoption models were entirely dictated by the taxicab industry, this value would not exist, and the zero-sum framing would become a self-fulfilling prophecy. Similarly, when entrepreneurs try to capture all or even most of the value they set out to create, the results are counterproductive: minor evolutionary advances that again make zero-sum outcomes a self-fulfilling prophecy. Technology publishing pioneer Tim O’Reilly captured the essence of this phenomenon with the principle, “create more value than you capture.” For the highest-impact products, the societal value created dwarfs the value captured.

    These largely invisible surplus and spillover effects do more than raise broad living standards. By redirecting newly freed creative energy and resources down indeterminate paths, consumer surpluses and spillover effects actually drive further technological evolution in a non-zero-sum way. The bulk of the energy leaks away to drive unexpected innovations in unrelated areas. A fraction courses through unexpected feedback paths and improves the original innovation itself, in ways the pioneers themselves do not anticipate. Similar unexpected feedback paths improve derivative inventions as well, vastly amplifying the impact beyond simple “technology diffusion.”

    The story of the steam engine is a good illustration of both effects. It is widely recognized that spillover effects from James Watt’s steam engine, originally introduced in the Cornish mining industry, helped trigger the British industrial revolution. What is less well-known8 is that the steam engine itself was vastly improved by hundreds of unknown tinkerers adding “microinventions” in the decades immediately following the expiration of James Watt’s patents. Once an invention leaks into what Robert Allen calls “collective invention settings,” with a large number of individuals and firms freely sharing information and independently tinkering with an innovation, future evolution gathers unstoppable momentum and the innovation goes from “weird” to “new normal.” Besides the Cornish mining district in the early 1800s, the Connecticut Valley in the 1870s-1890s,9 Silicon Valley since 1950 and the Shenzhen region of China since the 1990s are examples of flourishing collective invention settings. Together, such active creative regions constitute the global technology frontier: the worldwide zone of bricolage.

    The path-dependent phase of evolution of a technology can take centuries, as Joel Mokyr shows in his classic, Lever of Riches. But once it enters a collective invention phase, surplus and spillover effects gather momentum and further evolution becomes simultaneously unpredictable and inevitable. Once the inevitability is recognized, it is possible to bet on follow-on ideas without waiting for details to become clear. Today, it is possible to bet on a future based on ridesharing and driverless cars without knowing precisely what those futures will look like.

    As consumers, we experience this kind of evolution as what Buckminster Fuller called ephemeralization: the seemingly magical ability of technology to do more and more with less and less.

    This is most visible today in the guise of Moore’s Law, but ephemeralization is in fact a feature of all technological evolution. Potable water was once so hard to come by that many societies suffered from endemic water-borne diseases and were forced to rely on expensive and inefficient procedures like boiling water at home. Today, only around 10% of the world lacks such access.10 Diamonds were once worth fighting wars over. Today, artificial diamonds, indistinguishable from natural ones, are becoming widely available.

    The result is a virtuous cycle of increasing serendipity, driven by widespread lifestyle adaptation and cascades of self-improving innovation. Surplus and spillover creating more surplus and spillover. Brad DeLong’s slouching towards utopia for consumers and Edmund Phelps’ mass flourishing for producers. And when the virtuous cycle is powered by a soft, world-eating technology, the steady, cumulative impact is immense.

    Both critics and enthusiasts of innovation deeply misunderstand the nature of this virtuous cycle. Critics typically lament lifestyle adaptations as degeneracy and call for a return to traditional values. Many enthusiasts, instead of being inspired by a sense of unpredictable, flourishing potential, are repeatedly seduced by specific visions of the Next Big Thing, sometimes derived rather literally from popular science fiction. As a result, they lament the lack of collective attention directed towards their pet societal projects. The priorities of other enthusiasts seem degenerate.

    The result in both cases is the same: calls for reining in the virtuous cycle. Both kinds of lament motivate efforts to concentrate and deploy surpluses in authoritarian ways (through retention of excessive monopolistic profits by large companies or government-led efforts funded through taxation) and contain spillover effects (by restricting access to new technological capabilities). Both are ultimately attempts to direct creative energies down a few determinate paths. Both are driven by a macroeconomic version of the Luddite hope: that it is possible to enjoy the benefits of non-zero-sum innovation without giving up predictability. For critics, it is the predictability of established patterns of life. For Next Big Thing enthusiasts, it is a specific aspirational pattern of life.

    Both are varieties of pastoralism, the cultural cousin of purist approaches in engineering. Pastoralism suffers from precisely the same, predictable authoritarian high-modernist failure modes. Like purist software visions, pastoralist visions too are marked by an obsessive desire to permanently win a specific, zero-sum finite game rather than to keep playing the non-zero-sum infinite game.

    When the allure of pastoralist visions is resisted, and the virtuous cycle is allowed to work, we get Promethean progress. This is unpredictable evolution in the direction of maximal societal impact, unencumbered by limiting deterministic visions. Just as the principle of rough consensus and running code creates great software, consumer surplus and spillover effects create great societies. Just as pragmatic and purist development models lead to serendipity and zemblanity in engineering respectively, Promethean and pastoral models lead to serendipity and zemblanity at the level of entire societies.

    When pastoralist calls for actual retreat are heeded, the technological frontier migrates elsewhere, often causing centuries of stagnation. This was precisely what happened in China and the Islamic world around the fifteenth century, when the technological frontier shifted to Europe.

    Heeding the other kind of pastoralist call, to pursue a determinate Next Big Thing at the expense of many indeterminate small things, leads to somewhat better results. Such models can deliver impressive initial gains, but invariably create a hardening landscape of authoritarian, corporatist institutions. This triggers a vicious cycle that predictably stifles innovation.

    The Apollo program, for instance, fulfilled John F. Kennedy’s call to put humans on the moon within the decade. It also led to the inexorable rise of the military-industrial complex that his predecessor, Dwight D. Eisenhower, had warned against. The Soviets fared even worse: they made equally impressive strides in the space race, but the society they created collapsed on itself under the weight of authoritarianism. What prevented that outcome in the United States was the regional technological frontier migrating to the West Coast, and breaking smart from the military-industrial complex in the process. This allowed some of the creative energy being gradually stifled to escape to a more favorable environment.

    With software eating the world, we are again witnessing predictable calls for pastoralist development models. Once again, the challenge is to resist the easy answers on offer.

    The Allure of Pastoralism

    In art, the term pastoral refers to a genre of painting and literature based on romanticized and idealized portrayals of a pastoral lifestyle, usually for urban audiences with no direct experience of the actual squalor and oppression of pre-industrial rural life.

    [Image caption: Biblical Pastoralism: drawing inspiration for the 21st century from shepherds.]

    Within religious traditions, pastorals may also be associated with the motifs and symbols of uncorrupted states of being. In the West for instance, pastoral art and literature often evoke the Garden of Eden story. In Islamic societies, the first caliphate is often evoked in a similar way.

    The notion of a pastoral is useful for understanding idealized understandings of any society, real or imagined, past, present or future. In Philip Roth’s American Pastoral for instance, the term is an allusion to the idealized American lifestyle enjoyed by the protagonist Seymour “Swede” Levov, before it is ruined by the social turmoil of the 1960s.

    At the center of any pastoral we find essentialized notions of what it means to be human, like Adam and Eve or William Whyte’s Organization Man, arranged in a particular social order (patriarchal in this case). From these archetypes we get to pure and virtuous idealized lifestyles. Lifestyles that deviate from these understandings seem corrupt and vice-driven. The belief that “people don’t change” is at once an approximation and a prescription: people should not change except to better conform to the ideal they are assumed to already approximate. The belief justifies building technology to serve the predictable and changeless ideal and labeling unexpected uses of technology degenerate.

    We owe our increasingly farcical yearning for jetpacks and flying cars, for instance, to what we might call the “World Fairs pastoral,” since the vision was strongly shaped by mid-twentieth-century World Fairs. Even at the height of its influence, it was already being satirized by television shows like The Flintstones and The Jetsons. The shows portrayed essentially the 1950s social order, full of Organization Families, transposed to past and future pastoral settings. The humor in the shows rested on audiences recognizing the escapist non-realism.

    [Image caption: Not quite as clever as the Flintstones or Jetsons, but we try.]

    The World Fairs pastoral, inspired strongly by the aerospace technologies of the 1950s, represented a future imagined around flying cars, jetpacks and glamorous airlines like Pan Am. Flying cars merely updated a familiar nuclear-family lifestyle. Jetpacks appealed to the same individualist instincts as motorcycles. Airlines like Pan Am, besides being an integral part of the military-industrial complex, owed their “glamour” in part to their deliberate perpetuation of the sexist culture of the fifties. Within this vision, truly significant developments, like the rise of vastly more efficient low-cost airlines in the 70s, seemed like decline from a “Golden Age” of air travel.

    Arguably, the aerospace future that actually unfolded was vastly more interesting than the one envisioned in the World Fairs pastoral. Low-cost, long-distance air travel opened up a globalized and multicultural future, broke down barriers between insular societies, and vastly increased global human mobility. Along the way, it helped dismantle much of the institutionalized sexism behind the glamour of the airline industry. These developments were enabled in large part by post-1970s software technologies,1 rather than improvements in core aerospace engineering technologies. These were precisely the technologies that were beginning to “break smart” out of the stifling influence of the military-industrial complex.

    In 2012, thanks largely to these developments, for the first time in history there were over a billion international tourist arrivals worldwide.2 Software had eaten and democratized elitist air travel. Today, software is continuing to eat airplanes in deeper ways, driving the current explosion in drone technology. Again, those fixated on jetpacks and flying cars are missing the actual, much more interesting action because it is not what they predicted. When pastoralists pay attention to drones at all, they see them primarily as morally objectionable military weapons. The fact that they replace technologies of mass slaughter such as carpet bombing, and the growing number of non-military uses, are ignored.

    In fact the entire World Fairs pastoral is really a case of privileged members of society, presuming to speak for all, demanding “faster horses” for all of society (in the sense of the likely apocryphal3 quote attributed to Henry Ford, “If I’d asked my customers what they wanted, they would have demanded faster horses.”)

    Fortunately for the vitality of the United States and the world at large, the future proved wiser than any limiting pastoral vision of it. The aerospace story is just one among many that suddenly appear in a vastly more positive light once we drop pastoral obsessions and look at the actual unfolding action. Instead of the limited things we could imagine in the 1950s, we got much more impactful things. Software eating aerospace technology allowed it to continue progressing in the direction of maximum potential.

    If pastoral visions are so limiting, why do we get so attached to them? Where do they even come from in the first place? Ironically, they arise from Promethean periods of evolution that are too successful.

    The World Fairs pastoral, for instance, emerged out of a Promethean period in the United States, heralded by Alexander Hamilton in the 1790s. Hamilton recognized the enormous potential of industrial manufacturing, and in his influential 1791 Report on Manufactures,4 argued that the then-young United States ought to strive to become a manufacturing superpower. For much of the nineteenth century, Hamilton’s ideas competed for political influence5 with Thomas Jefferson’s pastoral vision of an agrarian, small-town way of life, a romanticized, sanitized version of the society that already existed.

    For free Americans alive at the time, Jefferson’s vision must have seemed tangible, obviously valuable and just within reach. Hamilton’s must have seemed speculative, uncertain and profane, associated with the grime and smoke of early industrializing Britain. For almost 60 years, it was in fact Jefferson’s parochial sense of proportions that dominated American politics. It was not until the Civil War that the contradictions inherent in the Jeffersonian pastoral led to its collapse as a political force. Today, while it still supplies powerful symbolism to politicians’ speeches, all that remains of the Jeffersonian pastoral is a nostalgic cultural memory of small-town agrarian life.

    During the same period, Hamilton’s ideas, through their overwhelming success, evolved from a vague sense of direction in the 1790s into a rapidly maturing industrial social order by the 1890s. By the 1930s, this social order was already being pastoralized into an alluring vision of jetpacks and flying cars in a vast, industrialized, centralized society. A few decades later, this had turned into a sense of dead-end failure associated with the end of the Apollo program, and the reality of a massive, overbearing military-industrial complex straddling the technological world. The latter has now metastasized into an entire too-big-to-fail old economy. One indicator of the freezing of the sense of direction is that many contemporary American politicians still remain focused on physical manufacturing the way Alexander Hamilton was in 1791. What was a prescient sense of direction then has turned into nostalgia for an obsolete utopian vision today. But where we have lost our irrational attachment to the Jeffersonian pastoral, the World Fairs pastoral is still too real to let go.

    We get attached to pastorals because they offer a present condition of certainty and stability and a utopian future promise of absolutely perfected certainty and stability. Arrival at the utopia seems like a well-deserved reward for hard-won Promethean victories. Pastoral utopias are where the victors of particular historical finite games hope to secure their gains and rest indefinitely on their laurels. The dark side, of course, is that pastorals also represent fantasies of absolute and eternal power over the fate of society: absolute utopias for believers that necessarily represent dystopias for disbelievers. Totalitarian ideologies of the twentieth century, such as communism and fascism, are the product of pastoral mindsets in their most toxic forms. The Jeffersonian pastoral was a nightmare for black Americans.

    When pastoral fantasies start to collapse under the weight of their own internal contradictions, long-repressed energies are unleashed. The result is a societal condition marked by widespread lifestyle experimentation based on previously repressed values. To those faced with a collapse of the World Fairs pastoral project today, this seems like an irreversible slide towards corruption and moral decay.

    Understanding Elite Discontent

    Because they serve as stewards of dominant pastoral visions, cultural elites are most prone to viewing unexpected developments as degeneracy. From the Greek philosopher Plato1 (who lamented the invention of writing in the 4th century BC) to the Chinese scholar Zhang Xian Wu2 (who lamented the invention of printing in the 12th century AD), alarmist commentary on technological change has been a constant in history. A contemporary example can be found in a 2014 article3 by Paul Verhaeghe in The Guardian:

    There are constant laments about the so-called loss of norms and values in our culture. Yet our norms and values make up an integral and essential part of our identity. So they cannot be lost, only changed. And that is precisely what has happened: a changed economy reflects changed ethics and brings about changed identity. The current economic system is bringing out the worst in us.

    Viewed through any given pastoral lens, any unplanned development is more likely to subtract rather than add value. In an imagined world where cars fly, but driving is still a central rather than peripheral function, ridesharing can only be seen as subtracting taxi drivers from a complete vision. Driverless cars — the name is revealing, like “horseless carriage” — can only be seen as subtracting all drivers from the vision. And with such apparent subtraction, values and humans can only be seen as degenerating (never mind that we still ride horses for fun, and will likely continue driving cars for fun).

    This tendency to view adaptation as degeneracy is perhaps why cultural elites are startlingly prone to the Luddite fallacy. This is the idea that technology-driven unemployment is a real concern, an idea that arises from the more basic assumption that there is a fixed amount of work (“lump of labor”) to be done. By this logic, if a machine does more, then there is less for people to do.

    Prometheans often attribute this fallacious argument to a lack of imagination, but the roots of its appeal lie much deeper. Pastoralists are perfectly willing and able to imagine many interesting things, so long as they bring reality closer to the pastoral vision. Flying cars — and there are very imaginative ways to conceive of them — seem better than land-bound ones because drivers predictably evolving into pilots conforms to the underlying notion of human perfectibility. Drivers unpredictably evolving into smartphone-wielding free agents, and breaking smart from the Organization Man archetype, does not. Within the Jeffersonian pastoral, faster horses (not exactly trivial to breed) made for more empowered small-town yeoman farmers. Drivers of early horseless carriages were degenerate dependents, beholden to big corporations, big cities and Standard Oil.

    In other words, pastoralists can imagine sustaining changes to the prevailing social order, but disruptive changes seem profane. As a result, those who adapt to disruption in unexpected ways seem like economic and cultural degenerates, rather than representing employment rebounding in unexpected ways.

    History of course, has shown that the idea of technological unemployment is not just wrong, it is wildly wrong. Contemporary fears of software eating jobs are just the latest version of the argument that “people cannot change” and that this time, the true limits of human adaptability have been discovered.

    This argument is absolutely correct — within the pastoral vision in which it is made.

    Once we remove pastoral blinders, it becomes obvious that the future of work lies in the unexpected and degenerate-seeming behaviors of today. Agriculture certainly suffered a devastating permanent loss of employment to machinery within the Jeffersonian pastoral by 1890. Fortunately, Hamilton’s profane ideas, and the degenerate citizens of the industrial world he foresaw, saved the day. The ideal Jeffersonian human, the noble small-town yeoman farmer, did in fact become practically extinct as the Jeffersonians feared. Today the pastoral-ideal human is a high-IQ credentialist Organization Man, headed for gradual extinction, unable to compete with higher-IQ machines. The degenerate, breaking-smart humans of the software-eaten world on the other hand, have no such fears. They are too busy tinkering with new possibilities to bemoan imaginary lost utopias.

    John Maynard Keynes was too astute to succumb to the Luddite fallacy in this naive form. In his 1930 conception of the leisure society,4 he noted that the economy could arbitrarily expand to create and satisfy new needs, and with a lag, absorb labor as fast as automation freed it up. But Keynes too failed to recognize that with new lifestyles come new priorities, new lived values and new reasons to want to work. As a result, he saw the Promethean pattern of progress as a necessary evil on the path to a utopian leisure society based on traditional, universal religious values:

    I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue — that avarice is a vice, that the exaction of usury is a misdemeanour, and the love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.
    
    But beware! The time for all this is not yet. For at least another hundred years we must pretend to ourselves and to every one that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.

    Perceptions of moral decline however, have no necessary relationship with actual moral decline. As Joseph Tainter observes in The Collapse of Complex Societies:

    Values of course, vary culturally, socially and individually…What one individual, society, or culture values highly another does not…Most of us approve, in general, of that which culturally is most like or most pleasing, or at least most intelligible to us. The result is a global bedlam of idiosyncratic ideologies, each claiming exclusive possession of ‘truth.’…

    The ‘decadence’ concept seems particularly detrimental [and is] notoriously difficult to define. Decadent behavior is that which differs from one’s own moral code, particularly if the offender at some former time behaved in a manner of which one approves. There is no clear causal link between the morality of behavior and political fortunes.

    While there is no actual moral decline in any meaningful absolute sense, the anxiety experienced by pastoralists is real. For those who yearn for paternalistic authority, more lifestyle possibilities lead to a sense of anomie rather than freedom. This triggers what the philosopher George Steiner called nostalgia for the absolute.5 Calls for a retreat to tradition or a collectivist drive towards the Next Big Thing (often an Updated Old Thing, as in the case of President Obama’s call for a “new Sputnik moment” a few years ago) share a yearning for a simpler world. But, as Steiner notes:

    I do not think it will work. On the most brutal, empirical level, we have no example in history…of a complex economic and technological system backtracking to a more simple, primitive level of survival. Yes, it can be done individually. We all, I think, in the universities now have a former colleague or student somewhere planting his own organic food, living in a cabin in the forest, trying to educate his family far from school. Individually it might work. Socially, I think, it is moonshine.

    In 1974, the year of peak centralization, Steiner was presciently observing the beginnings of the transformation. Today, the angst he observed on university campuses has turned into a society-wide condition of pastoral longing, and a pervasive sense of moral decay.

    For Prometheans, on the other hand, not only is there no decay, there is actual moral progress.

    The Principle of Generative Pluralism

    Prometheans understand technological evolution in terms of increasing diversity of lived values, in the form of more varied actual lifestyles. From any given pastoral perspective, such increasing pluralism is a sign of moral decline, but from a Promethean perspective, it is a sign of moral progress catalyzed by new technological capabilities.

    Emerging lifestyles introduce new lived values into societies. Hamilton did not just suggest a way out of the rural squalor1 that was the reality of the Jeffersonian pastoral. His way also led to the dismantlement of slavery, the rise of modern feminism and the gradual retreat of colonial oppression and racism. Today, we are not just leaving the World Fairs pastoral behind for a richer technological future. We are also leaving behind its paternalistic institutions, narrow “resource” view of nature, narrow national identities and intolerance of non-normative sexual identities.

    Promethean attitudes begin with an acknowledgment of the primacy of lived values over abstract doctrines. This does not mean that lived values must be uncritically accepted or left unexamined. It just means that lived values must be judged on their own merit, rather than through the lens of a prejudiced pastoral vision.

    The shift from car-centric to smartphone-centric priorities in urban transportation is just one aspect of a broader shift from hardware-centric to software-centric lifestyles. Rideshare driver, carless urban professional and low-income-high-mobility are just the tip of an iceberg that includes many other emerging lifestyles, such as eBay or Etsy merchant, blogger, indie musician and search-engine marketer. Each new software-enabled lifestyle adds a new set of lived values and more apparent profanity to society. Some, like rent-over-own values, are shared across many emerging lifestyles and threaten pastorals like the “American Dream,” built around home ownership. Others, such as dietary preferences, are becoming increasingly individualized and weaken the very idea of a single “official food pyramid” pastoral script for all.

    Such broad shifts have historically triggered change all the way up to the global political order. Whether or not emerging marginal ideologies2 achieve mainstream prominence, their sense of proportions and priorities, driven by emerging lifestyles and lived values, inevitably does.

    These observations are not new among historians of technology, and have led to endless debates about whether societal values drive technological change (social determinism) or whether technological change drives societal values (technological determinism). In practice, the fact that people change and disrupt the dominant prevailing ideal of “human values” renders the question moot. New lived values and new technologies simultaneously irrupt into society in the form of new lifestyles. Old lifestyles do not necessarily vanish: there are still Jeffersonian small farmers and traditional blacksmiths around the world for instance. Rather, they occupy a gradually diminishing role in the social order. As a result, new and old technologies and an increasing number of value systems coexist.

    In other words, human pluralism eventually expands to accommodate the full potential of technological capabilities.3

    We call this the principle of generative pluralism. Generative pluralism is what allows the virtuous cycle of surplus and spillover to operate. Ephemeralization — the ability to gradually do more with less — creates room for the pluralistic expansion of lifestyle possibilities and individual values, without constraining the future to a specific path.

    The inherent unpredictability in the principle implies that both technological and social determinism are incomplete models driven by zero-sum thinking. The past cannot “determine” the future at all, because the future is more complex and diverse. It embodies new knowledge about the world and new moral wisdom, in the form of a more pluralistic and technologically sophisticated society.

    Thanks to a particularly fertile kind of generative pluralism that we know as network effects, soft technologies like language and money have historically caused the greatest broad increases in complexity and pluralism. When more people speak a language or accept a currency, the potential of that language or currency increases in a non-zero-sum way. Shared languages and currencies allow more people to harmoniously co-exist, despite conflicting values, by allowing disputes to be settled through words or trade4 rather than violence. We should therefore expect software eating the world to cause an explosion in the variety of possible lifestyles, and society as a whole becoming vastly more pluralistic.

    And this is in fact what we are experiencing today.
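
    To make the non-zero-sum arithmetic of network effects concrete, here is a minimal illustrative sketch in Python (added in editing, not part of the original essay; it assumes the common Metcalfe-style approximation): among n users of a shared language or currency, the number of possible pairwise connections grows as n(n-1)/2, so each new participant adds value for everyone already present.

        # Illustrative sketch (assumes Metcalfe-style pairwise counting):
        # potential pairwise connections among n users of a shared language
        # or currency. Each newcomer adds a link for every existing user.
        def potential_connections(n: int) -> int:
            return n * (n - 1) // 2

        for n in (10, 100, 1000):
            print(n, "users ->", potential_connections(n), "possible pairs")
        # 10 users -> 45 pairs; 100 -> 4950; 1000 -> 499500.
        # Value grows roughly with the square of participation: non-zero-sum.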

    The principle also resolves the apparent conflict between human agency and “what technology wants”: Far from limiting human agency, technological evolution in fact serves as the most complete expression of it. Technological evolution takes on its unstoppable and inevitable character only after it breaks smart from authoritarian control and becomes part of an unpredictable and unscripted culture of collective invention. The existence of thousands of individuals and firms working relatively independently on the same frontier means that every possibility will not only be uncovered, it will be uncovered by multiple individuals, operating with different value systems, at different times and places. Even if one inventor chooses not to pursue a possibility, chances are, others will. As a result, all pastoralist forms of resistance are eventually overwhelmed. But the process retains rational resistance to paths that carry a risk of ending the infinite game for all, in proportion to their severity. As global success in limiting the spread of nuclear and biological weapons shows, generative pluralism is not the same as mad scientists and James Bond villains running amok.

    Prometheans who discover high-leverage unexpected possibilities enter a zone of serendipity. The universe seems to conspire to magnify their agency to superhuman levels. Pastoralists who reject change altogether as profanity turn lack of agency into a self-fulfilling prophecy, and enter a zone of zemblanity. The universe seems to conspire to diminish whatever agency they do have, resulting in the perception that technology diminishes agency.

    Power, unlike capability, is zero-sum, since it is defined in terms of control over other human beings. Generative pluralism implies that on average, pastoralists are constantly ceding power to Prometheans. In the long term, however, the loss of power is primarily a psychological rather than material loss. To the extent that ephemeralization frees us of the need for power, we have less use for a disproportionate share.

    As a simple example, consider a common twentieth-century battleground: public signage. Today, different languages contend for signaling power in public spaces. In highly multilingual countries, this contention can turn violent. But automated translation and augmented reality technologies5 can make it unnecessary to decide, for instance, whether public signage in the United States ought to be in English, Spanish or both. An arbitrary number of languages can share the same public spaces, and there is much less need for linguistic authoritarianism. Like physical sports in an earlier era, soft technologies such as online communities, video games and augmented reality are all slowly sublimating our most violent tendencies. The 2014 protests in Ferguson, MO, are a powerful example: compared to the very similar civil rights riots of the 1960s, the primary medium of influence was information, in the form of social media coverage, rather than violence.

    The broader lesson of the principle of generative pluralism is this: through technology, societies become intellectually capable of handling progressively more complex value-based conflicts. As societies gradually awaken to resolution mechanisms that do not require authoritarian control over the lives of others, they gradually substitute intelligence and information for power and coercion.

    The Future in the Rear-View Mirror

    So far, we have tried to convey a visceral sense of what is essentially an uneven global condition of explosive positive change: change that is progressing at all levels, from individuals to businesses to communities to the global societal order. Perhaps the most important part of the change is that we are experiencing a systematic substitution of intelligence for brute authoritarian power in problem solving, allowing a condition of vastly increased pluralism to emerge.

    Paradoxically, due to the roots of vocal elite discontent in pastoral sensibilities, this analysis is valid only to the extent that it feels viscerally wrong. And going by the headlines of the past few years, it certainly does.

    Much of our collective sense of looming chaos and paradises being lost is in fact a clear and unambiguous sign of positive change in the world. By this model, if our current collective experience of the human condition felt utopian, with cultural elites extolling its virtues, we should be very worried indeed. Societies that present a facade of superficial pastoral harmony, as in the movie The Stepford Wives, tend to be sustained by authoritarian, non-pluralistic polities, hidden demons, and invisible violence.

    Innovation can in fact be defined as ongoing moral progress achieved by driving directly towards the regimes of greatest moral ambiguity, where our collective demons lurk. These are also the regimes where technology finds its maximal expressions, and it is no accident that the two coincide. Genuine progress feels like onrushing obscenity and profanity, and also requires new technological capabilities to drive it.

    The subjective psychological feel of this evolutionary process is what Marshall McLuhan described in terms of a rear-view mirror effect: “we see the world through a rear-view mirror. We march backwards into the future.”

    Our aesthetic and moral sensibilities are oriented by default towards romanticized memories of paradises lost. Indeed, this is the only way we can enter the future. Our constantly pastoralizing view of the world, grounded in the past, is the only one we have. The future, glimpsed only through a small rear-view mirror, is necessarily framed by the past. To extend McLuhan’s metaphor, the great temptation is to slam on the brakes and shift from what seems like reverse gear into forward gear. The paradox of progress is that what seems like the path forward is in fact the reactionary path of retreat. What seems like the direction of decline is in fact the path forward.

    Today, our collective rear-view mirror is packed with seeming profanity, in the form of multiple paths of descent into hell. Among the major ones that occupy our minds are the following:

    • Technological Unemployment: The debate around technological unemployment and the concern that “this time it is different” with AI and robots “eating all the jobs.”
    • Inequality: The rising concern around persistent inequality and the fear that software, unlike previous technologies, does not offer much opportunity outside of an emerging intellectual elite of programmers and financiers.
    • “Real” Problems: The idea that “real” problems such as climate change, collapsing biodiversity, healthcare, water scarcity and energy security are being neglected, while talent and energy are being frivolously expended on “trivial” photo-sharing apps.
    • “Real” Innovation: The idea that “real” innovation in areas such as space exploration, flying cars and jetpacks has stagnated.
    • National Competitiveness: The idea that software eating the world threatens national competitiveness based on manufacturing prowess and student performance on standardized tests.
    • Cultural Decline: The idea that social networks, and seemingly “low-quality” new media and online education are destroying intellectual culture.
    • Cybersecurity: The concern that authoritarian forces are gaining vast new powers of repression, threatening freedom everywhere, through surveillance and cyberwarfare technologies (the latter ranging from intelligence-agency worms like Stuxnet to drone strikes) that are beyond the reach of average citizens.
    • The End of the Internet: The concern that new developments due to commercial interests pose a deep and existential threat to the freedoms and possibilities that we have come to associate with the Internet.

    These are such complex and strongly coupled themes that conversations about any one of them quickly lead to a jumbled discussion of all of them, in the form of an ambiguous “inequality, surveillance and everything” non-question. Dickens’ memorable opening paragraph in A Tale of Two Cities captures this state of confused urgency and inchoate anxiety perfectly:

    It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.

    Such a state of confused urgency often leads to hasty and ill-conceived grand pastoralist schemes by way of the well-known politician’s syllogism:1

    Something must be done
    
    This is something
    
    This must be done

    Promethean sensibilities suggest that the right response to the sense of urgency is not the politician’s syllogism, but counter-intuitive courses of action: driving straight into the very uncertainties the ambiguous problem statements frame. Often, when only reactionary pastoralist paths are under consideration, this means doing nothing, and allowing events to follow a natural course.

    In other words, our basic answer to the non-question of “inequality, surveillance and everything” is this: the best way through it is through it. It is an answer similar in spirit to the stoic principle that “the obstacle is the way” and the Finnish concept of sisu: meeting adversity head-on by cultivating a capacity for managing stress, rather than figuring out schemes to get around it. Seemingly easier paths, as the twentieth century’s utopian experiments showed, create a great deal more pain in the long run.

    Broken though they might seem, the mechanisms we need for working through “inequality, surveillance and everything” are the generative, pluralist ones we have been refining over the last century: liberal democracy, innovation, entrepreneurship, functional markets and the most thoughtful and limited new institutions we can design.

    This answer will strike many as deeply unsatisfactory and perhaps even callous. Yet, time and again, when the world has been faced with seemingly impossible problems, these mechanisms have delivered.

    Beyond doing the utmost possible to shield those most exposed to, and least capable of enduring, the material pain of change, it is crucial to limit ourselves and avoid the temptation of reactionary paths suggested by utopian or dystopian visions, especially those that appear in futurist guises. The idea that forward is backward and sacred is profane will never feel natural or intuitive, but innovation and progress depend on acting by these ideas anyway.

    In the remaining essays in this series, we will explore what it means to act by these ideas.

    A Tale of Two Computers

    Part-way through Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, we learn that Earth is not a planet, but a giant supercomputer built by a race of hyperintelligent aliens. Earth was designed by a predecessor supercomputer called Deep Thought, which in turn had been built to figure out the answer to the ultimate question of “Life, the Universe and Everything.” Much to the annoyance of the aliens, the answer turns out to be a cryptic and unsatisfactory “42.”

    [Figure: What is 7 times 6?]

    We concluded the previous essay with our own ultimate question of “Inequality, Surveillance and Everything.” The basic answer we offered — “the best way through it is through it” — must seem as annoying, cryptic and unsatisfactory as Deep Thought’s “42.”

    In Adams’ tale, Deep Thought gently suggests to the frustrated aliens that perhaps the answer seemed cryptic because they never understood the question in the first place. Deep Thought then proceeds to design Earth to solve the much tougher problem of figuring out the actual question.

    First performed as a radio show in 1978, Adams’ absurdist epic precisely portrayed the societal transformation that was gaining momentum at the time. Rapid technological progress due to computing was accompanied by cryptic and unsatisfactory answers to confused and urgent-seeming questions about the human condition. Our “Inequality, Surveillance and Everything” form of the non-question is not that different from the corresponding non-question of the late 1970s: “Cold War, Globalization and Everything.” Then, as now, the frustrating but correct answer was “the best way through it is through it.”

    The Hitchhiker’s Guide can be read as a satirical anti-morality tale about pastoral sensibilities, utopian solutions and perfect answers. In their dissatisfaction with the real “Ultimate Answer,” the aliens failed to notice the truly remarkable development: they had built an astoundingly powerful computer, which had then proceeded to design an even more powerful successor.

    Like the aliens, we may not be satisfied with the answers we find to timeless questions, but simply by asking the questions and attempting to answer them, we are bootstrapping our way to a more advanced society.

    As we argued in the last essay, the advancement is both technological and moral, allowing for a more pluralistic society to emerge from the past.

    Adams died in 2001, just as his satirical visions, which had inspired a generation of technologists, started to actually come true. Just as Deep Thought had given rise to a fictional “Earth” computer, centralized mainframe computing of the industrial era gave way to distributed, networked computing. In a rather perfect case of life imitating art, researchers at Carnegie Mellon (whose team was later hired by IBM) named a powerful chess-playing computer Deep Thought in the late 1980s, in honor of Adams’ fictional computer. Its successor, Deep Blue, became the first computer to beat the reigning world chess champion in a 1997 match. But the true successor to the IBM era of computing was the planet-straddling distributed computer we call the Internet.

    Manufactured in Taiwan

    Science fiction writer Neal Stephenson noted the resulting physical transformation as early as 1996, in his essay on the undersea cable-laying industry, Mother Earth, Motherboard.1 By 2004, Kevin Kelly had coined a term and launched a new site to talk about the idea of digitally integrated technology as a single, all-subsuming social reality,2 emerging on this motherboard:

    I’m calling this site The Technium. It’s a word I’ve reluctantly coined to designate the greater sphere of technology – one that goes beyond hardware to include culture, law, social institutions, and intellectual creations of all types. In short, the Technium is anything that springs from the human mind. It includes hard technology, but much else of human creation as well. I see this extended face of technology as a whole system with its own dynamics.

    The metaphor of the world as a single interconnected entity that subsumes human existence is an old one, and in its modern form, can be traced at least to Hobbes’ Leviathan (1651) and Herbert Spencer’s The Social Organism (1860). What is new about this specific form is that it is much more than a metaphor. The view of the world as a single, connected substrate for computation is not just a poetic way to appreciate the world: It is a way to shape it and act upon it. For many software projects, the idea that “the network is the computer” (due to John Gage, a computing pioneer at Sun Microsystems) is the only practical perspective.

    While the pre-Internet world can also be viewed as a programmable planetary computer based on paperware, what makes today’s planetary computer unique in history is that almost anyone with an Internet connection can program it at a global scale, rather than just powerful leaders with the ability to shape organizations.

    The kinds of programming possible on such a vast, democratic scale have been rapidly increasing in sophistication. In November 2014 for instance, within a few days of the Internet discovering and becoming outraged by a sexist 2013 Barbie book titled Barbie: I Can Be a Computer Engineer, hacker Kathleen Tuite had created a web app (using an inexpensive cloud service called Heroku) allowing anyone to rewrite the text of the book. The hashtag #FeministHackerBarbie immediately went viral. Coupled with the web app, the hashtag unleashed a flood of creative rewrites of the Barbie book. What would have been a short-lived flood of outrage only a few years ago had turned into a breaking-smart moment for the entire software industry.

    To appreciate just how remarkable this episode was, consider this: a hashtag is effectively an instantly defined soft network within the Internet, with capabilities comparable to the entire planet’s telegraph system a century ago. By associating a hashtag with the right kind of app, Tuite effectively created an entire temporary publishing company, with its own distribution network, in a matter of hours rather than decades. In the process, reactive sentiment turned into creative agency.

    These capabilities emerged in just 15 years: practically overnight by the normal standards of technological change.

    In 1999, SETI@home,3 the first distributed computing project to capture the popular imagination, merely seemed like a weird way to donate spare personal computing power to science. By 2007, Facebook, Twitter, YouTube, Wikipedia and Amazon’s Mechanical Turk4 had added human creativity, communication and money into the mix, and the same engineering approaches had created the social web. By 2014, experimental mechanisms developed in the culture of cat memes5 were influencing elections. The penny-ante economy of Amazon’s Mechanical Turk had evolved into a world where bitcoin miners were making fortunes, car owners were making livable incomes through ridesharing on the side, and canny artists were launching lucrative new careers on Kickstarter.

    Even as the old planet-scale computer declines, the new one it gave birth to is coming of age.

    In our Tale of Two Computers, the parent is a four-century-old computer whose basic architecture was laid down in the zero-sum mercantile age. It runs on paperware, credentialism, and exhaustive territorial claims that completely carve up the world with strongly regulated boundaries. Its structure is based on hierarchically arranged container-like organizations, ranging from families to nations. In this order of things, there is no natural place for a free frontier. Ideally, there is a place for everything, and everything is in its place. It is a computer designed for stability, within which innovation is a bug rather than a feature.

    We’ll call this planet-scale computer the geographic world.

    The child is a young, half-century old computer whose basic architecture was laid down during the Cold War. It runs on software, the hacker ethos, and soft networks that wire up the planet in ever-richer, non-exclusive, non-zero-sum ways. Its structure is based on streams like Twitter: open, non-hierarchical flows of real-time information from multiple overlapping networks. In this order of things, everything from banal household gadgets to space probes becomes part of a frontier for ceaseless innovation through bricolage. It is a computer designed for rapid, disorderly and serendipitous evolution, within which innovation, far from being a bug, is the primary feature.

    We’ll call this planet-scale computer the networked world.

    The networked world is not new. It is at least as old as the oldest trade routes, which have been spreading subversive ideas alongside valuable commodities throughout history. What is new is its growing ability to dominate the geographic world. The story of software eating the world is also the story of networks eating geography.

    There are two major subplots to this story. The first subplot is about bits dominating atoms. The second subplot is about the rise of a new culture of problem-solving.

    The Immortality of Bits

    In 2015, it is safe to say that the weird problem-solving mechanisms of SETI@home and kitten-picture sharing have become normal problem-solving mechanisms for all domains.

    Today it seems strange to not apply networked distributed computing involving both neurons and silicon to any complex problem. The term social media is now unnecessary: Even when there are no humans involved, problem-solving on this planet-scale computer almost necessarily involves social mechanisms. Whatever the mix of humans, software and robots involved, solutions tend to involve the same “social” design elements: real-time information streams, dynamically evolving patterns of trust, fluid identities, rapidly negotiated collaborations, unexpected emergent problem decompositions, efficiently allocated intelligence, and frictionless financial transactions.

    Each time a problem is solved using these elements, the networked world is strengthened.

    As a result of this new and self-reinforcing normal in problem-solving, the technological foundation of our planet is evolving with extraordinary rapidity. The process is a branching, continuous one rather than the staged, sequential process suggested by labels like Web 2.0 and Web 3.01, which reflect an attempt to understand it in somewhat industrial terms. Some recently sprouted extensions and branches have already been identified and named: the Mobile Web, the Internet of Things (IoT), streaming media, Virtual Reality (VR), Augmented Reality (AR) and the blockchain. Others will no doubt emerge in profusion, further blurring the line between real and virtual.

    Surprisingly, as a consequence of software eating the technology industry itself, the specifics of the hardware are not important in this evolution. Outside of the most demanding applications, data, code, and networking are all largely hardware-agnostic today.

    The Internet Archive’s Wayback Machine,2 developed by Brewster Kahle and Bruce Gilliat in 1996, has already preserved a history of the web across a few generations of hardware. While such efforts can sometimes seem woefully inadequate with respect to pastoralist visions of history preservation, it is important to recognize the magnitude of the advance they represent over paper-based collective memories.

    Crashing storage costs and continuously upgraded datacenter hardware allow corporations to indefinitely save all the data they generate. This is turning out to be cheaper than deciding what to do with it3 in real time, resulting in the Big Data approach to business. At a personal level, cloud-based services like Dropbox make your personal data trivial to move across computers.

    Most code today, unlike fifty years ago, is in hardware-independent high-level programming languages rather than hardware-specific machine code. As a result of virtualization (technology that allows one piece of hardware to emulate another, a fringe technology until around 20004), most cloud-based software runs within virtual machines and “code containers” rather than directly on hardware. Containerization in shipping drove nearly a seven-fold increase5 in trade among industrialized nations over 20 years. Containerization of code is shaping up to be even more impactful in the economics of software.
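
    As a toy illustration of this hardware-independence (a sketch added in editing, not drawn from the essay): the same few lines of a high-level language run unchanged on a laptop, a phone, a virtual machine or a container; only the environment they report back differs.

        # Toy sketch: identical high-level code, whatever the underlying hardware.
        # The script merely reports the environment it happens to be running in.
        import platform

        def describe_host() -> str:
            return (platform.python_implementation() + " "
                    + platform.python_version() + " on "
                    + platform.system() + " / " + platform.machine())

        print("Same code, different hardware:", describe_host())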

    Networks too, are defined primarily in software today. It is not just extremely high-level networks, such as the transient, disposable ones defined by hashtags, that exist in software. Low-level networking software can also persist across generations of switching equipment and different kinds of physical links, such as telephone lines, optic fiber cables and satellite links. Thanks to the emerging technology of software-defined networking (SDN), functions that used to be performed by network hardware are increasingly performed by software.

    In other words, we don’t just live on a networked planet. We live on a planet networked by software, a distinction that makes all the difference. The software-networked planet is an entity that can exist in a continuous and coherent way despite continuous hardware churn, just as we humans experience a persistent identity, even though almost every atom in our bodies gets swapped out every few years.

    This is a profound development. We are used to thinking of atoms as enduring and bits as transient and ephemeral, but in fact the reverse is more true today.

    [Figure: bits over atoms]

    The emerging planetary computer has the capacity to retain an evolving identity and memory across evolutionary epochs in hardware, both silicon and neural. Like money and writing, software is only dependent on hardware in the short term, not in the long term. Like the US dollar or the plays of Shakespeare, software and software-enabled networks can persist through changes in physical technology.

    By contrast it is challenging to preserve old hard technologies even in museums, let alone in working order as functional elements of society. When software eats hardware, however, we can physically or virtually recreate hardware as necessary, imbuing transient atoms with the permanence of bits.

    For example, the Reuleaux collection of 19th century engineering mechanisms, a priceless part of mechanical engineering heritage, is now available as a set of 3d printable models from Cornell University6 for students anywhere in the world to download, print and study. A higher-end example is NASA’s reverse engineering of 1970s-vintage Saturn V rocket engines.7 The complex project used structured light 3d scanning to reconstruct accurate computer models, which were then used to inform a modernized design. Such resurrection capabilities even extend to computing hardware itself. In 1997, using modern software tools, researchers at the University of Pennsylvania led by Jan Van der Spiegel recreated ENIAC, the first modern electronic computer — in the form of an 8mm by 8mm chip.8

    As a result of such capabilities, the very idea of hardware obsolescence is becoming obsolete. Rapid evolution does not preclude the persistence of the past in a world of digital abundance.

    The potential in virtual and augmented reality is perhaps even higher, and goes far beyond consumption devices like the Oculus Rift, Magic Leap, Microsoft HoloLens and the Leap Motion 3d sensor. The more exciting story is that production capabilities are being democratized. In the early decades of prohibitively expensive CGI and motion capture technology, only big-budget Hollywood movies and video games could afford to create artificial realities. Today, with technologies like Microsoft’s Photosynth (which allows you to capture 3d imagery with smartphones), SketchUp (a powerful and free 3d modeling tool), 3d Warehouse (a public repository of 3d virtual objects), Unity (a powerful game-design tool) and 3d scanning apps such as Trimensional, it is becoming possible for anyone to create living historical records and inhabitable fictions in the form of virtual environments. The Star Trek “holodeck” is almost here: our realities can stay digitally alive long after they are gone in the physical world.

    These are more than cool toys. They are soft technological capabilities of enormous political significance. Software can preserve the past in the form of detailed, relivable memories that go far beyond the written word. In 1964, only the “Big 3” network television crews had the ability to film the civil rights riots in America, making the establishment record of events the only one. A song inspired by the movement was appropriately titled The Revolution Will Not Be Televised. In 1991, a lone witness with a personal camcorder videotaped the tragic beating of Rodney King; the footage, and the subsequent acquittal of the officers involved, triggered the 1992 Los Angeles riots.

    Fast-forwarding more than two decades, in 2014, smartphones were capturing at least fragments of nearly every important development surrounding the death of Michael Brown in Ferguson, and thousands of video cameras were being deployed to challenge the perspectives offered by the major television channels. In a rare display of consensus, civil libertarians on both the right and left began demanding that all police officers and cars be equipped with cameras that cannot be turned off. Around the same time, the director of the FBI was reduced to conducting a media roadshow to attempt to stall the spread of cryptographic technologies capable of limiting government surveillance.

    In just a year after the revelations of widespread surveillance by the NSA, the tables were already being turned.

    It is only a matter of time before all participants in every event of importance will be able to record and share their experiences from their perspective as comprehensively as they want. These can then turn into collective, relivable, 3d memories that are much harder for any one party to manipulate in bad faith. History need no longer be written by past victors.

    Even authoritarian states are finding that surveillance capabilities cut both ways in the networked world. During the 2014 #Occupy protests in Hong Kong for instance, drone imagery allowed news agencies to make independent estimates of crowd sizes,9 limiting the ability of the government to spin the story as a minor protest. Software was being used to record history from the air, even as it was being used to drive the action on the ground.

    When software eats history this way, as it is happening, the ability to forget10 becomes a more important political, economic and cultural concern than the ability to remember.

    When bits begin to dominate atoms, it no longer makes sense to think of virtual and physical worlds as separate, detached spheres of human existence. It no longer makes sense to think of machine and human spheres as distinct non-social and social spaces. When software eats the world, “social media,” including both human and machine elements, becomes the entire Internet. “The Internet” in turn becomes the entire world. And in this fusion of digital and physical, it is the digital that dominates.

    The fallacious idea that the online world is separate from and subservient to the offline world (an idea called digital dualism, the basis for entertaining but deeply misleading movies such as Tron and The Matrix) yields to an understanding of the Internet as an alternative basis for experiencing all reality, including the old basis: geography.

    Science fiction writer Bruce Sterling captured the idea of bits dominating atoms with his notion of “spimes” — enduring digital master objects that can be flexibly realized in different physical forms as the need arises. A book, for instance, is a spime rather than a paper object today, existing as a master digital copy that can evolve indefinitely, and persist beyond specific physical copies.

    At a more abstract level, the idea of a “journey” becomes a spime that can be flexibly realized in many ways, through specific physical vehicles or telepresence technologies. A “television news show” becomes an abstract spime that might be realized through the medium of a regular television crew filming on location, an ordinary citizen livestreaming events she is witnessing, drone footage, or official surveillance footage obtained by activist hackers.

    Spimes in fact capture the essential spirit of bricolage: turning ideas into reality using whatever is freely or cheaply available, instead of through dedicated resources controlled by authoritarian entities. This capability highlights the economic significance of bits dominating atoms. When the value of a physical resource is a function of how openly and intelligently it can be shared and used in conjunction with software, it becomes less contentious. In a world organized by atoms-over-bits logic, most resources are by definition what economists call rivalrous: if I have it, you don’t. Such captive resources are limited by the imagination and goals of one party. An example is a slice of the electromagnetic spectrum reserved for a television channel. Resources made intelligently open to all on the other hand, such as Twitter, are limited only by collective technical ingenuity. The rivalrousness of goods becomes a function of the amount of software and imagination used to leverage them, individually or collectively.

    When software eats the economy, the so-called “sharing economy” becomes the entire economy, and renting, rather than ownership, becomes the default logic driving consumption.

    The fact that all this follows from “social” problem-solving mechanisms suggests that the very meaning of the word has changed. As sociologist Bruno Latour has argued, “social” is now about more than the human. It includes ideas and objects flexibly networked through software. Instead of being an externally injected alien element, technology and innovation become part of the definition of what it means to be social.

    What we are living through today is a hardware and software upgrade for all of civilization. It is, in principle, no different from buying a new smartphone and moving music, photos, files and contacts to it. And like a new smartphone, our new planet-scale hardware comes with powerful but disorienting new capabilities. Capabilities that test our ability to adapt.

    And of all the ways we are adapting, the single most important one is the adaptation in our problem-solving behaviors.

    This is the second major subplot in our Tale of Two Computers. Wherever bits begin to dominate atoms, we solve problems differently. Instead of defining and pursuing goals, we create and exploit luck.

    Tinkering versus Goals

    Upgrading a planet-scale computer is, of course, a more complex matter than trading in an old smartphone for a new one, so it is not surprising that it has already taken us nearly half a century, and we’re still not done.

    Since 1974, the year of peak centralization, we have been trading in a world whose functioning is driven by atoms in geography for one whose functioning is driven by bits on networks. The process has been something like vines growing all over an aging building, creeping in through the smallest cracks in the masonry to establish a new architectural logic.

    The difference between the two is simple: the geographic world solves problems in goal-driven ways, through literal or metaphoric zero-sum territorial conflict. The networked world solves them in serendipitous ways, through innovations that break assumptions about how resources can be used, typically making them less rivalrous and unexpectedly abundant.

    Goal-driven problem-solving follows naturally from the politician’s syllogism: we must do something; this is something; we must do this. Such goals usually follow from gaps between reality and utopian visions. Solutions are driven by the deterministic form-follows-function1 principle, which emerged with authoritarian high-modernism in the early twentieth century. At its simplest, the process looks roughly like this:

    • Problem selection: Choose a clear and important problem
    • Resourcing: Capture resources by promising to solve it
    • Solution: Solve the problem within promised constraints

    This model is so familiar that it seems tautologically equivalent to “problem solving”. It is hard to see how problem-solving could work any other way. This model is also an authoritarian territorial claim in disguise. A problem scope defines a boundary of claimed authority. Acquiring resources means engaging in zero-sum competition to bring them into your boundary, as captive resources. Solving the problem generally means achieving promised effects within the boundary without regard to what happens outside. This means that unpleasant unintended consequences — what economists call social costs — are typically ignored, especially those which impact the least powerful.

    We have already explored the limitations of this approach in previous essays, so we can just summarize them here. Choosing a problem based on “importance” means uncritically accepting pastoral problem frames and priorities. Constraining the solution with an alluring “vision” of success means limiting creative possibilities for those who come later. Innovation is severely limited: You cannot act on unexpected ideas that solve different problems with the given resources, let alone pursue the direction of maximal interestingness indefinitely. This means unseen opportunity costs can be higher than visible benefits. You also cannot easily pursue solutions that require different (and possibly much cheaper) resources than the ones you competed for: problems must be solved in pre-approved ways.

    This is not a process that tolerates uncertainty or ambiguity well, let alone thrives on it. Even positive uncertainty becomes a problem: an unexpected budget surplus must be hurriedly used up, often in wasteful ways, otherwise the budget might shrink next year. Unexpected new information and ideas, especially from novel perspectives — the fuel of innovation — are by definition a negative, to be dealt with like unwanted interruptions. A new smartphone app not anticipated by prior regulations must be banned.

    In the last century, the most common outcome of goal-directed problem solving in complex cases has been failure.

    The networked world approach is based on a very different idea. It does not begin with utopian goals or resources captured through specific promises or threats. Instead it begins with open-ended, pragmatic tinkering that thrives on the unexpected. The process is not even recognizable as a problem-solving mechanism at first glance:

    • Immersion in relevant streams of ideas, people and free capabilities
    • Experimentation to uncover new possibilities through trial and error
    • Leverage to double down on whatever works unexpectedly well

    Where the politician’s syllogism focuses on repairing things that look broken in relation to an ideal of changeless perfection, the tinkerer’s way focuses on possibilities for deliberate change. As Dilbert creator Scott Adams observed, “Normal people don’t understand this concept; they believe that if it ain’t broke, don’t fix it. Engineers believe that if it ain’t broke, it doesn’t have enough features yet.”2

    What would be seemingly pointless disruption in an unchanging utopia becomes a way to stay one step ahead in a changing environment. This is the key difference between the two problem-solving processes: in goal-driven problem-solving, open-ended ideation is fundamentally viewed as a negative. In tinkering, it is a positive.

    The first phase — inhabiting relevant streams — can look like idle procrastination on Facebook and Twitter, or idle play with cool new tools discovered on Github. But it is really about staying sensitized to developing opportunities and threats. The perpetual experimentation, as we saw in previous essays, feeds via bricolage on whatever is available. Often these are resources considered “waste” by neighboring goal-directed processes: a case of social costs being turned into assets. A great deal of modern data science for instance, begins with “data exhaust”: data of no immediate goal-directed use to an organization that would normally get discarded in an environment of high storage costs. Since the process begins with low-stakes experimentation, the cost of failures is naturally bounded. The upside, however, is unbounded: there is no necessary limit to what unexpected leveraged uses you might discover for new capabilities.

    Tinkerers — be they individuals or organizations — in possession of valuable but under-utilized resources tend to do something counter-intuitive. Instead of keeping idle resources captive, they open up access to as many people as possible, with as few strings attached as possible, in the hope of catalyzing spillover tinkering. Where it works, thriving ecosystems of open-ended innovation form, and steady streams of new wealth begin to flow. Those who share interesting and unique resources in such open ways gain a kind of priceless goodwill money cannot buy. The open-source movement, Google’s Android operating system, Big Data technology, the Arduino hardware experimentation kit and the OpenROV underwater robot all began this way. Most recently, Tesla voluntarily opened up access to its electric vehicle technology patents under highly liberal terms compared to automobile industry norms.

    Tinkering is a process of serendipity-seeking that does not just tolerate uncertainty and ambiguity; it requires them. When conditions for it are right, the result is a snowballing effect where pleasant surprises lead to more pleasant surprises.

    What makes this a problem-solving mechanism is diversity of individual perspectives coupled with the law of large numbers (the statistical idea that rare events can become highly probable if there are enough trials going on). If an increasing number of highly diverse individuals operate this way, the chances of any given problem getting solved via a serendipitous new idea slowly rise. This is the luck of networks.
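
    A hedged numerical sketch of this claim (the probabilities below are illustrative assumptions, not figures from the essay): if each independent tinkering effort has only a small chance p of producing a breakthrough, the chance that at least one of n such efforts succeeds is 1 - (1 - p)^n, which climbs toward certainty as n grows.

        # Illustrative sketch: chance that at least one of n independent,
        # diverse tinkering efforts succeeds, when each has a small chance p.
        def p_at_least_one(p: float, n: int) -> float:
            return 1.0 - (1.0 - p) ** n

        p = 0.001  # assumed per-effort chance of a breakthrough (illustrative)
        for n in (100, 1000, 10000):
            print(n, "efforts ->", format(p_at_least_one(p, n), ".3%"))
        # 100 efforts -> ~9.5%; 1,000 -> ~63.2%; 10,000 -> ~99.995%.
        # With enough diverse trials, somebody, somewhere, gets lucky.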

    Serendipitous solutions are not just cheaper than goal-directed ones. They are typically more creative and elegant, and require much less conflict. Sometimes they are so creative, the fact that they even solve a particular problem becomes hard to recognize. For example, telecommuting and video-conferencing do more to “solve” the problem of fossil-fuel dependence than many alternative energy technologies, but are usually understood as technologies for flex-work rather than energy savings.

    Ideas born of tinkering are not targeted solutions aimed at specific problems, such as “climate change” or “save the middle class,” so they can be applied more broadly. As a result, not only do current problems get solved in unexpected ways, but new value is created through surplus and spillover. The clearest early sign of such serendipity at work is unexpectedly rapid growth in the adoption of a new capability. This indicates that it is being used in many unanticipated ways, solving both seen and unseen problems, by both design and “luck”.

    Venture capital is ultimately the business of detecting such signs of serendipity early and investing to accelerate it. This makes Silicon Valley the first economic culture to fully and consciously embrace the natural logic of networks. When the process works well, resources flow naturally towards whatever effort is growing and generating serendipity the fastest. The better this works, the more resources flow in ways that minimize opportunity costs.

    From the inside, serendipitous problem solving feels like the most natural thing in the world. From the perspective of goal-driven problem solvers, however, it can look indistinguishable from waste and immoral priorities.

    This perception exists primarily because access to the luck of sufficiently weak networks can be slowed down by sufficiently strong geographic world boundaries (what is sometimes called bahramdipity: serendipity thwarted by powerful forces). Where resources cannot stream freely to accelerate serendipity, they cannot solve problems through engineered luck, or create surplus wealth. The result is growing inequality between networked and geographic worlds.

    This inequality superficially resembles the inequality within the geographic world created by malfunctioning financial markets, crony capitalism and rent-seeking behaviors. As a result, it can be hard for non-technologists to tell Wall Street and Silicon Valley apart, even though they represent two radically different moral perspectives and approaches to problem-solving. When the two collide on highly unequal terms, as they did in the cleantech sector in the late aughts, the overwhelming advantage enjoyed by geographic-world incumbents can prove too much for the networked world to conquer. In the case of cleantech, software was unable to eat the sector and solve its problems in large part due to massive subsidies and protections available to incumbents.

    But this is just a temporary state. As the networked world continues to strengthen, we can expect very different outcomes the next time it takes on problems in the cleantech sector.

    As a result of failures and limits that naturally accompany young and growing capabilities, the networked world can seem “unresponsive” to “real” problems.

    So while both Wall Street and Silicon Valley can often seem tone-deaf and unresponsive to pressing and urgent pains while minting new billionaires with boring frequency, the causes are different. The problems of Wall Street are real, and symptomatic of a true crisis of social and economic mobility in the geographic world. Those of Silicon Valley on the other hand, exist because not everybody is sufficiently plugged into the networked world yet, limiting its power. The best response we have come up with for the former is periodic bailouts for “too big to fail” organizations in both the public and private sector. The problem of connectivity on the other hand, is slowly and serendipitously solving itself as smartphones proliferate.

    This difference between the two problem-solving cultures carries over to macroeconomic phenomena as well.

    Unlike booms and busts in the financial markets, which are often artificially created, technological booms and busts are an intrinsic feature of wealth creation itself. As Carlota Perez notes, technology busts in fact typically open up vast new capabilities that were overbuilt during booms. They radically expand access to the luck of networks to larger populations. The technology bust of 2000 for instance, radically expanded access to the tools of entrepreneurship and began fueling the next wave of innovation almost immediately.

    The 2007 subprime mortgage bust, born of deceit and fraud, had no such serendipitous impact. It destroyed wealth overall, rather than creating it. The global financial crisis that followed is representative of a broader systemic crisis in the geographic world.

    The Zemblanity of Containers

    Structure, as the management theorist Alfred Chandler noted in his study of early industrial age corporations, follows strategy. Where a goal-driven strategy succeeds, the temporary scope of the original problem hardens into an enduring and policed organizational boundary. Temporary and specific claims on societal resources transform into indefinite and general captive property rights for the victors of specific political, cultural or military wars.

    [illustration: containers]

    As a result we get containers with eternally privileged insiders and eternally excluded outsiders: geographic-world organizations. By their very design, such organizations are what Daron Acemoglu and James Robinson call extractive institutions. They are designed not just to solve a specific problem and secure the gains, but to continue extracting wealth indefinitely. Whatever the broader environmental conditions, ideally wealth, harmony and order accumulate inside the victor’s boundaries, while waste, social costs, and strife accumulate outside, to be dealt with by the losers of resource conflicts.

    This description does not apply just to large banks or crony capitalist corporations. Even an organization that seems unquestionably like a universal good, such as the industrial age traditional family, comes with a societal cost. In the United States for example, laws designed to encourage marriage and home-ownership systematically disadvantage single adults and non-traditional families (who now collectively form more than half the population). Even the traditional family, as defined and subsidized by politics, is an extractive institution.

    Where extractive institutions start to form, it becomes progressively harder to solve future problems in goal-driven ways. Each new problem-solving effort has more entrenched boundaries to deal with. Solving new problems usually means taking on increasingly expensive conflict to redraw boundaries as a first step. In the developed world, energy, healthcare and education are examples of sectors where problem-solving has slowed to a crawl due to a maze of regulatory and other boundaries. The result has been escalating costs and declining innovation — what economist William Baumol has labeled the “cost disease.”

    The cost disease is an example of how, in their terminal state, goal-driven problem solving cultures exhaust themselves. Without open-ended innovation, the growing complexity of boundary redrawing makes most problems seem impossible. The planetary computer that is the geographic world effectively seizes up.

    On the cusp of the first Internet boom, the landscape of organizations that defines the geographic world was already in deep trouble. As Gilles Deleuze noted around 1992:1

    We are in a generalized crisis in relation to all environments of enclosure — prison, hospital, factory, school, family…The administrations in charge never cease announcing supposedly necessary reforms…But everyone knows these environments are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of new forces knocking at the door.

    The “crisis in environments of enclosure” is a natural terminal state for the geographic world. When every shared societal resource has been claimed by a few as an eternal and inalienable right, and secured behind regulated boundaries, the only way to gain something is to deprive somebody else of it through ideology-driven conflict.

    This is the zero-sum logic of mercantile economic organization, and dates to the sixteenth century. In fact, because some value is lost through conflict, in the absence of open-ended innovation, it can be worse than zero-sum: what decision theorists call negative-sum (the ultimate example of which is of course war).

    By the early twentieth century, mercantilist economic logic had led to the world being completely carved up in terms of inflexible land, water, air, mineral and — perhaps most relevant today — spectrum rights. Rights that could not be freely traded or renegotiated in light of changing circumstances.

    This is a grim reality we have a tendency to romanticize. As the etymology of words like organization and corporation suggests, we tend to view our social containers through anthropomorphic metaphors. We extend metaphoric and legal fictions of identity, personality, birth and death far beyond the point of diminishing marginal utility. We assume the “life” of these entities to be self-evidently worth extending into immortality. We even mourn them when they do occasionally enter irreversible decline. Companies like Kodak and Radio Shack, for example, evoke such strong positive memories that their decline seems truly tragic to many Americans, despite the obvious irrelevance of the business models that originally fueled their rise. We assume that the fates of actual living humans are irreversibly tied to the fates of the artificial organisms they inhabit.

    In fact, in the late crisis-ridden state of the geographic world, the “goal” of a typical problem-solving effort is often to “save” some anthropomorphically conceived part of society, without any critical attention devoted to whether it is still necessary, or whether better alternatives are already serendipitously emerging. If innovation is considered a necessary ingredient in the solution at all, only sustaining innovations — those that help preserve and perfect the organization in question — are considered.

    Whether the intent is to “save” the traditional family, a failing corporation, a city in decline, or an entire societal class like the “American middle class,” the idea that the continued existence of any organization might be both unnecessary and unjustifiable is rejected as unthinkable. The persistence of geographic world organizations is prized for its own sake, whatever the changes in the environment.

    The dark side of such anthropomorphic romanticization is what we might call geographic dualism: a stable planet-wide separation of local utopian zones secured for a privileged few and increasingly dystopian zones for many, maintained through policed boundaries. The greater the degree of geographic dualism, the clearer the divides between slums and high-rises, home owners and home renters, developing and developed nations, wrong and right sides of the tracks, regions with landfills and regions with rent-controlled housing. And perhaps the most glaring divide: secure jobs in regulated sectors with guaranteed lifelong benefits for some, at the cost of needlessly heightened precarity in a rapidly changing world for others.

    In a changing environment, organizational stability valued for its own sake becomes a kind of immorality. Seeking such stability means allowing the winners of historic conflicts to enjoy the steady, fixed benefits of stability by imposing increasing adaptation costs on the losers.

    In the late eighteenth century, two important developments planted the seeds of a new morality, which sparked the industrial revolution. As a result new wealth began to be created despite the extractive, stability-seeking nature of the geographic world.

    Free as in Beer, and as in Speech

    With the benefit of a century of hindsight, the authoritarian high-modernist idea that form can follow function in a planned way, via coercive control, seems like wishful thinking beyond a certain scale and complexity. Two phrases popularized by the open-source movement, free as in beer and free as in speech, get at the essence of problem solving through serendipity, an approach that does work1 in large-scale and complex systems.

    The way complex systems — such as planet-scale computing capabilities — evolve is perhaps best described by a statement known as Gall’s Law:

    A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

    Gall’s Law is in fact much too optimistic. It is not just non-working complex systems designed from scratch that cannot be patched up. Even naturally evolved complex systems that used to work, but have now stopped working, generally cannot be patched into working order again.

    The idea that a new, simpler system can revitalize a complex system in a state of terminal crisis is the essence of Promethean thinking. Though the geographic world has reached a state of terminal crisis only recently, the seeds of a simpler working system to replace it were actually planted in the eighteenth century, nearly 200 years before software joined the party. The industrial revolution itself was driven by two elements of our world being partially freed from geographic world logic: people and ideas.

    The first was people. In the eighteenth century, the world gradually rejected the idea that people could be property, to be exclusively claimed by other people or organizations as a problem-solving “resource,” and held captive within specific boundaries. Individual rights and at-will employment models emerged in liberal democracies, in place of institutions like slavery, serfdom and caste-based hereditary professions.

    The second was ideas. Again, in the late eighteenth century, modern intellectual property rights, in the form of patents with expiration dates, became the norm. In ancient China, those who revealed the secrets of silk-making were put to death by the state. In late eighteenth century Britain, the expiration of James Watt’s patents sparked the industrial revolution.

    Thanks to these two enlightened ideas, a small trickle of individual inventions turned into a steady stream of non-zero sum intellectual and capitalist progress within an otherwise mercantilist, zero-sum world. In the process, the stability-seeking logic of mercantilism was gradually replaced by the adaptive logic of creative destruction.

    People and ideas became increasingly free in two distinct ways. As Richard Stallman, the pioneer of the free software movement, famously expressed it: The two kinds of freedom are free as in beer and free as in speech.

    First, people and ideas were increasingly free in the sense of no longer being considered “property” to be bought and sold like beer by others.

    Second, people and ideas became increasingly free in the sense of not being restricted to a single purpose. They could potentially play any role they were capable of fulfilling. For people, this second kind of freedom is usually understood in terms of specific rights such as freedom of speech, freedom of association and assembly, and freedom of religion. What is common to all these specific freedoms is that they represent freedom from the constraints imposed by authoritarian goals. This second kind of freedom is so new, it can be alarming to those used to being told what to do by authority figures.

    Where both kinds of freedom exist, networks begin to form. Freedom of speech, for instance, tends to create a thriving literary and journalistic culture, which exists primarily as a network of individual creatives rather than specific organizations. Freedom of association and assembly creates new political movements, in the form of grassroots political networks.

    Free people and ideas can associate in arbitrary ways, creating interesting new combinations and exploring open-ended possibilities. They can make up their own minds about whether problems declared urgent by authoritarian leaders are actually the right focus for their talents. Free ideas are even more powerful, since unlike the talents of free individuals, they are not restricted to one use at a time.

    Free people and free ideas formed the “working simple system” that drove two centuries of disruptive industrial age innovation.

    Tinkering — the steady operation of this working simple system — is a much more subversive force than we usually recognize, since it poses an implicit challenge to authoritarian priorities.

    This is what makes tinkering an undesirable, but tolerable bug in the geographic world. So long as material constraints limited the amount of tinkering going on, the threat to authority was also limited. Since the “means of production” were not free, either as in beer or as in speech, the anti-authoritarian threat of tinkering could be contained by restricting access to them.

    With software eating the world, this is changing. Tinkering is becoming much more than a minority activity pursued by the lucky few with access to well-stocked garages and junkyards. It is becoming the driver of a global mass flourishing.

    As Karl Marx himself realized, the end-state of industrial capitalism is in fact the condition where the means of production become increasingly available to all. Of course, it is already becoming clear that the result is neither the utopian collectivist workers’ paradise he hoped for, nor the utopian leisure society that John Maynard Keynes hoped for. Instead, it is a world where increasingly free people, working with increasingly free ideas and means of production, operate by their own priorities. Authoritarian leaders, used to relying on coercion and policed boundaries, find it increasingly hard to enforce their priorities on others in such a world.

    Chandler’s principle of structure following strategy allows us to understand what is happening as a result. If non-free people, ideas and means of production result in a world of container-like organizations, free people, ideas and means of production result in a world of streams.

    The Serendipity of Streams

    A stream is simply a life context formed by all the information flowing towards you via a set of trusted connections — to free people, ideas and resources — from multiple networks. If in a traditional organization nothing is free and everything has a defined role in some grand scheme, in a stream, everything tends steadily towards free as in both beer and speech. “Social” streams enabled by computing power in the cloud and on smartphones are not a compartmentalized location for a particular kind of activity. They provide an information and connection-rich context for all activity.

    [illustration: streams]

    Unlike organizations defined by boundaries, streams are what Acemoglu and Robinson call pluralist institutions. These are the opposite of extractive: they are open, inclusive and capable of creating wealth in non-zero-sum ways.

    On Facebook for example, connections are made voluntarily (unlike reporting relationships on an org chart) and pictures or notes are usually shared freely (unlike copyrighted photos in a newspaper archive), with few restrictions on further sharing. Most of the capabilities of the platform are free-as-in-beer. What is less obvious is that they are also free-as-in-speech. Except at the extremes, Facebook does not attempt to dictate what kinds of groups you are allowed to form on the platform.

    If the three most desirable things in a world defined by organizations are location, location and location,1 in the networked world they are connections, connections and connections.

    Streams are not new in human culture. Before the Silk Road was a Darknet site, it was a stream of trade connecting Asia, Africa and Europe. Before there were lifestyle-designing free agents, hackers and modern tinkerers, there were the itinerant tinkers of early modernity. The collective invention settings we discussed in the last essay, such as the Cornish mining district in James Watt’s time and Silicon Valley today, are examples of early, restricted streams. The main streets of thriving major cities are also streams, where you might run into friends unexpectedly, learn about new events through posted flyers, and discover new restaurants or bars.

    What is new is the idea of a digital stream created by software. While geography dominates physical streams, digital streams can dominate geography. Access to the stream of innovation that is Silicon Valley is limited by geographic factors such as cost of living and immigration barriers. Access to the stream of innovation that is Github is not. On a busy main street, you can only run into friends who also happen to be out that evening, but with Augmented Reality glasses on, you might also “run into” friends from around the world and share your physical experiences with them.

    What makes streams ideal contexts for open-ended innovation through tinkering is that they constantly present unrelated people, ideas and resources in unexpected juxtapositions. This happens because streams emerge as the intersection of multiple networks. On Facebook, or even your personal email, you might be receiving updates from both family and coworkers. You might also be receiving imported updates from structurally distinct networks, such as Twitter or the distribution network of a news source. This means each new piece of information in a stream is viewed against a backdrop of overlapping, non-exclusive contexts, and a plurality of unrelated goals. At the same time, your own actions are being viewed by others in multiple unrelated ways.

    As a result of such unexpected juxtapositions, you might “solve” problems you didn’t realize existed and do things that nobody realized were worth doing. For example, seeing a particular college friend and a particular coworker in the same stream might suggest a possibility for a high-value introduction: a small act of social bricolage. Because you are seen by many others from different perspectives, you might find people solving problems for you without any effort on your part. A common experience on Twitter, for example, is a Twitter-only friend tweeting an obscure but important news item, which you might otherwise have missed, just for your benefit.

    When a stream is strengthened through such behaviors, every participating network is strengthened.

    While Twitter and Facebook are the largest global digital streams today, there are thousands more across the Internet. Specialized ones such as Github and Stack Overflow cater to specific populations, but are open to anyone willing to learn. Newer ones such as Instagram and Whatsapp tap into the culture of younger populations. Reddit has emerged as an unusual venue for keeping up with science by interacting with actual working scientists. The developers of every agile software product in perpetual beta inhabit a stream of unexpected uses discovered by tinkering users. Slack turns the internal life of a corporation into a stream.

    Streams are not restricted to humans. Twitter already has a vast population of interesting bots, ranging from House of Coates (an account that is updated by a smart house) to space probes and even sharks tagged with transmitters by researchers.2 Facebook offers pages that allow you to ‘like’ and follow movies and books.

    By contrast, when you are sitting in a traditional office, working with a laptop configured exclusively for work use by an IT department, you receive updates only from one context, and can only view them against the backdrop of a single, exclusive and totalizing context. Despite the modernity of the tools deployed, the architecture of information is not very different from the paperware world. If information from other contexts leaks in, it is generally treated as a containment breach: a cause for disciplinary action in the most old-fashioned businesses. People you meet have pre-determined relationships with you, as defined by the organization chart. If you relate to a coworker in more than one way (as both a team member and a tennis buddy), that weakens the authority of the organization. The same is true of resources and ideas. Every resource is committed to a specific “official” function, and every idea is viewed from a fixed default perspective and has a fixed “official” interpretation: the organization’s “party line” or “policy.”

    This has a radical consequence. When organizations work well and there are no streams, we view reality in what behavioral psychologists call functionally fixed 3 ways: people, ideas and things have fixed, single meanings. This makes them less capable of solving new problems in creative ways. In a dystopian stream-free world, the most valuable places are the innermost sanctums: these are typically the oldest organizations, most insulated from new information. But they are also the locus of the most wealth, and offer the most freedom for occupants. In China, for instance, the innermost recesses of the Communist Party are still the best place to be. In a Fortune 500 company, the best place to be is still the senior executive floor.

    When streams work well on the other hand, reality becomes increasingly intertwingled (a portmanteau of intertwined and tangled), as Ted Nelson evocatively labeled the phenomenon. People, ideas and things can have multiple, fluid meanings depending on what else appears in juxtaposition with them. Creative possibilities rapidly multiply, with every new network feeding into the stream. The most interesting place to be is usually the very edge, rather than the innermost sanctums. In the United States, being a young and talented person in Silicon Valley can be more valuable and interesting than being a senior staffer in the White House. Being the founder of the fastest growing startup may offer more actual leverage than being President of the United States.

    We instinctively understand the difference between the two kinds of context. In an organization, if conflicting realities leak in, we view them as distractions or interruptions, and react by trying to seal them out better. In a stream, if things get too homogeneous and non-pluralistic, we complain that things are getting boring, predictable, and turning into an echo chamber. We react by trying to open things up, so that more unexpected things can happen.

    What we do not understand as instinctively is that streams are problem-solving and wealth-creation engines. We view streams as zones of play and entertainment, through the lens of the geographic-dualist assumption that play cannot also be work.

    In our Tale of Two Computers, the networked world will become firmly established as the dominant planetary computer when this idea becomes instinctive, and work and play become impossible to tell apart.

    Breaking Smart

    The first sustainable socioeconomic order of the networked world is just beginning to emerge, and the experience of being part of a system that is growing smarter at an exponential rate is deeply unsettling to pastoralists and immensely exciting to Prometheans.

    [illustration: trashed planet]

    Our geographic-world intuitions and our experience of the authoritarian institutions of the twentieth century lead us to expect that any larger system we are part of will either plateau into some sort of impersonal, bureaucratic stupidity, or turn “evil” somehow and oppress us.

    The first kind of apocalyptic expectation is at the heart of movies like Idiocracy and Wall-E, set in trashed futures inhabited by a degenerate humanity that has irreversibly destroyed nature.

    The second kind is the fear behind the idea of the Singularity: the rise of a self-improving systemic intelligence that might oppress us. Popular literal-minded misunderstandings of the concept, rooted in digital dualism, result in movies such as Terminator. These replace the fundamental humans-against-nature conflict of the geographic world with an imagined humans-against-machines conflict of the future. As a result, believers in such dualist singularities, rather ironically for extreme technologists, are reduced to fearfully awaiting the arrival of a God-like intelligence with fingers crossed, hoping it will be benevolent.

    Both fears are little more than technological obscurantism. They are motivated by a yearning for the comforting certainties of the geographic world, with its clear boundaries, cohesive identities, and idealized heavens and hells.

    Neither is a meaningful fear. The networked world blurs the distinction between wealth and waste. This undermines the first fear. The serendipity of the networked world depends on free people, ideas and capabilities combining in unexpected ways: “Skynet” cannot be smarter than humans unless the humans within it are free. This undermines the second fear.

    To the extent that these fears are justified at all, they reflect the terminal trajectory of the geographic world, not the early trajectory of the networked world.

    An observation due to Arthur C. Clarke offers a way to understand this second trajectory: any sufficiently advanced technology is indistinguishable from magic. The networked world evolves so rapidly through innovation, it seems like a frontier of endless magic.

    Clarke’s observation has inspired a number of snowclones that shed further light on where we might be headed. The first, due to Bruce Sterling, is that any sufficiently advanced civilization is indistinguishable from its own garbage. The second, due to futurist Karl Schroeder,1 is that any sufficiently advanced civilization is indistinguishable from nature.

    To these we can add one from social media theorist Seb Paquet, which captures the moral we drew from our Tale of Two Computers: any sufficiently advanced kind of work is indistinguishable from play.

    Putting these ideas together, we are messily slouching towards a non-pastoral utopia on an asymptotic trajectory where reality gradually blurs into magic, waste into wealth, technology into nature and work into play.

    This is a world that is breaking smart, with Promethean vigor, from its own past, like the precocious teenagers who are leading the charge. In broad strokes, this is what we mean by software eating the world.

    For Prometheans, the challenge is to explore how to navigate and live in this world. A growing non-geographic-dualist understanding of it is leading to a network culture view of the human condition. If the networked world is a planet-sized distributed computer, network culture is its operating system.

    Our task is like Deep Thought’s task when it began constructing its own successor: to develop an appreciation for the “merest operational parameters” of the new planet-sized computer to which we are migrating all our civilizational software and data.

    December 19, 2015 at 2:14:02 AM GMT+1 - permalink - http://breakingsmart.com/season-1/
    société ai
  • thumbnail
    Censys

    A Shodan-style scanner, but non-commercial (?)

    December 16, 2015 at 9:36:46 PM GMT+1 - permalink - https://censys.io/
    shodan sécurité scanner
  • Cliffski's Blog | Hi, I’m from the games industry. Governments, please stop us.

    This may not be popular, but it's how I feel. First, some background and disclaimers. I run a small games company making games for the PC, strategy games with an up-front payment. We don't make ‘free to play’ games or have microtransactions. Also, I'm pretty much a capitalist. I am not a big fan of government regulation in general. I am a ‘get rid of red tape’ kind of guy. I actually oppose tax breaks for game development. I am not a friend of regulation. But nevertheless.

    I awake this morning to read about this:

    [image]

    Some background: Star Citizen is a space game. It's being made by someone who made space games years ago, and they ‘crowd-funded’ the money to make this one. The game is way behind schedule and is, of course, not finished yet. They just passed $100,000,000 in money raised. They can do this because individual ships in the game are for sale, even after you have bought the game. I guess at this point we could just say ‘a fool and his money are soon parted’, and yet we do not say this about gambling addiction. In fact, some countries have extremely strict laws on gambling, precisely because they know addiction is a thing, and that people need to be saved from themselves.

    Can spending money on games be a problem? Frankly yes, and it's because games marketing and the science of advertising have changed beyond recognition since games first appeared. Game ads have often been dubious and tacky, but the problem is that now they are such a huge business, the stakes are higher and people are prepared to go further. On the fringes we have this crap:

    [image]

    But in the mainstream, even advertised in prime-time TV spots, we have this crap:

    [image]

    And this stuff works. ‘Game of War’ makes a lot of money. That ad campaign cost them $40,000,000. (Source). Expensive? Not when you earn a million dollars A DAY. (Source).

    Now if you don't play games, you might be thinking ‘so what? they must be good games, you are jealous!’ But no! In fact, all the coverage of games like Evony and Game Of War illustrates just how bad they are. They earn so much because the makers of those types of games have an incredibly fine-tuned and skillful marketing department bent on psychological manipulation. You think I'm exaggerating? Read this. Some choice quotes:

    “We take Facebook stalking to a whole new level. You spend enough money, we will friend you. Not officially, but with a fake account. Maybe it’s a hot girl who shows too much cleavage? That’s us. We learned as much before friending you, but once you let us in, we have the keys to the kingdom.”

    Let's think about this for a minute. A company hires people to stalk its customers and befriend them so they can build up a psychological profile of each customer, to allow them to extract more money. This is not market research, this is not game design. This is psychological warfare. Lines have been crossed so far that we cannot even see them behind us with binoculars. We need to rein this stuff in. It's not just psychological warfare, but warfare where you, the customer, are woefully outgunned, and losing. Some people are losing catastrophically.

    You know how much you hate those ads that track you around the internet, reminding you of stuff you looked at but didn't buy? That is amateur hour compared to the crap that some games companies are pulling these days. The problem is, we have NO regulation. AFAIK no law prevents a company stalking its customers on Facebook. We live in an age where marketers have already tried using MRI scans on live subjects to test advertising responsiveness. You think you are not manipulated by ads? Get real; read some of the latest books on the topic. We are only a short step away from convincing AI bots that pretend to be our new flirty in-game friends and urge us to keep playing, keep upgrading, keep spending.

    Modern advertising is so powerful we should be legislating the crap out of this sort of thing. How bad do we let it get before we get some government-imposed rules? We are in the early days of mass-population study and manipulation, the days where we, the gamers, describe a game as ‘addicting’ as a positive. Maybe it isn't such a positive after all. Maybe we need to start worrying about whether a game is actually good, rather than just ‘addicting’. Maybe we need people to step in and save us from ourselves. We are basically still just hairless apes. We do not possess anything like the self-control or free will that we think we do.

    As with alcohol, gambling, smoking or eating, most of us do not find gaming addictive, so we fail to see the problem. It depends how you are wired. See this ‘awards screen’ in Company of Heroes 2:

    [image]

    To most of us, that's just silly, and too big, and OTT. But if you suffer from OCD, that can be a BIG BIG problem for you. They KNOW this. It's why it is done. It works. Keep playing, kid. Keep playing. KEEP PLAYING. This sort of thing doesn't need to work on everyone. If it works on just 1% and we can get them to spend $1,000 a month on our game (who cares if they can afford it?), then it's worth doing.

    I hate regulation, but sometimes you need it. Stopping a business dumping waste in a river is a good idea. Stopping companies treating their customers like animals that can be psychologically trapped and exploited is a good idea too. This stuff is too easy. Save us from ourselves.

    December 16, 2015 at 9:01:25 PM GMT+1 - permalink - http://positech.co.uk/cliffsblog/2015/12/13/hi-im-from-the-games-industry-governments-please-stop-us/
    publicité société
  • This is for friends. Because without them I probably wouldn't be here. But also because there are weird things going on.

    Hey Friend, been a long time. Usually this would be a conversation I'd have with you over some instant messaging medium. We would argue, because I need to confront my views, and you would help me step back a little and try to force me to take care of myself.

    This conversation would probably be split across several media and several people, because this is how I function: in weird ways and without focus.

    On the 13th of November, coming back from the Louvre to Saint-Denis - where I live - you sent me an SMS asking me if I was safe. I did hear a loud noise from the Stade de France as I was heading out of the subway towards home, but since there was a match I just flagged it as "weird noise made by sports fans". I didn't understand why I had received this text.

    Then, once home, I started a web browser. After receiving half a dozen tweets from various instances of you, I reassured you by posting on Twitter that I was home and safe. And then, with my room-mate and coworker, we just thought about the huge amount of work we would have to do on Monday - and even before that.

    I told you, I work in strange ways. I wasn't emotionally affected by the death of 300 people. It's random and I knew no one there. The shootings happened in places I might happen to go, but it's as random as a plane crash (and in fact there's a higher probability of being killed in a plane crash than of being hit in a terrorist attack).

    I checked up on friends (or waited for news) (yeah, I suck at maintaining friendships, I think you're kind of aware of that by now) to be sure everyone was mostly safe. And then I waited for the political disaster that would ensue. Until the next Monday I really hoped that our politicians would do something clever, like calling for respect and fraternity and unity.

    You called me naïve, but if I'm not that naïve, then I turn cynical. I tried very hard to shut down my inner voices warning me of what would come next. And since you told me that being cynical might hurt you, I try to avoid that. It's also better for my morale and my depression.

    And then our Beloved Socialist President of the Democratic Republic Palpatine ordered the Senate to vote for martial law… Mmm, no, I'm in the wrong movie here. It was Mr. Hollande's speech before the Congress - upper and lower chambers gathered at Versailles - in which he asserted that we were at war. And that we needed to form an alliance with Putin and Assad to fight ISIS. And that we needed to extend and modify the State of Emergency, and the Constitution.

    This is where I broke down. Syria is still a hard political subject for me. You know that, since I talk a lot about it. You even asked me to get diagnosed because I might have some sort of trauma. So, yes, this is where my emotions finally set me adrift.

    What people call an emotional wave or surge is - in my case - a chaotic tsunami destroying anything that might be related to reason. That's my poison. That's what will kill me in the end. You're important there, in that you help me resurface in those situations and kind of freeze the emotional disaster.

    We talked about it. I see no hope in our current situation. Warrantless searches and warrantless house arrests; a total halt of support of any kind towards the refugees - who already had a hard time; suspension of the right to protest and, more generally, confiscation of the political debate by the politicians - Mr. Valls said that he won't accept any discussion about the influence of social or economic factors on terrorism; that is what we live with now.

    I mean, I'm used to seeing the army in the streets of Paris. In fact, I have never known the city without troops - the bombing attacks of 1995 happened at a time when I wasn't in Paris much, and since then troops have always been in the streets. But now they're in battle dress, helmets and bulletproof vests, with way too many weapons for my sanity, etc.

    The cops have changed too. They weren't on a short leash before, but now they're out for blood and revenge. Usually, even at the few banned protests I attended, there was always a way to get out if you asked nicely; they would let you go without hassle - they're basically filtering you to be sure you won't sucker-punch them, but in the end you can escape before they arrest everyone. But on the 28th of November, there was no such thing as a possible escape. They wanted to fight.

    There was a public announcement that unemployment was on the rise just before COP21. And nobody in the government deemed it important to say anything about it. I mean, they're supposed to be socialists, for fuck's sake. They should at least say that they will work on a new way to count unemployed people, or that they will do something about it. But they only speak about security, Mr. Valls even stating that "security is the first of liberties", which, ironically, is a quote from J.-M. Le Pen, used as a slogan for his presidential campaign back in the eighties.

    We have a socialist prime minister defending a security-only program, based on principles established by the far-right movement.

    That's about the state of our politics in France. But don't get me wrong: the FN is a bit worse than the PS, in that they will actually do what they said they're going to do, and they plan to cut funding for planned parenthood (which depends largely on regional funding), and other nice stuff.

    Politicians want me to vote to block the National Front, in a national movement against fascism. But I won't. I do not see the point of voting for a lack of response to social issues, just for the sake of protecting us against fascism. Politicians who enabled the police state, who are calling for a republican merger, who say that young people in the suburbs should cultivate themselves, who plan to bomb people in collaboration with the Turkish, Russian and Syrian - all extremely democratic - governments, who reduce democratic life to voting, who won't do a thing about unemployment - they want my vote to oppose fascism?

    You see, my dearest friend, you asked me to look on the bright side. But it's more than hard to do that. You told me that bitterness is like Beaujolais Nouveau. You can drink a bit of it, it can even be good - and I disagree that Beaujolais Nouveau is ever a good wine - but too much of it will kill you. Or hurt you.

    I don't know.

    I work at La Quadrature du Net now. And I really try to avoid the repetitive self-destructive pattern that leads me to chain burnouts. Me, or other staffers. Or you.

    During the attacks on the 13th of November, I focused on the solidarity side of it. That's what I'm trying to do. That's why I keep informed about the Syrian situation by following the White Helmets.

    But there's something that is absent from our political life in France. We have traditional organisations that cover for themselves without caring about anything other than their path to power: unions, political parties. We do have old-style NGOs, advocating and lobbying behind the scenes. We have radical groups who are busy fighting cops. But we do not have orgs that treat the work as a party. Militantism in France is a serious business, and if you're not working yourself to death you're doing it wrong. And you end up without anyone willing to take up the fight, to think about long-term strategies, to federate smaller groups that exhaust themselves beyond repair.

    And I hear you. I need to focus on the positive side. So that's what I'm trying to do. There's some good stuff happening. LQDN is finally getting a nice and more inclusive community - there's a lot of work left to do, but it's in progress. I'm working there to build tools to bother our deputies - the piphone and similar stuff - and to provide tools to flatten the democratic process. Or at least to help information circulate.

    And that's my target. You told me that we're in for a long fight. I'm not even sure we can win this fight, and the nihilistic part of me keeps thinking that it's useless. But since I'm trying not to kill myself, I need something. If I can bother an intelligence officer, a head of office somewhere, deputies or senators, ministers or a head of state, that's a win.

    If, when they see us in the press or elsewhere, or when they hear about us, those people think "Oh no… not them again… my day is now ruined", then it's a win. It won't make them stop doing shit, but at least I'll smile when thinking about all the pain they'll get.

    And in the meantime, we should try harder to work with other small organisations specialised in other aspects of the fight. There's a lot to do with queer, feminist and anti-racist groups. And I really think that's where I can help - beyond the purely technical side.

    So, you see, I'm trying to stop sipping the bitter side of things. It's hard 'cause I've turned cynical/realist. And because I love the bitterness. But you're right. I should stop drinking it.

    I'm happy you're here. Because at least I can talk to you. And there's this place, too. This post is fucked up and makes no sense. But I think it's a bit like what our political life looks like. Socialists calling on voters to vote for traditionalists.

    It's fucked up. But I'm gonna ignore that, because it's useless and I can't spend any more energy on that. I'll focus on building things.

    Thanks for still being here.

    December 8, 2015 at 10:14:49 PM GMT+1 - permalink - https://about.okhin.fr/posts/Friend/
    société philo
  • thumbnail
    Mourning Representative Democracy (Le deuil de la démocratie représentative) | Grise Bouille

    First, a figure to set the record straight: 91%. That is the percentage of French people who did not vote for the FN. Fewer than one French person in ten gave this party their vote. So the fact that the FN is now "the leading party of France" is not in itself a symbol of a rightward drift or of a creeping radicalisation of French society. It is the symbol of the death of representative democracy, the ultimate sign that it no longer represents anything or anyone.

    Yesterday, I did not vote. Nor will I go next Sunday. Voting friend, I know that you probably despise me, that you want to scream at me, to tell me it is shameful, that people died so that I could vote, that because of me fascism could take hold. I don't hold it against you; I was the same barely four years ago.

    The stages of grief

    You may know Elisabeth Kübler-Ross's five stages of grief. They don't necessarily have much scientific value, but they help sketch out certain emotional mechanisms. Let me list them:

    Denial
    Anger
    Bargaining
    Depression
    Acceptance

    Voting friend, I already know you have moved past the denial stage: you know perfectly well that representative democracy is dead. Otherwise, you would vote for ideas that match your own, you would vote to move society forward, to give your opinion on the direction to take. But you do not do that: on the contrary, you vote "tactically", you vote to block a party, you vote for "the least bad". That is already an admission that the system is dead.

    In fact, you oscillate between stages 2 and 3. Between anger at a system that doesn't give a damn about you, anger at the abstainers who won't play the game… and bargaining. "Come on, if I vote for the least bad, system, will you keep limping along? Come on, maybe if we vote PS this time, they'll actually govern from the left? Come on, system, won't you keep pretending to work a little if I make concessions on my side? If I set my convictions aside, will you agree not to be totally pathetic?"

    Once again, I understand the principle; I was at the same point during the last presidential election. I called on people to vote, I criticised the abstainers who allowed themselves to complain when, damn it, they hadn't bothered to do their civic duty. I knew perfectly well that the PS in power would work no miracles, that fundamentally nothing would change compared to the UMP, except at the margins. But we had to choose the least bad. Representative democracy was already dead, and I knew it. Tactical voting had been drummed into us since before I even had the right to vote. Not to mention the 2005 referendum, which already reeked of the coffin. But I had not finished my grieving. And then Hollande got in.

    The last shovelfuls of earth

    I could never thank François Hollande enough. He helped me finish my grieving. By throwing my vote back in my face, by pushing my head deep into the stinking, decomposing remains of our political system. François Hollande's five-year term will have been the most perfect, most magnificent demonstration that voting is a scam and that the power of the people is an immense illusion. "Change is now!" Remember, the PS held every lever of power in 2012: the presidency, the Assembly, the cities, the regions… hell, even the Senate had swung to the left! A first! These guys had free rein and carte blanche for everything. You should have heard Copé, the "deeply shocked" crybaby, explaining to us the enormous danger those full powers represented. Fight finance? Tax capital income like labour income? Ban the holding of multiple offices?

    LOL NOPE.

    Instead, we got the same crap as before. Sometimes worse. A race for growth even though we already produce too much for the planet. A race for full employment even though work is doomed to disappear (which, I remind you, should be good news). A race for productivity while burnout syndromes multiply and workers' malaise becomes widespread. Cuts to what they hammer into us as "the cost of labour" but which any sensible employee should read as "my standard of living". Methodical unravelling of public services which, on the contrary, ought to be strengthened.

    We expected nothing from Hollande; he managed to do worse. Liberty-killing laws in the name of a security they won't even guarantee. A State of Emergency of indefinite duration. Activists placed under house arrest for their convictions. Political demonstrations banned. Kids taken into police custody for not observing a minute of silence. Thankfully all this is happening under a party that calls itself "republican", otherwise we might slowly start to worry.

    You call me irresponsible because I didn't go and vote on Sunday? I consider myself irresponsible for having legitimised our current government by voting in 2012. Since 2012, I have done what many people did: I went through stage 4, depression. Telling myself that we were definitively screwed, that if even a party claiming to be in total opposition to the previous one could wallow to this extent in the same unbearable politics, there was no solution left. That democracy was dead, and that we were going to die with it. Voting friend, admit it, you had the same reaction. But as always, with each vote, you regress, you go back to stage 3, to bargaining, to telling yourself that maybe we can tilt the system a little by sitting on our convictions.

    Me, I have turned the corner. I am at stage 5, acceptance. Representative democracy is dead, full stop. Whether that is a good thing or not, the future will tell, but the fact remains: this system is dead. You think that going back to the bargaining stage means keeping hope, and that accepting the death of our system means despair. I disagree. Grieving is healthy. It is even necessary in order to move on to something else and, at last, move forward.

    Democracy is dead, long live democracy!

    You will notice that I keep adding "representative" when I talk about the death of democracy. Because I do not believe that democracy itself is dead: I think real democracy has never lived in France. The system we live in is closer to an "elective aristocracy": we select our leaders from a panel of self-proclaimed elites that never changes, whereas democracy would have citizens take turns governing and being governed. The mere fact that we speak of a "political class" is itself a denial of the notion of representation that is supposed to make our representative democracy work: logic would demand that these politicians come from the same classes they govern. Careful, let's not spit in the soup: our system is far better than a dictatorship, no doubt about it. But it is not a democracy. On this subject I refer you to the documentary J'ai pas voté, which everyone should see before going for the throats of abstainers.

    People died so that we could vote? No, they died because they wanted to give the people the right to self-determination, because they wanted democracy. Do we seriously think, watching the great circus of idiocy that electoral campaigns have become, that this is what people died for? So that clowns in ties can parade around for weeks, and we all go, with heavy hearts, to designate the one we hope will screw us over the least? I find this system far more insulting to the memory of those who fought for democracy than abstention is.

    So yes, I have done my grieving, and that lets me have hope for what comes next. Because while the great political sham continues on the TV sets, we, citizens of all stripes, are trying to find solutions. The more time passes, the more people finish their grieving, and the more those people take a real interest in politics and discover new political and societal ideas: sortition, single non-renewable terms, basic income, and so on. Workable solutions, fragments of knowledge, of political culture… popular education, in short. Nothing says these solutions will work, but everything tells us that the current system does not. And when this system collapses, it is these little fragments of knowledge scattered throughout the population that we will have to hold on to. The urgent task today is to spread these ideas to prepare for what comes next. Voting friend, you have everything to gain by joining us, because you obviously have a political conscience, and it is being wasted, used to fight windmills.

    Our system is an old, half-broken computer. You can keep imagining that by reinstalling the same software (PS or LR, pick your side, comrade) it will eventually work. Others use the good old method of slapping the machine (the FN vote): we all know it's useless and certainly won't improve the state of the computer, but it feels good. Some imagine that by taking the tower apart and hacking the system bit by bit, things will eventually move (MP Isabelle Attard is a good example; personally I nickname her the outlier, the data point that doesn't fit the statistical model of the politician). That is not the worst of ideas. There has even been talk of rebooting France. Who knows, if we manage to put such a strategy together for 2017, I might dig my voter card out of the closet. But the most numerous, the abstainers, have given up on the old computer and are simply looking for a new one that works.

    So what do we do? Let's be clear: I'm like everyone else, I have no idea how we can move on to something else and establish a real democracy. A democratic transition could happen smoothly, by modifying the institutions little by little: everyone would stand to gain. Politicians included, because the alternative may well be an explosion, and that is an alternative with a very uncertain outcome. But clearly, we are not heading towards a non-violent transition.

    For my part, I continue to think that, as Asimov said, "violence is the last refuge of the incompetent". But every day we see our powerlessness within this system a little more clearly, and today's politicians would be well advised to correct course before it is too late. Before the citizens rush into that last refuge.

    December 8, 2015 at 8:29:20 PM GMT+1 - permalink - http://grisebouille.net/le-deuil-de-la-democratie-representative/
    société
  • Bernard Stiegler: "People who lose the feeling of existing vote Front National" - Rue89 - L'Obs

    En lisant le livre de Bernard Stiegler, « Aimer, s’aimer, nous aimer » (Galilée, 2003), on peut ressentir un sentiment de découragement.

    Le philosophe explique dans son livre que les électeurs FN sont, comme beaucoup d’entre nous dans cette société malade, victimes de troubles narcissiques. Pour s’en sortir, ils ont la particularité de désigner des boucs émissaires. C’est un symptôme, une façon d’évacuer le mal-être.

    Il est impossible de discuter avec des troubles et des symptômes (seuls les psys savent faire). Les journalistes peuvent donc continuer à s’agiter, à « fact-checker », à enquêter, à essayer de comprendre à coups de portraits, ils n’ont aucune prise sur rien, me suis-je dit.

    Je suis allée demander à Bernard Stiegler ce que la presse peut et doit faire au lendemain des élections européennes, qui ont vu le FN atteindre le score de 25% des votants.

    Une conférence, ce samedi
    Ars Industrialis, le groupe de travail de Bernard Stiegler, organise une conférence ce samedi baptisée « Extrême nouveauté, extrême désenchantement, extrême droite ». La réunion aura lieu au Théâtre Gérard Philipe à Saint-Denis (Seine-Saint-Denis), de 14 heures à 18 heures. L'entrée est gratuite.

    Rue89: In "Aimer, s'aimer, nous aimer", you say that FN voters suffer from a lack of "primordial narcissism". Given that, can they change their minds like a rational person?

    Bernard Stiegler: I talk with people from the Front National; there are even some I quite like. I'll tell you very frankly: some are rather likeable. Most are not racists or antisemites, but deeply unhappy people. As for your question, though, the answer is no. I never try to dissuade them from voting for the Front National. The more I tried, the more they would vote for it. It is completely pointless.

    It is all the more ineffective because, in part, they are not wrong to express a suffering. The paranoiac, the psychotic, the neurotic never speak pure nonsense. There is always a kernel of truth. The problem is that this kernel of truth, turned pathological, expresses an illness that is not only these voters': it is our society's.

    What is specific to the pathology of Front National voters is that through their vote, whether they want to or not, they go after scapegoats.

    How did you come to understand what was making them suffer?

    In the book you mention, I talk about Richard Durn [perpetrator of the Nanterre city council shooting in 2002, editor's note]. I became interested in the subject after reading an excerpt from his diary quoted in Le Monde, in which Durn said he had "lost the feeling of existing". Those words struck me enormously. I too sometimes have the feeling of not existing. And I too acted out: I robbed banks...

    Reading the article, I told myself that this guy was extremely dangerous, but that there were millions of us like him. And I told myself that one day the people who lose the feeling of existing, more and more numerous, would vote for the Front National instead of killing people or robbing banks.

    I was scandalized by the attitude of Eva Joly who, the day after the first round of the presidential election, called the Front National's score "an indelible stain on the values of democracy". Those are shameful words.

    How do you explain this liquidation of primordial narcissism?

    It comes from the unlimited organization of consumption through marketing and television. Have you seen the film "Le Festin de Babette" (Babette's Feast)? It is a magnificent story: a Frenchwoman living in Denmark decides to prepare an immense, sumptuous meal. The film recounts the preparation of this costly meal by a person of modest means, and it is extraordinary.

    When I was a child, the Sunday meal mattered a great deal. It was common in the working classes to hold feasts, like Gervaise and her goose in "L'Assommoir". Hosting people, gathering together, is very important.

    That is what consumerism has concretely destroyed: there is nothing left but ready-to-wear and ready-to-eat, junk food and no more celebration.

    How can FN voters be made to realize that this suffering is disconnected from immigration figures?

    Telling them is useless: they will never hear it. Or rather, they hear something else when you tell them that. They hear that you have not listened to their problem. And they are right. Going after a scapegoat [what Bernard Stiegler calls a "pharmakos" in "Pharmacologie du Front national", Flammarion, 2013, editor's note] is a symptom. It is a horrific, extremely dangerous symptom, and Nazism is the exploitation of this symptom on the nightmarish scale of the twentieth century. Such horror can absolutely return; it is even more than likely: if nothing decisive happens, that is what will end up happening. It is up to us to make sure it does not, but insulting FN voters will not fix anything.

    It is utterly futile to tell people to stop symptomatizing: they must be treated, by which I mean taken care of (in the sense I gave that word in "Prendre soin"), looked after, given prospects, offered a different discourse from that of François Hollande and Nicolas Sarkozy...

    How can journalists take part in this care?

    I think it is urgent for the press to reclaim its role, which is to defend ideas, to make them confront one another, and thereby to build opinions. That means making political, aesthetic, intellectual and social choices, and owning them. Le Monde diplomatique continues to do this work, which is why I never miss a chance to read it, even though it often irritates me.

    Today, hopelessness is the Front National's stock in trade. To restore hope, we must give a voice to those who have something to say and are ready for public debate, and thereby rebuild a body of thought, concepts and perspectives, and socialize them.

    The idea that people do not want to think is totally false: when the Collège de France put its lectures online, millions of hours of lectures were downloaded. Ars Industrialis, whose conferences are often difficult, has a very large audience. What people reject is not thought: it is hollow, canned rhetoric, wherever it comes from.

    If consumerist capitalism collapses and there is no work of inventing an alternative to what underpinned that consumerism, namely Fordist-Keynesianism ("growth"), which is now definitively exhausted, the far right will take hold everywhere, well beyond France and Europe.

    Do you think this is about to happen?

    In the 1980s, something very important happened. There was the "conservative revolution", founded on the idea that it was better to liquidate the State and financialize capitalism while letting production develop outside the West, and that was the beginning of mass unemployment.

    This liquidation created mass insolvency, concealed by the subprime and credit default swap schemes that were highly profitable for speculators but ruinous for the economy; an environmentally toxic hyperconsumerism; great symbolic misery on the mental level; and a generalized precarization producing a very real feeling of insecurity and social disintegration.

    This disintegration makes integration impossible, not for immigrants but for the population itself as a whole, with immigrants obviously more exposed to it than anyone.

    The 2008 crisis laid bare this insolvency and this extreme, structural fragility. And it durably ruined trust, to which Snowden, but also Fukushima and many other catastrophes, added their effects.

    An economic system cannot work without trust, and there is no trust left. How could there be, when 55% of young Spaniards are unemployed and nobody cares, while automation is cutting employment in every sector and every country? Who talked about any of this during the European election campaign?

    Cashiers are disappearing...

    Yes, cashiers are no longer needed, and soon truck drivers will not be needed either, nor will many technicians, engineers and so on. What is coming is the disappearance of employment. Not a word about this question in the very recent Pisani-Ferry report, if the press is to be believed, any more than in the Gallois report of almost two years ago already... So much time wasted! And so much fury accumulated!

    Automation is now going to develop massively, in particular because digital technology makes it possible to integrate all sorts of previously isolated automatisms, and because the cost of robots is falling rapidly as a result.

    Jeff Bezos, the head of Amazon, is installing them throughout his warehouses. Arnaud Montebourg announced a year ago that he was going to launch a French robotics plan.

    The cost of automation will keep falling, and French SMEs will increasingly be able to adopt it, even if they do not want to, because of competition, and unemployment will skyrocket. There is only one solution to counter the proportional rise of the FN: to create an alternative to the Keynesian model, a contributive model.

    Can you give a concrete example of a contributive model?

    In the contributive economy, there is no longer wage labor or industrial property in the classical sense. To give you an example, a few years ago I worked with fashion-design students on a model for a contributive fashion company. The company became a club of fashion amateurs, some of whom contributed ideas, others purchases, others garment-making, others all of these at once or in turn.

    In its now-mythical and long-gone early days, Fnac was a kind of cooperative where the salespeople were first and foremost music or photography enthusiasts, and where Fnac members were not consumers but amateurs.

    There are people who express themselves extremely well in the way they dress. They have taste; they know how to put outfits together. I think their knowledge can be shared and valued.

    And how would they be paid?

    This problem should not be posed and solved at the microeconomic scale of the firm: it is a macroeconomic question, one that must go beyond the use-value/exchange-value pair and promote what we call practical value (that is, knowledge) and societal value (that is, whatever functionally reinforces solidarity).

    It is the mutual valuation, by each other and by a reinvented public authority, of what Amartya Sen calls "capabilities" (know-how, social skills and formal knowledge) that constitutes the basis of a contributive economy. It is in fact the generalization of the model of the intermittents du spectacle, France's freelance performing-arts workers, who cultivate their knowledge with the help of their intermittent income and put it to use when they go into production, a model some would like to destroy at the very moment when its remarkably intelligent spirit ought to be generalized.

    To come back to it: what role can the press play in this reflection on the current economic model?

    First, the press itself should invent such contributive arrangements for itself. The press subsidy fund should be used for that, and journalists should fight for it. Second, the press must talk about automation, and digital technology more generally, in depth rather than as a "trend" piece or in the "geek" column, and it must not be in denial. Automation is coming; we have to face it and stop saying we are going to reverse the unemployment curve. Unemployment is going to rise considerably.

    All sorts of people are thinking about scenarios for entering a new world, in Latin America for example, but also in North America. They must be given a voice. And the intelligence of readers must be called upon, rather than presupposing that all they want is scoops or sensational, vulgar news.

    The FN now also presents itself as one of these alternative scenarios to ultraliberalism...

    Yes, it is very clever. This morning, I was quite surprised to read a statement by Florian Philippot [vice-president of the Front National, editor's note] defending the SNCF strike in Libération, in the name of public service. Imagine the dismay of the CGT and SUD unionists.

    The Front National is an ultraliberal ideology disguised as anti-ultraliberalism. Jean-Marie Le Pen is an ultraliberal. He has always said so, and he is more so than ever. He is absolutely against the State, against civil servants.

    As for Marine Le Pen, whatever she says, she needs ultraliberalism to grow: it is her breeding ground, because what draws her voters to her is the designation of scapegoats, and what provokes this search for scapegoats is ultraliberalism in the service of a financialized capitalism that is impulse-driven and speculative. What is the FN? The great specialist in inverted causalities.

    The FN lives off the idea that the suffering is attributable to immigrants, because nobody has the courage to lay out the real, new causal frameworks that the situation demands.

    The FN spreads fear by talking of thousands of latent Mohamed Merahs. But don't these young people who leave for Syria suffer from the same narcissistic disorder as FN voters?

    Of course. I have called this the Antigone complex. "Antigone" is an absolutely fundamental text.

    I maintain that the fundamentalist terrorists, of North African descent or white, born and raised in France, who all of a sudden turn to Islam, are little Antigones. I am not trying to defend them by saying this. What I mean is that an adolescent needs to sublimate, and to do so, as always, "in the name of the law". Antigone is an adolescent who defends "divine law". Merah was also an adolescent.

    At some point, these kids need to identify with their father, then with a figure of rupture with the father, whom they accuse of failing to embody the law correctly and sincerely. So they look for other figures to identify with. But if they can no longer find any possibility of identification in society, and if they live in a society that is collapsing, they are ready to engage in what I have called negative sublimation, which can lead to the worst. These, again, are symptoms.

    You can do whatever you like: this will keep developing for a long time, and inevitably, if society does not quickly produce new capacities for positive identification with republican, constructive ideas that genuinely carry a future.

    December 7, 2015 at 9:23:11 AM GMT+1 - permalink - http://rue89.nouvelobs.com/2014/06/27/bernard-stiegler-les-gens-perdent-sentiment-dexister-votent-front-national-253270
    société philo
  • Everyone can be a target

    My flight from Athens, where I had the pleasure of attending the first “reproducible world summit”, landed at the scheduled time in Paris (CDG). While people were getting their bags and putting their coats on, a member of the cabin crew announced that a police check would take place outside the plane and that we should have our identity documents ready. My seat was in the back of the plane, so I had time to wait in the cold of the jetway while all the passengers were checked one after the other.

    Three cops were set up right at the entrance of the terminal. One was taking ID cards or passports and looking at people's faces. He then passed them to another cop standing in front of a suitcase that seemed to contain a scanner and a computer. The third one was just standing against the wall, watching. When it was my turn, after scanning my passport, the cop made this nice gesture where she started to move the hand holding the passport toward me, just as she had done a hundred times before, then pulled it back when the result appeared on the screen.

    I got confirmation in 2012, when David Dufresne published Magasin Général, that I was registered as a dangerous political activist. A report from the interior intelligence service, dating from 2008, had been leaked to promote the book. In the very first version that went online, my name had not been properly redacted; it was associated with a self-organized political space in Dijon. Some Debian Developers had the pleasure of visiting that space in 2005, 2006, or 2007. The report was full of mistakes, like almost all police files, so I don't want to comment on it.

    The good news is that since then, I have stopped being paranoid. I knew, and thus could take appropriate precautions. Just like every time I have to approach an airport, all my (encrypted) electronic devices were turned off. I had shaved a couple of hours before. I know a lawyer ready to represent me. I am fully aware that it's best to say as little as possible.

    Still, it had been a while since I had such a blatant confirmation that I was still a registered anarchist. It should not be a surprise, though. Once you are in, there's no way out.

    I was then asked to step aside while they proceeded with the rest of the queue. I put my backpack down and leaned against the wall. Once they were done, one of the cops asked me to follow him. We walked through the corridors to the border police office. While we were walking, they asked me a series of questions. I'm leaving out the pauses in between, but here is what I can remember:

    — Do you have a connection?
    — No.
    — Are you going to Paris?
    — To my parents' in the suburbs.
    — How long have you been staying in Greece?
    — 5 days.
    — (flipping through the pages of my passport) And you came back from the U.S. in February 2015?
    — No, that was the maximum stay. I was there in August 2014.
    — Why were you in Greece? Vacations?
    — Work. I was at a meeting.
    — What do you do?
    — Free software.
    — What is that?
    — I am a developer.
    — Oh, computers.
    — Yes.
    — Is that why you also were in the US?
    — Yes, it was another conference.
    — And so you travel because of that. That's nice.
    — …
    — Are you a freelancer?
    — I work with a coop, but yes.

    The cop also said that they had to run some simple checks and that they would then let me go, since I was on my way back in. I did not trust this, but said nothing.

    We then passed through a door that the cop had to unlock with their badge. I was asked to sit on a chair in the corridor between two offices (as far as I could tell). I could hear one cop explaining the situation to the next: “— Il a une fiche. — Ah, une fiche.” (“— He has a file. — Ah, a file.”) They seemed quite puzzled that I had not been checked when I flew out on Monday.

    After a few minutes, another cop came back asking for my boarding pass. A few minutes later, he came back asking whether the address on my passport was still valid. I replied “no”. They gave me a piece of paper asking for my current address, a phone number and an email address. As this information is all easy to find, I thought it was easier to comply. I gave the @irq7.fr address that I use for all public administrations. When the cop saw it, he asked:

    — What is it?
    — I don't understand.
    — Is it your company?
    — It is a non-profit.

    He gave me my passport back and showed me the way out.

    (I will spare you the details of the conversation between two cops that I had to listen to while waiting, about how one of them loved to build models of military weapons used in wars against communism because of his origins, and how he was pissed off because fucking Europe had banned some (toxic) paints he was used to.)

    To the best of my understanding, what happened is that they made a phone call and the intelligence service simply asked them to update my personal details.

    I don't know, but I'm left wondering whether all those people were checked just because I was on the flight.

    All in all, this didn't take too long: an hour after leaving the plane I was on the platform for the regional trains. The cops stayed polite the whole time. I am privileged: a French citizen, white, able to speak French with a schoolteacher's accent. I am pretty sure it would not have gone so well if I had been sporting a long beard or a djellaba.

    I took the time to document this because I know too many people who think that what the French government is doing doesn't concern them. It does. For a couple of years now, antiterrorism has been how governments keep people in check. But we are reaching a whole new level. We are talking about cops keeping their guns while off duty, house searches at any hour without judicial oversight, and a government that wants to change the constitution to make the “state of emergency” permanent. We have seen so many abuses in just two weeks. It will not go well. Meanwhile, instead of asking themselves why young people are killing others and themselves, state officials prefer dropping bombs. Which will surely deter people who are ready to die from using suicidal tactics, right?

    We are at the dawn of an environmental crisis that will end humanity. Every human on this planet is concerned. People get beaten up when they march to pressure governments to do something about it. We need to unite and resist. And yes, we are going to get hurt, but freedom is not free.

    December 6, 2015 at 1:48:07 PM GMT+1 - permalink - https://people.debian.org/~lunar/blog/posts/everyone_can_be_a_target/
    société philo
  • totalism.org

    Hackerspace and living space in Lanzarote.

    December 2, 2015 at 9:24:36 PM GMT+1 - permalink - http://totalism.org/
    travail job vie
  • HackerCouch

    Couchsurfing for Hackers, by Hackers

    December 2, 2015 at 9:16:17 PM GMT+1 - permalink - https://hackercouch.com/
    travail job voyage
  • thumbnail
    Les Inrocks - How "the mediocre have taken power"

    In La Médiocratie (Lux), the philosopher Alain Deneault criticizes the mediocrity of a world in which everything is now done only to satisfy the market. Interview.
    "The mediocre are back in the fertile valley," the journalist Daniel Mermet told Les Inrocks upon his ouster from France Inter in June 2014. The philosopher Alain Deneault, looking at the overall situation, goes further: "There was no storming of the Bastille, nothing comparable to the burning of the Reichstag, and the Aurora has not yet fired a single shot. Yet the assault has well and truly been launched, and it has succeeded: the mediocre have taken power." It is this silent revolution that he analyzes at length in La Médiocratie (Lux), a hard-hitting book. Passing through Paris, this political science lecturer at the Université de Montréal explained his thinking to us. Interview.

    How, in your view, did the mediocre take power? Since when has being average been valued?

    Alain Deneault – The genealogy of this takeover has two branches. One goes back to the nineteenth century, when "crafts" were progressively turned into "jobs". That presupposed a standardization of work, that is, turning it into something average. A kind of standardized average was generated, required to organize work on a large scale in the alienating mode we know, the one described by Marx. This average work was made into something disembodied, drained of meaning, nothing more than a "means" for capital to grow and for workers to subsist.

    The other side of this takeover lies in the transformation of politics into a culture of management. The progressive abandonment of broad principles, orientations and coherence in favor of a case-by-case approach, in which only "partners" intervene on very specific projects without any notion of the common good, has turned us into citizens who "play the game", who submit to all sorts of practices foreign to the realms of conviction, competence and initiative. This art of management is called "governance".

    These two phenomena led twentieth-century thinkers to observe that mediocrity was no longer a marginal matter, concerning not-so-bright people who managed to make themselves useful, but that it had become a system. As a professor, an administrator or an artist, you are forced to submit to hegemonic procedures in order to get by.

    At the political level, the consequence is that every subject is analyzed through the lens of problem solving. What is happening in France right now is emblematic: in response to the terrorist attacks, we bomb; we respond with a strategy of the "solution" in the surgical sense of the term, when we should be stepping back and being more subtle.

    Is the advent of mediocracy to be linked to the liberal revolution of the 1980s, to conformism in companies and to the bringing to heel of the world of work that followed?

    Yes, all the more so because the governance put in place by Margaret Thatcher's technocrats turned ultraliberalism into a "realistic" approach. The neoliberal option is no longer an option but something as normal as breathing. Governance has managed to disguise ultraliberal ideology as knowledge, as a way of living in society, as if it were the foundation from which we should deliberate, whereas it should be the object of deliberation.

    We no longer talk about the common good; we act as if the general interest were nothing more than the sum of particular interests that everyone is invited, now and then, to defend. We are reduced to being little lobbyists for our private or clan interests. That is where the culture of scheming and shady arrangements takes root.

    According to you, "the expert is the central figure of mediocracy". How do you explain this paradox?

    The expert does not merely make knowledge available to people who deliberate. He is an ideologue who disguises a discourse of interests as knowledge. At university, a student now has to ask, when choosing a path, whether they want to become an expert or an intellectual, knowing that expertise will mostly consist of selling their brain to actors who have an interest in calibrating the output of intellectual work in a particular direction, so as to satisfy particular interests.

    On that point, you quote the rector of the Université de Montréal, who said in 2011: "Brains must match the needs of businesses."

    Exactly. It is like Patrick Le Lay [former CEO of TF1, editor's note], who declared in 2004: "What we sell to Coca-Cola is available human brain time." That rector sees his institution, one of the most important universities in the French-speaking world, as a company that sells brains to industry. Industry, moreover, holds several seats on the university's board of directors, and so partly decides its direction.

    We live in a world where knowledge is generated to satisfy business, whereas the role of intellectuals is to make business an object of thought. Edward Said puts it very well: the expert does not worry about what his knowledge produces. One can perfectly well be a geologist, go prospecting for zinc or copper in Katanga, and be totally incompetent when it comes to thinking through the consequences of that practice at the scale of the Congo. Industry does not want them to be competent, because that is not in its interest.

    The intellectual, by contrast, acts as an "amateur", that is, loving their subject and feeling concerned by all its dimensions, which necessarily calls for interdisciplinarity.

    You explain that political discourse has been colonized by a centrist vocabulary, that of "governance". Isn't what you deplore under the term "mediocracy" ultimately the end of utopias?

    I would not go that far. What has developed is not a centrist terminology but an extreme-centre one, which is almost the opposite. A centrist discourse knowingly situates itself on a left/right axis, whereas the discourse of the extreme centre tolerates nothing but itself. It does not place itself on a spectrum; rather, it denies the spectrum's reality and legitimacy.

    The proponents of governance are far from moderate, contrary to what their vocabulary might suggest. They are modern-day sophists, skilled at coaxing unions by making them believe they want to take their aspirations into account at "social conferences". In reality, they work to win the unions over to their positions in advance. Their supposed synthesis is in fact a radical discourse, often in step with inegalitarian and antidemocratic practices. An order that endangers 80% of ecosystems and allows the richest 1% to hold 50% of the world's assets is anything but moderate.

    Mediocracy does indeed seem to have a formidable capacity to depoliticize everything, even though what it proposes is radical: you cite, for example, Bill 78, strictly restricting the right to demonstrate, which was passed in Quebec in 2012. How can society be repoliticized?

    I campaign for a return to words invested with meaning, all those that governance has sought to abolish, caricature or co-opt: citizenship, the people, conflict, classes, debate, collective rights, public service, the common good... These notions have been turned into "partnership", "civil society", "corporate social responsibility", "social acceptability", "human security" and so on. So many catch-all terms that have expelled from the political field rational references that carried meaning. The word "democracy" itself is gradually being replaced by "governance". These words deserve to be rehabilitated, like "patient", "user", "subscriber", "viewer", all of which have been replaced by "customer". This reduction of everything to commercial logic abolishes politics and leads to the fading away of the references that allow people to act.

    We do not have a choice between acting and thinking: when we have acted, it is because we have thought, and to think we need the right terms. What is being destroyed are not utopias but traditions that have been mobilizing forces throughout history. Today, States are nothing more than the partners of companies that enjoy equivalent status. Small interests are grafted onto big ones, and meanwhile there is no shared notion of anything.

    Is COP21 a good example of this governance process, given that it is sponsored by companies and banks that engage in tax avoidance?

    What is emblematic of governance in COP21 is all the preparatory work, which consisted of accepting onto the agenda as many proposals from environmentalists as from Total. Since, from a climate point of view, gas is better than oil, Total proposes to convert, even at the cost of ruining the groundwater. That is governance: the organization of a power struggle is dressed up as a great societal debate in which, to erase opposition, the strongest try to get the weakest to sign on to their projects in a masquerade of consultation and deliberation.

    Rancière wrote that we are all equally endowed with what is required to govern. Is sortition a solution for reasserting the idea of the common good?

    Rancière was my thesis supervisor. Sortition should not be treated as a panacea. What is interesting is all the thinking underlying the proposal. In La Haine de la démocratie, Rancière develops this idea without militant preconceptions. If, for example, in France, Quebec or Canada the Senate were chosen by lot, it would considerably change how people position themselves. We would have a different relationship to our institutions. We would then rediscover that, when it comes to the general questions of public life, no one is more competent than anyone else.

    Rancière is right to say, in this regard, that very few people are democrats. Despite the abusive uses that have been made of it, the word has become so bothersome that it is being replaced by "governance", which suits those who want to use consultation and opinion for purposes of manipulation.

    Personally, I am not in favor of great utopian leaps. We are not going to select all our representatives by lot overnight. Starting with the Senate, an upper chamber that has only a blocking power and no power of proposal, would reassure people. It would be a way of giving citizens responsibility, provided mechanisms are invented to make sure there is no influence peddling.

    Since the November 13 attacks, the fight against terrorism has made critical discourse largely inaudible. It encourages the population to place the common good in the hands of a government, or even of a providential man, rather than to take hold of it themselves...

    That is most certainly what the government aspires to. The fight against terrorism is a conceptual absurdity, equivalent to saying that we are going to fight grenades. To say we are waging war on terrorism is to erect a martial discourse against an adversary that, once again, "has no face", which is a godsend for those in power. And from a tactical standpoint it is madness, because under current historical conditions it will generate even more tensions, which risk exposing the French even more to the barbarity that unfolded on November 13. Domestically, it will lead the country to adopt even more drastic and freedom-destroying emergency measures against ever more ill-defined adversaries. Ultimately putting in parentheses the thing that people in power find so unbearable: democracy.

    December 2, 2015 at 9:44:15 AM GMT+1 - permalink - http://www.lesinrocks.com/2015/12/01/actualite/comment-les-m%C3%A9diocres-ont-pris-le-pouvoir-11791161/
    société philo
  • Mots de casse — David Larlet

    I sometimes tell myself that it would be so easy for a web service to occasionally refuse authentication to its users (and/or for a small percentage of them) in order to obtain an alternative password that would very probably work on another service. It is even worse than that: with those two passwords (or more…), I potentially cover 100% of their digital lives. These hypotheses would need to be verified.

    December 1, 2015 at 3:45:31 PM GMT+1 - permalink - https://larlet.fr/david/stream/2015/12/01/
    sécurité hacking
  • thumbnail
    The End of the Internet Dream? — Backchannel — Medium

    Twenty years ago I attended my first Def Con. I believed in a free, open, reliable, interoperable Internet: a place where anyone can say anything, and anyone who wants to hear it can listen and respond. I believed in the Hacker Ethic: that information should be freely accessible and that computer technology was going to make the world a better place. I wanted to be a part of making these dreams — the Dream of Internet Freedom — come true. As an attorney, I wanted to protect hackers and coders from the predations of law so that they could do this important work. Many of the people in this room have spent their lives doing that work.
    But today, that Dream of Internet Freedom is dying.
    For better or for worse, we’ve prioritized things like security, online civility, user interface, and intellectual property interests above freedom and openness. The Internet is less open and more centralized. It’s more regulated. And increasingly it’s less global, and more divided. These trends: centralization, regulation, and globalization are accelerating. And they will define the future of our communications network, unless something dramatic changes.
    Twenty years from now,
    • You won’t necessarily know anything about the decisions that affect your rights, like whether you get a loan, a job, or if a car runs over you. Things will get decided by data-crunching computer algorithms and no human will really be able to understand why.
    • The Internet will become a lot more like TV and a lot less like the global conversation we envisioned 20 years ago.
    • Rather than being overturned, existing power structures will be reinforced and replicated, and this will be particularly true for security.
    • Internet technology design increasingly facilitates rather than defeats censorship and control.
    It doesn’t have to be this way. But to change course, we need to ask some hard questions and make some difficult decisions.
    What does it mean for companies to know everything about us, and for computer algorithms to make life and death decisions? Should we worry more about another terrorist attack in New York, or the ability of journalists and human rights workers around the world to keep working? How much free speech does a free society really need?
    How can we stop being afraid and start being sensible about risk? Technology has evolved into a Golden Age for Surveillance. Can technology now establish a balance of power between governments and the governed that would guard against social and political oppression? Given that decisions by private companies define individual rights and security, how can we act on that understanding in a way that protects the public interest and doesn’t squelch innovation? Whose responsibility is digital security? What is the future of the Dream of Internet Freedom?
    The Dream of Internet Freedom
    For me, the Dream of Internet Freedom started in 1984 with Steven Levy’s book “Hackers, Heroes of the Computer Revolution.” Levy told the story of old school coders and engineers who believed that all information should be freely accessible. They imagined that computers would empower people to make our own decisions about what was right and wrong. Empowering people depended on the design principle of decentralization. Decentralization was built into the very DNA of the early Internet, smart endpoints, but dumb pipes, that would carry whatever brilliant glories the human mind and heart could create to whomever wanted to listen.
    This idea, that we could be in charge of our own intellectual destinies, appealed to me immensely. In 1986, I entered New College, a liberal arts school in Sarasota, Florida. Its motto is “Each student is responsible in the last analysis for his or her education.” That same year, I read the Hacker Manifesto, written by The Mentor and published in Phrack magazine. I learned that hackers, like my fellow academic nerds at New College, were also people that didn’t want to be spoon-fed intellectual baby food. Hackers wanted free access to information, they mistrusted authority, they wanted to change the world — to a place where people could explore and curiosity was its own reward.
    In 1991 I started using the public Internet. I remember sending a chat request to a sysop, asking for help. And then I could see the letters that he was typing appearing in real time on my screen, viscerally knowing for the first time that this technology allowed talking to someone, anyone, everyone, in real time, anywhere. That’s when I really began to believe that the Dream of Internet Freedom could one day become a reality.
    Twenty years ago, I was a criminal defense attorney, and I learned that hackers were getting in trouble for some tricks that I thought were actually pretty cool. As a prison advocate in the San Francisco Sheriff’s Department, I represented a guy who was looking at six more months in jail for hooting into the pay phone and getting free calls home. My research on that case made me realize there were a lot of laws that could impact hackers, and that I could help.
    That was also the year that a guy by the name of Marty Rimm wrote a “study” saying that pornography was running rampant on the Internet. A law review published the paper, and Time Magazine touted it, and that’s all it took for Congress to be off to the races. The cyberporn hysteria resulted in Congress passing the Communications Decency Act of 1996 (CDA), an attempt to regulate online pornography.
    For all you porn lovers out there, that would be a big disappointment. But there was something worse about the CDA. To stop porn, the government had to take the position that the Internet wasn’t fully protected by the First Amendment. And that would mean the government could block all kinds of things. The Internet wouldn’t be like a library. The Internet would be like TV. And TV in 1985 was actually really bad.
    But this was even worse because we had higher hopes for the Internet. The Internet was a place where everyone could be a publisher and a creator. The Internet was global. And the Internet had everything on the shelves. Congress was squandering that promise.
    At that time, John Perry Barlow, lyricist for the Grateful Dead, a rancher, a founder of the Electronic Frontier Foundation, wrote what is essentially a poem about love for the Internet. Barlow wrote:
    Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

    Barlow was reacting to the CDA and the assertion that the Internet should be less — not more — free than books and magazines. But he was also expressing weariness with business as usual, and our shared hope that the Internet would place our reading, our associations and our thoughts outside of government control.
    It turns out that Marty Rimm and the Communications Decency Act didn’t kill Internet freedom. Instead, there was a strange twist of fate that we legal scholars like to call “irony”. In 1997 in a case called ACLU v. Reno, the U.S. Supreme Court struck down the CDA. It said that the First Amendment’s freedom of expression fully applies to the Internet.
    The only part that remains of the CDA is a part that might seem like it achieves the opposite of Congress’s goal to get rid of online porn. It says that Internet providers don’t have to police their networks for pornography or most other unwanted content, and can’t get in trouble for failing to do so. This provision of the CDA is why the Internet is a platform for so much “user generated content,” whether videos, comments, social network posts, whatever.
    Together, the Hacker Ethic, the Hacker Manifesto, and the Declaration of Independence of Cyberspace, ACLU v. Reno, and even the remaining piece of the CDA, describe a more or less radical dream, but one that many, if not most, of the people in this room have believed in and worked for. But today I’m standing here before you to tell you that these dreams aren’t coming true. Instead, twenty years on, the future not only looks a lot less dreamy than it once did, it looks like it might be worse.
    Racism and sexism have proven resilient enough to thrive in the digital world. There are many, many examples of this, but let me use statistics, and anecdotes to make the point.
    Statistically: At Google, women make up 30 percent of the company’s overall workforce, but hold only 17 percent of the company’s tech jobs. At Facebook, 15 percent of tech roles are staffed by women. At Twitter, 10 percent.
    Anecdotally: Look around you at your fellow audience members. How very male and white this field is.
    I find this so strange. The security community has historically been very good at finding, cultivating, and rewarding talent from unconventional candidates. Many of the most successful security experts never went to college, or even finished high school. A statistically disproportionate number of you are on the autism spectrum. Being gay or transgender is not a big deal and hasn’t been for years. A 15-year-old Aaron Swartz hung out with Doug Engelbart, creator of the computer mouse. Inclusion is at the very heart of the Hacker ethic.
    And people of color and women are naturally inclined to be hackers. We learn early on that the given rules don’t work for us, and that we have to manipulate them to succeed, even where others might wish us to fail.
    This field should be in the lead in evolving a race, class, age, and religiously open society, but it hasn’t been. We could conscientiously try to do this better. We could, and in my opinion should, commit to cultivating talent in unconventional places.
    Today, our ability to know, modify and trust the technology we use is limited by both the law and our capacity for understanding complex systems. The Hands On Imperative is on life support. “The Freedom to Tinker” might sound like a hobby, but it’s quite important. It means our ability to study, modify and ultimately understand the technology we use — and that structures and defines our lives.
    The Hands On Imperative is dying for two reasons. We are limited by both the law and our capacity for understanding complex systems.
    The law: Two examples. It was exactly ten years ago that Black Hat staff spent all night cutting pages out of attendee books and re-stuffing conference sacks with new CDs. Security researcher Mike Lynn was scheduled to give a talk about a previously unknown category of vulnerability, specifically flaws in Internet routers. Cisco, and Mike Lynn’s employer ISS, decided at the last minute to try to keep the vulnerability a secret, ordering Mike to give a different talk and leveraging copyright law to force Black Hat to destroy all copies of Mike’s slides. There’s nothing that cries out censorship like cutting pages out of books.
    On stage the next morning, Mike quit his job, donned a white baseball cap — literally a white hat — and presented his original research anyway. Cisco and ISS retaliated by suing him.
    I was Mike’s lawyer. We managed to fight back that case, and the criminal investigation that the companies also instigated against him. But the message from the lawsuit was loud and clear — and not just to Mike. This is our software, not yours. This is our router, not yours. You’re just a licensee and we’ll tell you what you are allowed to do in the EULA. You can’t decompile this, you can’t study it, you can’t tell anyone what you find.
    Aaron Swartz was another sacrificial lamb on the altar of network control. Aaron was charged with violating the Computer Fraud and Abuse Act (CFAA) because he wrote a script to automate the downloading of academic journal articles. Much of this information wasn’t even copyrighted. But Aaron was a hacker, and he challenged the system. They went after him with a vengeance. The case was based on the assertion that Aaron’s access to the journal articles was “unauthorized” even though he was authorized as a Harvard student to download the same articles.
    Aaron killed himself, under immense stress from prosecutors twisting his arm to plead guilty to a political-career-ending felony, or face years in prison.
    Here, too, the message was clear. You need our permission to operate in this world. If you step over the line we draw, if you automate, if you download too fast, if you type something weird in the URL bar on your browser, and we don’t like it, or we don’t like you, then we will get you.
    In the future will we re-secure the Freedom to Tinker? That means Congress forgoing the tough-on-cybercrime hand waving it engages in every year: annual proposals to make prison sentences more severe under the CFAA, as if any of the suspected perpetrators of the scores of major breaches of the past two or three years (China, North Korea, who knows who else) would be deterred by such a thing. These proposals just scare the good guys; they don’t stop the attackers.
    We’d have to declare that users own and can modify the software we buy and download — despite software licenses and the Digital Millennium Copyright Act (DMCA).
    This is going to be increasingly important. Over the next 20 years software will be embedded in everything, from refrigerators to cars to medical devices.
    Without the Freedom to Tinker, the right to reverse engineer these products, we will be living in a world of opaque black boxes. We don’t know what they do, and you’ll be punished for peeking inside.
    Using licenses and law to control and keep secrets about your products is just one reason why in the future we may know far less about the world around us and how it works than we currently do.
    Today, technology is generating more information about us than ever before, and will increasingly do so, making a map of everything we do, changing the balance of power between us, businesses and governments. In the next 20 years, we will see amazing advances in artificial intelligence and machine learning. Software programs are going to be deciding whether a car runs people over, or drives off a bridge. Software programs are going to decide who gets a loan, and who gets a job. If intellectual property law will protect these programs from serious study, then the public will have no idea how these decisions are being made. Professor Frank Pasquale has called this the Black Box Society. Take secrecy and the profit motive, add a billion pieces of data, and shake.
    In a Black Box Society, how can we ensure that the outcome is in the public interest? The first step is obviously transparency, but our ability to understand is limited by current law and also by the limits of our human intelligence. The companies that make these products might not necessarily know how their product works either. Without adequate information, how can we democratically influence or oversee these decisions? We are going to have to learn how, or live in a society that is less fair and less free.
    We are also going to have to figure out who should be responsible when software fails.

    So far, there’s been very little regulation of software security. Yes, the Federal Trade Commission has jumped in where vendors misrepresented what the software would do. But that is going to change. People are sick and tired of crappy software. And they aren’t going to take it any more. The proliferation of networked devices — the Internet of Things — is going to mean all kinds of manufacturers traditionally subject to products liability are also software purveyors. If an autonomous car crashes, or a networked toaster catches on fire, you can bet there is going to be product liability. Chrysler just recalled 1.4 million cars because of the vulnerabilities that Charlie Miller and Chris Valasek are going to be talking about later today. It’s a short step from suing Tesla to suing Oracle for insecure software… with all the good and the bad that will come of that.
    I think software liability is inevitable. I think it’s necessary. I think it will make coding more expensive, and more conservative. I think we’ll do a crappy job of it for a really long time. I don’t know what we’re going to end up with. But I know that it’s going to be a lot harder on the innovators than on the incumbents.
    Today, the physical design and the business models that fund the communications networks we use have changed in ways that facilitate rather than defeat censorship and control. But before I delve into issues of privacy, security and free expression, let’s take a few steps back and ask how we got to where we are today.
    The design of the early public Internet was end-to-end. That meant dumb pipes that would carry anything, and smart edges, where application and content innovation would occur. This design principle was intentional. The Internet would not just enable communication, but would do so in a decentralized, radically democratic way. Power to the people, not to the governments or companies that run the pipes.
    The Internet has evolved, as technologies do. Today, broadband Internet providers want to build smart pipes that discriminate for quality of service, differential pricing, and other new business models. Hundreds of millions of people conduct their social interactions over just a few platforms like Tencent and Facebook.
    What does this evolution mean for the public? In his book The Master Switch, Professor Tim Wu looks at phones, radio, television, movies. He sees what he calls “the cycle.”
    History shows a typical progression of information technologies, from somebody’s hobby to somebody’s industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel — from open to closed system.
    Eventually, innovators or regulators smash apart the closed system, and the cycle begins afresh. In the book, Tim asks the question I’m asking you. Is the Internet subject to this cycle? Will it be centralized and corporately controlled? Will it be freely accessible, a closed system or something in between?
    If we don’t do things differently, the Internet is going to end up being TV. First, I said we’ve neglected openness and freedom in favor of other interests like intellectual property, and that’s true.
    But it’s also true that a lot of people affirmatively no longer share the Dream of Internet Freedom, if they ever did. They don’t think it’s the utopia that I’ve made it out to be. Rather, the Dream of Internet Freedom collided head on with the ugly awfulness called Other People. Nasty comments, 4chan, /b/tards, revenge porn, jihadists, Nazis. Increasingly I hear law professors, experts in the First Amendment, the doctrine of overbreadth and the chilling effect, talk about how to legislate this stuff they don’t like out of existence.
    Second, there are the three trends I told you about: centralization, regulation and globalization.
    · Centralization means a cheap and easy point for control and surveillance.
    · Regulation means exercise of government power in favor of domestic, national interests and private entities with economic influence over lawmakers.
    · Globalization means more governments are getting into the Internet regulation mix. They want to both protect and to regulate their citizens. And remember, the next billion Internet users are going to come from countries without a First Amendment, without a Bill of Rights, maybe even without due process or the rule of law. So these limitations won’t necessarily be informed by what we in the U.S. consider basic civil liberties.
    Now when I say that the Internet is headed for corporate control, it may sound like I’m blaming corporations. When I say that the Internet is becoming more closed because governments are policing the network, it may sound like I’m blaming the police. I am. But I’m also blaming you. And me. Because the things that people want are helping drive increased centralization, regulation and globalization.
    Remember blogs? Who here still keeps a blog regularly? I had a blog, but now I post updates on Facebook. A lot of people here at Black Hat host their own email servers, but almost everyone else I know uses gmail. We like the spam filtering and the malware detection. When I had an iPhone, I didn’t jailbreak it. I trusted the security of the vetted apps in the Apple store. When I download apps, I click yes on the permissions. I love it when my phone knows I’m at the store and reminds me to buy milk.
    This is happening in no small part because we want lots of cool products “in the cloud.” But the cloud isn’t an amorphous collection of billions of water droplets. The cloud is actually a finite and knowable number of large companies with access to or control over large pieces of the Internet. It’s Level 3 for fiber optic cables, Amazon for servers, Akamai for CDN, Facebook for their ad network, Google for Android and the search engine. It’s more of an oligopoly than a cloud. And, intentionally or otherwise, these products are now choke points for control, surveillance and regulation.
    So as things keep going in this direction, what does it mean for privacy, security and freedom of expression? What will be left of the Dream of Internet Freedom?
    Privacy
    The first casualty of centralization has been privacy. And since privacy is essential to liberty, the future will be less free.
    This is the Golden Age of Surveillance. Today, technology is generating more information about us than ever before, and will increasingly do so, making a map of everything we do, changing the balance of power between us, businesses, and governments. The government has built the technological infrastructure and the legal support for mass surveillance, almost entirely in secret.
    Here’s a quiz. What do emails, buddy lists, drive backups, social networking posts, web browsing history, your medical data, your bank records, your face print, your voice print, your driving patterns and your DNA have in common?
    Answer: The U.S. Department of Justice (DOJ) doesn’t think any of these things are private. Because the data is technically accessible to service providers or visible in public, it should be freely accessible to investigators and spies.
    And yet, to paraphrase Justice Sonia Sotomayor, this data can reveal your contacts with “the psychiatrist, the plastic surgeon, the abortion clinic, the AIDS treatment center, the strip club, the criminal defense attorney, the by-the-hour motel, the union meeting, the mosque, synagogue or church, or the gay bar.”
    So technology is increasingly proliferating data…and the law is utterly failing to protect it. Believe it or not, considering how long we’ve had commercial email, there’s only one civilian appellate court that’s decided the question of email privacy. It’s the Sixth Circuit Court of Appeals in 2006, in U.S. v. Warshak. Now that court said that people do have a reasonable expectation of privacy in their emails. Therefore, emails are protected by the Fourth Amendment and the government needs a warrant to get them. This ruling only answers part of the question for part of this country: Kentucky, Tennessee, Michigan and Ohio. Because of it, almost all service providers require some kind of warrant before turning over your emails to criminal investigators. But the DOJ continues to push back against Warshak, both publicly and in secret.
    But I want to emphasize how important the ruling is, because I think many people might not fully understand what the reasonable expectation of privacy and a warrant requirement mean. It means that a judge polices access, so that there has to be a good reason for the search or seizure; it can't be arbitrary. It also means that the search has to be targeted, because a warrant has to specifically describe what is going to be searched. The warrant requirement is not only a limitation on arbitrary police action; it should also limit mass surveillance.
    But in the absence of privacy protection — pushed by our own government — the law isn’t going to protect our information from arbitrary, suspicion-less massive surveillance, even as that data generation proliferates out of control.
    Centralization means that your information is increasingly available from “the cloud,” an easy one-stop shopping point to get data not just about you, but about everyone. And it gives the government a legal argument to get around the Fourth Amendment warrant requirement.
    Regulation is not protecting your data and at worst is actually ensuring that governments can get easy access to this data. The DOJ pushes:
    · Provider assistance provisions to require providers to assist with spying;
    · Corporate immunity for sharing data with the government, for example the immunity given to AT&T for its complicity in the NSA's illegal domestic spying, and the immunity in CISPA, CISA and other surveillance proposals masquerading as security-information-sharing bills;
    · And, not so much yet in the U.S. but in other countries, data retention obligations that essentially deputize companies to spy on their users for the government.
    Globalization gives the U.S. a way to spy on Americans…by spying on the foreigners we talk to. Our government uses the fact that the network is global against us. The NSA conducts massive spying overseas, and Americans' data gets caught in the net. And, by insisting that foreigners have no Fourth Amendment privacy rights, it's easy to reach the conclusion that you don't have such rights either, at least when you're talking to or even about foreigners.
    Surveillance couldn't get much worse, but in the next 20 years, it actually will. Now we have networked devices, the so-called Internet of Things, that will keep track of our home heating, how much food we take out of our refrigerator, our exercise, sleep, heartbeat, and more. These things are taking our offline physical lives and making them digital and networked; in other words, surveillable.
    To have any hope of attaining the Dream of Internet Freedom, we have to implement legal reforms to stop suspicion-less spying. We have to protect email and our physical location from warrantless searches. We have to stop overriding the few privacy laws we have in exchange for a false sense of online security. We have to utterly reject secret surveillance laws, if only because secret law is an abomination in a democracy.
    Are we going to do any of these things?
    Security
    Despite the way many people talk about it, security isn't the opposite of privacy. You can improve security without infringing privacy — for example by locking cockpit doors. And not all invasions of privacy help security. In fact, privacy protects security: a human rights worker in Syria or a homosexual in India needs privacy, or they may be killed.
    Instead, we should think about security with more nuance. Online threats mean different things depending on whose interests you have at stake — governments, corporations, political associations, individuals. Whether something is “secure” is a function of whose security you are concerned with. In other words, security is in the eye of the beholder. Further, security need not be zero sum: Because we are talking about global information networks, security improvements can benefit all, just as security vulnerabilities can hurt all.

    The battleground of the future is that people in power want more security for themselves at the expense of others. The U.S. Government talks about security as “cyber”. When I hear “cyber” I hear shorthand for military domination of the Internet, as General Michael Hayden, former NSA and CIA head, has said — ensuring U.S. access and denying access to our enemies. Security for me, but not for thee. Does that sound like an open, free, robust, global Internet to you?
    Here’s just one public example: our government wants weakened cryptography, back doors in popular services and devices so that it can surveil us (remember, without a warrant). It is unmoved by the knowledge that these back doors will be used by criminals and oppressive governments alike. Meanwhile, it overclassifies, maintains secret law, withholds documents from open government requests, goes after whistleblowers and spies on journalists.
    Here’s another. The White House is pushing for the Department of Homeland Security to be the hub for security threat information sharing. That means DHS will decide who gets vulnerability information… and who doesn’t.
    I see governments and elites picking and choosing security haves and security have nots. In other words, security will be about those in power trying to get more power.
    This isn’t building security for a global network. What’s at stake is the well-being of vulnerable communities and minorities that need security most. What’s at stake is the very ability of citizens to petition the government. Of religious minorities to practice their faith without fear of reprisals. Of gay people to find someone to love. This state of affairs should worry anyone who is outside the mainstream, whether an individual, a political or religious group or a start up without market power.
    Freedom of Expression
    Today, the physical architecture and the corporate ownership of the communications networks we use have changed in ways that facilitate rather than defeat censorship and control. In the U.S., copyright was the first cause for censorship, but now we are branching out to political speech.
    Governments see the power of platforms and have proposed that social media companies alert federal authorities when they become aware of terrorist-related content on their sites. A U.N. panel last month called on the firms to respond to accusations that their sites are being exploited by the Islamic State and other groups. At least at this point, there’s no affirmative obligation to police in the U.S.
    But you don't have to have censorship laws if you can bring pressure to bear. People cheer when Google voluntarily delists so-called revenge porn, when YouTube deletes ISIS propaganda videos, when Twitter adopts tougher policies on hate speech. The end result is collateral censorship: by putting pressure on platforms and intermediaries, governments can indirectly control what we say and what we experience.
    What that means is that governments, or corporations, or the two working together increasingly decide what we can see. It's not true that anyone can say anything and be heard anywhere. It's more true that your breastfeeding photos aren't welcome and, increasingly, that your unorthodox opinions about radicalism will get you placed on a list.
    Make no mistake, this censorship is inherently discriminatory. Muslim “extremist” speech is cause for alarm and deletion. But no one is talking about stopping Google from returning search results for the Confederate flag.
    Globalization means other governments are in the censorship mix. I'm not just talking about governments like Russia and China. There's also the European Union, with its laws against hate speech and Holocaust denial, and its developing Right To Be Forgotten. Each country wants to enforce its own laws and protect and police its citizens as it sees fit, and that means a different internet experience for different countries or regions. In Europe, accurate information is being delisted from search engines, to make it harder or impossible to find. So much for talking to everyone everywhere in real time. So much for having everything on the Internet shelf.
    Worse, governments are starting to enforce their laws outside their borders through blocking orders to major players like Google and to ISPs. France is saying to Google, don’t return search results that violate our laws to anyone, even if it’s protected speech that we are entitled to in the U.S. If you follow this through to the obvious conclusion, every country will censor everywhere. It will be intellectual baby food.
    How much free speech does a free society really need? Alternatively, how much sovereignty should a nation give up to enable a truly global network to flourish?
    Right now, if we don’t change course and begin to really value having a place for even the edgy and disruptive speech, our choice is between network balkanization and a race to the bottom.
    Which will we pick?
    The Next 20 Years
    The future for freedom and openness appears to be far bleaker than we had hoped for 20 years ago. But it doesn’t have to be that way. Let me describe another future where the Internet Dream lives and thrives.
    We start to think globally. We need to deter another terrorist attack in New York, but we can't ignore the impact our decisions have on journalists and human rights workers around the world. We strongly value both.
    We build in decentralization where possible: Power to the People. And strong end-to-end encryption can start to right the imbalance between tech, law and human rights.
    We realize the government has no role in dictating communications technology design.
    We start being afraid of the right things and stop being driven by irrational fear. We reform the CFAA, the DMCA, the Patriot Act and foreign surveillance law. We stop being so sensitive about speech and we let noxious bullshit air out. If a thousand flowers bloom, the vast majority of them will be beautiful.
    Today we've reached an inflection point. If we change paths, it is still possible that the Dream of Internet Freedom can come true. But if we don't, it won't. The Internet will continue to evolve into a slick, stiff, controlled and closed thing. And that dream I have — that so many of you have — will be dead. If so, we need to think about creating the technology for the next lifecycle of the revolution. In the next 20 years we need to get ready to smash the Internet apart and build something new and better.

    November 23, 2015 at 11:16:13 PM GMT+1 * - permalink - https://www.wired.com/2015/08/the-end-of-the-internet-dream/
    internet censure philo
  • Don’t Worry, Everybody Else is Crazy Too

    Human beings make a big deal about being normal. We’re probably the only species for which it’s normal to think you’re not normal.

    Every society operates under thousands of unspoken rules, and when you break them people get nervous. There are acceptable and unacceptable ways to stand in line at the bank, order at restaurants, and answer the phone. There are appropriate and inappropriate birthday gifts, wedding toasts, and hugging styles.

    Every type of social situation has its own subsection of laws and procedures. You can make everyone around you instantly uncomfortable just by facing the back wall while riding an elevator, or asking a fellow bus passenger if they want to hear a story.

    Miraculously, most of us have learned most of these rules by the time we become adults, at least enough to fulfill our basic responsibilities without causing a scene. The moment kids are born, they begin to absorb clues about what’s okay and what’s not by continually watching and emulating.

    We learn some of these rules in explicit mini-lessons from our parents and teachers, and occasionally friends, when they pull us aside and tell us, “We don’t talk about pee at the dinner table,” or “We don’t bring up sports betting around Eddie.”

    We also learn the location of certain boundaries when we bump up against them, by remembering which acts triggered dirty looks, and which got laughs, or no reaction at all. Over time, we learn that we can avoid awkward and painful collisions with these boundaries by simply doing what other people are doing, and not doing what they’re not doing.

    Stand where the other people are standing. When other people are quiet, be quiet. When they’re eating, eat. When they’re being somber, be somber. When they laugh, laugh (even if you don’t get the joke).

    This survival tactic eventually becomes a part of our worldview. Humans are an easily frightened, highly social species, and we put together a sense of how things are supposed to be—of how we’re supposed to be—by what seems normal for the people around us. How do you know if you’re in good health for someone your age? For some places and times in history, failing health at age 48 is expected; in 21st-century USA, it means something’s gone wrong.

    Every life is mostly private

    Our reliance on using norms for guidance gets us through a lot of confusing social situations, but it creates a huge problem when it comes to evaluating ourselves.

    We can’t compare ourselves to what we can’t see, and most of a person’s life is invisible to everybody else. Our thoughts, feelings, moods, urges, impressions, expectations and other intangible qualities happen only on the inside, yet they constitute the largest part of our lives. They aren’t just important to us—essentially, they are us.

    Life is ultimately a solo trip, and most of the landscape is mental. Even when it comes to your closest loved ones, you never get access to another person’s internal experience. They can talk about it, or hint at it through their actions, but everything behind their eyes is fundamentally off limits to you, while to them it’s everything.

    Our public selves are that one-tenth of the iceberg that sees the Sun. The other 90% is who we are only to ourselves, and we have nothing to compare it to. You can’t tell, just by observing, whether other people have a similar inner world to yours, especially socially unacceptable feelings like intense guilt, or feelings of incompetence, or apathy, or uncontrollable sympathy.

    One of the behaviors we learn to emulate is to always present our “best face”, so we learn to keep our most insecure and ugly thoughts to ourselves. This leaves a lot of us wondering if we’re crazy, or especially messed up inside.

    Many of the emails I get from readers are private disclosures that they feel like impostors: they have successfully fooled their friends, family and co-workers into thinking they have things together, but they’re only pretending. Their stories are so similar it’s almost unbelievable. Usually they have a respectable-sounding career and home life, but they feel particularly fragile and troubled compared to how everyone around them appears to be.

    My answer is always that I feel that way too, or at least my own version of it, on a regular basis. Hearing these stories over and over has all but confirmed my suspicion that human beings live with a consistent discrepancy between what we’re each like in our private world, and what we think others are like.

    Somewhere along the line, human beings have convinced themselves that the normal way for a grown human being to feel is prepared, secure and competent. Serious feelings of anxiety, incompetence, guilt or insecurity must always mean something’s wrong with you—either there must be some past life event that justifies these feelings, or you’re just crazy.

    You might get comforting glimpses of the dark, bulbous root of someone else’s iceberg by reading Sylvia Plath poems or Cormac McCarthy books, but in social situations it is as hidden (and as officially non-existent) as the Pentagon’s security schedule.

    You’re on your own but you’re not alone

    The other day on Reddit, someone asked any therapists and psychologists in the audience to answer a question: What is something that most people think they are alone in feeling/experiencing?

    Dozens of therapists answered, and hundreds of people learned that their unique inner problems weren’t unique and might not even be problematic. They’re just hard to see in others, because most people never share them, except maybe with a therapist. (The thread is definitely worth a read.)

    The “impostor syndrome” I mentioned was a really common one. So if you're the one who thinks your entire career is a fluke and that it will all soon be exposed in a nightmarish intervention scenario at your office, you are not alone.

    A lot of perfectly sane people have deep insecurities, dark thoughts, and peculiar aversions to everyday things. Intrusive thoughts, about sex, violence, humiliation, suicide, the end of the world—not at all uncommon.

    We all have our own craziness going on, but we’re very good at hiding it from everyone else. While some of our neurotic patterns are serious enough to warrant treatment, a lot of it is quite normal.

    All of our personal dilemmas and life situations aside, simply being human is just plain hard. We want to make it look easy though, because almost everyone else does. But if you could look right down through everyone else’s iceberg—if you could see exactly how much insecurity, stress and craziness there is hidden in the average office floor or subway car—you might be glad for your own.

    November 23, 2015 at 9:24:14 PM GMT+1 - permalink - http://www.raptitude.com/2015/11/dont-worry-everybody-else-is-crazy-too/
    philo vie
  • How Should We Talk to AIs?—Stephen Wolfram Blog

    Communication via algorithms instead of natural language. To allow communication between machines, but also between humans.
    Develops ideas about contracts readable by both machines and humans ==> ethereum.


    Not many years ago, the idea of having a computer broadly answer questions asked in plain English seemed like science fiction. But when we released Wolfram|Alpha in 2009 one of the big surprises (not least to me!) was that we’d managed to make this actually work. And by now people routinely ask personal assistant systems—many powered by Wolfram|Alpha—zillions of questions in ordinary language every day.
    Ask questions in ordinary language, get answers from Wolfram|Alpha
    It all works fairly well for quick questions, or short commands (though we’re always trying to make it better!). But what about more sophisticated things? What’s the best way to communicate more seriously with AIs?
    I’ve been thinking about this for quite a while, trying to fit together clues from philosophy, linguistics, neuroscience, computer science and other areas. And somewhat to my surprise, what I’ve realized recently is that a big part of the answer may actually be sitting right in front of me, in the form of what I’ve been building towards for the past 30 years: the Wolfram Language.
    Maybe this is a case of having a hammer and then seeing everything as a nail. But I’m pretty sure there’s more to it. And at the very least, thinking through the issue is a way to understand more about AIs and their relation to humans.
    Computation Is Powerful
    The first key point—that I came to understand clearly only after a series of discoveries I made in basic science—is that computation is a very powerful thing, that lets even tiny programs (like cellular automata, or neural networks) behave in incredibly complicated ways. And it’s this kind of thing that an AI can harness.
    A cellular automaton with a very simple rule set (shown in the lower left corner) that produces highly complex behavior
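    For example, a minimal sketch along these lines (not shown in the post) in Wolfram Language; CellularAutomaton and ArrayPlot are the built-ins involved, and the step count is arbitrary:
        steps = CellularAutomaton[30, {{1}, 0}, 100];  (* rule 30, started from a single black cell, evolved 100 steps *)
        ArrayPlot[steps]                               (* plot the evolution as a black-and-white grid *)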
    Looking at pictures like this we might be pessimistic: how are we humans going to communicate usefully about all that complexity? Ultimately, what we have to hope is that we can build some kind of bridge between what our brains can handle and what computation can do. And although I didn’t look at it quite this way, this turns out to be essentially just what I’ve been trying to do all these years in designing the Wolfram Language.
    Language of Computational Thinking
    I have seen my role as being to identify lumps of computation that people will understand and want to use, like FindShortestTour, ImageIdentify or Predict. Traditional computer languages have concentrated on low-level constructs close to the actual hardware of computers. But in the Wolfram Language I’ve instead started from what we humans understand, and then tried to capture as much of it as possible in the language.
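    As a rough sketch (not code from the post), each of these lumps of computation is a single call in the Wolfram Language; the coordinates and training pairs below are invented for illustration:
        FindShortestTour[{{0, 0}, {3, 1}, {1, 4}, {5, 2}}]      (* -> {tour length, visiting order} *)
        ImageIdentify[ExampleData[{"TestImage", "Mandrill"}]]   (* -> an Entity naming what the image shows *)
        Predict[{1 -> 1.2, 2 -> 2.3, 3 -> 3.1, 4 -> 4.4}]       (* -> a PredictorFunction trained on these pairs *)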
    In the early years, we were mostly dealing with fairly abstract concepts, about, say, mathematics or logic or abstract networks. But one of the big achievements of recent years—closely related to Wolfram|Alpha—has been that we’ve been able to extend the structure we built to cover countless real kinds of things in the world—like cities or movies or animals.
    One might wonder: why invent a language for all this; why not just use, say, English? Well, for specific things, like “hot pink”, “new york city” or “moons of pluto”, English is good—and actually for such things the Wolfram Language lets people just use English. But when one’s trying to describe more complex things, plain English pretty quickly gets unwieldy.
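    A hedged sketch of what "just use English" looks like in practice (again, not code from the post); Interpreter and SemanticInterpretation are the relevant built-ins, and the exact output shapes may vary:
        Interpreter["Color"]["hot pink"]           (* -> an RGBColor value *)
        Interpreter["City"]["new york city"]       (* -> Entity["City", ...] *)
        SemanticInterpretation["moons of pluto"]   (* -> the symbolic interpretation of the phrase *)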
    Imagine for example trying to describe even a fairly simple algorithmic program. A back-and-forth dialog—“Turing-test style”—would rapidly get frustrating. And a straight piece of English would almost certainly end up with incredibly convoluted prose like one finds in complex legal documents.
    The Wolfram Language specifies clearly and succinctly how to create this image. The equivalent natural-language specification is complicated and subject to misinterpretation.
    But the Wolfram Language is built precisely to solve such problems. It’s set up to be readily understandable to humans, capturing the way humans describe and think about things. Yet it also has a structure that allows arbitrary complexity to be assembled and communicated. And, of course, it’s readily understandable not just by humans, but also by machines.
    I realize I’ve actually been thinking and communicating in a mixture of English and Wolfram Language for years. When I give talks, for example, I’ll say something in English, then I’ll just start typing to communicate my next thought with a piece of Wolfram Language code that executes right there.
    The Wolfram Language mixes well with English in documents and thought streams
    Understanding AIs
    But let’s get back to AI. For most of the history of computing, we’ve built programs by having human programmers explicitly write lines of code, understanding (apart from bugs!) what each line does. But achieving what can reasonably be called AI requires harnessing more of the power of computation. And to do this one has to go beyond programs that humans can directly write—and somehow automatically sample a broader swath of possible programs.
    We can do this through the kind of algorithm automation we’ve long used in Mathematica and the Wolfram Language, or we can do it through explicit machine learning, or through searching the computational universe of possible programs. But however we do it, one feature of the programs that come out is that they have no reason to be understandable by humans.
    Engineered programs are written to be human-readable. Automatically created or discovered programs are not necessarily human-readable.
    At some level it’s unsettling. We don’t know how the programs work inside, or what they might be capable of. But we know they’re doing elaborate computation that’s in a sense irreducibly complex to analyze.
    There’s another, very familiar place where the same kind of thing happens: the natural world. Whether we look at fluid dynamics, or biology, or whatever, we see all sorts of complexity. And in fact the Principle of Computational Equivalence that emerged from the basic science I did implies that this complexity is in a sense exactly the same as the complexity that can occur in computational systems.
    Over the centuries we’ve been able to identify aspects of the natural world that we can understand, and then harness them to create technology that’s useful to us. And our traditional engineering approach to programming works more or less the same way.
    But for AI, we have to venture out into the broader computational universe, where—as in the natural world—we’re inevitably dealing with things we cannot readily understand.
    What Will AIs Do?
    Let’s imagine we have a perfect, complete AI, that’s able to do anything we might reasonably associate with intelligence. Maybe it’ll get input from lots of IoT sensors. And it has all sorts of computation going on inside. But what is it ultimately going to try to do? What is its purpose going to be?
    This is about to dive into some fairly deep philosophy, involving issues that have been batted around for thousands of years—but which finally are going to really matter in dealing with AIs.
    One might think that as an AI becomes more sophisticated, so would its purposes, and that eventually the AI would end up with some sort of ultimate abstract purpose. But this doesn’t make sense. Because there is really no such thing as abstractly defined absolute purpose, derivable in some purely formal mathematical or computational way. Purpose is something that’s defined only with respect to humans, and their particular history and culture.
    An “abstract AI”, not connected to human purposes, will just go along doing computation. And as with most cellular automata and most systems in nature, we won’t be able to identify—or attribute—any particular “purpose” to that computation, or to the system that does it.
    Giving Goals for an AI
    Technology has always been about automating things so humans can define goals, and then those goals can automatically be achieved by the technology.
    For most kinds of technology, those goals have been tightly constrained, and not too hard to describe. But for a general computational system they can be completely arbitrary. So then the challenge is how to describe them.
    What do you say to an AI to tell it what you want it to do for you? You’re not going to be able to tell it exactly what to do in each and every circumstance. You’d only be able to do that if the computations the AI could do were tightly constrained, like in traditional software engineering. But for the AI to work properly, it’s going to have to make use of broader parts of the computational universe. And it’s then a consequence of a phenomenon I call computational irreducibility that you’ll never be able to determine everything it’ll do.
    So what’s the best way to define goals for an AI? It’s complicated. If the AI can experience your life alongside you—seeing what you see, reading your email, and so on—then, just like with a person you know well, you might be able to tell the AI at least simple goals just by saying them in natural language.
    But what if you want to define more complex goals, or goals that aren’t closely associated with what the AI has already experienced? Then small amounts of natural language wouldn’t be enough. Perhaps the AI could go through a whole education. But a better idea would be to leverage what we have in the Wolfram Language, which in effect already has lots of knowledge of the world built into it, in a way that both the human and the AI can use.
    AIs Talking to AIs
    Thinking about how humans communicate with AIs is one thing. But how will AIs communicate with one another? One might imagine they could do literal transfers of their underlying representations of knowledge. But that wouldn’t work, because as soon as two AIs have had different “experiences”, the representations they use will inevitably be at least somewhat different.
    And so, just like humans, the AIs are going to end up needing to use some form of symbolic language that represents concepts abstractly, without specific reference to the underlying representations of those concepts.
    One might then think the AIs should just communicate in English; at least that way we’d be able to understand them! But it wouldn’t work out. Because the AIs would inevitably need to progressively extend their language—so even if it started as English, it wouldn’t stay that way.
    In human natural languages, new words get added when there are new concepts that are widespread enough to make representing them in the language useful. Sometimes a new concept is associated with something new in the world (“blog”, “emoji”, “smartphone”, “clickbait”, etc.); sometimes it’s associated with a new distinction among existing things (“road” vs. “freeway”, “pattern” vs. “fractal”).
    Often it’s science that gives us new distinctions between things, by identifying distinct clusters of behavior or structure. But the point is that AIs can do that on a much larger scale than humans. For example, our Image Identification Project is set up to recognize the 10,000 or so kinds of objects that we humans have everyday names for. But internally, as it’s trained on images from the world, it’s discovering all sorts of other distinctions that we don’t have names for, but that are successful at robustly separating things.
    I’ve called these “post-linguistic emergent concepts” (or PLECs). And I think it’s inevitable that in a population of AIs, an ever-expanding hierarchy of PLECs will appear, forcing the language of the AIs to progressively expand.
    But how could the framework of English support that? I suppose each new concept could be assigned a word formed from some hash-code-like collection of letters. But a structured symbolic language—as the Wolfram Language is—provides a much better framework. Because it doesn’t require the units of the language to be simple “words”, but allows them to be arbitrary lumps of symbolic information, such as collections of examples (so that, for example, a word can be represented by a symbolic structure that carries around its definitions).
    So should AIs talk to each other in Wolfram Language? It seems to make a lot of sense—because it effectively starts from the understanding of the world that’s been developed through human knowledge, but then provides a framework for going further. It doesn’t matter how the syntax is encoded (input form, XML, JSON, binary, whatever). What matters is the structure and content that are built into the language.
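    To make the "encoding doesn't matter" point concrete, here is a small illustrative sketch (not from the post) of one symbolic expression carried in several encodings; the payload is arbitrary, and BinarySerialize postdates this post:
        expr = <|"task" -> "FindShortestTour", "points" -> {{0, 0}, {3, 1}, {1, 4}}|>;
        ToString[expr, InputForm]          (* the expression as plain-text input form *)
        ExportString[expr, "JSON"]         (* the same structure encoded as JSON *)
        BinarySerialize[expr]              (* the same structure as a binary ByteArray (WXF) *)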
    Information Acquisition: The Billion-Year View
    Over the course of the billions of years that life has existed on Earth, there’ve been a few different ways of transferring information. The most basic is genomics: passing information at the hardware level. But then there are neural systems, like brains. And these get information—like our Image Identification Project—by accumulating it from experiencing the world. This is the mechanism that organisms use to see, and to do many other “AI-ish” things.
    But in a sense this mechanism is fundamentally limited, because every different organism—and every different brain—has to go through the whole process of learning for itself: none of the information obtained in one generation can readily be passed to the next.
    But this is where our species made its great invention: natural language. Because with natural language it’s possible to take information that’s been learned, and communicate it in abstract form, say from one generation to the next. There’s still a problem however, because when natural language is received, it still has to be interpreted, in a separate way in each brain.
    Information transfer: Level 0: genomics; Level 1: individual brains; Level 2: natural language; Level 3: computational knowledge language
    And this is where the idea of a computational-knowledge language—like the Wolfram Language—is important: because it gives a way to communicate concepts and facts about the world, in a way that can immediately and reproducibly be executed, without requiring separate interpretation on the part of whatever receives it.
    It’s probably not a stretch to say that the invention of human natural language was what led to civilization and our modern world. So then what are the implications of going to another level: of having a precise computational-knowledge language, that carries not just abstract concepts, but also a way to execute them?
    One possibility is that it may define the civilization of the AIs, whatever that may turn out to be. And perhaps this may be far from what we humans—at least in our present state—can understand. But the good news is that at least in the case of the Wolfram Language, precise computational-knowledge language isn’t incomprehensible to humans; in fact, it was specifically constructed to be a bridge between what humans can understand, and what machines can readily deal with.
    What If Everyone Could Code?
    So let’s imagine a world in which in addition to natural language, it’s also common for communication to occur through a computational-knowledge language like the Wolfram Language. Certainly, a lot of the computational-knowledge-language communication will be between machines. But some of it will be between humans and machines, and quite possibly it would be the dominant form of communication here.
    In today’s world, only a small fraction of people can write computer code—just as, 500 or so years ago, only a small fraction of people could write natural language. But what if a wave of computer literacy swept through, and the result was that most people could write knowledge-based code?
    Natural language literacy enabled many features of modern society. What would knowledge-based code literacy enable? There are plenty of simple things. Today you might get a menu of choices at a restaurant. But if people could read code, there could be code for each choice, that you could readily modify to your liking. (And actually, something very much like this is soon going to be possible—with Wolfram Language code—for biology and chemistry lab experiments.) Another implication of people being able to read code is for rules and contracts: instead of just writing prose to be interpreted, one can have code to be read by humans and machines alike.
    But I suspect the implications of widespread knowledge-based code literacy will be much deeper—because it will not only give a wide range of people a new way to express things, but will also give them a new way to think about them.
    Will It Actually Work?
    So, OK, let’s say we want to use the Wolfram Language to communicate with AIs. Will it actually work? To some extent we know it already does. Because inside Wolfram|Alpha and the systems based on it, what’s happening is that natural language questions are being converted to Wolfram Language code.
    But what about more elaborate applications of AI? Many places where the Wolfram Language is used are examples of AI, whether they’re computing with images or text or data or symbolic structures. Sometimes the computations involve algorithms whose goals we can precisely define, like FindShortestTour; sometimes they involve algorithms whose goals are less precise, like ImageIdentify. Sometimes the computations are couched in the form of “things to do”, sometimes as “things to look for” or “things to aim for”.
    We’ve come a long way in representing the world in the Wolfram Language. But there’s still more to do. Back in the 1600s it was quite popular to try to create “philosophical languages” that would somehow symbolically capture the essence of everything one could think about. Now we need to really do this. And, for example, to capture in a symbolic way all the kinds of actions and processes that can happen, as well as things like peoples’ beliefs and mental states. As our AIs become more sophisticated and more integrated into our lives, representing these kinds of things will become more important.
    For some tasks and activities we’ll no doubt be able to use pure machine learning, and never have to build up any kind of intermediate structure or language. But much as natural language was crucial in enabling our species to get where we have, so also having an abstract language will be important for the progress of AI.
    I’m not sure what it would look like, but we could perhaps imagine using some kind of pure emergent language produced by the AIs. But if we do that, then we humans can expect to be left behind, and to have no chance of understanding what the AIs are doing. But with the Wolfram Language we have a bridge, because we have a language that’s suitable for both humans and AIs.
    More to Say
    There’s much to be said about the interplay between language and computation, humans and AIs. Perhaps I need to write a book about it. But my purpose here has been to describe a little of my current thinking, particularly my realizations about the Wolfram Language as a bridge between human understanding and AI.
    With pure natural language or traditional computer language, we’ll be hard pressed to communicate much to our AIs. But what I’ve been realizing is that with Wolfram Language there’s a much richer alternative, readily extensible by the AIs, but built on a base that leverages human natural language and human knowledge to maintain a connection with what we humans can understand. We’re seeing early examples already… but there’s a lot further to go, and I’m looking forward to actually building what’s needed, as well as writing about it…

    November 21, 2015 at 2:29:47 PM GMT+1 - permalink - http://blog.stephenwolfram.com/2015/11/how-should-we-talk-to-ais/?utm_content=buffer5eb6a&utm_medium=social
    ia ai
  • Lettre à ma génération : moi je n'irai pas qu'en terrasse

    What we are living through deserves that each of us pause for a moment at the terrace of ourselves and look up at the society we live in. And who knows... perhaps a little further on, in a scrap of white sky caught between the buildings, we will glimpse the society we hope for.

    November 21, 2015 at 2:13:05 PM GMT+1 - permalink - http://blogs.mediapart.fr/blog/sarah-roubato/201115/lettre-ma-generation-moi-je-nirai-pas-quen-terrasse#comments
    philo société
  • En guerre — David Larlet

    We are at war.

    We are at war against a harmful, extremely violent system (cache) that we feed every day around capitalism. A system that reinforces inequality and leads to a loss of identity that some recover in fanaticism or religion. Yet there are alternatives for less degrading work.

    We are at war against our animal nature, which is why there are rapes, mass graves, mutilations and murders. Every day. Our human nature needs kindness, education and a shared vision to gain the upper hand. It asks us to step back from our emotions.

    We are at war against our self-destruction as a species. Conflicts are climate-driven (cache) and cannot subside in an ecosystem changing at such speed. The danger is not overpopulation but the over-concentration of that population, which crystallizes tensions and hatred.

    We have always been at war; it is what pushes us to progress. Digital technology is only a catalyst that shrinks time and space, confiscating our attention in the service of marketing. It can be made into something else: something for the common good, for mutual aid and for distributed thinking.

    So yes, we are at war, and that war is called living together in a finite space. I do not believe in a lost carefreeness that we should try to recover (cache), even collectively. Let us seize this chance to become fully aware of this state together, with perspective and discernment. We rebuild this world every day, and I believe in our ability to make it evolve so as to create the conditions for a dignified life for everyone. We have the choice of seeing our children live in peace or rest in peace.

    November 20, 2015 at 8:58:42 PM GMT+1 - permalink - https://larlet.fr/david/stream/2015/11/20/
    société philo vie
  • Democracies fail when they ask too little of their citizens

    ‘He turned out to be the same as every other politician.’ That was the complaint I kept hearing in Athens shortly after the leftist Prime Minister Alexis Tsipras had signed up to exactly the kind of bailout deal he once vehemently opposed. In reality, it was not so much that he was the same as other politicians as that he faced the same constraints as other Greek politicians: a failing economy, implacable creditors and frustrated fellow leaders. It was these external constraints that compelled him to act against his campaign rhetoric. The fact that Athenians were surprised and dismayed by this reflects a deeper problem with contemporary Western democracy – one to which their ancient forebears knew the solution.

    Modern states are plagued by the problem of ‘rational ignorance’. The chance that any individual’s vote will make a difference is so vanishingly small that it would be irrational for anyone to bother taking a serious interest in the issues and candidates. And so, many people don’t – and then fall for implausible rhetoric. In this way, democracy has come to mean little more than electing politicians on the basis of their promises, then watching them fail to keep them.

    This was not the case in the Athens of two and a half thousand years ago. Then, democracy – rule by the people – meant active participation in the running of the state, if not continually, then at least periodically throughout one’s life. As Aristotle put it: ‘to rule and be ruled in turn.’ This participation was a right but also a responsibility. It was intended not only to create a better state, but to create better citizens: engagement in the political process was an education in the soberingly complex realities of decision-making.

    Male citizens were expected to serve not only in the army or on juries, as is the case with some modern states, but also to attend the main decision-making assembly in person. And while some executive offices were elected, most were selected by lottery – including that of Prime Minister, whose term of office was one day. Any male citizen could find himself representing his community, or receiving foreign dignitaries.

    There is much we can learn from this. Modern states are of course much bigger than ancient Athens and, thankfully, have wider suffrage. Numbers alone mean we couldn’t all be Prime Minister for a day. But there is scope for government at every level – local, regional, national and international – to be made radically participatory. For example: legislative bodies could be wholly or partially selected by lottery. Obvious candidates are second chambers in bicameral parliaments, such as the British House of Lords.

    Even better might be separate assemblies summoned to review each proposed new law or area of government. This would hugely increase the number of people involved in the legislative system. The ancient Athenians managed exactly this; today, digital technology would make it even easier.

    In addition, civil services could be more open to internships and short-term appointments. Serving the state in this way (at some level) could even be compulsory, just as military service has been. Ad hoc one‑issue assemblies, citizen initiative programmes and wider consultation on legislation are all ways of including more people in the decision-making process.

    Democracy need not mean voting for politicians who all turn out to be the same. It can mean ordinary people actively participating in governing themselves. As Aristotle knew, this would make for both wiser decisions and wiser citizens.

    November 19, 2015 at 2:24:43 PM GMT+1 - permalink - https://aeon.co/opinions/democracies-fail-when-they-ask-too-little-of-their-citizens
    société
  • Thoughts — David Larlet

    I used to think that religion is for weak people, people who prefer to comfortably delegate their thoughts, and sometimes their manpower, to one central authority.

    It hit me lately that politics follows the exact same pattern, annihilating any self-consciousness and thus self-esteem. Electing a president at the head of a nation creates the feeling that you have done your job as a citizen for the next X years, but being a citizen is not a one-shot act; it's a daily challenge to find your place in society, not as a consumer but as an actor. Our societies rely on one person and his government to drive our countries for a few years without any long-term vision, a scapegoat for our lack of thinking, our lack of acting, our lack of humanity. Where is your dignity when you can't even think and act by yourself?

    In 1721, Montesquieu published his Persian Letters, and the 14th letter is very important to me. Here is an extract (in translation), but I recommend reading the whole thing:

    O Troglodites, what moves you to this? Uprightness becomes a burden to you. In your present condition, having no head, you are constrained in your own despite to be virtuous; otherwise your very existence would be at stake, and you would relapse into the wretched state of your ancestors. But this seems to you too heavy a yoke; you would rather become the subjects of a king, and submit to laws of his framing, laws less exacting than your present customs. You know that then you would be able to satisfy your ambition, and while away the time in slothful luxury; and that, provided you avoided the graver crimes, there would be no necessity for virtue.

    Such an idealist! Nobody can live and work in this context today. Virtue, really? Almost a hundred geeks at GitHub are proving that it's possible; see this blog post from Ryan Tomayko:

    Telling people what to do is lazy. Instead, try to convince them with argument. This is how humans interact when there’s no artificial authority structure and it works great. If you can’t convince people through argument then maybe you shouldn’t be doing it. […] Essentially, I try to create little mini-managers, each responsible for managing a single person: their self.

    Confirmed by Brandon Keepers after working there for 6 months:

    Anarchy works wonderfully in a small group of individuals with a high level of trust. Everyone at GitHub has full access and permission to do whatever they want. Do great things and you earn respect. Abuse that freedom and you violate everyone’s trust.

    Marriage is another form of delegation: behind the love story, it's a way to state administratively (and sometimes religiously) that you're forming a couple. Validating your love with a piece of paper and a ring instead of daily attention surely deserves a huge celebration in our attention-deficient world.

    That’s why I’m agnostic. That’s why I’m a blank voter. That’s why I’m running my own company. That’s why I’m not married.

    The worst part is that by delegating, you can lose your knowledge too. Think about it in our geeky world of Clouds, Proxies and Frameworks, each introducing more and more opaque layers.

    We are Taylorizing the Web, and soon nobody will be able to put a service online anymore without heavily relying on an uncontrolled — delegated — stack.

    November 16, 2015 at 11:08:31 PM GMT+1 - permalink - https://larlet.fr/david/thoughts/#delegating
    philo religion web
  • NASA Clean Air Study - Wikipedia, the free encyclopedia

    Plants that clean the air.

    November 14, 2015 at 2:33:11 PM GMT+1 - permalink - https://en.wikipedia.org/wiki/NASA_Clean_Air_Study
    plantes