Twenty years ago I attended my first Def Con. I believed in a free, open, reliable, interoperable Internet: a place where anyone can say anything, and anyone who wants to hear it can listen and respond. I believed in the Hacker Ethic: that information should be freely accessible and that computer technology was going to make the world a better place. I wanted to be a part of making these dreams — the Dream of Internet Freedom — come true. As an attorney, I wanted to protect hackers and coders from the predations of law so that they could do this important work. Many of the people in this room have spent their lives doing that work.
But today, that Dream of Internet Freedom is dying.
For better or for worse, we’ve prioritized things like security, online civility, user interface, and intellectual property interests above freedom and openness. The Internet is less open and more centralized. It’s more regulated. And increasingly it’s less global and more divided. These trends of centralization, regulation, and globalization are accelerating. And they will define the future of our communications network unless something dramatic changes.
Twenty years from now,
• You won’t necessarily know anything about the decisions that affect your rights, like whether you get a loan or a job, or whether a car runs over you. Things will get decided by data-crunching computer algorithms, and no human will really be able to understand why.
• The Internet will become a lot more like TV and a lot less like the global conversation we envisioned 20 years ago.
• Rather than being overturned, existing power structures will be reinforced and replicated, and this will be particularly true for security.
• Internet technology design will increasingly facilitate rather than defeat censorship and control.
It doesn’t have to be this way. But to change course, we need to ask some hard questions and make some difficult decisions.
What does it mean for companies to know everything about us, and for computer algorithms to make life and death decisions? Should we worry more about another terrorist attack in New York, or the ability of journalists and human rights workers around the world to keep working? How much free speech does a free society really need?
How can we stop being afraid and start being sensible about risk? Technology has ushered in a Golden Age of Surveillance. Can technology now establish a balance of power between governments and the governed that would guard against social and political oppression? Given that decisions by private companies define individual rights and security, how can we act on that understanding in a way that protects the public interest and doesn’t squelch innovation? Whose responsibility is digital security? What is the future of the Dream of Internet Freedom?
The Dream of Internet Freedom
For me, the Dream of Internet Freedom started in 1984 with Steven Levy’s book “Hackers: Heroes of the Computer Revolution.” Levy told the story of old-school coders and engineers who believed that all information should be freely accessible. They imagined that computers would empower people to make their own decisions about what was right and wrong. Empowering people depended on the design principle of decentralization. Decentralization was built into the very DNA of the early Internet: smart endpoints and dumb pipes that would carry whatever brilliant glories the human mind and heart could create to whomever wanted to listen.
This idea, that we could be in charge of our own intellectual destinies, appealed to me immensely. In 1986, I entered New College, a liberal arts school in Sarasota, Florida. Its motto is “Each student is responsible in the last analysis for his or her education.” That same year, I read the Hacker Manifesto, written by The Mentor and published in Phrack magazine. I learned that hackers, like my fellow academic nerds at New College, were also people who didn’t want to be spoon-fed intellectual baby food. Hackers wanted free access to information; they mistrusted authority; they wanted to change the world into a place where people could explore and where curiosity was its own reward.
In 1991 I started using the public Internet. I remember sending a chat request to a sysop, asking for help. And then I could see the letters that he was typing appearing in real time on my screen, viscerally knowing for the first time that this technology allowed talking to someone, anyone, everyone, in real time, anywhere. That’s when I really began to believe that the Dream of Internet Freedom could one day become a reality.
Twenty years ago, I was a criminal defense attorney, and I learned that hackers were getting in trouble for some tricks that I thought were actually pretty cool. As a prison advocate in the San Francisco Sheriff’s Department, I represented a guy who was looking at six more months in jail for hooting into the pay phone and getting free calls home. My research on that case made me realize there were a lot of laws that could impact hackers, and that I could help.
That was also the year that a guy by the name of Marty Rimm wrote a “study” saying that pornography was running rampant on the Internet. A law review published the paper, and Time Magazine touted it, and that’s all it took for Congress to be off to the races. The cyberporn hysteria resulted in Congress passing the Communications Decency Act of 1996 (CDA), an attempt to regulate online pornography.
For all you porn lovers out there, that would be a big disappointment. But there was something worse about the CDA. To stop porn, the government had to take the position that the Internet wasn’t fully protected by the First Amendment. And that would mean the government could block all kinds of things. The Internet wouldn’t be like a library. The Internet would be like TV. And TV in 1995 was actually really bad.
But this was even worse because we had higher hopes for the Internet. The Internet was a place where everyone could be a publisher and a creator. The Internet was global. And the Internet had everything on the shelves. Congress was squandering that promise.
At that time, John Perry Barlow, lyricist for the Grateful Dead, a rancher, and a founder of the Electronic Frontier Foundation, wrote what is essentially a poem about love for the Internet. Barlow wrote:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
Barlow was reacting to the CDA and the assertion that the Internet should be less — not more — free than books and magazines. But he was also expressing weariness with business as usual, and our shared hope that the Internet would place our reading, our associations and our thoughts outside of government control.
It turns out that Marty Rimm and the Communications Decency Act didn’t kill Internet freedom. Instead, there was a strange twist of fate that we legal scholars like to call “irony.” In 1997, in a case called Reno v. ACLU, the U.S. Supreme Court struck down the CDA. It said that the First Amendment’s freedom of expression fully applies to the Internet.
The only part of the CDA that remains, Section 230, is a provision that might seem like it achieves the opposite of Congress’s goal to get rid of online porn. It says that Internet providers don’t have to police their networks for pornography or most other unwanted content, and can’t get in trouble for failing to do so. This provision of the CDA is why the Internet is a platform for so much “user generated content,” whether videos, comments, social network posts, whatever.
Together, the Hacker Ethic, the Hacker Manifesto, the Declaration of the Independence of Cyberspace, Reno v. ACLU, and even the remaining piece of the CDA describe a more or less radical dream, but one that many, if not most, of the people in this room have believed in and worked for. But today I’m standing here before you to tell you that these dreams aren’t coming true. Instead, twenty years on, the future not only looks a lot less dreamy than it once did, it looks like it might be worse.
Racism and sexism have proven resilient enough to thrive in the digital world. There are many, many examples of this, but let me use statistics and anecdotes to make the point.
Statistically: At Google, women make up 30 percent of the company’s overall workforce, but hold only 17 percent of the company’s tech jobs. At Facebook, 15 percent of tech roles are staffed by women. At Twitter, 10 percent.
Anecdotally: Look around you at your fellow audience members. How very male and white this field is.
I find this so strange. The security community has historically been very good at finding, cultivating, and rewarding talent from unconventional candidates. Many of the most successful security experts never went to college, or even finished high school. A statistically disproportionate number of you are on the autism spectrum. Being gay or transgender is not a big deal and hasn’t been for years. A 15-year-old Aaron Swartz hung out with Doug Engelbart, creator of the computer mouse. Inclusion is at the very heart of the Hacker Ethic.
And people of color and women are naturally inclined to be hackers. We learn early on that the given rules don’t work for us, and that we have to manipulate them to succeed, even where others might wish us to fail.
This field should be leading the way toward a society that is open across race, class, age, and religion, but it hasn’t been. We could conscientiously try to do this better. We could, and in my opinion should, commit to cultivating talent in unconventional places.
Today, our ability to know, modify, and trust the technology we use is limited by both the law and our capacity for understanding complex systems. The Hands-On Imperative is on life support. “The Freedom to Tinker” might sound like a hobby, but it’s quite important. It means our ability to study, modify, and ultimately understand the technology we use, the technology that structures and defines our lives.
The Hands-On Imperative is dying for two reasons: the law, and the limits of our ability to understand complex systems.
The law: Two examples. It was exactly ten years ago that Black Hat staff spent all night cutting pages out of attendee books and re-stuffing conference sacks with new CDs. Security researcher Mike Lynn was scheduled to give a talk about a previously unknown category of vulnerability, specifically flaws in Internet routers. Cisco, and Mike Lynn’s employer ISS, decided at the last minute to try to keep the vulnerability a secret, ordering Mike to give a different talk and leveraging copyright law to force Black Hat to destroy all copies of Mike’s slides. There’s nothing that cries out censorship like cutting pages out of books.
On stage the next morning, Mike quit his job, donned a white baseball cap — literally a white hat — and presented his original research anyway. Cisco and ISS retaliated by suing him.
I was Mike’s lawyer. We managed to beat back that case, and the criminal investigation that the companies also instigated against him. But the message from the lawsuit was loud and clear — and not just to Mike. This is our software, not yours. This is our router, not yours. You’re just a licensee, and we’ll tell you what you are allowed to do in the EULA. You can’t decompile this, you can’t study it, you can’t tell anyone what you find.
Aaron Swartz was another sacrificial lamb on the altar of network control. Aaron was charged with violating the Computer Fraud and Abuse Act (CFAA) because he wrote a script to automate the downloading of academic journal articles. Much of this information wasn’t even copyrighted. But Aaron was a hacker, and he challenged the system. They went after him with a vengeance. The case was based on the assertion that Aaron’s access to the journal articles was “unauthorized,” even though, as a research fellow at Harvard, he was authorized to download the same articles.
Aaron killed himself, under immense stress from prosecutors twisting his arm to plead guilty to a political-career-ending felony, or face years in prison.
Here, too, the message was clear. You need our permission to operate in this world. If you step over the line we draw, if you automate, if you download too fast, if you type something weird in the URL bar on your browser, and we don’t like it, or we don’t like you, then we will get you.
In the future, will we re-secure the Freedom to Tinker? That means Congress forgoing the tough-on-cybercrime hand waving it engages in every year: annual proposals to make prison sentences under the CFAA more severe, as if any of the suspected perpetrators of the scores of major breaches of the past two or three years — China, North Korea, who knows who else — would be deterred by such a thing. These proposals just scare the good guys; they don’t stop the attackers.
We’d have to declare that users own and can modify the software they buy and download — despite software licenses and the Digital Millennium Copyright Act (DMCA).
This is going to be increasingly important. Over the next 20 years software will be embedded in everything, from refrigerators to cars to medical devices.
Without the Freedom to Tinker, the right to reverse engineer these products, we will be living in a world of opaque black boxes. We won’t know what they do, and we’ll be punished for peeking inside.
Using licenses and law to lock products down and keep their workings secret is just one reason why, in the future, we may know far less about the world around us and how it works than we do today.
Today, technology is generating more information about us than ever before, and will increasingly do so, making a map of everything we do and changing the balance of power between us, businesses, and governments. In the next 20 years, we will see amazing advances in artificial intelligence and machine learning. Software programs are going to be deciding whether a car runs people over or drives off a bridge. Software programs are going to decide who gets a loan and who gets a job. If intellectual property law shields these programs from serious study, the public will have no idea how these decisions are being made. Professor Frank Pasquale has called this the Black Box Society. Take secrecy and the profit motive, add a billion pieces of data, and shake.
In a Black Box Society, how can we ensure that the outcome is in the public interest? The first step is obviously transparency, but our ability to understand is limited by current law and also by the limits of our human intelligence. The companies that make these products might not necessarily know how their product works either. Without adequate information, how can we democratically influence or oversee these decisions? We are going to have to learn how, or live in a society that is less fair and less free.
We are also going to have to figure out who should be responsible when software fails.
So far, there’s been very little regulation of software security. Yes, the Federal Trade Commission has jumped in where vendors misrepresented what the software would do. But that is going to change. People are sick and tired of crappy software. And they aren’t going to take it any more. The proliferation of networked devices — the Internet of Things — is going to mean all kinds of manufacturers traditionally subject to products liability are also software purveyors. If an autonomous car crashes, or a networked toaster catches on fire, you can bet there is going to be product liability. Chrysler just recalled 1.4 million cars because of the vulnerabilities that Charlie Miller and Chris Valasek are going to be talking about later today. It’s a short step from suing Tesla to suing Oracle for insecure software… with all the good and the bad that will come of that.
I think software liability is inevitable. I think it’s necessary. I think it will make coding more expensive, and more conservative. I think we’ll do a crappy job of it for a really long time. I don’t know what we’re going to end up with. But I know that it’s going to be a lot harder on the innovators than on the incumbents.
Today, the physical design and the business models that fund the communications networks we use have changed in ways that facilitate rather than defeat censorship and control. But before I delve into issues of privacy, security and free expression, let’s take a few steps back and ask how we got to where we are today.
The design of the early public Internet was end-to-end. That meant dumb pipes that would carry anything, and smart edges, where application and content innovation would occur. This design principle was intentional. The Internet would not just enable communication, but would do so in a decentralized, radically democratic way. Power to the people, not to the governments or companies that run the pipes.
The Internet has evolved, as technologies do. Today, broadband Internet providers want to build smart pipes that discriminate for quality of service, differential pricing, and other new business models. Hundreds of millions of people conduct their social interactions over just a few platforms like Tencent and Facebook.
What does this evolution mean for the public? In his book The Master Switch, Professor Tim Wu looks at phones, radio, television, movies. He sees what he calls “the cycle.”
History shows a typical progression of information technologies: from somebody’s hobby to somebody’s industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel — from open to closed system.
Eventually, innovators or regulators smash apart the closed system, and the cycle begins afresh. In the book, Tim asks the question I’m asking you: Is the Internet subject to this cycle? Will it be centralized and corporately controlled? Will it be a freely accessible channel, a closed system, or something in between?
If we don’t do things differently, the Internet is going to end up being TV. First, I said we’ve neglected openness and freedom in favor of other interests like intellectual property, and that’s true.
But it’s also true that a lot of people affirmatively no longer share the Dream of Internet Freedom, if they ever did. They don’t think it’s the utopia that I’ve made it out to be. Rather, the Dream of Internet Freedom collided head-on with the ugly awfulness called Other People. Nasty comments, 4chan, /b/tards, revenge porn, jihadists, Nazis. Increasingly I hear law professors, experts in the First Amendment, the doctrine of overbreadth, and the chilling effect, talk about how to legislate this stuff they don’t like out of existence.
Second, there are the three trends I told you about: centralization, regulation and globalization.
• Centralization means a cheap and easy point for control and surveillance.
• Regulation means exercise of government power in favor of domestic, national interests and private entities with economic influence over lawmakers.
• Globalization means more governments are getting into the Internet regulation mix. They want both to protect and to regulate their citizens. And remember, the next billion Internet users are going to come from countries without a First Amendment, without a Bill of Rights, maybe even without due process or the rule of law. So these limitations won’t necessarily be informed by what we in the U.S. consider basic civil liberties.
Now when I say that the Internet is headed for corporate control, it may sound like I’m blaming corporations. When I say that the Internet is becoming more closed because governments are policing the network, it may sound like I’m blaming the police. I am. But I’m also blaming you. And me. Because the things that people want are helping drive increased centralization, regulation and globalization.
Remember blogs? Who here still keeps a blog regularly? I had a blog, but now I post updates on Facebook. A lot of people here at Black Hat host their own email servers, but almost everyone else I know uses Gmail. We like the spam filtering and the malware detection. When I had an iPhone, I didn’t jailbreak it. I trusted the security of the vetted apps in Apple’s App Store. When I download apps, I click yes on the permissions. I love it when my phone knows I’m at the store and reminds me to buy milk.
This is happening in no small part because we want lots of cool products “in the cloud.” But the cloud isn’t an amorphous collection of billions of water droplets. The cloud is actually a finite and knowable number of large companies with access to or control over large pieces of the Internet. It’s Level 3 for fiber optic cables, Amazon for servers, Akamai for CDN, Facebook for their ad network, Google for Android and the search engine. It’s more of an oligopoly than a cloud. And, intentionally or otherwise, these products are now choke points for control, surveillance and regulation.
So as things keep going in this direction, what does it mean for privacy, security and freedom of expression? What will be left of the Dream of Internet Freedom?
Privacy
The first casualty of centralization has been privacy. And since privacy is essential to liberty, the future will be less free.
This is the Golden Age of Surveillance. Today, technology is generating more information about us than ever before, and will increasingly do so, making a map of everything we do, changing the balance of power between us, businesses, and governments. The government has built the technological infrastructure and the legal support for mass surveillance, almost entirely in secret.
Here’s a quiz. What do emails, buddy lists, drive backups, social networking posts, web browsing history, your medical data, your bank records, your face print, your voice print, your driving patterns, and your DNA have in common?
Answer: The U.S. Department of Justice (DOJ) doesn’t think any of these things are private. In its view, because the data is technically accessible to service providers or visible in public, it should be freely accessible to investigators and spies.
And yet, to paraphrase Justice Sonia Sotomayor, this data can reveal your contacts with “the psychiatrist, the plastic surgeon, the abortion clinic, the AIDS treatment center, the strip club, the criminal defense attorney, the by-the-hour motel, the union meeting, the mosque, synagogue or church, or the gay bar.”
So technology is increasingly proliferating data… and the law is utterly failing to protect it. Believe it or not, considering how long we’ve had commercial email, only one civilian appellate court has decided the question of email privacy: the Sixth Circuit Court of Appeals, in 2010, in U.S. v. Warshak. That court said that people do have a reasonable expectation of privacy in their emails. Therefore, emails are protected by the Fourth Amendment and the government needs a warrant to get them. This ruling only answers part of the question for part of this country — Kentucky, Tennessee, Michigan and Ohio. Because of it, almost all service providers require some kind of warrant before turning over your emails to criminal investigators. Yet the DOJ continues to push against Warshak, both in public and in secret.
But I want to emphasize how important the ruling is, because I think many people might not fully understand what the reasonable expectation of privacy and a warrant requirement mean. It means that a judge polices access, so there has to be a good reason for the search or seizure; it can’t be arbitrary. It also means that the search has to be targeted, because a warrant has to specifically describe what is going to be searched. The warrant requirement is not only a limitation on arbitrary police action; it should also limit mass surveillance.
But in the absence of privacy protection, an absence our own government pushes for, the law isn’t going to protect our information from arbitrary, suspicionless mass surveillance, even as the data we generate proliferates out of control.
Centralization means that your information is increasingly available from “the cloud,” an easy one-stop shopping point to get data not just about you, but about everyone. And it gives the government a legal argument to get around the Fourth Amendment warrant requirement.
Regulation is not protecting your data, and at worst is actually ensuring that governments can get easy access to it. The DOJ pushes for:
• Provider assistance provisions that require providers to assist with spying;
• Corporate immunity for sharing data with the government: for example, the immunity given to AT&T for its complicity in the NSA’s illegal domestic spying, and the immunity offered in CISPA, CISA, and other surveillance proposals masquerading as security information-sharing bills;
• And, not so much yet in the U.S. but in other countries, data retention obligations that essentially deputize companies to spy on their users for the government.
Globalization gives the U.S. a way to spy on Americans… by spying on foreigners we talk to. Our government uses the fact that the network is global against us. The NSA conducts massive spying overseas, and Americans’ data gets caught in the net. And, by insisting that foreigners have no Fourth Amendment privacy rights, it’s easy to reach the conclusion that you don’t have such rights either, at least when you’re talking to or even about foreigners.
Surveillance couldn’t get much worse, but in the next 20 years, it actually will. Now we have networked devices, the so-called Internet of Things, that will keep track of our home heating, how much food we take out of our refrigerator, our exercise, sleep, heartbeat, and more. These things are taking our offline physical lives and making them digital and networked; in other words, surveillable.
To have any hope of attaining the Dream of Internet Freedom, we have to implement legal reforms to stop suspicionless spying. We have to protect email and our physical location from warrantless searches. We have to stop overriding the few privacy laws we have in exchange for a false sense of online security. We have to utterly reject secret surveillance laws, if only because secret law is an abomination in a democracy.
Are we going to do any of these things?
Security
Despite the way many people talk about it, security isn’t the opposite of privacy. You can improve security without infringing privacy — for example by locking cockpit doors. And not all invasions of privacy help security. In fact, privacy protects security. A human rights worker in Syria or a homosexual in India needs privacy, or they may be killed.
Instead, we should think about security with more nuance. Online threats mean different things depending on whose interests you have at stake — governments, corporations, political associations, individuals. Whether something is “secure” is a function of whose security you are concerned with. In other words, security is in the eye of the beholder. Further, security need not be zero sum: Because we are talking about global information networks, security improvements can benefit all, just as security vulnerabilities can hurt all.
The battleground of the future is that people in power want more security for themselves at the expense of others. The U.S. Government talks about security as “cyber”. When I hear “cyber” I hear shorthand for military domination of the Internet, as General Michael Hayden, former NSA and CIA head, has said — ensuring U.S. access and denying access to our enemies. Security for me, but not for thee. Does that sound like an open, free, robust, global Internet to you?
Here’s just one public example: our government wants weakened cryptography, back doors in popular services and devices so that it can surveil us (remember, without a warrant). It is unmoved by the knowledge that these back doors will be used by criminals and oppressive governments alike. Meanwhile, it overclassifies, maintains secret law, withholds documents from open government requests, goes after whistleblowers and spies on journalists.
Here’s another. The White House is pushing for the Department of Homeland Security to be the hub for security threat information sharing. That means DHS will decide who gets vulnerability information… and who doesn’t.
I see governments and elites picking and choosing security haves and security have nots. In other words, security will be about those in power trying to get more power.
This isn’t building security for a global network. What’s at stake is the well-being of vulnerable communities and minorities that need security most. What’s at stake is the very ability of citizens to petition the government. Of religious minorities to practice their faith without fear of reprisals. Of gay people to find someone to love. This state of affairs should worry anyone who is outside the mainstream, whether an individual, a political or religious group, or a startup without market power.
Freedom of Expression
Today, the physical architecture and the corporate ownership of the communications networks we use have changed in ways that facilitate rather than defeat censorship and control. In the U.S., copyright was the first cause for censorship, but now we are branching out to political speech.
Governments see the power of platforms and have proposed that social media companies alert federal authorities when they become aware of terrorist-related content on their sites. A U.N. panel last month called on the firms to respond to accusations that their sites are being exploited by the Islamic State and other groups. At least at this point, there’s no affirmative obligation to police in the U.S.
But you don’t have to have censorship laws if you can bring pressure to bear. People cheer when Google voluntarily delists so-called revenge porn, when YouTube deletes ISIS propaganda videos, when Twitter adopts tougher policies on hate speech. The end result is collateral censorship: by putting pressure on platforms and intermediaries, governments can indirectly control what we say and what we experience.
What that means is that governments, or corporations, or the two working together increasingly decide what we can see. It’s not true that anyone can say anything and be heard anywhere. It’s more true that your breastfeeding photos aren’t welcome and, increasingly, that your unorthodox opinions about radicalism will get you placed on a list.
Make no mistake, this censorship is inherently discriminatory. Muslim “extremist” speech is cause for alarm and deletion. But no one is talking about stopping Google from returning search results for the Confederate flag.
Globalization means other governments are in the censorship mix. I’m not just talking about governments like Russia and China. There’s also the European Union, with its laws against hate speech and Holocaust denial, and its developing Right to Be Forgotten. Each country wants to enforce its own laws and protect and police its citizens as it sees fit, and that means a different Internet experience for different countries or regions. In Europe, accurate information is being delisted from search engines to make it harder or impossible to find. So much for talking to everyone everywhere in real time. So much for having everything on the Internet shelf.
Worse, governments are starting to enforce their laws outside their borders through blocking orders to major players like Google and to ISPs. France is saying to Google, don’t return search results that violate our laws to anyone, even if it’s protected speech that we are entitled to in the U.S. If you follow this through to the obvious conclusion, every country will censor everywhere. It will be intellectual baby food.
How much free speech does a free society really need? Alternatively, how much sovereignty should a nation give up to enable a truly global network to flourish?
Right now, if we don’t change course and begin to really value having a place for even edgy and disruptive speech, our choice is between network balkanization and a race to the bottom.
Which will we pick?
The Next 20 Years
The future for freedom and openness appears to be far bleaker than we had hoped for 20 years ago. But it doesn’t have to be that way. Let me describe another future where the Internet Dream lives and thrives.
We start to think globally. We need to deter another terrorist attack in New York, but we can’t ignore the impact our decisions have on journalists and human rights workers around the world. We strongly value both.
We build in decentralization where possible: Power to the People. And strong end-to-end encryption can start to right the imbalance between tech, law, and human rights.
We realize the government has no role in dictating communications technology design.
We start being afraid of the right things and stop being driven by irrational fear. We reform the CFAA, the DMCA, the Patriot Act and foreign surveillance law. We stop being so sensitive about speech and we let noxious bullshit air out. If a thousand flowers bloom, the vast majority of them will be beautiful.
Today we’ve reached an inflection point. If we change paths, it is still possible that the Dream of Internet Freedom can come true. But if we don’t, it won’t. The Internet will continue to evolve into a slick, stiff, controlled, and closed thing. And that dream I have — that so many of you have — will be dead. If that happens, we need to think about creating the technology for the next lifecycle of the revolution. In the next 20 years we need to get ready to smash the Internet apart and build something new and better.