If technology is increasingly a place where we live, it needs to have space for the soul, the way the library makes room for a healthy, elevated mindset while the current Penn Station inspires despair. Beauty is an important element, but purpose also matters. I think this is what Kelly is hinting at. Using technology for commerce, efficiency, and ease is not enough of a higher purpose for something that dominates so much of our lives. The heart demands a bigger dream.
What is it all for? What can we imagine? These questions become critical at a moment when we are confronted with questions of identity, self-worth, community, and citizenship in this connected world. If technology is not only for profit and ease, what is it for? We must use our soulful imaginations, and we must be specific.
I promised a post a few days ago about distributed search engines, but I've been dilly-dallying about it. It's the holidays; we're all full of turkey and cookies.
In my earlier post, I fretted about how Google and other centralized search services like it had become a bottleneck to finding information online, and could therefore become a tempting target in the drive to regulate (and even censor) Internet content. But there is a more powerful, positive argument to make in favor of distributed search engines. People are assembling their own collections of information, in the form of websites, discussion groups, blogs, and more traditional forms of writing, but there is still no way to selectively search this content. You can go to Google and search the entire Internet, or you can use a variety of rudimentary search tools on your own computer or on individual public websites. What you can't do is say "search the New York Times, the blogs in my blogroll, and the Wayback Machine for documents similar to the email message I just sent". A distributed system would fill that middle ground.
Right up front it's important to say that peer-to-peer search engines wouldn't be intended to replace centralized services like Google, any more than weblogs have supplanted large news or commentary sites like Salon or the New York Times. Instead, they would serve the same purpose as weblogs do, which is to create neighborhoods for specialized information, and make it easy to find, join, and participate in niche communities of knowledge.
Mena Trott mentions a phenomenon that you can often see by monitoring your referrer logs - a post on an arcane topic will become the hub of a little universe of interest. In her case, an attached discussion became the locus for a whole little special-interest group, with visitors coming in via Google, answering one another's questions and keeping the post 'alive' outside the context of the weblog itself.
A peer-to-peer search engine would make such microcommunities easier to find, and easier to sustain. Instead of relying on an Internet-wide portal like Google, you would run searches through a personal search client; this could be a Web application, or a more fully-featured desktop application, like a blog aggregator. The client would let you seek out searchable collections through a kind of meta-search, akin to the way Gnutella and other file-sharing networks discover new nodes, and create "search lists" of interesting sites to send queries to, much like an iTunes playlist. You could also keep a list of favorite queries, which you would periodically send out to chosen blocks of search engines, to find newly added material.
Queries would go out to each little search engine, which would return its results through a standardized API (most likely a web services protocol) as a ranked list of relevant hits. The client could then recombine those into a single ranked list, and allow you to do all the usual post-filtering — exact phrase matches, sorting by date, and everything else we're used to being able to do in a decent search engine.
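To make the plumbing concrete, here is a minimal sketch of that fan-out-and-merge step in Python. Everything in it is an assumption for illustration: the node URLs are invented, and the /search endpoint and its JSON response shape stand in for whatever web services protocol the community eventually settles on.

```python
# Hypothetical sketch: fan a query out to a "search list" of nodes,
# then merge the per-node results into one ranked list.
import json
import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# An invented search list; real entries would be discovered via meta-search.
SEARCH_LIST = [
    "https://nytimes.example/search",
    "https://my-blogroll.example/search",
    "https://wayback.example/search",
]

def query_node(base_url, query):
    """Send one query to one node; assume it returns {"hits": [...]}."""
    url = base_url + "?" + urllib.parse.urlencode({"q": query})
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp).get("hits", [])
    except OSError:
        return []  # unreachable nodes simply drop out of the results

def distributed_search(query, nodes=SEARCH_LIST):
    """Fan out in parallel, then merge into a single ranked list."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        per_node = pool.map(lambda node: query_node(node, query), nodes)
    merged = [hit for hits in per_node for hit in hits]
    # Each node scores only its own collection, so a real client would
    # normalize scores before merging; here we sort on them as reported.
    return sorted(merged, key=lambda h: h.get("score", 0), reverse=True)

for hit in distributed_search("peer-to-peer search")[:10]:
    print(hit.get("score"), hit.get("title"), hit.get("url"))
```

Post-filtering (exact phrase matching, sorting by date) would then run locally over the merged list, which is what lets each little engine stay simple.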
The net result of this would be a search network whose topology would be just as interesting as the current network of hyperlinks, and clever people would find clever ways to combine the two to make it even easier to find and join interesting conversations.
This is truly a job for the LazyWeb - the technical hurdles are not that great, and the blogging community can be the first to benefit from a working system. Then, when Google puts up the mandatory 700-pixel portrait of John Ashcroft on its homepage and removes the search box, we\'ll at least have something to fall back on.
Who Will Command The Robot Armies?
When John Allsopp invited me here, I told him how excited I was to discuss a topic that's been heavy on my mind: accountability in automated systems.
But then John explained that in order for the economics to work, and for it to make sense to fly me to Australia, there needed to actually be an audience.
So today I present to you my exciting new talk:
Who Will Command the Robot Armies?
The Military
Let's start with the most obvious answer—the military.
This is the Predator, the forerunner of today's aerial drones. Those things under its wing are Hellfire missiles.
These two weapons are the chocolate and peanut butter of robot warfare. In 2001, CIA agents got tired of looking at Osama Bin Laden through the camera of a surveillance drone, and figured out they could strap some missiles to the thing. And now we can't build these things fast enough.
We're now several generations into this technology, and soldiers have smaller, portable UAVs they can throw like a paper airplane. You launch them in the field, and they buzz around and give you a safe way to do reconnaissance.
There are also portable UAVs with explosives in their nose, so you can fire them out of a tube and then direct them against a target—a group of soldiers, an orphanage, or a bunker—and make them perform a kamikaze attack.
The Army has been developing unmanned vehicles that work on land, little tanks that roll around with a gun on top, with a wire attached for control, like the cheap remote-controlled toys you used to get at Christmas.
Here you see a demo of a valiant robot dragging a wounded soldier to safety.
The Russians have their own versions of these things, of course. Here's a cute little mini-tank that patrols the perimeter of a defense installation.
I imagine it asking you who you are in a heavy Slavic accent before firing its many weapons into your fleeing body.
Not all these robots are intended as weapons. The Army is trying to automate transportation, sometimes in weird-looking ways like this robotic dog monster.
DARPA funded research into this little bit of nightmare fuel, a kind of headless horse, that can cover rough terrain and carry gear on its back.
So progress with autonomous and automated systems in the military is rapid.
The obvious question as these systems improve is whether there will ever be a moment when machines are allowed to decide to kill people without human intervention.
I think there's a helpful analogy here with the Space Shuttle.
The Space Shuttle was an almost entirely automated spacecraft. The only thing on it that was not automated was the button that lowered the landing gear. The system was engineered that way on purpose, so that the Shuttle had to have a crew.
The spacecraft could perform almost an entire mission solo, but it would not be able to put its wheels down.
When the Russians built their shuttle clone, they removed this human point of control. The only flight the Buran ever made was done on autopilot, with no people aboard.
I think we'll see a similar evolution in autonomous weapons. They will evolve to the point where they are fully capable of finding and killing their targets, but the designers will keep a single point of control.
And then someone will remove that point of control.
Last week I had a whole elaborate argument about how that could happen under a Clinton Administration. But today I don't need it.
It's important to talk about the political dynamic driving the development of military robots.
In the United States, we've just entered the sixteenth year of a state of emergency. It has been renewed annually since 2001.
It has become common political rhetoric in America to say that 'we're at war', even though being 'at war' means something vastly different for Americans than, say, Syrians.
(Instead of showing you pictures of war, I'm going to show pictures of kids I met in Yemen in 2014. These are the people our policies affect most.)
The goal of military automation is to make American soldiers less vulnerable. This laudable goal also serves a cynical purpose.
Wounded veterans are a valuable commodity in American politics, but we can't produce them in large numbers without facing a backlash.
Letting robots do more of the fighting makes it possible to engage in low-level wars for decades at a time, without creating political pressure for peace.
As it becomes harder to inflict casualties on Western armies, their opponents turn to local civilian targets. These are the real victims of terrorism; people who rarely make the news but suffer immensely from the state of permanent warfare.
Once in a long while, a terror group is able to successfully mount an attack in the West. When this happens, we panic.
The inevitable hardening of our policy fuels a dynamic of grievance and revenge that keeps the cycle going.
While I don't think anyone in the Army is cynical enough to say it, there are institutional incentives to permanent warfare.
An army that can practice is much better than one that can only train. Its leaders, tactics, and technologies are tested under real field conditions. And in 'wartime', cutting military budgets becomes politically impossible.
These remote, imbalanced wars also allow us to experiment with surveillance and automation technologies that would never pass ethical muster back home.
And as we'll see, a lot of them make it back home anyway.
It's worth remarking how odd it is to have a North American superpower policing remote areas of Pakistan or Yemen with flying robots.
Imagine if Indonesia were flying drones over northern Australia, to monitor whether anyone there was saying bad things about Muslims.
Half of Queensland would be in flames, and everyone in this room would be on a warship about to land in Jakarta.
The Police
My second contender for who will command the robot armies is the police.
Technologies that we develop to fight our distant wars get brought back, or leak back, into civilian life at home.
The most visible domestic effect of America's foreign wars has been the quantity of military surplus equipment that ends up being given to police.
Local police departments around the country (and here in Australia) have armored vehicles, military rifles, night vision goggles and other advanced equipment.
After the Dallas police massacre, the shooter was finally killed by a remotely controlled bomb-disposal robot originally designed for military use in Iraq.
I remember how surprising it was after the Boston marathon bombings to see the Boston police emerge dressed like the bad guys from a low-budget sci-fi thriller. They went full Rambo, showing up with armored personnel carriers and tanks.
Still, cops will be cops. Though they shut down all of downtown Boston, the police did make sure the donut shops stayed open.
The militarization of our police extends to their behavior, and the way they interact with their fellow citizens.
Many of our police officers are veterans. Their experience in foreign wars colors the attitudes and tactics they adopt back home.
Less visible, but just as important, are the surveillance technologies that make it back into civilian life.
These include drones with gigapixel cameras that can conduct surveillance over entire cities, and whose software can follow dozens of vehicles and pedestrians automatically.
The United States Border Patrol has become an enthusiastic (albeit not very effective) adopter of unmanned aerial vehicles.
These are also being used here in Australia, along with unmanned marine vehicles, to intercept refugees arriving by sea.
Another gift of the Iraq war is the Stingray, a fake base station that hijacks cell phone traffic, and is now being used rather furtively by police departments across the United States.
When we talk about government surveillance, there's a tendency to fixate on national agencies like the NSA or CIA. These are big, capable bureaucracies, and they certainly do a lot of spying.
But these agencies have an internal culture of following rules (even when the rules are secret) and an institutional commitment to a certain kind of legality. They're staffed by career professionals.
None of these protections apply when you're dealing with local law enforcement. I trust the NSA and CIA not to overstep their authority much more than I trust some deputy sheriff in East Dillweed, Arizona.
Unfortunately, local police are getting access to some very advanced technology.
So for example San Diego cops are swabbing people for DNA without their consent, and taking photos for use in a massive face recognition database. Half the American population now has their face in such a database.
And the FBI is working on a powerful 'next-generation' identification system that will be broadly available to other government agencies, with minimal controls.
The Internet of Things
But this talk is getting grim! Let's remember that not all robots are out to kill us, or monitor us.
There are all kinds of robots that simply want to help us and live with us in our homes, and make us happy.
Let's talk about those friendly robots for a while.
Consider the Juicebro! The Juicebro is a $700 Internet-connected juice smasher that sits on your countertop.
Juicebro makes juice from $7 packets of pre-chopped vegetables with a QR code on the back. If the Internet connection is down, or the QR code does not validate, Juicebro will refuse to make you juice. Juicebro won't take that risk!
Flatev makes sad little tortillas from a Keurig-like capsule of dough, and puts them in a drawer. Each dough packet costs $1.
The Vessyl is a revolutionary smart cup that tells you what you're drinking.
Here, for example, the Vessyl has sensed that you are drinking a beer.
(This feature can probably be hard-coded in Australia.)
Because of engineering difficulties, the Vessyl is not quite ready for sale. Instead, its makers are selling the Pryme, a $99 smart cup that can only detect water.
You'll know right to the milliliter how much water you're drinking.
The Kuvée is the $200 smart wine bottle with a touchscreen that tells you right on the bottle what the wine tastes like.
My favorite thing about the Kuvée is that if you don't charge it, it can't pour the wine.
The Wilson X connected football detects "velocity, distance, spiral efficiency, spin rate and whether a pass was caught or dropped." It remembers these statistics forever.
No more guesswork with the Wilson connected football!
The Molekule is one of my favorite devices, a human-portable air purifier that "breaks down pollutants on a molecular level".
At only eight kilos, it's easy to lug around comfortably as you pad barefoot from room to room.
Molekule makes sure you never breathe a single molecule of un-purified air.
Here is the Internet connected kettle! There was a fun bit of drama with this just a couple of weeks ago, when the data scientist Mark Rittman spent eleven hours trying to connect it to his automated home.
The kettle initially grabbed an IP address and tried to hide:
3 hrs later and still no tea. Mandatory recalibration caused wifi base station reset, now port-scanning network to find where kettle is now.
Then there was a postmodern moment when the attention Rittman's ordeal was getting on Twitter started causing his home system to go haywire:
Now the Hadoop cluster in the garage is going nuts due to RT to @internetofshit, saturating network + blocking MQTT integration with Amazon Echo
Finally, after 11 hours, Rittman was able to get everything working and posted this triumphant tweet:
Well the kettle is back online and responding to voice control, but now we're eating dinner in the dark while the lights download a firmware update.
Internet connected kettle, everybody!
Peggy is the web-connected clothespin with a humidity sensor that messages you when your clothes are dry.
I'm not sure if you're supposed to buy a couple dozen of these, or if you're meant to use only one, and dry items one after the other.
This smart mirror couples with a smart scale to help you start your morning right.
Step on the scale, look in the mirror, and find out how much more you weigh, and if you have any new wrinkles.
Flosstime is the world's first and possibly last smart floss dispenser. It blinks at you accusingly when it is time to floss, and provocatively spits out a thread of floss for you to grab.
I especially like the user design for when two people share this device. You're supposed to take it off its mounting, flip a switch on its back to user #2, and then back away slowly so the motion detector doesn't register your presence.
Spire is a little stone that you clip to your belt that reminds you to breathe.
Are you sick and tired of waiting twelve minutes for cookies?
The CHiP smart oven will make you a batch of cookies in under ten minutes!
The my.Flow is a smart tampon. The sensor connects with a cord to a monitor that you wear clipped to the outside of your belt, and messages you when it's time to change your tampon.
Nothing gives you peace of mind like connecting something inside your body to the outside of your clothing.
Here is Huggies TweetPee, which is exactly what you're most afraid it will be.
This moisture sensor clips to your baby's diaper and sends you a tweet when it is wet.
Huggies tried to make a similar sensor to detect when the diaper is full of shit, but it proved impossible to distinguish from normal activity on Twitter.
Finally, meet Kisha, the umbrella that tells you when it's raining.
All of these devices taken together make for quite a smart home. Every one of them comes with an app, and none of them seem to consider the cumulative effect of so many notifications and apps on people's sanity.
They are like little birds clamoring to be fed, oblivious to everything else.
The people who design these devices don't think about how they are supposed to peacefully coexist in a world full of other smart objects.
This raises the question of who will step up and figure out how to make the Internet of Things work together as a cohesive whole.
Evil Hackers
Of course, the answer is hackers!
Before we talk about them, let's enjoy this stock photo.
I've been programming for a number of years, but I've still never been in a situation where green binary code is being projected onto my hoodie. Yet this seems to happen all the time when you're breaking into computer systems.
Notice also how poor this guy's ergonomics are. That hood is nowhere near parallel to the laptop screen.
This poor hacker has it even worse!
He doesn't even have a standing desk, so he's forced to hold the laptop up with one hand, like a waiter.
But despite these obstacles, hackers are able to reliably break into all kinds of IoT devices.
And since these devices all need access to the Internet, so they can harass your phone, they are impossible to secure.
This map could stand for so many things right now.
But before the election it was just a map of denial-of-service attacks against a major DNS provider, that knocked a lot of big-name sites offline in the United States.
This particular botnet used webcams with hard-coded passwords. But there is no shortage of vulnerable devices to choose from.
In August, researchers published a remote attack against a smart lightbulb protocol. For some reason, smart lightbulbs need to talk to each other.
“Hey, are you on?”
“Yeah, I'm on.”
“Wanna blink?”
“Sure!”
In their proof of concept, the authors were able to infect smart light bulbs in a chain reaction, using a drive-by car or a drone for the initial hack.
The bulbs can be permanently disabled, or made to put out a loud radio signal that will disrupt wifi anywhere nearby.
Since these devices can't be trusted to talk to the Internet by themselves, one solution is to have a master device that polices net access for all the others, a kind of robot butler to keep an eye on the staff.
Google recently introduced Google Home, which looks like an Orwellian air freshener. It sits in your house, listens through always-on microphones, and plays reassuring music through speakers in its base.
So maybe it's Google who will command the robot armies! They have the security expertise to build such a device and the programming ability to make it useful.
Yet Google already controls our online life to a troubling degree. Here is a company that runs your search engine and web browser, and manages your email, your DNS, your phone's operating system, and now your phone itself.
Moreover, Doubleclick and Google Analytics tell Google about your activity across every inch of the web.
Now this company wants to put an always-on connected microphone in every room of your home.
What could go wrong?
For examples of failure, always turn to Yahoo.
On the same day that Google announced Google Home, Reuters revealed that Yahoo had secretly installed software in 2014 to search through all incoming email at the request of the US government.
What was especially alarming was the news that Yahoo had done this behind the backs of its own security team.
This tells us that whatever safeguards Google puts in its always-on home microphone will not protect us from abuses by government, even if everyone at Google security were prepared to resign in protest.
And that's a real problem.
Over the last two decades, the government's ability to spy on its citizens has grown immeasurably.
Mostly this is due to technology transfer from the commercial Internet, whose economic model is mass surveillance. Techniques and software that work in the marketplace are quickly adopted by intelligence agencies worldwide.
President Obama has been fairly sparing in his use of this power. I say this not to praise him, but actually to condemn him. His relative restraint, and his administration's obsession with secrecy, have masked the full extent of power that is available to the executive branch.
Now that power is being passed on to a new President, and we are going to learn all about what it can do.
Amazon
So Google is out! The company knows too much, and it's too easy for the information it collects to fall into tiny, orange hands.
Maybe Amazon can command the robot armies? They sell a similar device to Google Home, a pretty cylinder called Echo that listens to voice commands. Unlike Home, it's already widely available.
And our relationship with Amazon is straightforward compared to Google. Amazon just wants to sell us shit. There's none of Google's obliqueness, creepy advertising, and mysterious secret projects designed to save the world.
Amazon Echo is a popular device, especially with parents who like being able to do things with voice commands.
And recently they've added little hockey pucks that you're supposed to put around your house, so that there's microphone coverage everywhere.
Amazon knows all about robot armies. For starters, they run the cloud, one of the biggest automated systems in the world.
And they have ambitious ideas about how robots could serve us in the future.
Amazon's vision of how we'll automate our lives is delightfully loopy. Consider the buttons they sell that let you re-order any product.
I lifted this image right from their website. When would this scenario ever be useful? Is this a long weekend after some bad curry? How much time are we talking about here?
And what do you do when the doorbell rings?
It's too bad, then, that Amazon has Trump problems of its own.
Here's a tweet from Jeff Bezos—the man who controls "the Cloud" and the Washington Post—two days after the election.
Congratulations to @realDonaldTrump. I for one give him my most open mind and wish him great success in his service to the country.
People are opening their minds so far their brains are falling out.
I'd like to talk about a different kind of robot army that Amazon commands.
Most of you know that the word "robot" comes from a 1920 play by Karel Čapek.
I finally read this play and was surprised to learn that the robots in it were not mechanical beings. They were made of flesh and bone, just like people, except that they were assembled instead of being born.
Čapek's robots resemble human beings but don't feel pain or fear, and focus only on their jobs.
In other words, they're the ideal employee.
Amazon has been trying to achieve this perfect robotic workforce for years. Many of the people who work in its warehouses are seasonal hires, who don't get even the limited benefits and job security of the regular warehouse staff.
Amazon hires such workers through a staffing agency called Integrity. If you know anything about American business culture, you'll know that a company called "Integrity" can only be pure evil.
Working indirectly for Amazon like this is an exercise in precariousness. Integrity employees don't know from day to day whether they still have a job. Sometimes their key card is simply turned off.
A lot of what we consider high-tech startups work by repackaging low-wage labor.
Take Blue Apron, one of a thousand "box of raw food" startups that have popped up in recent years. Blue Apron lets you cook a meal without having to decide on a recipe or shop for ingredients. It's kind of like a sous-chef simulator.
Blue Apron relies on a poorly-trained, low wage workforce to assemble and deliver these boxes. They've had repeated problems with workplace violence and safety at their Richmond facility.
It's odd that this human labor is so invisible.
Wealthy consumers in the West have become enamored with "artisanal" products. We love to hear how our organic pork is raised, or what hopes and dreams live inside the heart of the baker who shapes our rustic loaves.
But we're not as interested in finding out who assembled our laptop.
In fact, a big selling point of online services is not having to deal with other human beings. We never engage with the pickers in an Amazon warehouse who assemble our magical delivery. And I will never learn who is chopping vegetables for my Juicebro packet.
So is labor something laudable or not?
Our software systems treat labor as a completely fungible commodity, and workers as interchangeable cogs. We try to put a nice spin on this frightening view of labor by calling it the "gig economy".
The gig economy disguises precariousness as empowerment. You can pick your own hours, work only as much as you want, and set your own schedule.
For professionals, that kind of freedom is attractive. For people in low-wage jobs, it's a disaster. A job has predictable hours, predictable pay, and confers stability and social standing.
The gig economy takes all that away. You work whatever hours are available, with no guarantee that there will be more work tomorrow.
I do give Amazon credit for one thing: their white-collar employees are just as miserable as their factory staff. They don't discriminate.
As we automate more of middle management, we are moving towards a world of scriptable people—human beings whose labor is controlled by an algorithm or API.
Amazon has gone further than anyone else in this direction with Mechanical Turk.
Mechanical Turk is named after an 18th-century device that purported to be a chess-playing automaton. In reality, it had a secret compartment where a human player could squeeze himself in unseen.
So the service is literally named after a box that people squeezed themselves into to pretend to be a machine. And it has that troubling, Orientalist angle to boot.
A fascinating thing about Mechanical Turk is how heavily it's used for social science research, including research into low-wage labor.
Social scientists love having access to a broad set of survey-takers, but don't think about the implications (or ethics) of using these scriptable people, who spend their entire workday filling out similar surveys.
A lot of our social science is being conducted by having these people we treat like robots fill out surveys.
My favorite Internet of Things device is a fan called the Ethical Turk that subverts this whole idea of scriptable people.
This clever fan (by the brilliant Simone Rebaudengo) recognizes moral dilemmas and submits them to a human being for adjudication. Conscious of the limits of robotkind, it asks people for ethical help.
For example, if the fan detects that there are two people in front of it, it won't know which one to cool. So it uploads a photograph of the situation to Mechanical Turk, which assigns the task to a human being. The human makes the ethical decision and returns an answer along with a justification. The robot obeys the answer, and displays the justification on a little LCD screen.
The fan has dials on the side that let you select the religion and educational level of the person making the ethical choice.
My favorite thing about this project is how well it subverts Amazon's mechanization of labor by using human beings for the one thing that makes them truly human. People become a kind of ethics co-processor.
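Here is a minimal, self-contained sketch of the loop such a fan might run. To be clear, this is not Rebaudengo's actual code: every helper below is a hypothetical stub standing in for the camera, the Mechanical Turk round trip, and the dials.

```python
# Hypothetical sketch of an "ethics co-processor" loop; all stubs invented.
import random
from dataclasses import dataclass

@dataclass
class Verdict:
    chosen_index: int   # which person the human decided deserves the breeze
    justification: str  # the reasoning the fan shows on its LCD

def detect_people(snapshot):
    """Stub for the fan's vision system."""
    return snapshot["people"]

def ask_a_human(snapshot, religion, education):
    """Stub for the Mechanical Turk round trip: post the photo as a task,
    wait for a worker matching the dial settings, return their verdict."""
    choice = random.randrange(len(snapshot["people"]))
    return Verdict(choice, "The one on the left looked warmer.")

def fan_step(snapshot, dials):
    people = detect_people(snapshot)
    if not people:
        return "Idle."
    if len(people) == 1:
        return f"Cooling {people[0]} (no dilemma, no human required)."
    # Two or more people: a moral dilemma the fan refuses to decide alone.
    verdict = ask_a_human(snapshot, dials["religion"], dials["education"])
    return (f"Cooling {people[verdict.chosen_index]}. "
            f'LCD: "{verdict.justification}"')

print(fan_step({"people": ["Alice", "Bob"]},
               {"religion": "any", "education": "university"}))
```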
The Robot Within
Let me talk briefly about the robots inside us.
We all aspire to live in the moment like Zen masters. I know that right now I'm completely immersed in this talk, and you feel equally alive and alert, fully engaged in what I'm saying. We're fellow passengers on a high-speed train of thought headed to God knows where.
But it's also true that we spend much of our lives on autopilot. We have our daily routine, our habits, and there are many tasks that we perform with less than our full attention.
In those situations, we can find ourselves behaving a bit like robots.
All of modern advertising is devoted to catching us in those moments of weakness. And automation and tracking has opened up new frontiers in how advertisers can try to manipulate our behavior.
Cathy Carleton is a marketing executive who flies a lot on US Airways. At some point, she noticed that she was consistently being put in the last boarding group. Boarding last means not having enough room for your bag, so it's one of those petty annoyances that compounds when you travel a lot.
After some months of being last to board every plane, she realized that the airline was pushing her to get the US Airways credit card, one of whose perks is that you get to board in an early group.
This kind of triple bank shot of tracking, advertising and behavior modification was never possible in the past, but now it's a routine part of our lives.
I have a particular fascination with chatbots, the weird next stage in corporate personhood. The defining feature of the chatbot is its insincerity. Both you and the chatbot (or the low-wage worker pretending to be the chatbot) know that you're talking to a fictitious persona, but you have the conversation anyway.
By pretending to be people, chatbots seek access to a level of emotional engagement that we normally only offer to human beings.
And if we're not paying attention, we give it to them.
So it's fun to watch them fail in inhuman ways.
A few weeks ago I was riffing with people on Twitter about what kinds of devices we'd find in Computer Hell. At some point I suggested that Computer Hell would be served by America's most hated cable company:
Computer Hell is proudly served by Comcast
Seconds later, the Comcast bot posted a reply:
@pinboard Good afternoon. I'd be happy to look into any connection problems you're having...
The same thing happened after I tweeted about Google:
Sobering to think that the ad-funded company running your phone, DNS, browser, search engine and email might not cherish your privacy.
Google Home looks pretty great though.
The chatbot only noticed my second tweet, and thanked me fulsomely for my interest. (Unfortunately that reply has been taken down. Either the Google bot got smarter, or an intern was made to vet all conversations for irony).
While these examples are fun, the chatbot experience really isn't. It's companies trying to hijack our sociability with computer software, in order to manipulate us more effectively. And as the software gets better, these interactions will start to take a social and cognitive toll.
Social Media
Sometimes you don't even notice when you're acting like a robot.
This is a picture of my cat, Holly.
My roommate once called me over all excited to show me that he'd taught Holly to fetch.
I watched her walk up to him with a toy in her mouth and drop it at his feet. He picked it up and threw it, and she ran and brought it back several times until she had had enough.
He beamed at me. "She does this a couple of times a day."
He was about to go back to whatever complicated coding task the cat had interrupted, but something about the situation felt strange. We thought for a moment, our combined human brains trying to work out the implications.
My roommate hadn't trained the cat to do anything.
She had trained him to be her cat toy.
I think of this whenever I read about Facebook. Facebook tells us that by liking and sharing stuff on social media, we can train their algorithm to better understand what we find relevant, and improve it for ourselves and everyone else.
Here, for example, is a screenshot from a live feed of the war in Syria. People are reacting to it on Facebook as they watch, and their reaction emoji scroll from right to left. It's unsettling.
What Facebook is really doing is training us to click more. Every click means money, so the site shows us whatever it has to in order to maximize those clicks.
The result can be tragic. With no ethical brake on the game, and no penalty for disinformation, outright lies and hatred can spread unchecked. Whatever Facebook needs to put on your screen for you to click is what you will see.
In the recent US election, Facebook was the primary news source for 44% of people, over half of whom used it as their only news source.
Voters in our last election who had a 'red state' profile saw absolutely outrageous stories on their newsfeed. There was a cottage industry in Macedonia writing fake stories that would get boosted by Facebook's algorithm. There were no consequences to this, other than electing an orange monster.
But Facebook insists it's a tech company, not a media company.
Chad and Brad
My final nominees for commanders of the robot armies are Chad and Brad.
Chad and Brad are not specific people. They're my mental shorthand for developers who are just trying to crush out some code on deadline, and don't think about the wider consequences of their actions.
The principle of charity says that we should assume Chad and Brad are not trying to fuck up intentionally, or in such awful ways.
Consider Pokémon Go, which when it was initially released required full access to your Gmail account. To play America's most popular game, you practically had to give it power of attorney.
And the first action Pokémon Go had you take was to photograph the inside of your house.
You might think this was a brilliant conspiracy to seize control of millions of Gmail accounts, or harvest a trove of private photographs.
But it was only Chad and Brad, not thinking things through.
ProPublica recently discovered that you could target housing and employment ads on Facebook based on 'ethnic affinity', a proxy for race.
It's hard to express how illegal this is in the United States. The entire civil rights movement happened to outlaw this kind of discrimination.
My theory is that every Facebook lawyer who saw this interface had a fatal heart attack. And when no one registered any objection, Chad and Brad shipped it.
Here's an example from Andy Freeland of Uber's flat-fare zone in Los Angeles.
You can see that the boundary of this zone follows racial divisions. If you live in a black part of LA, you're out of luck with Uber. Whoever designed this feature probably just sorted by ZIP code and picked a contiguous area above an income threshold. But the results are discriminatory.
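To see how innocently this kind of redlining happens, here's a toy sketch. The ZIP codes and figures below are entirely made up, and I'm only guessing at the mechanism; the point is that a filter that never mentions race can still draw a racial boundary, because income and race correlate.

```python
# Hypothetical data: (ZIP code, median income, Black share of population).
ZIPS = [
    ("90001", 38_000, 0.62),
    ("90024", 94_000, 0.03),
    ("90027", 71_000, 0.05),
    ("90044", 35_000, 0.70),
    ("90210", 120_000, 0.02),
]

INCOME_THRESHOLD = 60_000  # the only rule Chad and Brad wrote down

zone     = [(z, share) for z, income, share in ZIPS if income >= INCOME_THRESHOLD]
excluded = [(z, share) for z, income, share in ZIPS if income <  INCOME_THRESHOLD]

def avg(pairs):
    return sum(share for _, share in pairs) / len(pairs)

print("in zone: ", [z for z, _ in zone],     f"avg Black share {avg(zone):.0%}")
print("excluded:", [z for z, _ in excluded], f"avg Black share {avg(excluded):.0%}")
# Race appears nowhere in the code, but the zone boundary is racial anyway.
```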
What makes Chad and Brad a potent force is that you rarely see their thoughtlessness so clearly. People are alert to racial discrimination, so sometimes we catch it. But there's a lot more we don't catch, and modern machine learning techniques make it hard to audit systems for carelessness or compliance.
Here is a similar map of Uber's flat-fare zone in Chicago. If you know the city, you'll notice it's got an odd shape, and excludes the predominantly black south side of the city, south of the diagonal line. I've shown the actual Chicago city limits on the right, so you can compare.
Or consider this screenshot from Facebook, taken last night. Facebook added a nice little feature that says 'you have new elected representatives, click here to find out who they are!'
When you do, it asks you for your street address. So to find out that Trump got elected, I have to give a service that knows everything about me except my address (and who has a future member of Trump's cabinet on its board) the one piece of information that it lacks.
This is just the kind of sloppy coding we see every day, but it plays out at really high stakes.
The Chads and Brads of this world control algorithms that decide if you get a loan, if you're more likely to be on a watch list, and what kind of news you see.
For more on this topic, I highly recommend Cathy O'Neil's new book, Weapons of Math Destruction.
Conclusion
So who will command the robot armies?
Is it the army? The police?
Nefarious hackers? Google, or Amazon?
Some tired coder who just can't be bothered?
Facebook, or Twitter?
Brands?
I wanted to end this talk on a note of hope. I wanted to say that ultimately who commands the robot armies will be up to us.
That it will be some version of "we the people" that takes these tools and uses them with the care they require.
But it just isn't true.
The real answer to who will command the robot armies is: Whoever wants it the most.
And right now we don't want it. Because taking command would mean taking responsibility.
Facebook says it's not their fault what people share on the site, even if it's completely fabricated, and helps decide an election.
Twitter says there's nothing they can do about vicious racists using the site as a political weapon. Their hands are tied!
Uber says they can't fight market forces or regulate people's right to drive for below minimum wage.
Amazon says they can't pay their employees a living wage because they aren't even technically employees.
And everyone agrees that the answer to these problems is not regulation, but new and better technologies, and more automation.
Nobody wants the responsibility; everybody wants the control.
Instead of accountability, all we can think of is the next wave of technology that will make everything better. Rockets, robots, and self-driving cars.
We innovated ourselves into this mess, and we'll innovate our way out of it.
Eventually, our technology will get so advanced that we can build sentient machines, and they will help us create (somehow) a model society.
Getting there is just a question of being sufficiently clever.
On my way to this conference from Europe, I stopped in Dubai and Singapore to break the journey up a little bit.
I didn't think about the symbolism of these places, or how they related to this talk.
But as I walked around, the symbolism of both places was hard to ignore.
Dubai, of course, is a brand new city that has grown up in an empty desert. It's like a Las Vegas without any fun, but with much better Indian food.
In Dubai, the gig economy has been taken to its logical conclusion. Labor is fungible, anonymous, and politically inert. Workers serve at the whim of the employer, and are sent back to their home countries when they're not wanted.
There are different castes of foreign workers—western expats lead a fairly cozy life, while South Indian laborers and Filipino nannies have it rough.
But no matter what you do, you can never hope to be a citizen.
Across all the Gulf states there is a permanent underclass of indentured laborers with no effective legal rights. It's the closest thing the developed world has to slavery.
Singapore, where I made my second stop, is a different kind of animal.
Unlike Dubai, Singapore is an integrated multi-ethnic society where prosperity is widely shared, and corruption is practically nonexistent.
It may be the tastiest police state in the world.
On arrival there, you get a little card telling you you'll be killed for drug smuggling. Curiously, they only give it to you once you're already over the border.
But the point is made. Don't mess with Singapore.
Singaporeans have traded a great deal of their political and social freedom for safety and prosperity. The country is one of the most invasive surveillance states in the world, and it's also a clean, prosperous city with a strong social safety net.
The trade-off is one many people seem happy with. While Dubai is morally odious, I feel ambivalent about Singapore. It's a place that makes me question my assumptions about surveillance and social control.
What both these places have in common is that they had some kind of plan. As Walter Sobchak put it, say what you will about social control, at least it's an ethos.
The founders of these cities pursued clear goals and made conscious trade-offs. They used modern technology to work towards those goals, not just out of a love of novelty.
We, on the other hand, didn't plan a thing.
We just built ourselves a powerful apparatus for social control with no sense of purpose or consensus about shared values.
Do we want to be safe? Do we want to be free? Do we want to hear valuable news and offers?
The tech industry slaps this stuff together in the expectation that the social implications will take care of themselves. We move fast and break things.
Today, having built the greatest apparatus for surveillance in history, we're slow to acknowledge that it might present some kind of threat.
We would much rather work on the next wave of technology: a smart home assistant in every home, self-driving cars, and rockets to Mars.
We have goals in the long term: to cure illness, end death, fix climate change, colonize the solar system, create universal prosperity, reinvent cities, and become beings of pure energy.
But we have no plan about how to get there in the medium term, other than “let’s build things and see what happens.”
What we need to do is grow up, and quickly.
Like every kid knows, you have to clean up your old mess before you can play with the new toys. We have made a colossal mess, and don't have much time in which to fix it.
And we owe it to these poor robots! They depend on us, they're trying to serve us, and they're capable of a lot of good. All they require from us is the leadership and a willingness to take responsibility. We can't go back to the world we had before we built them.
It's been a horrible week.
I'm sure I speak for the other Americans here when I thank you guys for your hospitality and understanding as we try to come to terms with what just happened.
For the next few years, we're in this together. We'll need all your help to get through it. And I am very grateful for this chance to speak to you.
I hope you will join me for my talk next year: "Who Will Command The Robot Navies".
COMPASSIONATE, AUSTRALIAN APPLAUSE.
Europe is wrong to take a sledgehammer to Big Google
Evgeny Morozov
It is the continent’s favourite hobby, and even the European Parliament cannot resist: having a pop at the world’s biggest search engine. In a recent and largely symbolic vote, representatives urged that Google search should be separated from its other services — demanding, in essence, that the company be broken up.
This would benefit Google’s detractors but not, alas, European citizens. Search, like the social networking sector dominated by Facebook, appears to be a natural monopoly. The more Google knows about each query — who is making it, where and why — the more relevant its results become. A company that has organised, say, 90 per cent of the world’s information would naturally do better than a company holding just one-tenth of that information.
But search is only a part of Google’s sprawling portfolio. Smart thermostats and self-driving cars are information businesses, too. Both draw on Google’s bottomless reservoirs of data, sensors such as those embedded in hardware, and algorithms. All feed off each other.
Policy makers do not yet grasp the dilemma. To unbundle search from other Google services is to detach them from the context that improves their accuracy and relevance. But to let Google operate as a natural monopoly is to allow it to invade other domains.
Facebook presents a similar dilemma. If you want to build a service around your online persona — be it finding new music or sharing power tools with neighbours — its identity gateway comes in handy. Mapping our interests and social connections, Facebook is the custodian of our reputations and consumption profiles. It makes our digital identity available to other businesses and, when we interact with those businesses, Facebook itself learns even more.
Given that data about our behaviour might hold the key to solving problems from health to climate change, who should aggregate them? And should they be treated as a commodity and traded at all?
Imagine if such data could accrue to the citizens who actually generate them, in a way that favours their communal use. A community could then visualise its precise travel needs and organise flexible and efficient bus services — never travelling too empty or too full — to rival the innovative transport start-up Uber. Taxis ordered through Uber (in which Google is an investor) can now play songs passengers have previously “liked” on the music-streaming service Spotify (Facebook is an ally), an indication of what becomes possible once our digital identity lies at the heart of service provision. But to leave these data in the hands of the Google-Facebook clan is to preclude others from finding better uses for them.
We need a data system that is radically decentralised and secure; no one should be able to obtain your data without permission, and no one but you should own it. Stripped of privacy-compromising identifiers, however, they should be pooled into a common resource. Any aspiring innovator or entrepreneur — not just Google and Facebook — should be able to gain access to that data pool to build their own app. This would bring an abundance of unanticipated features and services.
What Europe needs is not an Airbus to Google’s Boeing but thousands of nimble enterprises that operate on a level playing field with big American companies. This will not happen until we treat certain types of data as part of a common infrastructure, open to all. Imagine the outrage if a large company bought every copy of a particular book, leaving none for the libraries. Why would we accept such a deal with our data?
Basic searches — “Who wrote War and Peace?” — do not require Google’s sophistication and can be provided for free. Unable to hoard user data for advertising purposes, Google could still provide advanced search services, perhaps for a fee (not necessarily charged to citizens). The bill for finding books or articles related to the one you are reading could be picked up by universities, libraries or even your employer.
America will not abandon the current model of centralised, advertising-funded services; its surveillance state needs them. Russia and China have lessened their dependence on Google and Facebook, only to replace them with local equivalents.
Europe should know better. It has a modicum of respect for data protection. Its citizens are uneasy with the rapaciousness of Silicon Valley. But this is no reason to return to the not-so-distant past, when data were expensive and hard to aggregate. European politicians should take a longer term view. The problem with Google is not that it is too big but that it hoovers up data that does not belong to it.
"Of course the Internet gives power to individuals, and that is exactly the problem," laments Joël Decarsin. "It is the continuation of a tragedy: man cannot manage to escape from the logic of power."