
New technologies are constantly befuddling our centuries-old legal system. From states debating whether fantasy sports "contests" are gambling to the seemingly endless parade of lawsuits around how much information police should be able to easily get from our smartphones, our way of life is evolving much faster than the systems designed to govern it.

Parker Higgins, Andrea Matwyshyn and Tim Hwang discussing at Fusion's Real Future Fair

For Fusion's Real Future Fair last week, we decided to cast our gaze into the future of law, coming up with hypothetical legal cases based on the assumed evolution of current technologies to talk about liability when robots break the law, copyright issues when they write their own masterpieces, and the criminal consequences of algorithms that kill. We asked three legal experts—Northeastern University law professor Andrea Matwyshyn; Robot, Robot & Hwang's Tim Hwang; and Electronic Frontier Foundation copyright activism director Parker Higgins—to debate how the cases would be resolved under current law, or how our laws need to evolve to handle them.

Without further ado, here's our version of Law & Order 2045, condensed and edited for clarity.

Scenario #1: Artificially intelligent creatures started breaking the law at the beginning of the 21st century, and it just keeps getting worse. Drone-imos is a pizza joint that uses Smoogle's self-driving cars to make its deliveries. The cars have been programmed to deliver pizzas within 30 minutes or they're free. A car maps its route to a house and realizes that, with traffic, it won't make it unless it goes 15 mph over the speed limit the whole way. (This is within its allowable law-breaking parameters.) On the way there, a child runs into the road and is struck by the robo-car.

Hwang: Traditionally, a driver was at fault because they had control of the car, but with driverless cars, the car company's engineers program what the car does, so control rests with the manufacturer. So there's a really interesting question about whether or not the manufacturer is to blame.

What happens next when the car companies realize they're going to be held liable? I think there are two possibilities, both dystopian. There might be a click-wrap agreement when you first start the car, saying, "I accept all the faults created by the car"—an interface that distributes fault back to the driver.

Or the car is purchased and you have to pay a premium for it to take the interests of others into account. Or there's a pro package that makes the car protect you at all costs. [Ed. note: Volvo already charges extra for a pedestrian detection feature that would prevent you from hitting someone. Meanwhile, ethicists are currently debating whether self-driving cars should be programmed to kill a driver rather than a crowd of people.]

Matwyshyn: I predict that Smoogle will make the pizza company sign a contract when it licenses the cars that requires it to take on the liability. So Drone-imos would be the responsible party. But there would also be questions about whether the manufacturer was patching security holes and updating the car's system appropriately. Right now, for example, some autonomous cars' sensors aren't sensitive enough to detect squirrels, so there's potentially massive squirrel death happening. So a question for Smoogle is whether the car is designed to be sensitive enough to detect a small child's movement.

Higgins: The closest case we have now involves the Uber driver who struck a child on New Year's Day. Uber has been fighting tooth and nail to ensure it's not their problem. [Ed. note: Uber settled the case out of court this summer.]

If you look at the history of how car accidents came to be called "accidents"—a car "hit someone" rather than a driver "killed somebody"—there was a lot of lobbying and policy work done to make accidents feel like accidents. So we may see that happen around autonomous vehicles as well.

Scenario #2: A programmer named A.I. Rowling designs a bot that tracks young-adult reading trends. It tracks what kinds of books are bubbling up as most popular, feeding them into an artificial intelligence that consumes all the books. The bot then writes its own fiction: a three-volume series about teens sent to Mars to play Survivor whose camp is raided by alien vampires. It turns into a best-seller—selling more copies than Twilight, Harry Potter, and The Hunger Games combined—and spawns a movie and a TV series that make a gazillion dollars.

Hwang: This kind of optimization is already happening. Netflix uses viewer data to decide what shows to produce next. This is going to be a natural process in the industry where you use systems of quantification to optimize the next creative work you produce.

In this case, you let the algorithm out of the box and let it decide what to copy. I think lawyers for all of the authors will be asking how they get a cut. In the future, are we going to get better at quantifying how much is borrowed from other works and then have micropayments go out to those authors?

I recommend people read James Grimmelmann's "Copyright for Literate Robots," which addresses how fair use changes when a robot is doing the reading instead of a human being.

Matwyshyn: One of the interesting questions is whether the process by which A.I. Rowling writes this book is substantially different from a human who would read those 80 books and then write her own. It'll probably depend on how the algorithm does its writing—whether it's grabbing blocks of text wholesale or instead mimicking patterns of writing.

Higgins: The thing I love about copyright law is that the cases are never clean. The really interesting question is what happens when the next person does it—whether someone reverse-engineers A.I. Rowling's program or it goes open source. Let's say there's a 'fan fic algorithm': someone plays with the algorithm, puts the right inputs in, and then writes their own best-selling book with a modified version of A.I. Rowling's program. A.I. Rowling then says, "I wrote the algorithm that generated that." Then you could say it was a derivative work.

After you're done with the Grimmelmann paper, read Annemarie Bridy on works written by robots. There's a compelling argument that the law will have to change to call that "works for hire" [a different copyright classification] because you hired a robot to write it.

Scenario #3: The Minority Report future has arrived. Social networks and advertising campaigns are everywhere, on every surface. All devices talk to you—and they are constantly peppering you with ads. Targeting campaigns and behavioral advertising have gotten so good that companies know exactly when you're going to buy something; it's just a matter of who you buy it from. So ads have gotten hyper-aggressive. A young man who is a casual fan of first-person shooter games and horror movies finds he's constantly being targeted with more and more gory entertainment options, until it's all snuff porn all the time on his TV. His refrigerator and coffee machine won't stop telling him about gun sales. On the self-driving bus, to his embarrassment, it's all ads for hockey masks, ski masks, and biographies of serial killers. His phone keeps presenting him with ads about how he's going to be alone forever, alongside ads for military equipment. When he murders someone, he says the algorithms made him do it.

Hwang: Legally, there's a differentiation between actual cause and proximate cause. The law says you're not responsible for harms whose risk you couldn't have reasonably foreseen. [So the ad companies' lawyers would argue that] the algorithms can't possibly be responsible because you couldn't reasonably foresee that all the ads would interact in a way that makes someone kill. But the twist here is that our knowledge of quantitative social science is getting ever better: we have more data about people, and we can predict what they're going to do next. Our understanding of how large groups of people behave in aggregate has deeper implications. If you know you can show a person certain inputs and they will buy a gun in six months, then there are new questions around what's foreseeable when you're the engineer behind an algorithm. That may cause some kind of new liability to open up as our technologies improve.

Matwyshyn: Well, he'll definitely go to prison. Beyond that, the providers of the advertising will say he consented to receive ads. What we see here is an emergent construct: the various advertisers don't know the ads other advertisers are showing this guy, so they'll also argue they didn't realize they were pushing him over the brink. They'll say, "We had nothing to do with it. We shouldn't be liable."

Higgins: No matter what the outcome is, it will probably strengthen the case for real-life ad blocking. More people will get glasses that blur out logos when they see them. When your ads are causing people to murder, the case against ad blockers gets weaker.

This is based on a panel at our Real Future Fair, held in San Francisco on November 7, 2015.