Elena Scotti/FUSION

Sam Kronick has a bunch of rocks arrayed in front of him on a raised desk in his Oakland studio. He’s an artist and his plan is to sketch the rocks, but not with pen and paper. He and his artistic partner Tara Shi are going to do a 3D scan of them so that an artificial intelligence program can map their contours, learn to recognize rocks and then start generating its own craggy depictions.


The project is deceptively simple: trying to get artificial intelligence to make nature art. But it’s also a way of figuring out the limits of computational creativity. Kronick and Shi are using a neural net, a computer program loosely modeled on biological neural systems like the human brain. A given neural net needs to be trained on data; that could be the shapes of many rocks, a massive trove of Google Images, or hundreds of thousands of search terms, depending on what the net will be used for. Then, roughly speaking, it thinks in layers, with each layer working on a different aspect of whatever the network is analyzing (for instance, if a network were learning to identify rocks, one layer might try to pick up on a rock’s texture, another on the different colors of its surface, and so on).
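To make the layered idea concrete, here is a minimal sketch of a forward pass through a tiny two-layer network, written in Python with NumPy. It is a generic illustration, not Kronick and Shi's actual model: the "rocks," the feature sizes, and the random weights are all invented for the example.

```python
import numpy as np

# Toy illustration of a layered network: each layer transforms its input,
# so successive layers can respond to different aspects of the data
# (texture-like patterns in one unit, color-like patterns in another).
# This is a hypothetical sketch, not the model Kronick and Shi built.

rng = np.random.default_rng(0)

def relu(x):
    # Standard nonlinearity: pass positive values, zero out the rest.
    return np.maximum(0, x)

# Pretend each "rock" is a 16-number feature vector (e.g. coarse shape stats).
rocks = rng.normal(size=(4, 16))        # 4 example rocks

# Layer 1: 16 inputs -> 8 hidden units (each unit learns some aspect)
W1 = rng.normal(size=(16, 8)) * 0.1
# Layer 2: 8 hidden units -> 1 output (an overall "rock-ness" score)
W2 = rng.normal(size=(8, 1)) * 0.1

hidden = relu(rocks @ W1)               # the first layer's view of the data
scores = hidden @ W2                    # the second layer combines those views

print(scores.shape)                     # one score per example rock
```

Training would then adjust `W1` and `W2` so the scores match labeled examples; the sketch only shows how data flows through the layers.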

The immediate task has quickly become tricky. When a software program is doing face detection, it’s pretty obvious whether it’s working or not. But asking a computer to make rocks is more complex, says Kronick, because it raises ontological questions. “What is a rock? What matters about a rock?” says Kronick. “Why is that as much of a rock as these are according to this model that we’ve built?”

Kronick and Shi care about this because they hope their art can help explain the mysterious workings of artificial neural nets, which are increasingly being used by tech companies large and small to make their products faster and, for lack of a better term, smarter. They’re among a host of artists incorporating neural networks into their work and, in the course of doing so, helping the public better understand a technology that will increasingly be a part of their lives, used to make decisions about them and the world around them.


The artistic fruit of neural networks is increasingly seeping into the public consciousness. Earlier this year a team in London used deep learning to write a musical. Google put its neural net technology on display with DeepDream, which generates psychedelic images. Neural nets have been used to produce music videos. Given the online courses on how to use neural nets for art, expect to see more.

After decades of relative obscurity, the use of neural nets, also called deep learning, is in vogue. That’s because the more data neural nets have to work with, the more effectively they can be trained, and these days they have a lot of data. You encounter neural nets frequently now, possibly without realizing it, when you do a search in Google Photos, use Skype’s translation function, use Facebook to tag yourself and friends in photos, or summon Siri or Cortana.



This is exciting and nerve-wracking at once, since the same technology that gives us trippy DeepDream versions of photos also powers hyper-accurate facial recognition that could effectively mean the end of public privacy. There’s also the added concern that, as Motherboard wrote earlier this year, "when it goes wrong, we won't be able to ask it why." Although the math behind neural nets is comparatively simple, there’s a lot of it, and taken together it can only really be followed by the machines doing it, meaning even the technologists who build neural nets don't fully understand how the networks arrive at the decisions they make.

DeepMind, the Google-owned company that used neural nets to teach a computer to beat a human at the fantastically complex game of Go, has been secretive about its plans for the vast trove of medical patient data it has been granted access to in the U.K. Facebook has been using neural nets to mine the thousands of posts users write every second. And there’s no shortage of literature from the past two decades suggesting military and policing applications for neural nets. The amount of consumer data available through commercial products has been a goldmine for scientists in this area, which is why the questions asked about “big data” a few years ago are being asked once more, this time about the broad category of artificial intelligence into which neural nets fall: questions about how much we should trust neural nets and the inputs going into them.

Kronick is concerned about the increased use of neural nets. Those designing them often believe that “if you give it good data, if you are well intentioned in your design of these systems, that you'll get good results out” and that “the effects on the world will be positive and valuable.” But in his own use so far he’s discovered that the technology walks a fine line between being amazing and being beautifully broken.

In March, Kronick and Shi released another neural net project, AI*Scry, an app they cheekily dubbed “a remote viewing application powered by an alien psyche.” AI*Scry used the NeuralTalk net developed at Stanford, and a dataset, Microsoft’s COCO (Common Objects in Context), to parse whatever you pointed your phone at and describe it back to you. The app, as many pointed out, was often wrong, and it included a setting for how much abstraction it would take with its guesses. Set to a fairly high level of abstraction and pointed at the teal couch I’m sitting on as I write this, it saw “a picture of a wrist which are riding a chair,” as well as “the view out from a train leaving him,” and “a coca-cola sitting on a table using a computer.”

Ethan Chiel/AIScry
AI*Scry interprets the author's cat.

The same thing happened when artist Kyle McDonald took a neural net on a videotaped walk around Amsterdam. The net got some things right, but it also made funny minor mistakes, like seeing a deli case of sandwiches instead of a box of donuts, or interpreting a person walking as someone riding a skateboard.


These skewed interpretations of the world are what the art is meant to lay bare. The nets reflect their creators’ biases because they’re trained on images the creators chose, so the tools can identify objects in some situations but are laughably and/or poetically bad at figuring out what is going on in others. It’s the same problem we see in the real world, though with no laughter involved, as when Google Photos mistook black users for gorillas in 2015, likely because it wasn’t trained with enough photos of black people, perhaps because it relied on photos provided by Google’s majority-white employees.

“There’s an opportunity to use art to break apart what’s going on,” Kronick told me.


“[T]here’s this whole chain of human designers, human laborers, human workers that create this system, and they bring along with them all these choices of what’s going to be included versus excluded in this data,” Kronick told TechCrunch when AI*Scry first launched.

In Spook Country, science fiction writer William Gibson writes, “That’s something that tends to happen with new technologies generally: The most interesting applications turn up on a battlefield, or in a gallery.” It’s a sentiment that Tim Hwang, an affiliate at the Data & Society research institute, who also advises Google on artificial intelligence and policy, echoed. Hwang compared machine learning art to the early computer art that emerged out of military spending and research on computer graphics.


Hwang said machine learning researchers are glad to see this.

“Everybody’s really excited about the idea that it could become part of culture as well, because for so long it has been a technical production tool,” said Hwang. “There’s an increasing recognition in the deep learning community that, regardless of the purpose of the technology, it’s probably increasingly important to explain the technology in clear terms, and make it more accessible to a broader audience.”

Artists are illuminating how, and why, these systems are being put to work. And with the help of simple but freely available and easily modifiable machine learning programs, like those distributed by Andrej Karpathy or artist-programmer Gene Kogan, it’s become easier to do so.


Images are only one facet of what artists have begun to play around with when it comes to neural nets. Writers, particularly of science fiction, have had a field day with the possibilities. In early June a short film titled “Sunspring” debuted on Ars Technica. The movie garnered a lot of attention because it was written by an AI dubbed Benjamin, which had been trained on a body of science-fiction screenplays. (It also probably didn’t hurt that Thomas Middleditch, star of HBO’s Silicon Valley, was in the 9-minute short.)

Sunspring features a gold-jacketed Middleditch and two other characters idling around a vaguely futuristic base reciting their AI-written lines, by turns ominously and lightheartedly. The movie’s charming, but most of that charm is due to the actors, not the often incoherent AI-written dialogue. The movie has lines that make sense on their own (“In a future with mass unemployment, young people are forced to sell blood. That’s something I can do”), but the script as a whole is effectively gobbledygook. Try listening to it without watching the screen, or just reading it off the page, and the magic is all but gone.

A portion of Sunspring's script.

The same is true of the neural net-written musical from earlier this year, “Beyond the Fence,” about anti-war activists in the 1980s. Without performers the lyrics aren’t necessarily fun to read, even as odd bot poetry:



My wife is grand

This murder falls the miracles

The hole is better and flying

Plus a carbour little time should be with me

I know it’s been passing in the sorrow

That gap between what an AI produces and what humans create is appealing to Robin Sloan, a science-fiction author. He’s built a program for himself that involves a little more collaboration with the neural net: an AI writing companion based on a recurrent neural network. It’s a plugin for a word processor, trained on a corpus of science-fiction literature, which suggests a line after each line the human author types. Sloan is using it to help provide the voice for a character in a new story.

“The whole thing will be readable and when you come to this character, it will be weird,” said Sloan. “It will be fun to know that this one part of the story was rigorously generated in a strange way.”
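The call-and-response interaction Sloan describes can be sketched in a few lines of Python. His plugin uses a recurrent neural network; as a much simpler stand-in, this hypothetical example uses a word-level Markov chain and a four-line invented "corpus," which is far cruder but shows the same loop: the author types a line, the machine answers with a generated one.

```python
import random
from collections import defaultdict

# Tiny invented training text standing in for Sloan's sci-fi corpus.
CORPUS = """the ship drifted past the dead moon
the moon watched the ship with cold patience
cold patience was all the crew had left
the crew dreamed of earth and its green noise"""

def build_chain(text):
    # Record, for each word, which words have followed it in the corpus.
    chain = defaultdict(list)
    for line in text.splitlines():
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def respond(prompt, chain, length=8, seed=0):
    # Generate a "response" line, starting from the last word the
    # author typed if the model has seen it, otherwise a random word.
    random.seed(seed)
    word = prompt.split()[-1]
    if word not in chain:
        word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

chain = build_chain(CORPUS)
print(respond("we steered toward the moon", chain))
```

A real recurrent net conditions on the whole line rather than one word, which is what gives Sloan's companion its longer-range, stranger coherence; the Markov version only ever remembers the previous word.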


Left to their own devices, such programs usually produce the bizarre, such as nightmarish Thanksgiving recipes, but with human massaging you get the context that smooths the edges and makes the result coherent.

Sloan, who lives in the Bay Area, got interested in working with neural nets after talking to a friend from college. "He was so over the moon about the potential of these techniques and so convinced that they are of epochal importance," said Sloan. "Not merely a cool, interesting, new thing, but a big, big deal.”

Initially, Sloan trained the neural net on the works of Shakespeare before turning to sci-fi. And then it was all about fine-tuning. The program didn't know which parts it should be pulling out; it couldn't recognize what was "cool and interesting," said Sloan, so he kept re-optimizing it for "weird, interesting, [but] still coherent."



This sits in stark contrast to how someone at Google or Facebook working with neural nets on a large scale might set up their system, but Sloan wants his neural net to have a particular voice. The characters he writes with the neural net-powered plugin are still going to say what he wants, so as not to derail a short story or novel. The plugin just lends the language an uncanny flavor that, hopefully, makes the character's dialogue feel not quite human. But it’s meant to feel not human in conjunction with his own particularly humanistic writing. When he first publicly wrote about the bot, he explained that “[t]he animating ideas here are augmentation; partnership; call and response.”

That’s not to say there isn’t a place for neural network-generated art on its own, but Sloan raised a fair point: In those cases the art itself is often not the point.

“I see some people pursuing this path where the output is okay but the real product is ‘look what I did,’ the real product is the way it was done, and so it becomes this sort of conceptual art/gimmick sort of situation.”


Deep learning is exciting. It’s bringing AI closer to what we've long been promised by science fiction, where computers gain a human-like intelligence. Without most people noticing, it's made the products they use better or quicker. But it's also hard to understand and, as Kate Crawford and Meredith Whittaker recently wrote, "hard to see," though it's "embedded in a wide array of social institutions, from influencing who is released from jail to shaping the news we see."

The artists using neural nets are making the technology visible. Though the neural nets they employ are much simpler than the ones being worked into Facebook or Google's products, they still gesture at the problems of deep learning systems left to their own devices. But even more so, they seem to teach another lesson: When a human is kept in the loop, what emerges is much more beautiful.

Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at ethan.chiel@fusion.net