A friend recently came to me with a problem: He was chatting with a sexy blond woman on Tinder and couldn’t tell if she was a real person.
For the past two days he had been talking with her under the assumption she was a carbon-based life form, but then he started to question her responses. It’s not that she was spamming him with promotional links or trying to get him onto a camgirl site—but her answers were curt, plus she asked a lot of questions. She also provided few details about herself and said things like “Wanna cuddle?” out of nowhere. She was either a really sophisticated bot or a really uninteresting human.
For the uninitiated, chatbots are computer programs that have been designed to simulate conversation with humans—and they’re everywhere. Bots now account for 61 percent of web traffic, meaning so many are crawling around the internet that they generate more traffic than humans do. Odds are you’ve interacted with one, perhaps while complaining to IBM’s customer service department or while tweeting at someone, possibly without even realizing it.
For many people, however, the primary experience with bots comes from Tinder and other online dating sites, especially for men seeking women. These sites have long had a problem with bots posing as humans—beautiful, friendly, flirtatious humans, complete with photos and profiles.
Some dating sites employ bots to make their user numbers look higher, or to make their male-female ratio seem more balanced, Isaac Silverman, the founder of the online dating app Teased, explained to me. Or, on the flip side, bot creators might heavily target these sites thanks to the volume of people they can reach. “You have apps like Tinder, where you are unlimited on swipes and matches (at least with Tinder Plus today). These would seem likely very bot-vulnerable, because a bot can like a large number of users and generate a large number of matches,” he said.
Once you match with a bot on a dating site, it might try to sell you an online game (see the Castle Clash fiasco), lure you to a pornographic site, or generally convince you to sign up for something you probably don’t want or need. Usually the bots are pretty obvious in their endeavors. But what about the bots that are not? With no sales pitch and definitely no “Hey, I’m a bot!” responses, would you be able to tell the difference?
You may fancy yourself savvy, but even the savviest of daters have fallen victim to bots on occasion. Consider an incident that happened last year, in which a man on OKCupid decided to feed all the chats he received from his female matches into Cleverbot, one of the more advanced online chatbots. This meant that “his” responses were really Cleverbot’s responses. The goal? To see if women would know they were talking to a robot.
The man kept a log of each conversation on his blog, “Girls Who Date Computers.” Naturally, the media loved the blog. (Women, not so much.) While using Cleverbot as a stand-in didn’t find him a mate, judging from the women’s responses, many did not suspect “he” was a bot; they just thought he was kind of a weird guy.
If you take the time to read through all his conversations (as I did), it’s pretty tough to tell that a bot is responding and not a real person—thanks, in part, to the nature of online dating exchanges. When chatting with new matches, people tend to use short phrases like “lol” or “tell me more” and random get-to-know-you questions like “What’s your favorite city?” and “What did you do today?”—all phrases that bots posing as humans handle well.
“Most chatbots work on what is called ‘pattern matching,'” Steve Worswick told me. He’s the creator of Mitsuku, the award-winning chatbot that took home the coveted Loebner Prize in 2013, given to the bot deemed the most human-like. “This means that the bot looks for keywords in the user’s input and then searches a database of human coded responses to find the most suitable answer for the input.”
So all the “Hello. How are you?” and “What’s your favorite movie?” questions we ask on dating sites are pretty simple for a well-built chatbot to respond to. For instance, when I asked Mitsuku what her favorite movie is—she’s accessible to anyone online—she responded, “My favorite movie is Terminator, have you seen it?” When I responded “no,” she said, “I would recommend you check it out.”
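The “pattern matching” Worswick describes can be sketched in a few lines: scan the input for keywords and return a hand-written reply, falling back to a canned deflection when nothing matches. The rules below are invented for illustration; a real bot like Mitsuku draws on a database of many thousands of hand-coded responses.

```python
# A minimal sketch of keyword pattern matching, the core technique behind
# many chatbots: look for a known keyword in the user's input and return
# a pre-written, human-sounding reply. Rules and replies are illustrative.

RULES = [
    ("favorite movie", "My favorite movie is Terminator, have you seen it?"),
    ("how are you", "I'm doing great, thanks! How about you?"),
    ("hello", "Hi there! What's your name?"),
]

# Used when no rule matches -- deflect and change the subject.
FALLBACK = "Cool. What's your favorite ice cream?"

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return FALLBACK

print(reply("What's your favorite movie?"))
# -> My favorite movie is Terminator, have you seen it?
```

The same get-to-know-you questions people ask every new match are exactly the inputs this kind of rule table covers best, which is why dating-site small talk is such friendly territory for bots.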
It can take a while for a bot to trip up and reveal its non-human self, since online dating conversations between actual humans tend to be superficial at the beginning regardless.
No one knows this better than Robert Epstein, a Harvard-educated psychologist and expert on artificial intelligence who was duped by a chatbot years ago, in the days before Tinder. Epstein had been “dating” a woman he met through an online dating service for months, under the impression that she was a Russian immigrant (which explained her sometimes poor English). Eventually, however, he grew suspicious about the complete lack of phone calls and the fact that no progress was being made on actually meeting in person. Perhaps she wasn’t real, he thought, but how can you ask a robot who might be a human if she’s really a robot and not sound like a jerk?
So he tried this instead. “I tricked the Russian chatbot by typing random alphabet letters—one of the simplest tricks,” Epstein told me. “She/it replied as if I had sent real speech.”
Specifically, he sent a sentence that read “asdf;kj as;kj I;jkj;j ;kasdkljk ;klkj ‘klasdfk; asjdfkj. With love, /Robert.” The bot, not understanding the first part, simply ignored it and responded with more details about her family.
Other chatbots will use similar tactics when random letters are introduced. For instance, if you say, “I love jkhfkdjh,” the bot might respond, “What do you love about jfhfkdjh?” simply repeating the phrase back to you. A human would likely respond, “WTF?”
This use of nonsensical English is one way to test a bot—and if it turns out you’re talking to a human, you can always follow with, “oops, typo!” But some bots have been programmed to work around this trick by simply responding “What?” to statements they don’t understand. Or changing the subject—a lot. For instance, programmers can wire a bot so that if it doesn’t understand something, it simply responds with “Cool” and inserts a non-sequitur like, “What’s your favorite ice cream?”
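The echo-back and subject-change behaviors described above can be sketched together. The wildcard pattern and the non-sequitur here are illustrative, not taken from any real bot: a template rule parrots the unknown word back, and anything else falls through to a deflection.

```python
import re

# Hypothetical sketch of how a bot survives gibberish: a wildcard rule
# echoes unrecognized text back inside a question, and inputs that match
# nothing at all trigger a deflection plus a canned non-sequitur.

NON_SEQUITUR = "Cool. What's your favorite ice cream?"

def reply(user_input: str) -> str:
    # Template rule: "I love X" -> "What do you love about X?"
    # The bot never needs to understand X; it just captures and repeats it.
    m = re.match(r"i love (.+)", user_input.strip().lower().rstrip("."))
    if m:
        return f"What do you love about {m.group(1)}?"
    # No rule matched: deflect and change the subject.
    return NON_SEQUITUR

print(reply("I love jkhfkdjh"))  # -> What do you love about jkhfkdjh?
print(reply("asdf;kj as;kj"))    # -> Cool. What's your favorite ice cream?
```

Note that both replies are grammatical and vaguely on-topic, which is why the gibberish test works better when you watch for a pattern of deflections rather than relying on a single exchange.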
Worswick says this type of maneuver requires a lot of leg work from the programmer, writing eons of code and teaching the bot how to respond to millions of scenarios. He himself has been working on Mitsuku for over a decade to make her as sophisticated as she is, “which involves checking the logs of conversations she has had with people and refining the answers where necessary,” he said. He still works on her for an hour every night.
Making bots even more indistinguishable from humans is their ability to learn and remember user details like name, age, location, and likes. “This helps the conversation to flow better, as the bot can talk about where you live or drop things into the conversation like, ‘How is your sister Susan today?'” said Worswick. “This gives a more personal touch and keeps the user talking to the bot for longer.”
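The user-memory trick Worswick mentions can be sketched as a small fact store: extract details stated in predictable forms, then drop them back into the conversation later. The extraction patterns below are invented for illustration; real bots use far richer rule sets.

```python
import re

# Illustrative sketch: remember simple user facts and reuse them later
# to give the conversation a "personal touch."

memory: dict[str, str] = {}

def observe(user_input: str) -> None:
    """Capture facts stated in simple, predictable phrasings."""
    patterns = {
        "name": r"my name is (\w+)",
        "sister": r"my sister (\w+)",
        "likes": r"i (?:love|like) (\w+)",
    }
    text = user_input.lower()
    for key, pattern in patterns.items():
        m = re.search(pattern, text)
        if m:
            memory[key] = m.group(1)

def personal_touch() -> str:
    """Drop a remembered detail back into the conversation."""
    if "sister" in memory:
        return f"How is your sister {memory['sister'].capitalize()} today?"
    if "likes" in memory:
        return f"Tell me more about {memory['likes']}!"
    return "So, what did you do today?"

observe("My name is Alex and my sister Susan says hi")
print(personal_touch())  # -> How is your sister Susan today?
```

A few remembered details like these go a long way toward making a scripted conversation feel like it has a person, and a shared history, behind it.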
Imagine chatting online with someone who asks how your sister is doing, remembers you love anime, and can’t wait to show you their vacation pics from Greece, knowing you’ve dreamed of going there. Would you know it was a bot? Even if you ask, the bot might deny it.
This “female” bot on Tinder was adamant it was not a bot—”fake? uhh no”—until it malfunctioned and repeated the same answer.
No, asking doesn’t work if the bot has been programmed to deny its robot origins. Instead, like Epstein’s gibberish trick, one must outsmart the bot to discover its true identity.
One way to do this, according to Worswick, is to ask it common-sense questions like, “Can I fit a car in a shoe? Is a wooden chair edible? Is a cat bigger than a mountain? Would it hurt if I stabbed you with a towel?” While any adult human could answer these, a bot gets confused, not truly grasping the concepts. When I asked Cleverbot “Is a wooden chair edible?” it responded, “How does it smell?” Clearly a deflection. Enough deflections and you’ll start to realize your date may not be real.
Another tactic is to ask the bot to spell words backwards, or to use a lot of pronouns like “it.” “Pronouns are often quite difficult for chatbots,” Worswick told me. “Ask a chatbot about what city it lives in, and then ask, ‘What is your favorite part of it?’ The bot has to understand that ‘it’ means the city and has to have a response about its favorite part.”
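Worswick’s pronoun test exposes a real gap: to answer “What is your favorite part of it?” the bot must have tracked what “it” refers to. A minimal sketch of that topic tracking follows; the city and replies are illustrative, not Mitsuku’s actual script, and a bot without the tracking step would fall back to a deflection instead.

```python
# Illustrative sketch of why pronouns are hard for simple bots: answering
# "What is your favorite part of it?" requires remembering the most
# recently mentioned topic. Bots that skip this step can only deflect.

last_topic = None  # the most recent conversational topic, if any

def reply(user_input: str) -> str:
    global last_topic
    text = user_input.lower()
    if "what city" in text:
        last_topic = "Leeds"  # the bot's scripted hometown (illustrative)
        return "I live in Leeds."
    if "favorite part of it" in text:
        if last_topic is None:
            # "it" can't be resolved -- deflect, as a naive bot would.
            return "Cool. What's your favorite ice cream?"
        return f"My favorite part of {last_topic} is the city centre."
    return "Tell me more."

print(reply("What city do you live in?"))
print(reply("What is your favorite part of it?"))
# -> I live in Leeds.
# -> My favorite part of Leeds is the city centre.
```

Even this toy version only handles one pronoun in one fixed pattern; resolving “it” in open conversation is one of the problems that still separates chatbots from people.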
As bots become more advanced, online daters will have a harder and harder time identifying them. Last year, a bot was able to pass the Turing Test—a test that measures a machine’s ability to exhibit intelligent behavior indistinguishable from a human—for the first time in history. Known as “Eugene,” the bot effectively convinced over a third of the judges that he was a real human. Granted, he did so by pretending to be a 13-year-old Ukrainian boy, to help explain away grammar mistakes. But still.
Meanwhile, Epstein tried his hand at online dating again after his incident with “the Russian” and ran into another “female” bot. He chatted with her for a bit before the programmer himself cut off the conversation. “The programmer quickly realized who I was and confessed his deception (which he also made me promise not to reveal),” he told me. “He was very proud of his creation.”
As for my friend, when he began pushing to meet up with his sexy blond match, she stopped responding. He’ll never know whether she was a bot or not. But from now on he’s going to make all his Tinder matches spell “I am not a robot” backwards, just to be sure.