Artificial intelligence can write, but its grammar tends to be lousy. Good news for cybercriminals: Twitter users don’t care. Grammatical shortcuts on micro-blogging sites are the norm, which gives an advantage to automated systems trying to trick users into clicking on malicious links.
Companies are getting better every year at training their employees not to click on malicious links in email, according to a new report by Wombat Security. The same report says companies have been hit with more of these deceptive emails, probably because their staff are clicking less (good job, you guys).
Conning people with enticing mass emails is known as phishing. The classic phishing campaign is the lottery prize or the Nigerian prince scam, in which people are asked to send money in exchange for even greater sums. Cybercriminals have also borrowed the trick to install spyware or ransomware. A phishing campaign sends a generic message to as many people as possible.
Criminals also use spear phishing, crafting a special message specifically for one person. That’s how the internet got hold of all of John Podesta’s emails.
Companies like Wombat Security and PhishMe train employees how to recognize suspicious email messages, and that training seems to be kicking in on a cultural level. So what happens if the bad guys shift their attention to another place where people click on a lot of links, like Twitter?
In a video from DEF CON 24, a hacker conference held last August in Las Vegas, two data scientists from ZeroFOX, a firm that specializes in threats over social media, demo an automated system for writing targeted tweets with malicious links at Twitter users. It worked disturbingly well. Traditionally, spear phishing is a time-intensive activity: a real person has to sit down, research a mark, and then craft a message that fits the target’s interests while sounding plausible. These two proved computers could pull it off at machine speed, as The Atlantic previously reported.
John Seymour and Philip Tully presented SNAP_R, a tool for automating a large number of targeted tweets at Twitter users. Powering spear phishing with artificial intelligence gives miscreants much of the speed of spam but the higher success rate of manual messaging.
The nature of Twitter helps this kind of trick along. People share a lot of information about themselves and what they like on the site. AI can ingest this information and come up with related language to use when crafting tweets that might interest someone.
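To make the idea concrete, here is a deliberately crude sketch of that pipeline (not ZeroFOX’s actual method; the function names, word-frequency approach, and lure template are all hypothetical): rank the words a target uses most in recent tweets, then slot the top interest into a template.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "and", "to", "of", "in", "is", "for", "on", "my", "i"}

def top_topics(tweets, n=3):
    """Crude interest extraction: rank a user's non-stopword tokens by frequency."""
    words = []
    for t in tweets:
        words += [w for w in re.findall(r"[a-z#]+", t.lower()) if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(n)]

def bait_tweet(handle, tweets, link="https://example.com/short"):
    """Fill a lure template with the target's apparent top interest (placeholder link)."""
    topics = top_topics(tweets)
    return f"@{handle} more {topics[0]} stuff like you posted: {link}"

# Toy tweet history standing in for a scraped timeline.
history = ["my cat did the funniest thing", "cat pictures all day", "I love cat videos"]
print(bait_tweet("someuser", history))
```

A real system would lean on far more sophisticated language models, but even this toy version shows why public timelines make targeting cheap.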
Also, a bot’s lousy syntax isn’t as harmful on Twitter as it is in email, because people don’t expect tweets to read like Victorian poetry. “The bar on Twitter is so low to have a tweet people will be interested in,” Seymour said. He showed an example tweet during the talk and pointed out that, pre-Twitter, no one would even have been able to make sense of it.
Twitter also helps by hiding attempted tricks. If a potential victim replies to a malicious account, that reply won’t show up on other people’s timelines unless, for some extremely weird reason, they are following the criminal Twitter bot. That means the exchange won’t draw the attention of the user’s more savvy followers.
Shortened links are also very normal on Twitter, so people are less likely to look askance at a tweet with a link from a URL-shrinking service than they might over email (Twitter actually runs every link through its own shortener). The URL shortener that seemed to work best for sending malicious links? Goo.gl, the Google product.
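On the defensive side, the first step in scrutinizing such a tweet is simply noticing that the link goes through a shortener at all. A minimal check, using an assumed shortlist of common shortener domains (a real filter would be much broader), might look like this:

```python
from urllib.parse import urlparse

# Assumed shortlist of well-known link shorteners; real deployments track many more.
SHORTENER_DOMAINS = {"goo.gl", "bit.ly", "t.co", "tinyurl.com", "ow.ly"}

def is_shortened(url):
    """Flag URLs hosted on a known link-shortening service."""
    host = urlparse(url).netloc.lower()
    return host in SHORTENER_DOMAINS

print(is_shortened("https://goo.gl/abc123"))     # a shortened link
print(is_shortened("https://example.com/page"))  # a direct link
```

A check like this only tells you the destination is hidden, not that it is malicious, which is exactly why shorteners are such convenient cover on Twitter.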
They ran a test attack on 90 users posting with the #cat hashtag (with a live link, but no malicious content), and every link they sent was customized to its target. By matching click data captured by Goo.gl against what Twitter revealed about each user, they estimated that at least 30 percent of the people spear phished clicked. The true figure might have been as high as 67 percent; the uncertainty is large because a lot of bots crawl Twitter clicking links.
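The wide range reflects a simple attribution problem: the lower bound counts only clicks confidently matched to targeted humans, while the upper bound treats every recorded click as a hit. With illustrative numbers (not the team’s raw data), the arithmetic can be sketched like this:

```python
def click_rate_bounds(targets, total_clicks, confirmed_human_clicks):
    """Lower bound: only clicks matched to targeted users count.
    Upper bound: every recorded click (capped at one per target) counts."""
    lower = confirmed_human_clicks / targets
    upper = min(total_clicks, targets) / targets
    return lower, upper

# Hypothetical figures chosen to reproduce the reported 30%-to-67% range for 90 targets.
low, high = click_rate_bounds(targets=90, total_clicks=60, confirmed_human_clicks=27)
print(f"{low:.0%} to {high:.0%}")  # 30% to 67%
```

Bot traffic inflates `total_clicks` without touching `confirmed_human_clicks`, which is why the two bounds sit so far apart.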
They also got a click out of a security pro “who will remain nameless,” Seymour said.
Lastly, they ran a head-to-head test of human versus machine spear phishing. In two hours, both got a lot of clicks, but the automated system got far more.
“We don’t like to think of this as a Twitter vulnerability,” Tully said. Instead, the team built the proof of concept to push internet users to be skeptical of links on all websites, especially social ones.
DEF CON is a prominent conference in the cybersecurity space, and there’s a decent chance Twitter and Google engineers attended. Either company may since have taken steps to better scan the content behind links posted by new accounts. Neither Twitter nor Google replied to requests for comment on this story.
“People think about email very cautiously,” Tully said. He hopes soon they will “think the same way about Twitter,” and every site where people can post links.