Leveraging Machine Learning to Reduce Spam on Twitter

Engineers at Twitter are using machine learning to reduce the incidence of spam on the social networking site.

Twitter, a popular online social networking site, facilitates the communication of ideas among individuals, companies, and organizations. Founded in 2006, Twitter now has over 300 million active users worldwide [1]. Users communicate by exchanging “tweets,” short messages (280 characters or fewer) capturing ideas, news, and reactions. The platform processes roughly 6,000 tweets per second, or nearly 200 billion per year.
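
A quick back-of-the-envelope check confirms that the per-second and per-year figures are consistent (assuming a steady 6,000 tweets per second around the clock):

```python
# Sanity check of Twitter's stated throughput, assuming a constant
# rate of 6,000 tweets per second.
tweets_per_second = 6_000
seconds_per_year = 60 * 60 * 24 * 365  # 31,536,000

tweets_per_year = tweets_per_second * seconds_per_year
print(f"{tweets_per_year:,} tweets per year")  # 189,216,000,000 -> nearly 200 billion
```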

Given such a high volume of messages, content is largely unfiltered. In fact, Twitter’s business model is built upon real-time communication, so any review system that delayed posting would run counter to the site’s purpose. This lack of filtering has led to a rise in spam and automatically generated content. Before implementing automated detection mechanisms, Twitter relied on users reporting incidents of suspected spam: users submit thousands of spam reports daily, identifying sources of irrelevant or inappropriate content with the intention of disabling spam accounts.

As Twitter notes, “inauthentic accounts, spam, and malicious automation disrupt everyone’s experience on Twitter” [2]. Fighting spam has emerged as a top priority because, as with similar internet-based companies, Twitter’s corporate valuation depends on the size and engagement of its user base, and dissatisfied users may leave the site or become less active. Advances in machine learning have given Twitter an opportunity to combine computing power with known user trends to identify and automatically disable accounts contributing spam, and the company has invested accordingly. For a company whose “product” is a robust base of news and thoughts, spam directly threatens the value proposition, and action against it is necessary for survival and success.

In a recent blog post, Twitter reaffirmed its commitment to ensuring that shared content is reliable, trustworthy, and relevant, and explained its decision to invest in machine learning technology that automatically identifies and takes action against accounts producing spam [2]. This approach fights spam proactively, rather than waiting to receive and verify suspected reports of spam submitted by users. Since implementation, Twitter has identified roughly three times as many potentially spammy accounts (nearly 10 million per week as of May 2018) [2]. Consequently, user-submitted spam reports have declined from about 25,000 per day in March to about 17,000 per day in May.

Research conducted by Alex Hai Wang at Pennsylvania State University describes the mechanism behind this style of spam detection. His paper, “Detecting Spam Bots in Online Social Networking Sites: A Machine Learning Approach,” highlights the numerical features that detection algorithms can use to identify “spammy” accounts: a classifier extracts the number of, and relationships among, a user’s friends and followers to evaluate the account’s authenticity. According to Wang, this machine learning approach is “efficient and accurate” at identifying spam bots on Twitter [3].
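
To make this concrete, the sketch below trains a toy classifier on graph-based features of the kind Wang describes. The feature set (follower count, friend count, and their ratio) and the handful of training examples are illustrative assumptions, not Twitter’s production pipeline; naive Bayes is one of the standard classifiers evaluated in this line of research:

```python
# Illustrative spam-bot classifier on graph-based features, in the
# spirit of Wang [3]. Features and training data are hypothetical.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [followers, friends (accounts followed), follower/friend ratio].
# Spam bots tend to follow many accounts while attracting few followers.
X_train = np.array([
    [12,   2000, 12 / 2000],    # bot-like: follows many, few follow back
    [8,    1500, 8 / 1500],     # bot-like
    [450,  300,  450 / 300],    # human-like: balanced network
    [1200, 900,  1200 / 900],   # human-like
])
y_train = np.array([1, 1, 0, 0])  # 1 = spam bot, 0 = legitimate user

model = GaussianNB().fit(X_train, y_train)

candidate = np.array([[20, 3000, 20 / 3000]])  # a new account to score
print("spam bot" if model.predict(candidate)[0] == 1 else "legitimate")
```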

Twitter has used this machine learning approach since 2017 and continues to see positive results. In the near term, Twitter expects to keep calibrating and improving its algorithms to sharpen spam detection. According to a research report from Deakin University in Australia, Twitter’s current methods and techniques achieve an accuracy rate of approximately 80% [4]. That figure means roughly one in five classifications is wrong: some legitimate accounts are labeled as spam and cut off from contributing content, while some spam accounts slip through undetected. Over the next few years, it is reasonable to expect that Twitter will close this accuracy gap and push its machine learning algorithms toward 100%.
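
It is worth separating overall accuracy from the false-positive rate, which is what actually determines how many legitimate users are wrongly suspended. The confusion-matrix counts below are invented for illustration:

```python
# Hypothetical confusion matrix for a spam classifier, illustrating
# why 80% accuracy is not the same as a 20% false-positive rate.
true_pos  = 700   # spam accounts correctly flagged
false_neg = 100   # spam accounts missed
true_neg  = 900   # legitimate accounts correctly left alone
false_pos = 300   # legitimate accounts wrongly flagged

total = true_pos + false_neg + true_neg + false_pos
accuracy = (true_pos + true_neg) / total                  # 0.80
false_positive_rate = false_pos / (false_pos + true_neg)  # 0.25

print(f"accuracy:            {accuracy:.0%}")             # 80%
print(f"false-positive rate: {false_positive_rate:.0%}")  # 25%
```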

Creating a Twitter account involves specifying a name and a phone number or e-mail address, which is then verified via a code sent to the user. Sophisticated bots can generate fake phone numbers and e-mail addresses and successfully create Twitter accounts. As a suggestion, Twitter could adopt Google’s reCAPTCHA technology, which requires users to decipher blurry words or phrases and enter them before proceeding with account creation. While the hurdle is not insurmountable, unsophisticated bots struggle to pass the reCAPTCHA step and thus would be prevented from creating spam accounts [5].
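
If Twitter adopted this suggestion, the server-side portion of the signup flow might look roughly like the sketch below. It calls Google’s documented siteverify endpoint; the secret key and the surrounding signup handler are placeholders, not Twitter’s actual code:

```python
# Sketch of server-side reCAPTCHA verification during account signup.
# Uses Google's documented siteverify endpoint; RECAPTCHA_SECRET and
# create_account() are hypothetical placeholders.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder; issued by Google

def human_passed_captcha(captcha_response: str) -> bool:
    """Return True if Google confirms the CAPTCHA was solved."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": captcha_response},
        timeout=5,
    )
    return resp.json().get("success", False)

def create_account(name: str, email: str, captcha_response: str) -> None:
    if not human_passed_captcha(captcha_response):
        raise PermissionError("CAPTCHA failed: likely an automated signup")
    # ... continue with the normal verification-code flow ...
    print(f"Account created for {name} <{email}>")
```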

Question for further discussion: What is the customer impact of falsely identifying a Twitter account as spam and suspending it?

Word Count: 702

Sources

[1] Twitter, Inc. Annual Report 2018 (February 2018). http://www.viewproxy.com/Twitter/2018/AnnualReport2017.pdf. Accessed November 13, 2018.

[2] Twitter, Inc. “How Twitter is Fighting Spam and Malicious Automation” (June 2018). https://blog.twitter.com/official/en_us/topics/company/2018/how-twitter-is-fighting-spam-and-malicious-automation.html. Accessed November 13, 2018.

[3] Wang, Alex Hai. “Detecting Spam Bots in Online Social Networking Sites: A Machine Learning Approach” (June 2010). Data and Applications Security and Privacy XXIV. Accessed November 13, 2018.

[4] Wu, Tingmin, et al. “Twitter Spam Detection Based on Deep Learning” (February 2017). Australasian Computer Science Week. Accessed November 13, 2018.

[5] Beede, Rodney. “Analysis of reCAPTCHA Effectiveness” (December 2010). Computer Vision. Accessed November 13, 2018.


Student comments on Leveraging Machine Learning to Reduce Spam on Twitter

  1. With a 20% chance that an account is incorrectly flagged, there is significant risk of shutting down real consumer accounts by mistake. Until the machine learning model that Twitter uses hits an acceptable accuracy, which I imagine would be an error rate in the ballpark of 1 per 100,000 (giving roughly 3,000 wrongly flagged accounts), Twitter will need to augment its machine learning model by manually back-checking these accounts. It would be far too damaging to the company’s reputation if a fifth of its user base were labeled as fraudulent.

  2. Thanks for the interesting insights! The article certainly highlights the interesting position that Twitter is in. On one hand, it must continue to live its brand proposition of instant & condensed communication. However, in an era of increasing spam, it must also be the company’s priority to substantially monitor its content. Using machine learning makes sense. However, I was struck that its accuracy rate is only 80%. I worry that as “real” accounts are labeled as “fake,” Twitter could lose users and negatively impact its entire community, which is already the subject of increasing backlash. I would hope that the author’s premise is correct and that, as this is a new technology, there is much room to grow in terms of accurate “fake” account identification. At the end of the day, though, I still think Twitter will need its user base to help police the site, as machine learning can only go so far while walking the delicate line of not mislabeling too many accounts.

  3. Machine learning can be a tool to mass-scan suspicious posts and accounts, but it needs, at least in today’s world, a human touch in the final QA process to make sure everything is accurate. As a social platform, DAU/MAU (daily active users/monthly active users) are key metrics for Twitter. A 20% chance of false identification would heavily and negatively impact the consumer experience, with severe consequences that I believe Twitter cannot yet afford.

  4. The 20% incidence of incorrect spam detection is extremely inaccurate and unreliable; the potential customer churn and negative press could bring back the Twitter short-sellers that doomed the stock for the past few years. The initiative does sound promising, but better accuracy and some human intervention at a particular point are necessary to prevent a media scandal. I like your suggestion on alternative account authentication; if these flags are raised by machine learning tools or bots, one could re-authenticate accounts using those means rather than outright closing suspected fake accounts.

  5. I agree with you that it would just be easier to stop bots from creating accounts than it is to find them. I wonder why they aren’t already using reCAPTCHA (if they aren’t)? It seems like a widely-adopted technology at this point.

    This article also made me think about the uses of ML to filter content on other social media sites. Can Facebook use it to filter out fake news?

  6. Thank you for sharing! I continue to be fascinated by the uses of machine learning on these social media platforms in light of reporting in recent years on fake news. This made me think about the research I’ve seen from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) about using AI to determine accuracy of sources and political bias (see https://venturebeat.com/2018/10/03/mit-csails-ai-can-detect-fake-news-and-political-bias/). The researchers are trying to create an open source dataset with bias scores that could be eventually used on platforms like Facebook and Twitter to detect questionable content. To your question about how to deal with the accuracy rate, I wonder if they can tailor the process by probabilistic tranches, and use a combination of human intervention and machine learning, where machine learning essentially reduces the burden and staff time for near-certain cases. For instance, the algorithms can rank accounts or material by the level of uncertainty, so that human staffers can provide a second check on a smaller pool of accounts and reduce the limitations of the AI solution in isolation.

  7. Thanks for this very interesting article.

    I am, however, extremely surprised that with a 20% mistake rate, the company is not involving any human judgement to double-check. Twitter is extremely important for some individuals, and it feels like shutting down an account without any human judgement causes real damage to the account owner. Considering this, what legal issues does the company face in doing this? Can a user pursue Twitter in court? How responsive is the company in dealing with those mistakes? And most importantly, who is held responsible for the mistakes of the algorithm?

  8. It’s great to hear that Twitter is relying on machine learning to identify spammy accounts, and it particularly makes me glad to hear that they are relying on more numeric indicators (e.g., number of connections, which users they are connected to) as markers for identification, rather than natural language processing of the tweets themselves which would be subject to far more bias. However, as pointed out by a commenter above, the 80% accuracy rate is concerning. That’s not to say that every algorithm should be expected to be totally accurate, but it does make me wonder what kind of input is inserted into the process of evaluation alongside the algorithm for identifying and removing spam accounts.
