The internet has democratized video content creation and distribution. Today, over 400 hours of video are uploaded to YouTube every minute.1 While the speed and volume at which content is added to the platform have been key to YouTube's success, they also pose serious questions and risks: How can YouTube ensure that content is appropriate for advertisers and consumers? How can YouTube stop extremist organizations from using the site to spread hate?
One answer is artificial intelligence (AI) and machine learning. YouTube's "anti-abuse machine learning algorithm" combats some of the biggest threats to its business by automatically spotting content deemed inappropriate for the platform, such as child pornography, hate speech, and violence.2
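YouTube has not disclosed how this algorithm works, but the general pattern of such systems can be illustrated with a minimal sketch: train a classifier on human-labeled examples, auto-flag high-confidence violations, and route borderline cases to human reviewers. Everything below, including the training data, thresholds, and function names, is hypothetical rather than YouTube's actual implementation.

```python
# Hypothetical sketch of automated content flagging: fit a classifier on
# human-labeled examples, auto-flag high-confidence violations, and route
# borderline cases to human reviewers. All data and thresholds are
# illustrative; YouTube's real system is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled metadata (1 = violative, 0 = acceptable).
texts = [
    "graphic violence against civilians",
    "extremist recruitment propaganda",
    "cute cat compilation",
    "cooking tutorial pasta",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def triage(text, remove_threshold=0.9, review_threshold=0.5):
    """Return an action for a new upload based on predicted risk."""
    risk = model.predict_proba([text])[0][1]  # probability of "violative"
    if risk >= remove_threshold:
        return "auto-flag for removal"
    if risk >= review_threshold:
        return "send to human reviewer"
    return "allow"

print(triage("violence propaganda"))
```

The key design choice in a triage scheme like this is the pair of thresholds: only the most confident predictions trigger automatic action, while ambiguous uploads fall through to people.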
The need for more robust review of YouTube content came into the public spotlight with an investigation by The Times in February 2017.3 The investigation highlighted how advertisements for global companies were being shown alongside objectionable content; a L'Oreal ad, for example, appeared on a video posted by hate preacher Steven Anderson.4 In response, advertisers, including global consumer-goods giants such as Unilever, have threatened to pull their advertisements from YouTube, posing a significant threat to the platform's key revenue stream.5
YouTube has taken significant action in response. For one, YouTube has committed to building out its content moderation workforce by "bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018."6 This includes hiring full-time artificial intelligence and machine learning specialists as well as experts in "violent extremism, counterterrorism, and human rights."6 In addition, YouTube is leveraging micro-labor sites such as Amazon's Mechanical Turk to train its AI algorithms via human intelligence.7 Workers are asked to watch a piece of content and indicate what it contains.7 These human judgments feed the AI algorithm, teaching it to better review and identify content in the future.7
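The reporting does not describe exactly how worker answers become training labels, but a common pattern for crowd-labeled data is majority voting across several workers, with heavy disagreement escalated to expert reviewers. The sketch below assumes that pattern; the task schema and category names are invented for illustration, not YouTube's actual taxonomy.

```python
# Hypothetical aggregation of micro-labor annotations into training labels.
# Several workers view the same video and report what it contains; a simple
# majority vote produces the consensus label that feeds the ML algorithm.
from collections import Counter

# worker_answers[video_id] -> judgments from different workers (illustrative)
worker_answers = {
    "vid_001": ["violence", "violence", "news_footage"],
    "vid_002": ["acceptable", "acceptable", "acceptable"],
}

def consensus_label(answers, min_agreement=0.6):
    """Majority vote; defer to expert review when workers disagree too much."""
    category, votes = Counter(answers).most_common(1)[0]
    if votes / len(answers) < min_agreement:
        return "escalate_to_expert"
    return category

training_labels = {vid: consensus_label(ans) for vid, ans in worker_answers.items()}
print(training_labels)  # {'vid_001': 'violence', 'vid_002': 'acceptable'}
```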
In April 2018, YouTube published its first quarterly report on content moderation. The results are promising: from October to December 2017, YouTube removed over 8 million violative videos, 6.7 million of which were first flagged for review by machines rather than by humans.6 Of those 6.7 million videos, 76 percent were removed before they ever received a single view.6 To further showcase the progress of its AI-fueled content moderation, YouTube has also rolled out a reporting history dashboard that shows users the status of the videos they have flagged.6
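A quick back-of-the-envelope on the published figures underscores the point: machines flagged roughly 84 percent of all removals, and about 5.1 million videos came down before anyone watched them. The calculation below uses only the numbers YouTube reported.

```python
# Back-of-the-envelope from YouTube's Oct-Dec 2017 transparency figures.
total_removed = 8_000_000    # videos removed in the quarter
machine_flagged = 6_700_000  # of those, first flagged by machines
zero_view_share = 0.76       # share of machine-flagged removed before any view

machine_share = machine_flagged / total_removed
removed_before_view = machine_flagged * zero_view_share

print(f"Machine-flagged share of removals: {machine_share:.0%}")        # ~84%
print(f"Removed before a single view: ~{removed_before_view / 1e6:.1f}M")  # ~5.1M
```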
YouTube has also refined the way it categorizes and sells content to advertisers via the Google Preferred offering, which comprises 5% of total video inventory.8 YouTube positions Google Preferred as its most premium video inventory and highlights that all of it is reviewed by humans before being sold as an ad placement.8 One might infer from this that human review is still superior to YouTube's AI-enabled algorithms.
As YouTube looks to the future, it must carefully consider how its algorithms are taught, including implementing practices that prevent bias in content review. As ACLU writer Jacob Hutt put it, machine learning's role should not be to "aggregate our biases and mechanize them."9 YouTube should also be transparent with advertisers and consumers about what content it allows, as the lines are often blurry. For example, in March 2018, YouTube initially defended its decision to permit the neo-Nazi group Atomwaffen Division to publish videos on its platform, but later removed them after pressure from the Anti-Defamation League.7 Further, the speed at which YouTube's AI can review content will only become more important as live-streaming grows more prevalent.
Several questions remain: What is YouTube's responsibility in monitoring content versus enabling freedom of speech? Will YouTube ever be able to rely entirely on its AI technology, or will human content review always be needed? How else might YouTube leverage AI to enhance its product offering, for example through more targeted video recommendations, review of comments on video posts, or something else entirely? And how are you seeing other digital media platforms, such as Facebook and Twitter, think through similar challenges?
As you think through these questions, consider the iconic photograph below by Nick Ut (1972), known as "The Napalm Girl."10 How might a human interpret this image, as opposed to an AI algorithm? What are the implications?
(Word Count: 735).
1. Nicas, J. (2018). YouTube Subjecting All 'Preferred' Content to Human Review. [online] Available at: https://www.wsj.com/articles/youtube-subjecting-all-preferred-content-to-human-review-1516143751.
2. Meyer, D. (2018). AI Is Now YouTube's Biggest Weapon Against the Spread of Offensive Videos. [online] Available at: http://fortune.com/2018/04/24/youtube-machine-learning-content-removal/.
3. Mostrous, A. (2017). Big brands fund terror through online adverts. [online] Available at: https://www.thetimes.co.uk/article/big-brands-fund-terror-knnxfgb98.
4. Vizard, S. (2017). Google under fire as brands pull advertising and ad industry demands action. [online] Available at: https://www.marketingweek.com/2017/03/17/google-ad-safety/.
5. Marvin, G. (2018). A final call? Unilever threatens to pull ads from platforms swamped with 'toxic' content. [online] Available at: https://marketingland.com/final-call-unilever-threatens-pull-ads-platforms-swamped-toxic-content-234323.
6. YouTube (2018). Official Blog. [online] Available at: https://youtube.googleblog.com/2018/04/more-information-faster-removals-more.html.
7. Matsakis, L. (2018). A Window Into How YouTube Trains AI To Moderate Videos. [online] Available at: https://www.wired.com/story/youtube-mechanical-turk-content-moderation-ai/.
8. Nicas, J. (2018). YouTube Subjecting All 'Preferred' Content to Human Review. [online] Available at: https://www.wsj.com/articles/youtube-subjecting-all-preferred-content-to-human-review-1516143751.
9. Hutt, J. (2018). Why YouTube Shouldn't Over-Rely on Artificial Intelligence to Police Its Platform. [online] Available at: https://www.aclu.org/blog/privacy-technology/internet-privacy/why-youtube-shouldnt-over-rely-artificial-intelligence.
10. Ut, N. (1972). Napalm Girl Photo. [online] Available at: http://www.apimages.com/Collection/Landing/Photographer-Nick-Ut-The-Napalm-Girl-/ebfc0a860aa946ba9e77eb786d46207e.