How YouTube Works With Police to Weed Out Dangerous Music Videos

YouTube's headquarters in San Bruno, Calif. (Josh Edelson/AFP/Getty Images)

YouTube’s removal of 30 violent music videos at the request of British law enforcement this week wasn’t unusual: the video giant says it has an established process for police around the world to flag potentially dangerous clips.

In particular, the Alphabet-owned unit relies on law enforcement to identify coded language, slang or gestures in music videos that allude to threats of real violence.

YouTube’s latest violent-video removal came on the heels of Spotify’s announcement earlier this month of a controversial policy against promoting music by artists who engage in “hateful conduct,” a rule that has so far only resulted in the de-playlisting of several acts who’ve been accused, but not convicted, of felonies, including R. Kelly.

But YouTube’s longstanding policies are more precise. 

“We have developed policies specifically to help tackle videos related to knife crime in the UK and are continuing to work constructively with experts on this issue,” a YouTube spokeswoman said in an email. “We work with the Metropolitan Police, The Mayor’s Office for Policing and Crime, the Home Office, and community groups to understand this issue and ensure we are able to take action on gang-related content that infringe our Community Guidelines or break the law. We have a dedicated process for the police to flag videos directly to our teams because we often need specialist context from law enforcement to identify real-life threats. Along with others in the UK, we share the deep concern about this issue and do not want our platform used to incite violence.”

YouTube first developed its knife and gang crime policies for the U.K. in September 2008, blocking videos there in which people brandish weapons in a threatening way.