Google’s EMEA boss on brand safety, policing ‘bad actors’ and regulation
YouTube has been the subject of many brand safety scandals over the last 24 months, with the recent tragedy in New Zealand prompting louder calls for urgent regulation of big tech players.
As YouTube enters another brand safety crisis, parent company Google acknowledges it needs to work “faster” and there is “a lot to do”. However, it maintains YouTube is a “positive resource” and says high-profile headlines that give cause for public concern “sometimes miss proportionality”.
Speaking to Marketing Week at Advertising Week Europe on Monday (18 March), Google’s EMEA president Matt Brittin said that in many brand safety cases on its platforms – particularly those that surfaced around The Times investigation two years ago – only “a tiny handful of impressions” were involved, equating to less than £1 in ad spend.
“High-profile headlines about issues give people concern and, in the case of some of the content on YouTube, they were accurate; what they sometimes miss is proportionality,” Brittin said. “No impression served on a video it shouldn’t be served on is acceptable, but proportionally, in terms of the huge volume of impressions they were getting, these were very small numbers.”
Brittin’s comments come at a time when YouTube is facing heavy scrutiny over its handling of footage of the terrorist attack in New Zealand on Friday. “Tens of thousands” of videos of the attack surfaced on the platform at an “unprecedented” scale and speed, a spokesperson said – many of which remained online for up to 24 hours.
YouTube, which sees 400 hours of content uploaded every minute, also hit the headlines last month after revelations that inappropriate comments were appearing next to videos featuring children. That prompted a number of big brands, such as McDonald’s and Nestlé, to pull advertising spend.
“I don’t think you can ever do enough to make everything as safe as possible. You have to have zero tolerance but that doesn’t mean to say you can achieve zero occurrences in any walk of life – whether that’s crime on the street or bad actors in the technology world,” Brittin said.
We’re not anti-regulation, we’re pro better regulation and up-to-date regulation.
Matt Brittin, Google
“If you look at the violent extremist content, where we started and where we are now, that is huge progress and demonstrates that improved policies, enforcement and people can get us to a better position. There’s a lot to do, we’ve invested a lot. Never satisfied, zero tolerance, but also constantly trying to look at how we can improve on speed.”
YouTube claims 83% of videos that violate its policies are taken down before they are even flagged by human moderators. But in the case of the attack in New Zealand last week, YouTube bypassed human moderators and let artificial intelligence block videos unilaterally – at one point, copies were being uploaded at a rate of one per second.
A number of videos, however, still managed to slip through YouTube’s safety net. In addition, two days before the shooting the terrorist posted 60 links across social media to a 74-page manifesto denouncing Muslims; half of those links were YouTube videos that were still active late on Friday.
Facebook – which is where the original video was live-streamed for 17 minutes – removed 1.5 million videos of the attack within the first 24 hours.
Where YouTube is investing, Brittin said, is in organisations that create counter-radicalisation narratives, and in making sure those narratives are seen on its platform. For example, if YouTube spots somebody looking at terrorist propaganda, it will try to serve content that counters it and shows the other side of the narrative.
It also has technology in place that identifies videos coming from extremist organisations, fingerprints them and shares those fingerprints with other platforms to try to stop the videos spreading further online.
However, as technology becomes ever more sophisticated, especially with the rise of artificial intelligence, so too do these “bad actors”. There is increased interference in politics and elections, massive data breaches, a rise in the funding of misinformation, and online platforms being used to do serious harm.
As such, big technology players like Google and Facebook have drawn increased attention from governments. The UK government last week called for “urgent” regulation of digital advertising in the UK, warning that without intervention self-regulating platforms will increase their control of technology at the expense of people’s privacy and society.
A report into digital advertising says big tech companies have failed to adequately tackle online harm, and that their responses to growing public concern have been piecemeal and inadequate. This has also contributed to a sharp decline in trust in advertising, although Brittin argues the favourability of advertising in the UK has been in decline for a very long time, “since well before the internet came along”.
“It’s incumbent on us to be in a position where people see we have got expertise and a will to make changes that help us get to a better outcome together. It’s a responsibility of the ad industry, a technology player like Google, and policy makers,” Brittin said.
“We’re not anti-regulation, we’re pro better regulation and up-to-date regulation. We don’t want to be policing the internet; it’s up to governments to define the framework we work within.”
Conceding that YouTube will never be “100% brand safe”, he said: “Your publication is probably not 100% brand safe either because errors creep through in editorial publications that can cause problems. There’s a contrast between an open platform where you can upload video content and it’s understood that that’s an open platform, and something which is professionally published by professional journalists and editors.
“We’re not [ungoverned by rules]. YouTube’s clearly got very serious responsibilities just like any publication or press would have. But they’re different responsibilities because they’re different types of property.”