Brand Blunder #86: Google gaffe rips band aid off brand safety claims

Imagine you’re the world’s biggest provider of advertising media.  In fact, 90% of your company’s $75B revenue comes from advertisers who buy online ad space from you.

But your customers aren’t happy.  You’ve sometimes placed their ads on unsavory websites… sites fuelled by hate, porn, fake news.  Even as you’ve raked in $68B a year in advertising, you’ve not invested anywhere near enough to protect their expensive brands from the darker forces of the web.

With your customers clamoring for “brand safety”, you come up with a quick and easy solution.  You knock together a bunch of computer algorithms and dashboard controls which you promise will “give brands more control over where their ads appear”.

Phew!  Problem solved, right…?   Let the advertising cheques flow freely once more!

Well, sure.  As long as those new computer algorithms actually work.

Last week Google’s Brand Safety algorithms flagged as spam a Google ad for Google’s own Chromebook Pixel.

The confidence-sapping faux pas comes after weeks of well-publicised blunders by the very algorithms Google promised would automatically flag troublesome videos on YouTube.  For example, many content creators have had their videos demonetized for no apparent reason.  Disturbing content targeting kids continues to appear.  Etc, etc, etc.

An embarrassed Google moved quickly to fix its own Chromebook ad.  However, screenshots and a video from The Next Web continue to show the incorrectly-flagged, now-deleted video.

And the bigger issue of brand safety for Google’s advertisers remains.

As influential tech site The Verge said, “it’s particularly telling about whatever is happening with YouTube’s algorithm that even official Google content is getting removed for violating YouTube’s policy on spam, deceptive practices, and scams.

“(I)t’s also a potentially unnerving look at how Google is moderating content on YouTube and the dangers of machine learning.

“It’s great that Google is building tools to automatically flag and remove deceptive videos and weed out spam, but if the end result is a black box that just arbitrarily makes decisions that even Google’s content isn’t safe from, then who is it really helping?”

For advertisers globally, that is a very important question.
