Meta Partners with Zefr to Improve Its Advertiser Safety Tools
As it works to give brands more options for managing their ad placements across its apps, Meta has announced a new partnership with brand suitability platform Zefr, which will enable advertisers to better ensure that their promotions don’t appear alongside material they consider offensive, based on each brand’s own definitions.
Zefr, which has also partnered with TikTok and YouTube on similar initiatives, uses AI identification systems, including audio, text, and frame-by-frame video analysis, along with scaled human review, to provide a more accurate and customizable brand safety solution, giving ad partners more specific control over placements.
As explained by Meta:
“We will work together [with Zefr] to develop a solution to measure and verify the suitability of adjacent content to ads in Feed, starting with small scale testing in the third quarter of this year and moving to limited availability in the fourth quarter.”
The partnership will help Meta develop better systems to ensure brand safety, while still maximizing ad opportunity.
Meta additionally notes that it’s developing internal suitability controls as another means to give advertisers more control over where their ads are shown.
“We have begun scoping and building these new controls for Facebook and Instagram Feeds focused on primarily English speaking markets, with plans to test in the second half of the year before rolling more broadly in early 2023. Over the course of the next year, we will expand placement coverage to include Stories, Reels, Video Feeds, Instagram Explore and other surfaces across Facebook and Instagram, as well as expanding to additional languages.”
Meta already offers various brand safety tools, including topic exclusions and ‘publisher allow’ lists, which give brands broad oversight of where their ads can appear. These new options will facilitate more specific control, so that brands can exclude the exact placements they choose, while still reaching as wide an audience as possible.
Brand safety controls came into focus back in 2017, after YouTube lost millions in ad revenue when advertisers began pulling their ads because they had appeared alongside extremist and hate speech content. Meta has also faced various challenges on this front – though its major ad challenges have been more specifically focused on the company’s own stances, as opposed to placement concerns.
Meta (then Facebook) banned ad placements near NSFW content back in 2013, and has been working to refine its systems on this front ever since. The company was also the subject of an advertiser boycott in 2020, in protest against its handling of hate speech and misinformation, which further underlined rising concerns around its perceived focus on revenue over safety.
Given this history, it’s important for Meta to keep developing its brand safety offerings, and these new projects should meaningfully improve its placement tools and options for advertisers.