Expert IDs brand risks within UGC platforms


Kristina: What brand risks are you finding with gaming platforms?

Barbie Koelker, VP of Marketing: The majority of ad inventory surrounding gaming-related content is safe, but tolerances vary by brand, and very little is vetted by an independent third party. When user-generated content is analyzed, the tool employed often relies on keywords alone to flag content. However, such solutions are risky for both content creators and brands. As we measured in a recent study of typical brand safety tools, keyword lists are as likely to misclassify innocuous content as unsafe as they are to miss truly unsafe content. This is particularly pronounced when attempting to accurately identify hateful and toxic content, though keywords can also struggle with sexual and profanity classifications due to a lack of entity or language recognition.

As a result, some of the pages and environments being flagged as potentially risky are actually the spaces with the most potential, while truly unsafe environments go unchecked.

Kristina: Both Discord and Twitch have announced plans that could protect brands somewhat – what is behind these decisions?

Barbie: Every aspect of the creator economy benefits when environments are safer. Users have a more pleasant experience, content creators can drive deeper engagement, platforms can improve monetization, and advertisers can rest easy.

Every player in this space should be looking to see how they can verify and nurture the safety of the environments wherein they engage their audiences.

Kristina: How does your Brand Safety Suite fit in as a solution to these brand safety issues?

Barbie: Spiketrap’s brand safety suite provides independent safety monitoring and verification of the environments driving the most audience engagement, empowering brands with the clarity and reassurance they need to make safe and effective campaign decisions.

Specifically, Spiketrap’s brand safety solution addresses five key needs for the creator economy: accuracy, completeness, clarity, independence, and speed.

With respect to accuracy, our natural language processing (NLP) AI examines content in context, thereby minimizing both false positives and false negatives. This ensures unsafe content is identified without impeding authentic audience engagement.

Kristina: Despite the brand safety issues, you believe these kinds of platforms can be beneficial to brands – why?

Barbie: High velocity environments are indicative of highly engaged audiences. When advertisers are able to safely reach audiences who are actively participating, their message has more potential to resonate.

Moreover, platforms like Twitch allow advertisers to reach niche audiences who may be otherwise difficult to target. With clarity into the safety and sentiment of creators at scale, advertisers can identify hundreds, if not thousands, of safe and effective channels for their campaigns — channels that would otherwise be missed, and impactful impressions that would otherwise be lost.

Kristina: Before jumping in to advertise, what do brands need to know?

Barbie: Look beyond follower and viewership metrics and examine the health of the conversation around a given channel or creator. Is the community safe, or is it toxic? Is conversation sentiment positive, or is it heated? Understanding the tenor of the community is key to uncovering whether the audience is open to your message, and whether it's a safe environment for your brand.

When assessing the safety of high-UGC environments, consider discrepancies in scoring methodologies. Each platform may classify content and assign grades differently, and your tolerance for risk may vary accordingly.

In the same vein, scrutinize what is being promised. If a keyword-based solution purports to capture all hate speech, remember that language is ever-evolving, and bullies can be creative. Outside of profanity filters (which still face challenges), keyword-based solutions will likely miss a fair share of unsafe content. They may also incorrectly flag safe content as unsafe. For instance, while an AI-based solution with an extensive knowledge graph may recognize that a mention of "Sex and the City" is innocent, a keyword-based solution is apt to falsely flag the content as sexual.
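The false-positive failure mode described above can be illustrated with a minimal sketch. Everything here is hypothetical: the blocklist, the entity allowlist, and the helper functions are toy stand-ins, not Spiketrap's actual method, which uses contextual NLP rather than list lookups.

```python
# Hypothetical illustration of why naive keyword flagging misfires:
# the token "sex" appears inside the innocuous title "Sex and the City".

KEYWORDS = {"sex", "xxx"}            # toy blocklist (hypothetical)
SAFE_ENTITIES = {"sex and the city"} # toy known-entity allowlist (hypothetical)

def keyword_flag(text: str) -> bool:
    """Naive approach: flag the post if any blocklisted token appears."""
    tokens = (tok.strip(".,!?") for tok in text.lower().split())
    return any(tok in KEYWORDS for tok in tokens)

def entity_aware_flag(text: str) -> bool:
    """Slightly smarter: ignore keyword matches inside known safe entities."""
    lowered = text.lower()
    for entity in SAFE_ENTITIES:
        lowered = lowered.replace(entity, "")
    return keyword_flag(lowered)

post = "Anyone else rewatching Sex and the City this weekend?"
print(keyword_flag(post))       # True: a false positive on a safe post
print(entity_aware_flag(post))  # False: entity recognition avoids the misfire
```

Even this small allowlist trick only patches one case; as the interview notes, evolving language and creative evasion are why context-aware NLP, rather than ever-longer lists, is the more robust approach.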



Kristina Knight, Journalist, BA
Content Writer & Editor
Kristina Knight is a freelance writer with more than 15 years of experience writing on varied topics. Kristina's focus for the past 10 years has been the small business, online marketing, and banking sectors; however, she keeps things interesting by writing about her experiences as an adoptive mom, parenting, and education issues. Kristina's work has appeared with NBC News, DisasterNewsNetwork, and many more publications.