How businesses can employ better hiring practices to increase customer satisfaction
Kristina: Many digital jobs are already automated, especially in advertising, data collection, and data analysis. What jobs shouldn’t be automated, in your opinion? Why?
Anna Bold, Community Manager, The Tylt: People have biases, and AI reflects the biases of the people who build it. According to IBM, 180 human biases have been defined, a number that is anticipated to grow within the next 5 years. Tackling AI bias begins with using a diverse staff to build and test the outcomes of automation. If AI is developed from the perspective of only one or two individuals, it will have blind spots, and ultimately produce biased results. Automation, at times, has been a helping hand, freeing up my energy to work on larger projects. Specifically, automation tools can squash a percentage of the negative and hateful conversations on my networks, even when I am not directly seeking them out.
Kristina: You’ve singled out community managers as one area that shouldn’t be automated – why?
Anna: Community management/moderation is an occupation that cannot be fully automated. According to a report from Amnesty International, automated moderation systems on the larger social media platforms achieve, on average, only about 50 percent accuracy in identifying problematic tweets compared with the judgment of human moderators. In other words, automation tools tend to flag two in every 14 tweets as abusive or problematic, whereas experts identify one in every 14. Plenty of users have also found ways around these tools, by self-censoring curse words and slurs and through other means, which highlights the importance of the human touch.
Kristina: Like the ‘fake news’ problem we’ve heard so much about since the 2016 US Presidential election?
Anna: Stopping the spread of misinformation is another important job of a community manager, and one that is difficult to moderate automatically. According to a study from Oxford University, content from less reputable sources is shared four times more often on Facebook than content from reputable news publications. But Facebook is not the only platform at fault. YouTube recently announced a crackdown on videos promoting white supremacy. However, its automatic system also removed a history teacher’s educational videos about World War II and Nazism.
Automation tools can lend a hand in facilitating conversations on social media, but they will never match the nuance a person brings to the table when moderating those conversations.
Kristina: How can brands ensure they’re finding the right fit in these creative spaces?
Anna: Brands need to ask themselves, “What are you putting out into the world, what do you want your platform to convey, and how are you implementing that? Do you care what people associate with your brand? How does your social media presence reflect the kind of comments and culture you attract?” Answering these questions helps brands develop a tone, an expression of their company’s values and way of thinking. When brands implement those values through specific language, they form an online personality.
At The Tylt, we draw on our editors, who hold high journalistic standards, to define and objectively showcase both sides of a debate. Our users assess the information presented in the debate and vote based on their sentiment toward the topic. Sometimes, debates can become heated. This is why it is essential to monitor conversations and facilitate a healthy, reflective space online.