He says one of the first things he saw on his social media accounts was a cute puppy video. But then everything changed.
He says he was shown, "out of nowhere", videos of someone being hit by a car, a monologue from a misogynistic influencer, and clips of violent fights. Why me? he found himself wondering.
For 19 months, from December 2020 to June 2022, Andrew Kaung worked as a user safety analyst for TikTok in Dublin.
He says he and a colleague decided to examine what the app's algorithms were recommending to users in the UK, including some 16-year-olds.
TikTok and other social media companies use artificial intelligence (AI) tools to remove the vast majority of harmful content and to flag other content for review by human moderators, regardless of how many views it has had. But AI tools cannot identify everything.
Andrew Kaung says that during his time at TikTok, videos that were not removed or flagged for human review by AI, or reported to moderators by other users, would only be reviewed manually again if they reached a certain view threshold.