I see repeated claims that content moderation isn't that big a problem, that it "can be done with AI", etc. Anyone who says this is unlikely in the extreme to have worked with UGC (User Generated Content) at scale.

While AI/ML is getting better, and can be useful as a first-level filtering mechanism, its false positive and false negative rates tend to be quite high. It's pretty obvious why: without a human's understanding of context, sarcasm, and the constantly evolving tricks people use when authoring a post, it's easy to mischaracterize it.

Major social media firms that care about the quality of their content employ large numbers of human moderators in conjunction with automated systems. Those humans deal not only with posts the automated systems find confusing, but also with flagged posts and user appeals.
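As a rough sketch, that human-in-the-loop triage often looks something like the following (every name, score, and threshold here is hypothetical, not any particular firm's actual pipeline):

```python
from dataclasses import dataclass

# Hypothetical thresholds: the score is an ML classifier's estimated
# probability that a post violates policy. Real systems tune these
# against measured false-positive / false-negative rates.
AUTO_REMOVE = 0.95   # confident enough to act without a human
AUTO_ALLOW = 0.05    # confident enough to leave the post alone

@dataclass
class Post:
    text: str
    toxicity_score: float   # output of the first-level ML filter
    user_flags: int = 0     # reports from other users
    appealed: bool = False  # author contests an earlier removal

def triage(post: Post) -> str:
    """Route a post to an automatic action or to the human review queue."""
    # Appeals and user reports always get human eyes, regardless of score.
    if post.appealed or post.user_flags > 0:
        return "human_review"
    if post.toxicity_score >= AUTO_REMOVE:
        return "auto_remove"
    if post.toxicity_score <= AUTO_ALLOW:
        return "auto_allow"
    # The ambiguous middle band -- context, sarcasm, evasion tricks --
    # is exactly where the model is least reliable, so a human decides.
    return "human_review"

if __name__ == "__main__":
    print(triage(Post("borderline sarcasm", toxicity_score=0.6)))  # human_review
```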

Find a large social media firm that doesn't employ enough human moderators working from a set of ethical rules, and you've found a social media firm whose content is likely full of toxic posts.

#Twitter

@lauren I predict that in the short run, such things on Mastodon will look more like waves of de-federation. With few (if any?) admins in the space paying for dedicated moderation resources, moderation will instead look like a failure to control content, followed by de-federation by those who have had enough.

On the other hand, this platform is unlikely to attract ad money, so the incentive to regulate at all will be low for a while.
