Via @jdp23: the Senator behind the STOP CSAM bill, which would outlaw many forms of encryption and wouldn't actually stop CSAM, read that WaPo article about CSAM on the fediverse and tweeted about it:
https://twitter.com/SenatorDurbin/status/1683562063270928384
@thisismissem @jdp23 I wrote about this before. I can go further, if you like.
This "study" is absolute garbage.
For instance, it scans around half a million posts to find roughly 100 "potential" hits, and those on sites which don't use one particular scanning tool.
He then acts as if this finding were the "end of the world", even though mainstream social media is known to be objectively worse than the fediverse in sheer number of cases.
He also uses Google's algorithms, which have been known to misclassify computer-generated images. You might not like such imagery, but it is extremely misleading to suggest it is the same thing as actual abuse material.
It is also quite possible that some of these posts are spam or automated, hitting a large number of hashtags.
Also, he cherry-picks one *particular site* (which has recently been under heavy fire from fediverse admins), when other similar sites, even with similar policies, aren't seen as troublesome in the same way.
Also, some cherry-picked posts shown in screenshots are labelled as having been posted almost a year ago, and statistics on the age of the posts are conveniently missing.
Also, if he wanted to help admins with a pertinent issue, he could have reached out to them privately, rather than cherry-picking posts here and there to try to humiliate them.
Also, this very same person has previously made tweets in opposition to Facebook deploying end-to-end encryption in FB Messenger.
He also seems to want Facebook to essentially run the fediverse in the name of "saving the children", or to run every image through a Microsoft-hosted service (a PRISM / NSA partner).
Problematically, some of these services aren't even based in the U.S.; even if they were, services have First / Fourth Amendment rights, and the argument is about the quality of moderation and communication, not a lack of moderation.
It's not tenable to hold every service liable for a small amount of misuse, nor is it proportionate to do so, especially when someone's free expression is taken into consideration.
Also, a bad actor could simply run their own dedicated service in violation of the law; if they're determined to flout it, they could well do so.
Also, these services are known to take actual child porn down, often within hours, as he himself admitted; but because it wasn't taken down "immediately", it becomes a "scandal".
@olives @jdp23 We are talking about the same thing right? This report? https://purl.stanford.edu/vb515nd6874
112 posts of CSAM, and 554 posts that are potentially CSAM or related to child sex trafficking, is too much.
Even if 87% are from "alt fediverse" or "defediverse" instances, that still leaves roughly 15 posts of CSAM and 72 posts of potential CSAM / child sexual abuse (13% of 112 and of 554, respectively) on the main fediverse that went unreported or unaddressed.
On the main fediverse, any number greater than 0 is unacceptable. We must do better
@olives @jdp23 Using Microsoft PhotoDNA, Google's SafeSearch APIs, and Thorn's service for detecting CSAM is in fact an industry standard when it comes to trust and safety on user-generated content (a rough sketch of what such a check looks like follows after this post).
You might not like that they're US-based, or you may not know these tools, but we can surely work towards tools that work for the fediverse and within a privacy framework.
We don't yet have data on how quickly reports of CSAM or similar content are actioned. Ideally, we'd prevent CSAM from being published up front.
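For a sense of what the "potential" detections discussed above involve, here is a minimal sketch of running Google's SafeSearch classification on an uploaded image, assuming the google-cloud-vision Python client and default credentials; the file path and the flagging threshold are illustrative placeholders, and known-CSAM hash matching (PhotoDNA / Thorn) is a separate, access-controlled service not shown here.

```python
# Minimal sketch: classify an uploaded image with Google Cloud Vision SafeSearch.
# Assumes the google-cloud-vision client library; the path and the threshold
# used to flag an image for human review are placeholders.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("incoming/upload.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation

# SafeSearch returns a likelihood (VERY_UNLIKELY .. VERY_LIKELY) per category.
if annotation.adult >= vision.Likelihood.LIKELY:
    # A hit here is only "potentially" problematic; it still needs human review,
    # or a hash match against known material via a service like PhotoDNA or Thorn.
    print("flag for moderator review")
```

Mastodon itself doesn't ship a hook like this; the point is only to illustrate the kind of per-image check the report's tooling performs.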
@olives @jdp23 Also, at the end of the day, if you want to run a small instance, and you know your members are absolutely not going to post any content that's illegal (e.g., CSAM), then you don't have to use any of those tools to scan for potentially harmful content.
But, other admins may go "yeah, I'd rather play it safe", and then employ tools to assist them in moderation.
To me, several things are true simultaneously:
- the report called attention to a problem that Mastodon collectively hasn't paid enough attention to, and had some useful suggestions for improving moderation tools
- by eliding important details, including that the source of much of the CSAM material has been known for this since 2017, is widely defederated, and that reject_media was developed in 2017 specifically to deal with this problematic instance (and does so effectively for sites that turn it on), it painted an inaccurate picture of the situation.
- focusing only on the report's shortcomings shifts attention away from real problems, including that Mastodon installations by default don't block instances that are known sources of CSAM, that Mastodon gGmbH hasn't prioritized addressing this or improving moderation tools, and that the mobile apps and SpreadMastodon direct newcomers to a site where the moderators don't take action on clearly illegal content. Mastodon gGmbH has a track record of not prioritizing user safety, and it's a huge problem. Hopefully the reaction to this report leads to positive changes.
- then again, the report doesn't take a "positive deviance" approach of looking at what already works (tier0 blocklists, existing mechanisms like silencing and reject_media; see the sketch after this post) and the possibilities for making a decentralized approach work. Instead the report concludes that centralization will be required, and suggests collaboration with Threads and others "to help bring the trust and safety benefits currently enjoyed by centralized platforms to the wider Fediverse ecosystem." But wait a second, trust and safety SUCKS for most people on Threads, so why wouldn't these supposed "benefits" lead to the same situation in the fediverse?
@thisismissem@hachyderm.io @olives@qoto.org
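For concreteness, here's a minimal sketch of what turning on reject_media for a known bad domain looks like through Mastodon's Admin API (available in recent server versions), assuming a token with the admin domain-block scope; the instance URL, token, and blocked domain below are placeholders.

```python
# Minimal sketch: apply a domain block with reject_media via Mastodon's Admin API.
# Assumes a recent Mastodon server and an access token with admin domain-block
# permissions; INSTANCE, TOKEN, and the blocked domain are placeholders.
import requests

INSTANCE = "https://example.social"
TOKEN = "ADMIN_ACCESS_TOKEN"

resp = requests.post(
    f"{INSTANCE}/api/v1/admin/domain_blocks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "domain": "problematic.example",
        "severity": "noop",        # or "silence" / "suspend" to also limit accounts
        "reject_media": True,      # never fetch or cache media from this domain
        "reject_reports": True,
        "private_comment": "known source of illegal media",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

Admins can set the same options from the moderation UI; the point is simply that the mechanism the report glosses over already exists and works for the sites that enable it.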
@jdp23 @thisismissem As far as I know, the actual child porn issue there is far more recent. The instance was initially defederated by others for containing objectionable content.
What's frustrating to me (about that instance, not you) is that other similar instances moderate such content better in a bid to offer a space for free artistic expression.
At the end of the day, I don't think these kinds of activities are good for any instance. It drags everyone down.
Some degree of collaboration might help here. Even if someone wants to have a different standard from another, it would be nice to have a baseline (i.e. real child porn should have no place anywhere).
While I suspect David is cherry-picking some of these posts (given the dates), it would be nice to see some of these other iffy accounts go away too.
I mean the spammy ones which hint at engaging in illicit activity. I'm not sure if you've had to deal with them, but a few instances have.
If they're appearing elsewhere (i.e. on the main fediverse), then that is quite a disturbing development. These accounts are also present on mainstream social media.
@thisismissem @jdp23 Our admin caught sight of the smoke and defederated that specific instance around a month ago.
From an administrative perspective, it's hard to say he made the wrong decision (though it's hard to call it a good one either). It wasn't made lightly.
If there were better collaboration, you wouldn't be learning that there might be a problem from an outsider's report (written by someone who probably only discovered the fediverse not long ago).
Some parts of the network do not cache images.
This is probably done more for peace of mind than because something is likely to happen. It seems the developer of Mastodon (unlike those of some other tools) didn't think about this at all.
Discussions about scanning were held within other software's communities years ago, although they didn't go anywhere because it didn't look like a problem at the time.