Via @jdp23, the Senator behind the STOP CSAM bill, which outlaws many forms of encryption and doesn't actually stop CSAM, read that WaPo article about CSAM on the fediverse and tweeted about it:
https://twitter.com/SenatorDurbin/status/1683562063270928384
@thisismissem @jdp23 I wrote about this before. I can go further, if you like.
This "study" is absolute garbage.
For instance, it scans around half a million posts to find about 100 "potential" hits, and only on sites which don't use one particular tool.
He then acts as if this faux pas is the "end of the world", even though mainstream social media is known to be objectively worse than the fediverse in sheer number of cases.
He also uses Google's algorithms, which have been known to misclassify computer-generated images. While such imagery might not be to your liking, it is extremely misleading to present it as actual CSAM.
It is also not unlikely that some of these posts are spammy / automated and hit a large number of hashtags.
Also, he cherry-picks one *particular site* (which has recently been under heavy fire from fediverse admins) when other similar sites, even with similar policies, aren't seen to be troublesome in the same way.
Also, some cherry-picked posts shown in screenshots are labelled as having been posted almost a year ago, and statistics on this are ever so conveniently missing.
Also, if he wanted to help admins with a pertinent issue, he could have reached out to them privately, rather than cherry-picking posts here and there to try to humiliate them.
Also, this very same person has previously made tweets in opposition to Facebook deploying end-to-end encryption in FB Messenger.
He also seems to want Facebook to essentially run the fediverse in the name of "saving the children", or to run every image through a Microsoft hosted service (a PRISM / NSA partner).
Problematically, some of these services are not even based in the U.S.; even if they were, services have First / Fourth Amendment rights, and the argument is about the quality of moderation / communication, not a lack of moderation.
It's not tenable to hold every service liable for a small amount of misuse, nor is it proportionate to do so, especially when someone's free expression is taken into consideration.
Also, a bad actor could just run their own dedicated service in violation of the law. If they're so determined to flout the law, they could well do so.
Also, these services are known to take actual child porn down, often within hours, and he admitted as much; however, because it wasn't taken down "immediately", it becomes a "scandal".
@olives @jdp23 We are talking about the same thing right? This report? https://purl.stanford.edu/vb515nd6874
112 posts of CSAM, plus 554 posts that are potentially CSAM or child sex-trafficking, is too much
Even if 87% are from "alt fediverse" or "defediverse" instances, that still leaves 15 posts of CSAM and 72 posts of potential CSAM/child sexual abuse on the main fediverse that either haven't been reported or have been left unaddressed.
On the main fediverse, any number greater than 0 is unacceptable. We must do better
@olives @jdp23 using Microsoft PhotoDNA, Google's SafeSearch APIs, and Thorn's service for detecting CSAM is in fact an industry standard when it comes to trust and safety on user-generated content.
You might not like that they're US-based, or be unfamiliar with these tools, but we can surely work towards tools that work for the fediverse and within a privacy framework.
We don't yet have data on how quickly reports of CSAM or similar content are actioned. Ideally we'd prevent CSAM from being published up front.
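For context on what "those tools" do at the integration level, here is a minimal sketch of the usual pattern: hash (or classify) each uploaded image before it is stored or federated, and block and report anything that matches a known-CSAM hash list. Everything below is hypothetical scaffolding, not code from PhotoDNA, SafeSearch, Thorn, or Mastodon; it only illustrates why this kind of matching can happen "up front", before publication.

```python
# Minimal sketch of an upload-scanning hook, using a hypothetical
# hash-matching client. This is NOT vendor code for PhotoDNA, SafeSearch,
# or Thorn; it only shows the general shape of the integration.

import hashlib
from dataclasses import dataclass
from typing import Optional


@dataclass
class ScanResult:
    matched: bool                       # True if the image matched a known-CSAM hash list
    match_source: Optional[str] = None


class HypotheticalHashMatchClient:
    """Stand-in for whichever vendor hash-matching API an instance integrates."""

    def __init__(self, known_hashes: set):
        self.known_hashes = known_hashes

    def scan(self, image_bytes: bytes) -> ScanResult:
        # Real services use perceptual hashes that survive re-encoding and
        # resizing; a cryptographic hash is used here only to keep the
        # sketch self-contained and runnable.
        digest = hashlib.sha256(image_bytes).hexdigest()
        if digest in self.known_hashes:
            return ScanResult(matched=True, match_source="hash-list")
        return ScanResult(matched=False)


def handle_upload(image_bytes: bytes, client: HypotheticalHashMatchClient) -> bool:
    """Return True if the upload may be published, False if it was blocked."""
    result = client.scan(image_bytes)
    if result.matched:
        # Quarantine instead of publishing, and queue a moderator report
        # (plus any legally required report for the relevant jurisdiction).
        print(f"blocked upload, matched via {result.match_source}")
        return False
    return True
```

The key design point is that matching runs in the upload path, so flagged media never gets stored or federated in the first place.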
@olives @jdp23 Also, at the end of the day, if you want to run a small instance, and you know your members are absolutely not going to post any content that's illegal (e.g., CSAM), then you don't have to use any of those tools to scan for potentially harmful content.
But, other admins may go "yeah, I'd rather play it safe", and then employ tools to assist them in moderation.
To me, several things are true simultaneously:
- the report called attention to a problem that Mastodon collectively hasn't paid enough attention to, and had some useful suggestions for improving moderation tools
- by eliding important details, including that the source of much of the CSAM material has been known for this since 2017 and is widely defederated, and that reject_media was developed in 2017 specifically to deal with this problematic instance (and does so effectively for sites that turn it on; a sketch of how admins apply it follows after this list), it painted an inaccurate picture of the situation.
- focusing only on the report's shortcomings shifts attention away from real problems, including that Mastodon installations by default don't block instances that are known sources of CSAM, that Mastodon gGmbH hasn't prioritized addressing this or improving moderation tools, and that the mobile apps and SpreadMastodon direct newcomers to a site where the moderators don't take action on clearly illegal content. Mastodon gGmbH has a track record of not prioritizing user safety, and it's a huge problem. Hopefully the reaction to this report leads to positive changes.
- then again, the report doesn't take a "positive deviance" approach of looking at what works (tier0 blocklists, existing mechanisms like silencing and reject_media) and the possibilities for making a decentralized approach work. Instead the report concludes that centralization will be required, and suggests collaboration with Threads and others "to help bring the trust and safety benefits currently enjoyed by centralized platforms to the wider Fediverse ecosystem." But wait a second: trust and safety SUCKS for most people on Threads, so why won't these supposed "benefits" lead to the same situation in the fediverse?
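As a concrete illustration of the existing decentralized mechanisms mentioned above (tier-0 blocklists, silencing, reject_media), here is a rough sketch of how an admin could bulk-apply a shared blocklist through Mastodon's admin API. It assumes the documented `POST /api/v1/admin/domain_blocks` endpoint from Mastodon 4.x; the instance URL, token, and CSV format are placeholders, and a real import should be reviewed by a human first.

```python
# Sketch: bulk-apply a shared ("tier 0") blocklist via Mastodon's admin API.
# Assumes Mastodon 4.x, a token with the admin:write:domain_blocks scope,
# and a simple CSV with "domain" and "severity" columns (placeholder format).

import csv
import requests

INSTANCE = "https://example.social"     # placeholder instance
TOKEN = "REPLACE_WITH_ADMIN_TOKEN"      # placeholder token


def apply_domain_block(domain: str, severity: str = "suspend",
                       reject_media: bool = True) -> None:
    """severity is one of 'noop', 'silence', or 'suspend';
    reject_media refuses to fetch or store media from the domain."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "domain": domain,
            "severity": severity,
            "reject_media": str(reject_media).lower(),
            "private_comment": "imported from shared tier-0 blocklist",
        },
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    with open("tier0_blocklist.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            apply_domain_block(row["domain"], row.get("severity", "suspend"))
```

The point isn't this particular script; it's that per-instance domain blocks plus reject_media already give admins a decentralized lever, which is exactly what a "positive deviance" reading of the report would have examined.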
@thisismissem@hachyderm.io @olives@qoto.org
@jdp23 @thisismissem As far as I know, the real child porn issue is far more recent. It was initially defederated by instances for containing objectionable content.
What's frustrating to me (about that instance, not you) is that other similar instances moderate such content better in a bid to offer a space for free artistic expression.
At the end of the day, I don't think these kinds of activities are good for any instance. It drags everyone down.
Some degree of collaboration might help here. Even if someone wants to have a different standard from another, it would be nice to have a baseline (i.e. real child porn should have no place anywhere).
While I suspect David is cherry-picking some of these posts (given the dates), it would be nice to see some of these other iffy accounts go away too.
The spammy ones which hint at engaging in illicit activity. I'm not sure if you've had to deal with them; however, a few instances have had to.
If they're appearing elsewhere (i.e. on the main fediverse), then that is quite a disturbing development. These accounts are also present on mainstream social media.
@olives @jdp23 yeah, probably the first step to dealing with them is to start collecting data & signals on content that may be harmful, and then either processing that content for review or building better automated tools based on the learnings. The real thing highlighted in the report is that large instances are currently flying blind, which is dangerous for everyone.
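To make "collecting data & signals" a bit more concrete, here is a small hypothetical sketch of what such a record and review queue could look like. None of the names correspond to an existing fediverse tool; it only shows the idea of logging cheap signals for every item and escalating the high-priority ones to human moderators.

```python
# Hypothetical sketch of signal collection plus a human review queue.
# All names are invented for illustration; this is not an existing tool.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContentSignals:
    status_id: str
    hashtags: list
    hash_match: bool = False        # matched a known-bad hash list
    classifier_score: float = 0.0   # 0..1 from whatever classifier is available
    user_reports: int = 0
    seen_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


signal_log = []    # every record is kept: this is the "stop flying blind" part
review_queue = []  # only high-priority records go to human moderators


def review_priority(s: ContentSignals) -> float:
    """Crude prioritisation: a hash match always outranks weaker signals."""
    if s.hash_match:
        return 1.0
    return min(1.0, 0.1 * s.user_reports + 0.5 * s.classifier_score)


def ingest(signals: ContentSignals, threshold: float = 0.3) -> None:
    signal_log.append(signals)                 # retained for later analysis / tooling
    if review_priority(signals) >= threshold:
        review_queue.append(signals)           # escalate to the moderation queue
```

Even a log this crude would give an instance the data needed later to answer how quickly reports are actioned, which is the gap noted earlier in the thread.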
@jdp23 @thisismissem "that would be illegal in the US" In the U.S., free expression is protected by the First Amendment (and honestly, that's a good thing).
In Japan, it would fall under Article 21 of the Constitution, which in this case, is seen as protecting free expression.
It would be inaccurate to suggest that this expression was "illegal in Western Europe".
Naturally though, real child porn is not protected by any of these.
Also, the E.U. e-Commerce Directive (similar to the U.S. Section 230) limits intermediary liability when communicating with another service (which is a good thing for innovation / online expression).
There are additional measures which could be taken, such as not storing external content.
Of course, if real child porn appears, that can be dealt with accordingly, but it's important to give services a bit of breathing room.
As for that instance, I think it was acquired by a different company, at some point. I can't comment further on their moderation.
I've seen a post on the fedi saying that when it comes to moderation, you can't just look at the world the way it is today; you have to adapt to changes which come along.
@jdp23 @thisismissem Noting that at one point, a Japanese friend of mine who runs an instance did tell me that people in the West tend to send a lot of bogus reports, because they don't like one kind of expression or another.
So, it might be preferable if those were limited to reports which are actually actionable. Likely not relevant here, though.