In what is hopefully my last child safety report for a while: an analysis of how the CSAM issues covered in our previous reports intersect with the Fediverse.
https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media
Similar to how we analyzed Twitter in our self-generated CSAM report, we did a brief analysis of the public timelines of prominent servers, processing media with PhotoDNA and SafeSearch. The results were legitimately jaw-dropping: our first PhotoDNA alerts started rolling in within minutes. The true scale of the problem is much larger, a figure we inferred by cross-referencing CSAM-related hashtags with SafeSearch level 5 ("very likely") nudity matches.
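For anyone curious what that kind of scan looks like in practice, here is a minimal sketch: it polls a Mastodon public timeline over the standard REST API and runs Google Cloud Vision SafeSearch on image attachments. The instance URL is hypothetical, and the PhotoDNA step is only a placeholder stub, since the real PhotoDNA Cloud Service requires a vetted agreement and its client details aren't shown here.

```python
# Sketch: poll a Mastodon public timeline and run Cloud Vision SafeSearch
# on image attachments. PhotoDNA matching is a hypothetical placeholder.
import requests
from google.cloud import vision

INSTANCE = "https://example.social"  # hypothetical instance URL
client = vision.ImageAnnotatorClient()


def photodna_match(image_bytes: bytes) -> bool:
    """Placeholder for a PhotoDNA hash-and-match call (not a real client)."""
    raise NotImplementedError


def scan_public_timeline(limit: int = 40) -> None:
    # Fetch the most recent local public posts from the instance.
    statuses = requests.get(
        f"{INSTANCE}/api/v1/timelines/public",
        params={"local": "true", "limit": limit},
        timeout=30,
    ).json()
    for status in statuses:
        for media in status.get("media_attachments", []):
            if media.get("type") != "image":
                continue
            image_bytes = requests.get(media["url"], timeout=30).content
            annotation = client.safe_search_detection(
                image=vision.Image(content=image_bytes)
            ).safe_search_annotation
            # Likelihood enum value 5 == VERY_LIKELY ("level 5" above).
            if annotation.adult == vision.Likelihood.VERY_LIKELY:
                print(status["url"], "flagged: SafeSearch adult=VERY_LIKELY")


if __name__ == "__main__":
    scan_public_timeline()
```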
Hits were primarily on a not-to-be-named Japanese instance, but a secondary test to see how far they propagated showed them being federated to other servers. A number of matches were also detected in posts originating from the big mainstream servers. Some of the posts that triggered matches were eventually removed, but the origin servers did not seem to consistently send "delete" events when that happened, which I hope doesn't mean the other servers simply continued to store the media.
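For context on why missing "delete" events matter: a remote server only knows to purge its federated copy when it receives an ActivityPub Delete activity from the origin. Below is a sketch of roughly what such a delivery looks like; the actor and status IDs are hypothetical, and real Mastodon deliveries wrap the removed status in a Tombstone object and POST the activity to follower inboxes.

```python
# Sketch of an ActivityPub Delete activity (hypothetical IDs). Without this
# delivery, remote servers have no signal that the original post was removed
# and may keep their cached copy of the media indefinitely.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://origin.example/users/alice/statuses/12345#delete",
    "type": "Delete",
    "actor": "https://origin.example/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "id": "https://origin.example/users/alice/statuses/12345",
        "type": "Tombstone",
    },
}
```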
@det "grooming" Speculation. Unsubstantiated. You have not demonstrated this at all. Even once.
"attracts" Speculation. Unsubstantiated.
"users posting" No statistics on this one. Trying to conflate one form of content with another. Again, unsubstantiated.
"computer-generated imagery" We know very well you don't like this content. Personally, I wonder why this service has decided to host this.
Either way, the insinuations you're making are highly misleading.