In what is hopefully my last child safety report for a while: a report on how our previous reports on CSAM issues intersect with the Fediverse.
https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media
Similar to how we analyzed Twitter in our self-generated CSAM report, we did a brief analysis of the public timelines of prominent servers, processing media with PhotoDNA and SafeSearch. The results were legitimately jaw-dropping: our first pDNA alerts started rolling in within minutes. Cross-referencing CSAM-related hashtags with SafeSearch level 5 nudity matches suggests the true scale of the problem is much larger.
Hits were primarily on a not-to-be-named Japanese instance, but a secondary test to see how far they propagated showed them getting federated to other servers. A number of matches were also detected in posts originating from the big mainstream servers. Some of the posts that triggered matches were eventually removed, but the origin servers did not seem to consistently send "delete" events when that happened, which I hope doesn't mean the other servers just continued to store the material.
The Japanese server problem is often thought to mean "lolicon" or CG-CSAM, but it appears that servers allowing computer-generated imagery of kids also attract users posting and trading "IRL" materials (their words, clear from post and match metadata), as well as grooming and swapping of CSAM chat group identifiers. This is not altogether surprising, but it is another knock against the excuses of lolicon apologists.