
Hate Mail 

Oh boy. A popular science piece I wrote for @ProstasiaInc is now live. Found out via the bird site. Given that site’s willingness to tolerate #hatespeech, #ViolentThreats rhetoric, and all-out threats, I can’t foresee this ending well. The piece I wrote is about #sexualabuse #prevention, and I stand by it.
The thesis of the piece is simple: we need to stop the performative rage and actually put our money where our mouths are. #protect #children.

RT @JuliaSerano
new essay! not only debunks "social contagion" & "gr00ming" charges, but shows how they're linked, & the unconscious thinking behind the TERF-to-fascist pipeline. no paywall link, pls share & give it lots of "claps" (up to 50) so other ppl see it!
juliaserano.medium.com/anti-tr

RT @evan_greer
RED ALERT: hearing @SenSchumer is falsely claiming he has heard “no opposition” to KOSA, despite LGBTQ+ and digital rights advocates repeatedly explaining how this bill is a disaster with a lot of the same problems as the EARN IT Act and similar bills. act.eff.org/action/tell-the-se

Tell the Senate: the Solution to Kids’ Privacy Isn’t More Surveillance

The Senate Commerce Committee is considering a bill that, in the name of children’s privacy, creates a system of private surveillance that would force platforms to collect more information on every user, further invading their privacy in the process. The “Kids Online Safety Act” (KOSA) would make platforms the arbiter of what children see online and could hand over significant power, and private data, to third-party identity verification companies like Clear or ID.me.

Lawmakers should be providing real privacy protections for everyone online. KOSA doesn’t do that. Instead, KOSA would likely require everything from Apple’s iMessage, Signal, web browsers, email applications, and VPN software to platforms like Facebook and TikTok to collect more user data.

Perhaps even worse, the bill would allow individual state attorneys general to decide what topics pose a risk to the physical and mental health of a minor, and allow them to force online services to remove and block access to that material everywhere, by default. This isn’t safety; it’s censorship.

act.eff.org

"Every person is responsible for ensuring the safety and well-being of the most vulnerable among us. But how safe are we and our children if we (the people who engage in this work) are not allowed to be their complete selves?" buff.ly/3gDgzDt @DrGTenbergen

RT @mikestabile
With antis pushing to deplatform adult creators on Twitter, and Twitter potentially relying more heavily on AI to identify TOS violations, it's critical to remember that a regular creator who permanently loses an account here will see a 62% drop in income in the year that follows.

RT @lovingtaeonmain
He's willing to work with fascists and transphobes and homophobes and ignore the vast majority of people who DO NOT WANT bills like KOSA and EARN IT. If he actually cared about kids, he'd push for more public school funding, more mental health resources, better wages.

The call is coming from inside the house
---
RT @willsommer
QAnon believers say they're fighting a pedophile cabal.

Today, I've got a story on a leading Q promoter whose ill-advised defamation lawsuit against a local paper revealed his own criminal relationship and sexually charged text messages with a teenager.

thedailybeast.com/qanon-leader
twitter.com/willsommer/status/

RT @Oxen_io
Privacy is a fundamental human right, but that doesn't mean we don't have to fight for it.

Another child has been bullied into attempting to take their own life over accusations about "pedophilic" artwork. No other child protection group recognizes this crisis because they support and profit from such moral panic.
---
RT @ztater_
@evelynisepic has attempted suicide for a fourth time, and this time was very likely successful, because she has never been as low as she was the last few days. I hope you're happy, twitter. I really ho…
twitter.com/ztater_/status/159

Lots of folks are creating new Mastodon instances, and I expect we'll see a new generation of admins learn how to handle CSAM, DMCA requests, etc.

What are the avenues for collective learning and peer support for instance operators?

Did you know that falsely reporting drawings and illustrations as CSAM makes it harder for actual abusive content to be identified and removed? The problem is so pervasive that @IWFhotline had to publicly ask people to stop filing false reports.

Thanks to higher character limits and the growing number of people who are moving to Mastodon and deleting their Twitters, some of our social media content is only available on Mastodon. Follow us there to make sure you don't miss anything! qoto.org/@ProstasiaInc

"I know how consent works in real life. I know how to get and give consent. I know that children can’t consent because they haven’t matured enough to be able to do so (nor do I have any sexual attraction to minors)." buff.ly/3fYOkOY

What @Google didn't mention is that not only do they falsely flag drawings, but their AI also falsely flags medical images buff.ly/3Df8hsA

"For the crime of enjoying and writing fiction, I’ve been labeled a predator. Or, even more erroneously, a “pedo”. Nevermind that my enjoyment has nothing to do with attraction to cartoon characters (let alone real children)." buff.ly/3fYOkOY
