https://www.eff.org/deeplinks/2025/01/eff-statement-metas-announcement-revisions-its-content-moderation-processes The EFF might have goofed by "applauding" the change (which would presumably result in fewer erroneous takedowns / demotions) in a post, without considering the political heat from Meta's other decisions.
It's hard for me to tell from this article what's actually going to happen, so I suppose we'll see.
https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes
"As part of these changes, we will be moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations."
Curiously, misinfo is already being spread about this post about handling misinfo. For instance, the move is being reported as Meta relocating exclusively to Texas, when the post says "Texas and other US locations" (though they did name Texas specifically).
https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes
"Over time, we have developed complex systems to manage content on our platforms, which are increasingly complicated for us to enforce. As a result, we have been over-enforcing our rules, limiting legitimate political debate and censoring too much trivial content and subjecting too many people to frustrating enforcement actions.
For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies). This does not account for actions we take to tackle large-scale adversarial spam attacks. We plan to expand our transparency reporting to share numbers on our mistakes on a regular basis so that people can track our progress. As part of that we’ll also include more details on the mistakes we make when enforcing our spam policies."
Yes, that is an issue. For instance, an algorithm might censor someone talking about Harvest Moon because they said the word "weed".
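To make that failure mode concrete, here's a minimal sketch of context-blind keyword matching producing exactly this kind of false positive. The banned-term list and example post are made up for illustration; this isn't any platform's actual system.

```python
# Hypothetical banned-term list; real systems are more complex, but the
# failure mode is the same when context is ignored.
BANNED_TERMS = ["weed"]

def naive_flag(post: str) -> bool:
    """Flag a post if any banned term appears anywhere in it, ignoring context."""
    text = post.lower()
    return any(term in text for term in BANNED_TERMS)

post = "In Harvest Moon you have to weed your crops every morning or they wither."
print(naive_flag(post))  # True: flagged, even though it's about a farming game
```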
https://mastodon.social/@GDPRhub/113763839004405363
"New decision from Italy: The DPA held that, under Italian law, the consent of both parents is needed in order to share the picture of a child on a social network."
What do you think of this decision?
If it's a funding thing, it'd make sense to say something like "we provide grants to orgs which share our values and work on relevant projects", followed by "here are our values" and a list of grantees (with relevant info).
I get that what they've put out so far is a draft, but that comes to mind.
For instance, if they just provide grants and the actual thing is run by another team, that should be clear upfront. In some places they say "we do this". Who is "we"? An org that's a member of this org? But if you look a bit further, it sounds like a funded org.
https://www.jfsribbon.org/2024/11/blog-post_23.html There was a conference last month in #Japan (scheduled on short notice) to discuss the urgent issue of financial censorship, a serious threat to #HumanRights. #FreeSpeech
They've taken down the page now, but when it was up, the TSPA just ran apologia after apologia for authoritarian regimes in Asia in the name of discussing "Trust & Safety". #HumanRights #FreeSpeech
There needs to be a real reckoning with the role the so-called "TSPA" plays in white-washing digital authoritarianism. #HumanRights #FreeSpeech