Someone pointed out that FB plans to provide more information in transparency reports about content that has been removed.

about.fb.com/news/2025/01/meta
"we’ve started using AI large language models (LLMs) to provide a second opinion on some content before we take enforcement actions."
Of course they would find a way to work LLMs in somehow.
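
To make the idea concrete, here is a minimal sketch of where such a "second opinion" could sit in an enforcement pipeline. This is purely my guess at the shape of it, not anything Meta has published; both scoring functions are hypothetical stand-ins.

    # Hedged sketch, not Meta's actual system: an LLM "second opinion"
    # gating borderline enforcement decisions. Both helper functions
    # below are hypothetical stand-ins.

    def classifier_score(post: str) -> float:
        # Stand-in for a first-pass policy classifier (0 = fine, 1 = violating).
        return 0.7 if "weed" in post.lower() else 0.1

    def llm_second_opinion(post: str) -> str:
        # Stand-in for an LLM asked to re-judge the post with fuller context.
        return "ok" if "harvest moon" in post.lower() else "violating"

    def should_enforce(post: str) -> bool:
        score = classifier_score(post)
        if score < 0.5:
            return False   # clearly fine: no action
        if score > 0.95:
            return True    # high confidence: act without a second opinion
        # Borderline: ask the LLM before taking an enforcement action.
        return llm_second_opinion(post) == "violating"

    print(should_enforce("Clearing weeds off my Harvest Moon farm"))  # False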

The article could have a better structure: they could introduce the issues at play, then explain how this kind of direction (lowering the error rate) could be positive for expression.

Not in this post, but in a post linking to this post.

I think there was a missed opportunity here for the EFF to point to *many examples* of unreasonable takedowns.

eff.org/deeplinks/2025/01/eff- The EFF might have goofed by "applauding" it (the changes would presumably result in fewer erroneous takedowns / demotions) in a post that doesn't weigh the political heat from Meta's other decisions.

It's hard to tell from this article what's actually going to happen, so I suppose we'll see.

about.fb.com/news/2025/01/meta
"As part of these changes, we will be moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations."
Curiously, misinfo is already spreading about this post about handling misinfo. This, for instance, is being reported as them moving exclusively to Texas (though, to be fair, they did name Texas specifically).

about.fb.com/news/2025/01/meta
"Over time, we have developed complex systems to manage content on our platforms, which are increasingly complicated for us to enforce. As a result, we have been over-enforcing our rules, limiting legitimate political debate and censoring too much trivial content and subjecting too many people to frustrating enforcement actions.

For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies). This does not account for actions we take to tackle large-scale adversarial spam attacks. We plan to expand our transparency reporting to share numbers on our mistakes on a regular basis so that people can track our progress. As part of that we’ll also include more details on the mistakes we make when enforcing our spam policies."
Yes, that is an issue. At that scale the numbers are large: one to two mistakes per ten actions on millions of daily removals works out to on the order of hundreds of thousands of mistaken actions every day. For instance, an algorithm might censor someone talking about Harvest Moon because they said the word "weed".
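
Here's a minimal sketch of how that kind of false positive happens with naive substring blocklists (the blocklist and function are made up for illustration, not any platform's real filter):

    # Illustrative only: a naive substring blocklist, the kind of filter
    # that produces the false positive described above.
    BANNED_TERMS = {"weed"}  # hypothetical blocklist entry

    def naive_filter(post: str) -> bool:
        # Flag the post for removal if any banned term appears as a substring.
        text = post.lower()
        return any(term in text for term in BANNED_TERMS)

    # False positive: a post about the farming game Harvest Moon.
    print(naive_filter("In Harvest Moon you spend hours clearing weeds"))  # True -> wrongly removed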

mastodon.social/@GDPRhub/11376
"New decision from Italy: The DPA held that, under Italian law, the consent of both parents is needed in order to share the picture of a child on a social network."
What do you think of this decision?

I see a few takes which assume that anime-style porn appeared only a few years ago (rather than having been around for many decades).

When you are reading an article which references manga from sixty years ago.

Context: concern trolling by authoritarian regimes.

If it's a funding thing, it'd make sense to say something like "we provide grants to orgs which share our values and work on relevant projects," then list those values and the grantees (with relevant info about each).

I get that what they've put out so far is a draft, but that comes to mind.

For instance, if they just provide grants and the actual thing is run by another team, that should be clear upfront. In some bits they say "we do this". Who is "we"? An org that's a member of this org? But if you look a bit further, it sounds like a funded org.

My advice to COSL would be to make the divisions between the funding org and the funded org (or whatever the relationship is) clear, because I doubt they want someone to mistake it for them running the other org directly.

If you're confused by the date in the URL showing November, that's because the page used to say something like "a conference has been scheduled for December" and was updated after the conference.

jfsribbon.org/2024/11/blog-pos There was a conference last month (scheduled on short notice) to discuss the urgent issue of financial censorship, a serious threat.

Content moderation / Content policy > "Trust & Safety"

"Trust & Safety" ignores the politics and presents things as objective.

"Trust & Safety" is a cynical PR term coined by Big Tech.
