The contradictions in Bluesky run deep. A thread on #FreeSpeech, Bluesky, and promises.
Mike talks of a rosy ideology involving user freedom. However, that was never the main media pitch. The pitch was simply that Bluesky wasn't Twitter.
They promised sunshine, rainbows, and unicorns: an unrealistic promise. Now the media has determined there are users who are too much, clobbering a former NYTimes columnist, Jesse (and a newer pundit), and critics are voicing their concerns, including a NYTimes journalist. #FreeSpeech
https://www.techdirt.com/2025/06/20/community-and-choice-are-not-bubbles/
This has less to do with how a site is managed, or with its technical architecture, and much more to do with media hype cycles. The framing curiously ignores that Bluesky is not the only other platform. There are Threads, Reddit, and Discord, yet they present it as Twitter vs. Bluesky. #FreeSpeech
Likewise, arguments like "the rigged system" seem to fall apart when you compare Bluesky to another site (I'm aware there may be people who dislike a recent FB policy change).
Finally, as we've covered before, a lot of Bluesky's freedom is *illusory*. #FreeSpeech
Bluesky talk. #FreeSpeech
I think one of the big issues with Bluesky when it comes to #FreeSpeech is the lack of privacy settings.
Another issue with Bluesky that relates to #FreeSpeech is the small character limit.
A longer limit would allow users to add more context and to avoid confusion.
There is no reason to have such a small character limit. This is a mistake also made by Mastodon (albeit with a small improvement), where developers decide to do something simply because Twitter did it in the past.
Even on Twitter, the small character limit was always a bad architectural decision, and it was always stifling.
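Since the posts above treat the character limit as an architectural choice, here is a minimal, stdlib-only Python sketch of why even "length" is a design decision and not a given. The helper names are hypothetical, and the 300-character cap is only the commonly cited Bluesky limit (Mastodon's default is commonly cited as 500); real platforms use full grapheme clustering (Unicode UAX #29), which this sketch only approximates.

```python
import unicodedata

def visible_length(text: str) -> int:
    # Rough user-perceived length: skip combining marks, so that
    # 'e' followed by U+0301 (combining acute) counts as one character.
    # Real grapheme clustering (UAX #29) is stricter; this is only a sketch.
    return sum(1 for ch in text if not unicodedata.combining(ch))

def fits_limit(text: str, limit: int = 300) -> bool:
    # 300 is the commonly cited Bluesky post cap; treat it as an assumption.
    return visible_length(text) <= limit

post = "cafe\u0301"  # renders as 'café'
print(len(post))             # 5 code points
print(visible_length(post))  # 4 user-perceived characters
```

The gap between `len` and `visible_length` is the kind of detail a platform has to pin down before it can even enforce a limit, which is part of why the limit is an architectural decision rather than a cosmetic one.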
I have a #privacy concern about a proposal part of the E.U. Parliament came up with. If a medical professional reports someone because they "suspect" they may have been involved in child exploitation, that would likely turn a sizeable number of people off from seeing any medical professional, or push them to self-censor when doing so (which could be detrimental in a number of ways). #privacy #HumanRights
Note that "exploitation" is defined broadly, so it's not just abuse; it might, perhaps, cover viewing images.
There has to be a point at which turning medical professionals into snitches, rather than letting them actually do their jobs, becomes a problem.
Plus, this isn't something which is done with practically any other crime, so it seems strange to keep lowering the threshold.
One proposed mitigation seems to be to exempt services which are explicitly for abuse prevention; however, such an overt focus on that might be intimidating to someone, and it would necessarily exclude someone who does not believe they are likely to abuse anyone. #privacy #HumanRights
"proximity to abusers"
Not in a passive way, for instance, you wouldn't argue there is a proximity, because someone who has happened to have abused watches television. I mean, there is a dependence there, one which might appear in a community dynamic.
otherwise involves a sexual proliferation depicting someone's likeness w/o their (adult) consent) make sense, but more abstract and conceptual prohibitions do not (and are very problematic when it comes to #HumanRights).
For more, I have a post about various things to do with porn. #FreeSpeech
In addition, in a black market, there is no need to follow any standards or to have any values. It's not unlikely a prohibition selects out such people. Blanket prohibitions are a very bad idea and a bad form of policy. A prohibition restricting content involving abuse (or #HumanRights #FreeSpeech
Software Engineer. Psy / Tech / Sex Science Enthusiast. Controversial?
Free Expression. Human rights / Civil Liberties. Anime. Liberal.