https://reclaimthenet.org/uncs-crusade-against-anonymous-apps-sparks-free-speech-firestorm
"The University of North Carolina (UNC) is moving to ban anonymous social apps, supposedly out of declarative concern for the students’ well-being."
"However, the Foundation for Individual Rights and Expression (FIRE) non-profit sees anonymous apps as valuable tools for students to express themselves without fear, as self-censorship has been on the rise in US universities in recent years.
According to FIRE’s Program Officer Jessie Appleby, blocking these apps is tantamount to “getting rid of that outlet for constructive speech just because of a small amount of offensive speech, and that’s generally not how you want to approach speech.”"
#FirstAmendment #FreeSpeech #privacy
https://www.techdirt.com/2024/05/10/congressional-committee-threatens-to-investigate-any-company-helping-tiktok-defend-its-rights/
"As the rest of that paragraph makes clear, there was very much an implied threat that Congress would investigate organizations working with TikTok to defend its rights. I’m also hearing that others, like PR agencies and lobbying organizations that work with TikTok, are now facing similar threats from Congress."
This looks like a violation of the #FirstAmendment. #FreeSpeech
Where were you between the hours of 3 am and 4 am? No, I'm not a cop, I'm a marketer. #privacy
I need to verify you are not a child.
Just send me a copy of your ID, bank records, and credit card details. Thanks. #privacy
A flood destroyed the town. The economy isn't doing too well now. I wonder why. Huh, quite a few people are drinking alcohol now (maybe from losing their homes). Well, it must be the drinking that's causing it.
Not a real example but you get the analogy.
It seems we're at the point where stupid takes from apparent students (e.g. Tammana Malik, who specializes in intellectual property law) are uplifted as if they're serious policy takes. For instance, talking about Section 230 as if changing it would only make anti-social conduct go away, rather than broadly chilling expression, leading to poorer moderation practices (i.e. blunter instruments), and inviting a greater number of frivolous lawsuits, which might be good for a lawyer's career but bad for everyone else.
While writing about policy might be an interesting project for a student, and in that context it might be alright, the problem arises when a stupid take is treated as if it were a serious and informed one, rather than a clumsy and harmful one. It's the typical "I've been here for a few minutes and I have a simple fix in mind for all our problems" attitude, ignorant of all the ways in which "simple fixes" have been harmful in the past. The issues involved are fairly nuanced.
Then, there is an invocation of the slogan "safety by design", which was invented by a foreign politician more concerned with looking powerful over tech companies (even in a manner which is extremely harmful), and with shaking hands with Discord's executives, than with protecting anyone's rights. "Looking powerful" is not a valid policy goal.
It is a close cousin of "duty of care", where any time a feature is misused, that is taken as an invitation to demand a company "do something"[1], even if that something is completely unreasonable. Then, someone pretends this is somehow the same as workplace safety, where measures to "make things safer" do not implicate anyone's rights.
Vacuous demands to "do something" are neither useful nor productive, but they do contribute to a higher word count for an article. So, in that sense, they might serve some purpose, even if, practically speaking, they're harmful to society.
There are also elements of hypothesizing fantastical scenarios which "require" rights-violating interventions and barely have any substance to them, or where, if there is any, the response is not remotely proportionate (much like shutting down a park because crime might happen there). Just because someone can come up with an idea or hypothetical doesn't mean it is useful to do so. Maybe that is useful for padding the word count, but it is not in the slightest useful here.
One example of this is VR and "CSAM" (which has never been documented to be an issue there, and as I've covered before, VR would be a very poor medium for disseminating things like photographs through). There is hardly an incentive for someone to do so, and it would be a lot of work. And if someone is doing it, how about going after them specifically? Then, there is a general sense that bad people might do bad things to people with VR. If so, we can presumably punish them for that, or someone could use the tools which already exist to keep people a certain distance away, or something else which doesn't involve clumsy censorship (censoring, say, porn, under the same interpretation as in the porn science post, wouldn't solve anything, though it would stifle a lot of legitimate usage #FreeSpeech) or messing around with statutes which the writer doesn't understand (they also clearly don't understand the First Amendment, although that is a whole story of its own which I'd rather not get into here). Comparing it to "AI" also doesn't make it akin to "AI"[2] (and concerns about free expression apply there too). #Metaverse
It also remains fascinating how something like VR, which hardly anyone uses so far and which is considered a flop, still attracts people to regurgitate ignorant hot takes about what should be done, almost as if it were a laboratory. It's tempting to ignore these takes; however, it might also be dangerous to do so when someone is talking about messing with Section 230 and other sensitive statutes.
1 https://en.wikipedia.org/wiki/Politician%27s_syllogism Politician's syllogism.
2 https://qoto.org/@olives/112402648186219830 Commentary on the U.S. Department of Commerce's "AI" takes.
https://arxiv.org/abs/2405.05904 Apparently, "AI" models have a hard time learning new knowledge via fine-tuning, instead primarily relying on pre-training.
https://www.eff.org/deeplinks/2024/05/no-country-should-be-making-speech-rules-world I urge you to look past Elon's personality and to look at the dreadful precedent it sets. #auspol #FreeSpeech
Basically, there are a load of apples-to-oranges comparisons and tortured stretches to try to make out that there is a story when there really isn't.
We also know not to take the people who wrote this seriously because this is quite the piece of misleading clickbait.
https://nichegamer.com/helldivers-2-is-still-effectively-banned-in-countries-without-psn/ Well, that's not good.
It's no secret that these bots are unreliable; for instance, they have a strong tendency to make things up, and nobody should rely on any of their outputs.
ChatGPT (and other OpenAI models) in particular also shows serious symptoms of being over-trained.
Still, chasing every hypothetical and "scoop" gets really silly.
https://www.theguardian.com/technology/article/2024/may/10/is-ai-lying-to-me-scientists-warn-of-growing-capacity-for-deception Do you think The Guardian is scared of ChatGPT competing with their site? #AI #ukpol
While you might be able to interpret it more narrowly, if you squint at it, I wouldn't count on people reading that document doing so.
https://www.wired.com/story/what-happens-when-a-romance-author-gets-locked-out-of-google-docs/
In 2024, someone should reasonably be able to expect to mind their own business without a company digging through their private files (or files shared with a small, select group). Legislation might be passed to curb this practice, and already has been in some jurisdictions. That said, just because someone could theoretically look through someone's files doesn't mean they should.
When it comes to moderation practices, it is very inappropriate for #Google to attempt to moderate "sexual content" here, and it feels like something which could easily trip people up. It is inherently user-hostile, and there isn't a good reason for it.
The article covers this specific case, along with a few other cases more generally and briefly. The following passage is not about this case:
"To a banhammer, every query looks like a nail: depictions of rape disappeared, but so did posts by rape survivors."
There are problems with this passage. For instance, the writer focuses on one specific case, "posts by rape survivors", and fails to unpack the broader implication of pieces of fiction (with dark themes) being censored, which is an obvious incursion on freedom of expression. By failing to engage with the main problem at hand, it also becomes easier for concerns about censorship to be ignored entirely.
https://qoto.org/@olives/112362450620045294 This is a large part of why I will just point people to my new porn science piece directly.
I'm not saying someone can't cover the "posts by rape survivors" case. In fact, it's a fairly important one. What I'm saying is that they shouldn't cover it exclusively.
In a way, this reminds me of someone writing a piece to argue that "ageplay" shouldn't be censored. Instead of arguing that it isn't a form of abuse and that someone shouldn't be discriminated against because of the actions of a few criminals (guilt by association), they relied entirely on the argument that it isn't inherently sexual. That argument isn't inaccurate (in a number of cases it isn't sexual), but it misses the point.
Software Engineer. Psy / Tech / Sex Science Enthusiast. Controversial?
Free Expression. Human rights / Civil Liberties. Anime. Liberal.