
@freezenet @mmasnick flipboard.social/@ScienceDesk/ Here's an egregious article. It implies it's bots like ChatGPT "deliberately deceiving people" when it's actually a bot made to play a negotiation game.

Sometimes, I hear people talk about Big Tech as if censorship from them were new and they had never done anything like this in the past.

It's all very naive.

If you dig around, you can find Twitter banning people over a decade ago for saying the word "tit". Presumably, they discovered that such censorship is not fun for anyone.

Olives boosted

infrastructure.gov.au/have-you
It looks like they've moved another consultation up (for political reasons, it seems). This time for Australian online control. You can provide feedback there.

Some core things to consider:

One is the ratings type stuff being handled by the other consultation. Some of that crops up here too and it might be useful to refer to my other post on this: qoto.org/@olives/1122637219951 I've also written a new piece on porn science here: qoto.org/@olives/1123624506200 (I'd be wary of any calls to censor any sort of porn)

It mentions a "duty of care". The problem with a duty of care is that any time something goes wrong, that is an invitation for someone to attack a company, and there might not be anything a company could have reasonably done in that situation. Someone might even ask for things which aren't reasonable or particularly effective. There is also a cognitive phenomena where events in the past feel more predictable than they actually are[1].

There are comparisons to "workplace safety" but it is worth considering that matters of speech are not the same as wearing something to protect your head or feet on a construction site. At worst, a company might expend more resources to address a particular hypothetical. It is, however, not the same as someone's rights being violated.

Then there are words like "reasonable": frankly, someone could argue that something you find ridiculous is "reasonable". It is also worth considering the intent of such language. The intent is typically to push for someone to "do more", even if that "more" might be harmful, sometimes even counter-productive[2].

Removing footage of "murders" could lead to evidence of war crimes being removed[3].

Some of the language is vague and seems to depend a lot on someone interpreting it properly. Like in the ratings consultation post, I would argue for a strong presumption against censorship for fiction in media that is for the purposes of entertainment (i.e. video games, books, and so on).

There is a certain expectation that services in other countries should follow whatever officials in Australia want, but that is not really how the Internet works, and it could be harmful to expect that it does.

And yes, this one covers "age verification" for things like porn. As noted in one of my other posts, there can be privacy implications (including breaches[4]), and it could also lead to content or services becoming unavailable entirely, particularly when you consider the global nature of the Internet.

Update: In light of [4], I've made a new post.

[1] en.wikipedia.org/wiki/Hindsigh Hindsight Bias

[2] en.wikipedia.org/wiki/Politici Politician's Syllogism

[3] theintercept.com/2017/11/02/wa YouTube and Facebook Are Removing Evidence of Atrocities, Jeopardizing Cases Against War Criminals

[4] wired.com/story/outabox-facial The Breach of a Face Recognition Firm Reveals a Hidden Danger of Biometrics

Olives boosted

infrastructure.gov.au/have-you
Ever been irritated by petty Australian Government censorship[1]? Well, the Australian Government is running a consultation on that. You have a chance to have a say on the matter.

If there are other areas of censorship which you'd like addressed, you can tackle those as well; I am simply covering what comes to mind for me. The two main ones are the particular brand of puritanism which the government has sometimes had, and the irrational fear of games containing "drugs and alcohol" (even going as far as banning these entirely at times). There was also a censored game which appeared to allow players to perform drone strikes on tanks, perhaps due to fears of this seeming too similar to the situation in Ukraine (the precise classification appeared to be "criminal instructions" or something to that effect).

While what is happening to the folks from Ukraine is most despicable, and war more generally is tragic, I don't think there is any justification for this sort of censorship. There should be a strong presumption against censoring fictional content in general.

For violence, animated violence should probably be rated somewhat lower than more realistic violence. It doesn't make a lot of sense to treat these the same (unless the rating is low enough that it doesn't matter).

For sexual content, I have a couple of recommendations here:

1) If it involves a fictional character who doesn't exist (i.e. anime / manga), there shouldn't ever be a reason to issue an RC rating. At most, maybe an R18 rating. A lower degree of eroticism or nudity (not really porn) might be present in anime, and any rating should avoid treating that harshly. It doesn't matter what the fictional character looks like.

I feel that muddling reality and fiction here really diminishes the seriousness of things like abuse. There also isn't a scientific basis for that sort of censorship; [2] goes into that (and other related matters). Some sort of sex education (perhaps around respecting someone's boundaries) might be better than relying on crude censorship which does not appear to be effective (and has harmful drawbacks of its own, including even a harmful "War on Drugs" type phenomenon when taken to an extreme).

2) For content containing real human actors, as a rule of thumb, if the content is produced with the (obviously adult) actor's consent, it should be permitted. If there is to be any limitation, it should involve an objective standard of serious physical harm, rather than the remote possibility that someone might be offended by the content. You also have to be wary of the Board construing this far too broadly by deciding that a very mundane activity might carry a remote possibility of physical harm. They've done this in the past (as has the British board).

Neither of these two recommendations mean that every site has to carry every possible kind of content.

As a rule of thumb, you might want online content to be treated far more liberally than content broadcast on TV. If you're not careful, they might try to impose stricter TV standards outside of that context, despite those being inappropriate there. I don't think that is what people would expect. Online, in particular, tends to be more oriented around curating your own experience than relying on a broad-brush, one-size-fits-all solution.

In regards to the government wanting higher classifications for "simulated gambling", I'd be wary of construing terms like simulated gambling very broadly and assuming any game which contains it is primarily focused on gambling (or contains things like loot boxes). As an example, classic Pokémon games had a building in one city which had gambling machines. These elements made up a tiny portion of the game and the vast majority of gameplay does not involve these.

[1] refused-classification.com Many examples of petty censorship (even containing dramatic-sounding excuses for what is essentially mundane everyday content).

[2] qoto.org/@olives/1118889463563

Olives boosted

The first consultation regarding classification and censoring things like sexual expression is going to be closing soon so I recommend looking at that before it does.

Olives  
https://infrastructure.gov.au/have-your-say/statutory-review-online-safety-act-2021 It looks like they've moved another consultation up (for politi...

I've made some small alterations and added a new paragraph, although it isn't a big change.


Reposting as this seems like a good time to do so. Despite the scant / non-existent evidence for porn being such a bogeyman, it keeps getting cast as a scapegoat which is quite frustrating, so I am going to have to go over this... Again.

Even if online porn "might" be "problematic" to someone out there, censorship (or privacy-intrusive measures, which among other things might pose a security risk) would not be anywhere remotely near proportionate, especially as porn can be free expression to someone, expression which they might casually share as part of their more general interaction / engagement with others.

Sometimes, restrictions can lead to services becoming inaccessible entirely, rather than simply limiting them to people over a particular age.

A typical recommendation is sex education (perhaps teaching someone about respecting others' boundaries?), not censorship (which is harmful in its own ways). I don't mean criticizing someone for telling an offensive joke.

The science isn't really showing porn is this awful thing:

tandfonline.com/doi/abs/10.108
psyarxiv.com/ehqgv/
Two studies showing porn is not associated with sexism. One carried out by German scientists, another carried out by Canadians.

qoto.org/@olives/1104622745318
American scientists carried out a meta-analysis of 59 studies. They found porn isn't associated with crime. A meta-analysis is a study where someone studies studies.

pubmed.ncbi.nlm.nih.gov/314325
Nor does it necessarily seem this is the case among adolescents (the meta-analysis also points to that). Here, the minors who used more porn engaged in less sexual aggression.

psychologytoday.com/us/blog/al
qoto.org/@olives/1104002886657
There are even studies (across the United States, Japan, Finland, and more) showing that porn is associated with less crime, even among criminals.

pubmed.ncbi.nlm.nih.gov/310420
While an older Dutch study showed there might be worse levels of "sexual satisfaction" among adolescents using porn, a Croatian lab failed to replicate that.

sciencedirect.com/science/arti
This is a meta analysis on sexualization in video games. It finds that studies tend to pick cut-offs where it's difficult to distinguish signal from noise. This increases the number of false positives.

There are also results which contradict the theory of sexualization being harmful. In the end, it fails to find a link between sexualization and sexism, or between sexualization and mental well-being.

I'm also usually sceptical of apparent links, as the "scientific pile-on effect" (as one described it) drives people to go looking for "links" between porn and "something bad", however tenuous the link or methodologically flawed the approach might be (and later, that something is debunked, or the "link" turns out to be a phantom due to methodological limitations).

I could add it doesn't matter if they're "child-like" or "fictional children" (censorship here is far, far more likely to hit someone good than someone bad, who doesn't need it, and a bad actor could still do bad things)*. This necessarily excludes involvement of abuse or invasions of privacy. If it were actual real children, I'd oppose that on ethical grounds (though I still wouldn't want to burn down the Internet / sites because of unwanted bad actors). This is covered above, but it is also kind of common internet sense.

While I'm not making a point about anything in particular, to inoculate you against potentially problematic arguments, it's worth mentioning the basic precept that correlation does not imply causation.

Let's use ice cream as an example. Everyone loves ice cream, right? Well, I like ice cream. This also happens to be used as a classic example by others for this sort of thing.

Anyway, ice cream is correlated with crime. No one would say ice cream causes people to go out and commit crimes, though. Just because there is a "correlation" doesn't mean it is meaningful. And that's not the only way in which correlation might fail to imply causation. For instance, warm weather is a far more compelling explanation for this phenomenon. That might come in useful somewhere...
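The ice cream example can be made concrete with a toy simulation (all numbers here are invented for illustration, not real sales or crime data): both series are driven by a shared confounder (temperature), so they correlate strongly even though neither causes the other, and the correlation collapses once the confounder is subtracted out.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)

# The hidden common cause: daily temperature (degrees C).
temps = [random.uniform(0, 30) for _ in range(1000)]

# Ice cream sales and crime counts each depend on temperature plus
# independent noise -- neither one causes the other.
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]
crime = [1.5 * t + random.gauss(0, 5) for t in temps]

r_raw = pearson(ice_cream, crime)  # strong correlation, zero causation

# Subtract out the temperature effect; what remains is just noise,
# so the correlation between the residuals collapses toward zero.
ice_resid = [i - 2.0 * t for i, t in zip(ice_cream, temps)]
crime_resid = [c - 1.5 * t for c, t in zip(crime, temps)]
r_controlled = pearson(ice_resid, crime_resid)

print(f"raw: {r_raw:.2f}, controlling for temperature: {r_controlled:.2f}")
```

The raw correlation comes out strongly positive while the controlled one sits near zero, which is exactly the "warm weather explains both" point.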

Here are a couple which were added later:

reason.com/2015/07/23/despite- U.S. data shows teens are having less sex with each other (in a world with more porn).

Misapprehensions about porn can be more about expressions of sexual orientation than about porn itself. In fact, we've seen an Australian news outlet specifically singling out "anal sex" as a negative thing not that long ago; who would that disproportionately impact? pubmed.ncbi.nlm.nih.gov/297020 Also, moralizing can be harmful (and ineffective).

Typically, responsibility is put on individuals to behave in a manner that is reasonable to them, instead of looking for a scapegoat whenever someone behaves in a manner which could be argued to be negative. This isn't to discount external factors (i.e. socioeconomic ones) entirely but there isn't always something sensible which can be done. People live their own lives.

We might also want to look at how alcohol is handled. We tend to look at this through the lens of personal responsibility: someone is responsible for consuming it responsibly and not behaving inappropriately. Now, alcohol is not the same thing as porn; it is an actual substance, not some pixels on a screen. It further illustrates, though, how strange and unusual the idea of censorship here is.

Quite a few things which might get blamed on "the porn" are actually general mental health issues which could be dealt with more normally, and crucially, without conflating it with porn (which might even detract from dealing with someone's actual issues).

In fact, online censorship has increased in quite a few ways over the past few years and it doesn't appear to be any sort of panacea. It has, however, created a number of harms in its own right, including even murder, by practically forcing some sex workers to work with more dangerous clients. It also provides a space for abusive bigots to dwell in.

An addendum (from another post which might add useful context; we won't delve too deeply into this section):

An additional bit on why "porn censorship" (perhaps, even some themes) is bad.

Some points about censoring fictional content there (censorship is a bad idea):

It might fuel someone's persecution complex, especially in the context of *. The idea of a dangerous world where people are out to get them. Feeds anxiety, alienation. It's happened a fair bit. It doesn't seem to do anything positive.

Someone might be more inclined to see someone as an idiot or crazy (that's not wrong, lol). In any case, it poisons the well as someone is not seen to be credible or competent in these matters at all. Promoting distrust doesn't seem like a positive outcome.

It violates someone's free expression. People have these things called rights, that's important. This point comes from the original post, I'm aware I've covered this here more generally, still there may be value in reaffirming it.

Bad people don't need it. They could still do bad things. Good people are who'd suffer.

It violates the Constitution. Multiple constitutions.

Punishing someone because they resemble someone unpleasant isn't good. Also, due process still applies, in any case...

Can be a coping mechanism.

@mike_k qoto.org/@olives/1123806662147 There is a consultation open which relates to her (and censorship more broadly), that is my take on it.

And yes, I think it was a mistake to give her so much power. She used to be a liaison of sorts to talk to companies, then the government decided to turn her into an internet censor I think in 2019 / 2021. She keeps trying to look powerful and mighty.

Also: eff.org/deeplinks/2024/05/no-c

Olives boosted

@echo_pbreyer Since this mentions links briefly, I suspect the number of "links" is likely exaggerated by spam bots rather than someone communicating links to others.

I know of people who have reported such bots, and they can be very annoying. They behave like machines. The bots even respond to each other.

This document seems very extreme and single-mindedly focused on the possibility of "content being disseminated" to the exclusion of everything else. It also looks like the result of someone throwing every random idea they can at the wall.

It really sets Europe's reputation for human rights ablaze.

"It's not a platform's fault for blasting lots of reports which they reckon are very unlikely to be child porn and are never going to be followed up on." (paraphrased)

Actually, it is. Accusing someone of being a child predator (or whatever it is someone is thinking) is an egregious violation of and part of why is so notorious.

And if we're gonna admit they're never going to be followed up on, then surely it should be a no-brainer to prune it back to protect people's privacy? If it takes legislation to do this, then so be it.

This U.S. "AI" consultation doesn't adequately take free expression into consideration and you might want to address that.

Olives  
https://airc.nist.gov/docs/NIST.AI.100-4.SyntheticContent.ipd.pdf A branch of the U.S. Department of Commerce has shown up with hot "AI" takes. Som...
Olives boosted

I see another push against on here (which is good because it's a very bad censorship bill).

Olives boosted

thehill.com/opinion/national-s
"This week, the Senate may pass a bill granting the executive branch extraordinary power to investigate and strip nonprofits of tax-exempt status based on a unilateral accusation of wrongdoing.

The potential for abuse under H.R. 6408 is staggering. If it were to become law, the executive branch would be handed a tool perfectly designed to stifle free speech, target political opponents and punish disfavored groups."

Olives boosted

For an influencer, something like Instagram or YouTube might be a better choice, as it provides someone with a "comment section" which they can moderate.

The Twitter-like model of the fediverse is more tuned to random people casually going in to talk to random other people in perhaps the most random of ways (something which a few people seem to dislike).


The fediverse is probably not a place for influencers with a trillion followers and I'm not sure why people want it to be.
