
Looks like the game "Anime Maze Game - Visual 2D"(1) is being censored by Australia(2), probably because the system was built by freakin puritans (who worry about things which don't matter(3)).

As always, you can write to reps at the territory, state, and federal levels (4) to oppose any and all censorship.

1 play.google.com/store/apps/det

2 refused-classification.com/cen

3 qoto.org/@olives/1110833026508

4 efa.org.au/get-involved/lobbyi


The "loaded answers" are particularly interesting here (it's a multiple choice style survey).

Let's say someone is asking a question about a policy. Normally, you would expect there to be "Yes" and "No" as options.

Here though, there is instead something like "Yes, I think children shouldn't suffer" and "No, I think children should suffer" (it's not quite that but it is pretty close to it).

So unethical.


@glynmoody Conservatives ran a campaign saying that a bunch of terrible things would end up happening.

There appears to be another questionable survey, this time from a "think of the children" group (they're not known to carry out ethical surveys). I'm sure you will see it in the coming days. I don't want to give these kinds of vile and disingenuous people clicks though.

Some flaws (these are very likely not the only ones):

1) The first informational segment is not neutral or particularly nuanced. It instead frames the situation in a propagandistic manner favorable to the ideology of this group, and it mentions no drawbacks at all.

To understand why presenting something in such a one-sided manner is so problematic, particularly without acknowledging the long history of misleading claims and statements (or, if we're to be less charitable, what we'd refer to as lies), we only need to look at the example of "dihydrogen monoxide"(1).

2) One question conflates minors viewing online porn with abuse. This likely inflates the number of responses saying minors are "more at risk" now. I've been over why online porn is not a big deal (2).

3) The second informational segment deceives the respondent about what content might be flagged by the algorithm. There is no mention of the heated discussion around false positives either. They also claim that only "a few providers do scanning", but there appears to be no actual evidence for this claim (though, even if providers didn't scan, it's arguable they'd still have a right not to do so). They also leave out that the few providers which do scan appear to disproportionately account for the majority of known child abuse photos.

4) A question following this fails to note that most providers are probably already "preventing exploitation", though there are likely human rights considerations at play. No evidence is provided that they aren't.

The only "evidence" I've seen in around three years, unrelated to this document, is a Canadian group bringing up a few anecdotes where specific pieces of content didn't appear to be moderated to their liking. This Canadian group is very activist and appears to have zero or little regard for the human rights implications of their actions, they've even been accused of censoring historic stamps which they erroneously identified as "child abuse".

At other times, this Canadian group talks in vague terms like "broad" and "narrow" and does not actually say what sort of content it is flagging. This creates room for creative interpretations of "abuse" which don't actually involve abuse, and they refuse to define these terms. One of their advisors (who appears to be very responsive to conservative concerns, even fringe ones, and has often been preoccupied with things like "ritual abuse" in schools) explicitly refers to things which are clearly not abuse as "abuse". They network with organizations which do this. I'm also aware that the executive director of this organization has met with E.U. reps very recently.

5) There's some nonsense about it "being possible to detect things within E2EE environments". In the real world, companies would simply not implement E2EE at all, because that would be the most practical thing to do. It's a red herring argument in more ways than one (see the sketch after this list).

6) Loaded questions are used which are intended to make you feel like a bad person for not agreeing with the premise. This inflates responses in line with the group's ideology.
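
An aside to point 5: below is a minimal, purely illustrative Python sketch. Every name, value, and function in it is hypothetical and made up for this post; it is not any real provider's code, nor the actual design of any proposal. It just illustrates that any "detection within E2EE" has to run on the sender's device before encryption, because the server only ever sees ciphertext, which is why critics describe such proposals as working around E2EE rather than within it.

```python
# Hypothetical sketch only: shows where "scanning" would have to sit in an
# end-to-end encrypted (E2EE) message flow. Nothing here reflects a real
# provider's implementation.

import hashlib
from typing import Callable

# A made-up blocklist of content hashes distributed to clients (an assumption
# for the sake of illustration; real proposals vary and are contested).
FLAGGED_HASHES = {"0" * 64}


def encrypt(plaintext: bytes, recipient_key: bytes) -> bytes:
    """Stand-in for a real E2EE encryption step (details irrelevant here)."""
    return bytes(b ^ k for b, k in zip(plaintext, recipient_key * len(plaintext)))


def send_message(plaintext: bytes, recipient_key: bytes,
                 report: Callable[[bytes], None]) -> bytes:
    # The only place content inspection can happen is here, on the sender's
    # device, while the message is still plaintext.
    if hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES:
        report(plaintext)  # the client-side "detection" hook

    # From this point on, the provider (and any server-side scanner) only
    # ever sees ciphertext, so no further detection is possible there.
    return encrypt(plaintext, recipient_key)


# Toy usage: the key and the report callback are placeholders.
ciphertext = send_message(b"hello", b"not-a-real-key", report=lambda _: None)
```

The practical consequence is the one point 5 describes: rather than ship a hook like that, a provider can simply drop E2EE altogether, which is why the whole framing is a red herring.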

This is not an exhaustive list of all problems with the spying / censorship proposal (or this survey). I don't want to repeat all of that discourse in this post.

1 en.wikipedia.org/wiki/Dihydrog

2 qoto.org/@olives/1110833026508

Ylva defends this as a "standard normal practice".

When you're being accused of engaging in unethical practices (and she seems very unethical indeed), essentially saying "I do this all the time" doesn't seem like a particularly good defence...?


If it was up to me, I would have sacked Ylva a long time ago.

Cool to see U.N. Human Rights speaking in support of end-to-end encryption.

Imagine being a shill for Ylva and her bullshit.

reason.com/2023/10/13/israel-e

"In the wake of the Hamas terrorist organization's murderous attacks on Israel, the country's government is admitting—not for the first time—that even Israel's extensive security apparatus can't be everywhere to protect everyone. Under the pressure of bloody events, officials are again making it easier for civilians to acquire and carry firearms for self-defense."

The Second Amendment folks might be interested in this one.

nbcnews.com/news/us-news/state

"A Pennsylvania state trooper was caught on camera appearing to physically assault his ex-girlfriend after he allegedly abused his power to have her involuntarily committed to a hospital.

Ronald Davis, 37, is accused of having improperly obtained a warrant to have the woman committed without divulging his connection to her, according to the criminal complaint and a probable cause affidavit provided by the Dauphin County District Attorney’s Office.

While he was carrying out the order himself, he asked another person to record him as he appeared to strangle her and restrain her. During the 12½-minute video, released by the DA’s office, she repeatedly says, “I can’t breathe.”"

"The woman wound up being held at Lehigh Valley Hospital for four days, until her release Aug. 25. “The video and text communications with Davis show that [the victim] was rational and the involuntary commitment was improper,” the DA’s office said in the news release."

"After she was released from the hospital, the woman met with police and described ways she alleged Davis had sought to control her, including telling her “I know you’re not crazy, I’ll paint you as crazy” and “I know the law,” the probable cause affidavit says."

While I've always been critical of Twitter's censoriousness (particularly in regard to art, though there are also other areas where the "free speech absolutist" has failed to live up to that motto), it appears they're now just not bothering to surface sexual content or other "sensitive" content to followers (except in the following feed).

Presumably, this has to do with them not wanting to surface dead bodies, and, you know, sexual expression tends to get lumped in with that as "sensitive content" (which is problematic in its own right).

Another possibility is misinformation, which is problematic, because it would also take legitimate content down with it. I don't see what that has to do with sexual expression, other than "taking it as an opportunity to suppress 'offensive' content".

Why does Tim Cook even care what some disingenuous right-wing rag says? Why does he pay such close attention to them? Is he a fascist?

I don't think it is acceptable for Apple to engage in censorship at the behest of known right-wing rags (which are all too prone to presenting things they don't like in a particular way).


OAMA is pretty simple:

1) It forces Apple not to remove apps, though there are a few exceptions where it can.

2) It forces Apple to allow for "side-loading".

3) It forces Apple to allow alternate app stores.


techdirt.com/2023/10/12/new-yo

"The bills, the New York Child Data Protection Act and the Stop Addictive Feeds Exploitation (SAFE) for Kids Act (which doesn’t appear to have text live just yet), incredibly seem to be taking a page from equally censorial bills that have already been ruled unconstitutional in places like Arkansas and California. The SAFE bill is actually quite similar to a bill in Utah, which hasn’t been challenged yet, but I have to believe it will be soon, and it’s equally unconstitutional. Incredibly, the Data Protection Act itself cites the bill in Utah AND California’s Age Appropriate Design Code even though that bill has already been declared unconstitutional by a federal judge! Incredible."

"As with Utah’s bill, New York’s SAFE Act will require parental consent for anyone under age 18 to have a social media account, which means that if you’re an LGBTQ+ child and your parent disapproves of your identity, they can cut you off from your community support."

"It will also require “default chronological feeds” rather than algorithmically generated feeds, even though a recent study of chronological feeds found that they expose users to more misinformation than algorithmic feeds."

I'm more ambivalent on this one. If it's an option, then someone can just switch from one feed to another if the default option is not serving them well. Though, if there's no evidence it does anything, it's questionable for the government to come in and micromanage product design.

"As for the Data Protection Act, it will require age verification (since it says sites have to treat those under 18 differently), and, as we’ve seen with the rulings in California and Arkansas (not to mention multiple past Supreme Court rulings), that’s just blatantly unconstitutional as it ends up limiting adult access to content as well."

Yes, that is troublesome. If they're going to do some sort of privacy law, then it might be better to go down the avenue of pushing higher privacy standards for everyone, rather than carving out special ones for minors (which first requires determining who is a minor).

"Apple said it removed Chai from the app store for repeated violations of guidelines related to objectionable content and user-generated content."

While I'm not that fond of the Open App Markets Act, as it gets the government involved in regulating these companies (though it is permissible in this case), Apple is only adding to the case that it cannot be trusted and needs to be forced to stop interfering in commerce.
