Remember when Zoom and Dropbox lied about using E2EE? One nice thing about real E2EE is that its security properties can be demonstrated cryptographically rather than taken on the provider's word.
https://www.wired.com/2011/05/dropbox-ftc/
https://wersm.com/zoom-does-not-actually-support-e2e-encryption-for-video-meetings/
If Facebook wanted to exploit WhatsApp or similar to make money (beyond regular ads, sponsored posts, and business accounts), they probably wouldn't break the E2EE directly; they would mine the metadata, which they're already marketing as a tool for "fighting abuse".
Then they could keep selling the idea that they protect the confidentiality of your messages. In a chat called "puppies"? Alright, so they know you like puppies. The E2EE was never broken.
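The point above can be made concrete with a toy sketch (this is not any real protocol, and the field names are made up for illustration): even when the message body is end-to-end encrypted, the server still handles an envelope whose metadata it can read and aggregate.

```python
# Toy illustration only: an "E2EE" message as the server sees it.
# The ciphertext is opaque (only the endpoints hold keys), but the
# envelope metadata is plaintext to the provider.
from dataclasses import dataclass

@dataclass
class Envelope:
    sender: str          # visible to the server
    group_name: str      # visible to the server
    timestamp: int       # visible to the server
    ciphertext: bytes    # opaque: server cannot decrypt this

def server_side_profile(envelopes):
    """What a provider can infer without ever touching the ciphertext:
    count how often a user posts in each named group."""
    interests = {}
    for e in envelopes:
        interests[e.group_name] = interests.get(e.group_name, 0) + 1
    return interests

msgs = [
    Envelope("alice", "puppies", 1700000000, b"\x8f\x1c"),
    Envelope("alice", "puppies", 1700000060, b"\x02\x9a"),
    Envelope("alice", "work",    1700000120, b"\x41\x77"),
]
print(server_side_profile(msgs))  # {'puppies': 2, 'work': 1}
```

The encryption is never broken here; the profiling works entirely on data the server must see to route messages at all.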
@mitexleo One of the earlier iterations allowed the Digital Minister to overrule the nominally independent "regulator" whenever she saw fit (independent in name only: they tried to appoint the editor of a populist right-wing paper, one of their allies, to chair it*).
The House of Lords was supposedly going to do something about this aspect of it. I don't know if they did in the end.
The Digital Minister who came up with the bill was Nadine Dorries.
* https://www.theguardian.com/media/2021/nov/19/paul-dacre-pulls-out-of-running-next-ofcom-chair
https://inews.co.uk/news/nhs-psychiatric-wards-are-video-monitoring-children-and-adults-24-hours-a-day-sparking-privacy-fears-2553448 Cameras installed in the rooms of government-run mental hospitals spark privacy concerns.
@gustavoturner I dunno what is up with Europe lately.
Updated after double checking.
International Justice Mission (a group with an overtly religious name and a somewhat negative reputation) made a submission to insert "written story" into chat control (what critics call the E.U.'s "think of the children" surveillance proposal).
This would target things like literature, and probably even someone talking about things which happened to them. It is so monumentally stupid.
One of the countries where they have a branch, Germany, seemed a bit confused about fiction and reality during that same window of time: in '21 and '22 it was conflating fiction with abuse, particularly when it came to cartoons and literature. This is troublesome because they are *not* the same.
The assertions in this particular report are strong ones. Do they have any evidence whatsoever? Presumably evidence beyond searching for keywords that someone could spin as "related to" these things?
I suspect it's probably someone like Laila who conflates fantasy with reality, and is prone to talking in an exaggerated fashion.
https://reason.com/2022/04/09/the-new-campaign-for-a-sex-free-internet/ An important article from last year about who the real censors are.
Read why "Web Environment Integrity" is terrible, and why we must vocally oppose it now. Google's latest maneuver, if we don't act now to stop it, threatens our freedom to explore the Internet with browsers of our choice: https://u.fsf.org/40a #EndDRM #Enshittification #Google #WebStandards #DefectiveByDesign
https://www.sciencedirect.com/science/article/pii/S0955395923002025
"Changes in arrests following decriminalization of low-level drug possession in Oregon and Washington"
"We obtained arrest data for 2019 to 2021 for intervention states (Oregon and Washington) and control states (Colorado, Idaho, Montana, and Nevada). We calculated monthly rates for arrests overall and for violent crimes, drug possession, equipment possession, non-drug crimes, and a set of low-level crimes termed displaced arrests."
"There were no significant changes in overall arrests, non-drug arrests or arrests for violent crime in either state, relative to controls."
https://jamanetwork.com/journals/jamapsychiatry/article-abstract/2809867
"In this cohort study using synthetic control analysis, laws decriminalizing drug possession in Oregon and Washington were not associated with changes in fatal drug overdose rates in either state."
I've resisted commenting on a few internet control lobbyists. I thought I'd cover this one though:
The worry was that someone might encounter child porn on the Internet (or something they believe to be it).
1) It seems to be pretty rare. I suppose if someone spends a lot of time on the Internet, they *might* encounter it, especially over the years. Maybe.
2) I don't understand what the expectation here is supposed to be. It's not realistic for every bad thing on the Internet to never appear...
3) Burning things down simply because something *might* appear doesn't seem very proportionate or rights preserving... It's also unlikely to make a difference, or much of one, but that is secondary to this.
"Would AI porn reduce child abuse?"
The answer to that would be yes.
I honestly don't think this is an interesting question for a number of reasons.
A better question is whether AI panic would lead to incursions on free expression, privacy, due process, and other human rights. The answer to that is absolutely yes.
Prohibitions or restrictions tend not to be particularly nuanced, especially when the State is involved. For a number of reasons, the State is the worst place to look for nuance.
The State also tends to be very adversarial, and not particularly co-operative (toward advancing better ends), whenever it gets involved in something. Keeping the State out entirely seems like the best scenario.
Some arguments are very bad.
Someone might deliberately "send porn to a minor". There already appear to be laws that deal with this. A bad actor could also still bother a minor in plenty of other ways, regardless of what restrictions are aimed at good actors.
There are other ways in which someone could harass people. However, these are either already illegal or don't inherently involve any particular technology. Punishing good actors would not stop bad ones.
Do you know one of the things that struck me last year regarding #chatcontrol?
Someone pointed out that they rely on things like E2EE to talk to their psychologist, in a form of teletherapy. This proposal would essentially invade the sanctity of that space. That was one of the reasons they opposed it.
It really makes you think. What sorts of cases of therapy might be covered here? Also, could you feel secure knowing your words could be plucked out and twisted against you? Something said in therapy used against you? Or viewed by a third party?
https://jezebel.com/ashton-kutcher-thorn-sex-workers-1850852760
"Spotlight is regarded as especially dangerous as it uses Amazon’s Rekognition facial recognition technology, even though one test by the ACLU showed Rekognition misidentified 28 members of Congress, who were disproportionately people of color, as having previous arrests, and studies have discredited the accuracy of facial recognition algorithms generally."
"Kutcher claimed in 2017 that Thorn helped identify 6,000 U.S. sex-trafficking victims, including 2,000 children, in a six-month period by using Spotlight. But, as some journalists pointed out at the time, those numbers didn't seem to square with reality.
"From 2009 through 2015, FBI agents working on child sex trafficking cases identified just 175 underage trafficking victims on average per year, per the attorney general's 2015 annual report to Congress on trafficking reviewed by Reason.
"A 2020 version of this report specifies that the U.S. Health and Human Services Department's Trafficking Victim Assistance Program served 105 underage victims in 2018, 144 in 2019, and 307 in 2020."
https://www.engadget.com/2019-05-31-sex-lies-and-surveillance-fosta-privacy.html Oh. I forgot about this article. How nostalgic.
Software Engineer. Psy / Tech / Sex Science Enthusiast. Controversial?
Free Expression. Human rights / Civil Liberties. Anime. Liberal.