"they seem to think breaking encryption is a front for data brokers"
It's kind of true. The CRC operates out of the same building as a data broker. It's not hard to imagine this is meant to safetywash their reputation, as if collecting non-consensual data sets on people were really for "the children".
One of the shills I've seen a few times just so happens to come from there.
Also, Ashton just so happens to be a large investor in "AI", and just so happens to be trying to pitch AI as a magical solution for everything elsewhere.
He is also providing a "surveillance-based service" to one of his own companies (OpenAI) to make them look more "socially responsible" (at a time when they're under increasing scrutiny for unrelated reasons).
Clearview is also kind of a thing, and some of these "think of the children" people were also supporting that.
Clearview is a data broker which creates non-consensual data sets of people. They've also allowed their services to be used for non-law-enforcement purposes (as if there wasn't enough room for over-reach there).
Even when it's not all directly data broker related, they're still selling the idea of surveillance actually being a "good thing".
By the way, while the religious IJM casually cites "terminology guidelines" here, this document (from 2016) reads more like a propagandistic lobbying manifesto than a set of terminology guidelines.
It tries to encourage states to interpret terms like "child" in child porn legislation in an alarmingly broad manner, mingling reality with fiction.
It directly conflates reality and fiction, even giving explicit examples of fiction it disapproves of; it concern trolls with extremely rare "possibilities"; and it disseminates propagandistic language that others can use to blur the two.
At one point, it even tries to suggest the Lanzarote Convention, which explicitly has a "non-existent children are not covered" clause (and they admitted as much), was supportive of their ideology.
The dedicated domain for this document appears to have expired in late 2022 / early 2023.
This is not even directly mentioned in IJM's submission. They just wink at it with "the guidelines". Very sneaky. Deeply sinister.
Did you know the religious group International Justice Mission (IJM) tried to get the E.U. to criminalize written text as "child porn" via the controversial chat control?
https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12726-Fighting-child-sexual-abuse-detection-removal-and-reporting-of-illegal-content-online/F3337815_en
"written story" Very sneaky term slipped in.
It's not hard to imagine such a thing targeting fictional literature, roleplay, fantasy, and even someone talking about an event that happened to them.
It's simply spitting in the face of fundamental rights.
Remember when Zoom and Dropbox lied about using E2EE? One nice thing about genuine E2EE is that it's a verifiable property: if the provider can read or recover your content, it wasn't end-to-end encrypted.
https://www.wired.com/2011/05/dropbox-ftc/
https://wersm.com/zoom-does-not-actually-support-e2e-encryption-for-video-meetings/
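To make that concrete, here's a minimal sketch of what "end-to-end" actually means, assuming Python with the PyNaCl library (this is illustrative, not any provider's actual protocol): keys live on the clients, and the relay only ever sees ciphertext.

```python
# Minimal E2EE sketch using PyNaCl (assumed installed: pip install pynacl).
from nacl.public import PrivateKey, Box

# Each client generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"see you at 6")

# The server only ever relays `ciphertext`; without a private key it cannot
# recover the plaintext. That is the property Zoom and Dropbox claimed to
# provide but didn't.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"see you at 6"
```

If a provider can hand your plaintext over, or "recover" your content for you server-side, that property fails by construction.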
If Facebook were going to exploit WhatsApp or similar to make money, btw (beyond putting in regular ads, sponsored posts, and business accounts), they probably wouldn't directly break the E2EE; they would mine the metadata, which they're already marketing as a tool for "fighting abuse".
Then they could keep selling the idea of protecting the confidentiality of your messages. In a chat called "puppies"? Alright, so we know you like puppies. Didn't break the E2EE.
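A hedged sketch of how that works: nothing below touches the encrypted body, yet the envelope alone is enough to profile you. (The field names are invented for illustration; this is not WhatsApp's actual wire format.)

```python
# Hypothetical message envelope: the body is E2EE ciphertext, but the
# routing metadata around it stays readable to the relay.
import time
from collections import Counter

envelope = {
    "from": "+15550001111",    # who is talking: plaintext
    "to": "group:puppies",     # chat name: plaintext, minable
    "timestamp": time.time(),  # when they talk: plaintext
    "body": b"<ciphertext>",   # the only part E2EE actually protects
}

# The relay never decrypts "body", yet can still build an interest profile.
interests = Counter()
for msg in [envelope]:  # imagine millions of these flowing through
    if msg["to"].startswith("group:"):
        interests[msg["to"].removeprefix("group:")] += 1

print(dict(interests))  # {'puppies': 1} -- and the E2EE was never broken
```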
https://inews.co.uk/news/nhs-psychiatric-wards-are-video-monitoring-children-and-adults-24-hours-a-day-sparking-privacy-fears-2553448 Cameras installed in rooms of government-run mental hospitals spark privacy concerns.
https://reason.com/2022/04/09/the-new-campaign-for-a-sex-free-internet/ An important article from last year about who the real censors are.
Read why "Web Environment Integrity" is terrible, and why we must vocally oppose it now. Google's latest maneuver, if we don't act now to stop it, threatens our freedom to explore the Internet with browsers of our choice: https://u.fsf.org/40a #EndDRM #Enshittification #Google #WebStandards #DefectiveByDesign
https://www.sciencedirect.com/science/article/pii/S0955395923002025
"Changes in arrests following decriminalization of low-level drug possession in Oregon and Washington"
"We obtained arrest data for 2019 to 2021 for intervention states (Oregon and Washington) and control states (Colorado, Idaho, Montana, and Nevada). We calculated monthly rates for arrests overall and for violent crimes, drug possession, equipment possession, non-drug crimes, and a set of low-level crimes termed displaced arrests."
"There were no significant changes in overall arrests, non-drug arrests or arrests for violent crime in either state, relative to controls."
https://jamanetwork.com/journals/jamapsychiatry/article-abstract/2809867
"In this cohort study using synthetic control analysis, laws decriminalizing drug possession in Oregon and Washington were not associated with changes in fatal drug overdose rates in either state."
I've resisted commenting on a few internet control lobbyists. I thought I'd cover this one though:
The worry was that someone might encounter child porn on the Internet (or something they believe to be it).
1) It seems to be pretty rare. I suppose if someone spends a lot of time on the Internet, they *might* encounter it, especially over the years. Maybe.
2) I don't understand what the expectation here is supposed to be. It's not realistic for every bad thing on the Internet to never appear...
3) Burning things down simply because something *might* appear doesn't seem very proportionate or rights-preserving... It's also unlikely to make a difference, or much of one, but that is secondary to this.
Read why "Web Environment Integrity" is terrible, and why we must vocally oppose it now. Google's latest maneuver, if we don't act to stop it, threatens our freedom to explore the Internet with browsers of our choice: https://u.fsf.org/40a #EndDRM #Enshittification #Google #WebStandards
"Would AI porn reduce child abuse?"
The answer to that would be yes.
I honestly don't think this is an interesting question for a number of reasons.
A better question is whether AI panic would lead to incursions on free expression, privacy, due process, and other human rights. The answer to that is absolutely yes.
Prohibitions or restrictions tend not to be particularly nuanced, and that is especially the case when the State is involved. For a number of reasons, the State is the worst place for that.
The State also tends to be very adversarial, and not particularly cooperative (toward advancing better ends), whenever it gets involved in something. Keeping the State out entirely seems like a good scenario.
Some arguments are very bad.
Someone might deliberately "send porn to a minor". It appears there are already laws to deal with this? Also, they could still bother a minor in other ways, and chances are a bad actor could still do it no matter how many restrictions are aimed at good actors.
There are other ways in which someone could be harassing. However, those are either already illegal, don't inherently involve a particular technology, or both. And punishing good actors would not stop bad ones.
Do you know one of the things that struck me last year regarding #chatcontrol?
Someone pointed out they rely on things like E2EE to talk to their psychologist. It was a form of teletherapy. This proposal would essentially invade the sanctity of that space. That was one of the reasons they opposed it.
It really makes you think. What sorts of cases of therapy might be covered here? Also, could you feel secure knowing your words could be plucked out and twisted against you? Something said in therapy used against you? Or viewed by a third party?
https://jezebel.com/ashton-kutcher-thorn-sex-workers-1850852760
"Spotlight is regarded as especially dangerous as it uses Amazon’s Rekognition facial recognition technology, even though one test by the ACLU showed Rekognition misidentified 28 members of Congress, who were disproportionately people of color, as having previous arrests, and studies have discredited the accuracy of facial recognition algorithms generally."
"Kutcher claimed in 2017 that Thorn helped identify 6,000 U.S. sex-trafficking victims, including 2,000 children, in a six-month period by using Spotlight. But, as some journalists pointed out at the time, those numbers didn’t seem to square with reality.
From 2009 through 2015, FBI agents working on child sex trafficking cases identified just 175 underage trafficking victims on average per year, per the attorney general’s 2015 annual report to Congress on trafficking reviewed by Reason.
A 2020 version of this report specifies that the U.S. Health and Human Services Department’s Trafficking Victim Assistance Program served 105 underage victims in 2018, 144 in 2019, and 307 in 2020."
https://www.engadget.com/2019-05-31-sex-lies-and-surveillance-fosta-privacy.html Oh. I forgot about this article. How nostalgic.