A poorer (although well-meaning) argument I've seen is that "there is no evidence that porn is linked to bad things".
A better argument is that the idea has been thoroughly debunked / discredited (which is true, as shown here).
I don't really like speculating about relations or thoughts, even those of data brokers (or those adjacent to them).
But, it's not as if it's particularly a reach that a data broker might want to rebrand themselves.
Even if they didn't, it's still a worldview built on top-down management (control), data collection (surveillance), and buddy government (inherent human rights concerns), which then stubbornly ignores those fundamental rights concerns.
Read why "Web Environment Integrity" is terrible, and why we must vocally oppose it now. Google's latest maneuver, if we don't act now to stop it, threatens our freedom to explore the Internet with browsers of our choice: https://u.fsf.org/40a #EndDRM #Enshittification #Google #WebStandards #DefectiveByDesign
If I were to give someone the benefit of the doubt (although, circumstances push me away from that), I would caution someone away from Maslow's Hammer.
There is a saying that when all you have is a hammer, everything looks like a nail.
Whether it's a "ban", a "regulation", or "surveillance", these are pretty blunt instruments, and not necessarily useful / good.
"they seem to think breaking encryption is a front for data brokers"
It's kind of true. The CRC operates out of the same building as a data broker. It's not hard to imagine this is meant to safetywash their reputation: to frame collecting non-consensual data sets on people as really being for "the children".
One of the shills I've seen a few times just so happens to come from there.
Also, Ashton just so happens to be a large investor in "AI", and just so happens to be trying to pitch AI as a magical solution for everything elsewhere.
He is also providing a "surveillance based service" to one of his own companies (OpenAI) to make them look more "socially responsible" (at a time when they're under increasing scrutiny for unrelated reasons).
Clearview is also kind of a thing, and some of these "think of the children" people were also supporting that.
Clearview is a data broker which creates non-consensual data sets of people. They've also allowed their services to be used for non-law-enforcement purposes (as if there weren't enough room for over-reach there).
Even when it's not all directly data broker related, they're still selling the idea of surveillance actually being a "good thing".
By the way, while the religious IJM casually cites "terminology guidelines" here, this document (from 2016) resembles more of a propagandistic lobbying manifesto than terminology guidelines.
It tries to encourage states to interpret terms like child in child porn legislation in an alarmingly broad manner, mingling reality with fiction.
It directly conflates reality and fiction, even giving explicit examples of fiction which they disapprove of; it concern-trolls with extremely rare "possibilities"; and it disseminates propagandistic language which someone can utilize to conflate reality and fiction.
At one point, it even tries to suggest that the Lanzarote Convention, which explicitly has a "non-existent children are not covered" clause (and they admitted as much), was supportive of their ideology.
The dedicated domain for this document appears to have expired in late 2022 / early 2023.
This is not even directly mentioned in IJM's submission. They just wink at it with "the guidelines". Very sneaky. Deeply sinister.
Did you know the religious group International Justice Mission (IJM) tried to get the E.U. to criminalize written text as "child porn" via the controversial chat control?
https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12726-Fighting-child-sexual-abuse-detection-removal-and-reporting-of-illegal-content-online/F3337815_en
"written story" Very sneaky term slipped in.
It's not hard to imagine such a thing targeting fictional literature, roleplay, fantasy, and even someone talking about an event which happened to themselves.
It's simply spitting in the face of fundamental rights.
Remember when Zoom and Dropbox lied about using E2EE? Something nice about E2EE is that it's provable: claims about it can be checked, and false ones caught.
https://www.wired.com/2011/05/dropbox-ftc/
https://wersm.com/zoom-does-not-actually-support-e2e-encryption-for-video-meetings/
By the way, if Facebook were going to exploit WhatsApp or similar to make money (beyond regular ads, sponsored posts, and business accounts), they probably wouldn't directly break the E2EE; they would mine the metadata, which they're already marketing as a tool for "fighting abuse".
Then, they could sell the idea of protecting the confidentiality of your messages. In a chat called puppies? Alright, so we know you like puppies. Didn't break the E2EE.
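To make the "puppies" point concrete, here is a minimal, purely hypothetical sketch (not any real provider's pipeline) of how interest profiles could be inferred from chat metadata alone: chat names and activity levels, with every message body left as opaque ciphertext. All names and the keyword table below are invented for illustration.

```python
from collections import Counter

# Metadata a provider can plausibly see even under E2EE: chat names,
# participants, timestamps, volume. Message bodies stay encrypted.
chats = [
    {"name": "puppies", "messages_per_day": 40},
    {"name": "dog park meetup", "messages_per_day": 12},
    {"name": "work", "messages_per_day": 5},
]

# Toy keyword -> interest mapping; a real broker would use richer signals.
INTEREST_KEYWORDS = {"puppies": "pets", "dog": "pets", "work": "professional"}

def infer_interests(chats):
    """Score interests from chat names and activity, never reading content."""
    scores = Counter()
    for chat in chats:
        for word in chat["name"].lower().split():
            topic = INTEREST_KEYWORDS.get(word)
            if topic:
                # Weight by activity: busier chats imply stronger interest.
                scores[topic] += chat["messages_per_day"]
    return scores

print(infer_interests(chats).most_common())
```

The encryption is never touched, yet the output ranks "pets" as the user's top interest, which is exactly the kind of advertising signal the post describes.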
https://inews.co.uk/news/nhs-psychiatric-wards-are-video-monitoring-children-and-adults-24-hours-a-day-sparking-privacy-fears-2553448 Cameras installed in rooms of government-run mental hospitals spark privacy concerns.
https://reason.com/2022/04/09/the-new-campaign-for-a-sex-free-internet/ An important article from last year about who the real censors are.
https://www.sciencedirect.com/science/article/pii/S0955395923002025
"Changes in arrests following decriminalization of low-level drug possession in Oregon and Washington"
"We obtained arrest data for 2019 to 2021 for intervention states (Oregon and Washington) and control states (Colorado, Idaho, Montana, and Nevada). We calculated monthly rates for arrests overall and for violent crimes, drug possession, equipment possession, non-drug crimes, and a set of low-level crimes termed displaced arrests."
"There were no significant changes in overall arrests, non-drug arrests or arrests for violent crime in either state, relative to controls."
https://jamanetwork.com/journals/jamapsychiatry/article-abstract/2809867
"In this cohort study using synthetic control analysis, laws decriminalizing drug possession in Oregon and Washington were not associated with changes in fatal drug overdose rates in either state."
I've resisted commenting on a few internet control lobbyists. I thought I'd cover this one though:
The worry was that someone might encounter child porn on the Internet (or something they think is it).
1) It seems to be pretty rare. I suppose if someone spends a lot of time on the Internet, they *might* encounter it, especially over the years. Maybe.
2) I don't understand what the expectation here is supposed to be. It's not realistic for every bad thing on the Internet to never appear...
3) Burning things down simply because something *might* appear doesn't seem very proportionate or rights preserving... It's also unlikely to make a difference, or much of one, but that is secondary to this.
"Would AI porn reduce child abuse?"
The answer to that would be yes.
I honestly don't think this is an interesting question for a number of reasons.
A better question is whether AI panic would lead to incursions on free expression, privacy, due process, and other human rights. The answer to that is absolutely yes.
Prohibitions or restrictions tend not to be particularly nuanced. This is particularly the case when it involves the State. For a number of reasons, the State is the worst place for that.
The State also tends to be very adversarial, and not particularly co-operative (to advance better ends), whenever they get involved in something. Keeping the State out entirely seems like a good scenario.
Some arguments are very bad.
Someone might deliberately "send porn to a minor". It appears there are already laws to deal with this. They could also still bother a minor in different ways; chances are a bad actor could still do it, regardless of how laws target good actors.
There are other ways in which someone could be harassing. However, these are either illegal, and / or don't inherently involve a particular technology. Also, punishing good actors would not stop bad ones.
Software Engineer. Psy / Tech / Sex Science Enthusiast. Controversial?
Free Expression. Human rights / Civil Liberties. Anime. Liberal.