https://www.wired.com/story/911-s5-botnet-arrest/ Interesting. Like Wired, I hope no one was falsely accused of committing a crime over it.
Also, to be even clearer: if the content involves abuse, for example if it is derived from actual abuse material, then it is also prohibited (18 U.S.C. § 2256). From what I've heard, prosecution would likely require some element of intent.
From what I've seen in the United States, virtually every case where a "sexual depiction of imaginary children" (excuse the jargon, I don't like it myself) has turned up in court involved someone the police were already charging with some other crime.
This isn't because the police aren't investigating, either. Police have lost interest many times once it came to light that such material was all that was involved. It is also well known that prosecuting it on its own would violate the First Amendment.
The First Amendment has reportedly even been raised in some of those cases. The federal government itself publicly acknowledged last year that such material is generally protected by the First Amendment.
Legal scholars from Stanford also noted this year that a charge on that basis would likely fail to pass muster if it were tested in court.
Beyond that, it's simply not effective at combating crime, and it wastes time that could be spent on real criminals.
While someone doesn't seem to like the point, the point stands: a handful of inputs (i.e. individual people) are unlikely to make a material difference to a model's outputs.
If someone wants to make a copyright or privacy argument, they're free to, but I'm wary of exaggeration here, since that could lead to unintended consequences.
For the record, OpenAI is noticeably worse in that respect, although it is also more restrictive. It's hard to say why; it could be that whatever architecture they use to appear more sophisticated is also more prone to it.
In fact, I don't agree with everything that Dr. Tenbergen says, but at least she actually took the time to do a study, and has some years of experience. If she says something interesting, then I might listen.
But, the "AI ethicist" or the "image analysis guy" is not much better than consulting Gary the computer technician about something he hardly knows about.
If someone actually studied it, and they had a bad take, maybe assuming too much bad, then that might be one thing, but you have these people who don't actually know a whole lot (and haven't touched the field) who get consulted as if they have an defining opinion on the matter.
For whatever reason, they keep finding people with hot takes about porn but who don't actually study porn effects, instead they hear something alarming and they decide to echo it.
A guy who knows how to analyse images or data is not an expert in sexology.
That might be a step up from an "#AI ethics" person with hot takes about offensive content, but their opinion is worth a lot less than some make it out to be.
I don't think you need to be an expert in the human mind to realize that making Microsoft Recall opt-out is going to lead to a lot of people getting tripped up by it in a sensitive situation (perhaps one involving security).
If it is going to be there at all, this should be an opt-in instead.
I'm also wondering whether it should be implemented in some other way entirely, and whether there should be an API or similar for a program to exclude itself from it. #privacy
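For what it's worth, I'm not aware of any public per-program opt-out API; the closest thing appears to be a user-level policy switch. Below is a minimal sketch, assuming the reported DisableAIDataAnalysis policy value under Software\Policies\Microsoft\Windows\WindowsAI; that path and value name come from public reporting rather than anything I've verified, so check Microsoft's documentation before relying on it.

```python
# Minimal sketch: write the per-user policy value that reportedly turns off
# Recall snapshots. The registry path and value name below are assumptions
# based on public reporting, not an official per-application opt-out API.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsAI"  # assumed path
VALUE_NAME = "DisableAIDataAnalysis"                         # assumed value name


def disable_recall_snapshots() -> None:
    """Set the user-level policy value that reportedly disables Recall snapshots."""
    with winreg.CreateKeyEx(
        winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)


if __name__ == "__main__":
    disable_recall_snapshots()
    print("Policy value written; signing out and back in may be needed to apply it.")
```

A proper per-application API would still be preferable, since a blunt user-level switch is exactly the kind of thing people forget to flip before a sensitive situation.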
I didn't cover Endrass et al. specifically (it is referenced indirectly) because it concerns actual child porn, which, as we know, is not ethical. It's therefore more interesting as part of a suite for debunking anti-porn arguments than in its own right, where the most it has to say is that people who view that material tend to have different characteristics from actual abusers.
I am aware of it though.
Also, I don't really want to lump people who don't engage in actual child porn in with those who do, as I don't think it would be good to invite prejudice against good people.
I'm reading crazy nonsense that conflates teenagers viewing porn with being "abused", and teens making jokes about sex (or whatever) to each other with being groomed.
I don't even know how to comment on this. It's too crazy for me. #ukpol
More than half the world now faces a freedom of expression crisis.
“At no point in the last 20 years have so many people been denied the benefits of open societies,” says our executive director, Quinn McKew.
Read more in The Guardian's feature on our #GXR2024 report:
https://www.theguardian.com/global-development/article/2024/may/22/more-than-half-the-world-cannot-speak-freely-report-finds
https://www.defendonlineprivacy.com/ca/action.php You really should take the time to oppose the #California "#AgeVerification" bill, which for some inexplicable reason has a lot of support in the Assembly.
There are #privacy implications, there are security implications, it impedes speech, and it might make some sites inaccessible entirely (#FreeSpeech). It also violates the #FirstAmendment.