https://www.tomshardware.com/tech-industry/artificial-intelligence/stack-overflow-bans-users-en-masse-for-rebelling-against-openai-partnership-users-banned-for-deleting-answers-to-prevent-them-being-used-to-train-chatgpt Programming answers website (known for being run by pedantic jerks) bans users for deleting their answers to avoid them being used to train ChatGPT. #AI #privacy
I see another push against #KOSA on here (which is good because it's a very bad censorship bill). #FreeSpeech
I'm thinking that I could have put a bit more polish on that post, but after sitting on it for a while, I just wanted to get the news out.
In general, both the language *and* the argument are broad and abusable for NCII and that other one (for page 7), so both would have to be addressed. Frankly, the line itself doesn't actually add anything of value.
https://airc.nist.gov/docs/NIST.AI.100-4.SyntheticContent.ipd.pdf
A branch of the U.S. Department of Commerce has shown up with hot "AI" takes. Some of them are awful. In fairness, this part of it is said to be underfunded and undermanned (they also start out by saying these aren't necessarily endorsements or recommendations in a little note).
"Mitigating the production and dissemination of AI generated child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) of real individuals."
Well, it's nice to know they have presumably scoped themselves to only tackling actual problems, and not completely imaginary ones too.
"even when synthetic NCII and/or CSAM does not depict or appear to depict, real individuals" (page 7)
I think that using terms like "NCII" like this is very problematic. The NC in NCII refers to the consent of whoever is in the thing, not whether an outsourced contractor off in Kenya reckons it might be non-consensual. Isn't it also moving the goalposts? Instead of calling something a false positive, which it is, the definition gets changed in a slimy way. #FreeSpeech
This argument is also fundamentally flawed, and I think abusable enough that it is worth responding to this consultation (the other argument tries too hard to justify censorship and offers up broad and abusable language). This is actually the only place this comes up in the document, even if you could argue that some of its ideas are blunt instruments (for instance, on a later page, they admit that consent is relevant for NCII).
"Comments on NIST AI 100-4 may be sent electronically to NIST-AI-100-4@nist.gov with “Comment on NIST AI 100-4” in the subject line or via www.regulations.gov (enter NIST-2024-0001 in the search field.) Comments containing information in response to this notice must be received on or before June 2, 2024, at 11:59 PM Eastern Time." (from page 3).
Also, my new porn science post: https://qoto.org/@olives/112362450620045294
Page 9. While I could see someone disclosing that a professionally produced piece of text involved "AI", it would be silly to expect everything written with it to carry such a disclosure, and it isn't technically possible to enforce anyway.
Page 12. I'm not sure it is a good idea to add copyright enforcement metadata here.
I think that anything someone does with "AI" here is unlikely to be useful against a sophisticated state actor, especially the most obvious ones.
"people with disabilities and those with limited language skills regularly using generative AI to create content may be discriminated against if the content they publish on platforms is labeled as AI-generated"
Interesting, although the document fails to cover other risks to free expression.
Page 20. What about the context around the "terrorist" and "extremist" content? Also, I think the government should consider whether their ideas chill free expression prior to proposing them, in line with the values which underlie the #FirstAmendment. *Facebook does something* means nothing when Facebook is one of the platforms most notorious for suppressing vast swathes of legitimate expression.
Page 22. The document points out that some (perhaps many) metadata schemes have privacy issues.
For "provenance", it might be a better idea to have an optional additional metadata file alongside the main file, rather than trying to be "smart" about it (which violates the KISS principle). There are strong vibes of over-engineering here.
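To illustrate the sidecar idea: a minimal sketch of what an optional provenance file next to the main file could look like. The field names (`generator`, `ai_generated`) and the `.provenance.json` suffix are my own hypothetical choices, not part of any existing standard.

```python
import json
from pathlib import Path

def write_provenance_sidecar(media_path, generator, disclosed=True):
    """Write an optional '<file>.provenance.json' sidecar next to the
    main file, instead of embedding provenance inside the media itself.
    All field names here are hypothetical, not from any standard."""
    sidecar = Path(str(media_path) + ".provenance.json")
    sidecar.write_text(json.dumps({
        "generator": generator,      # e.g. name of the "AI" tool used
        "ai_generated": disclosed,   # simple boolean disclosure
    }, indent=2))
    return sidecar

# The main file is untouched; readers that don't care can ignore the
# sidecar entirely, which keeps the scheme simple (KISS).
path = Path("picture.png")
path.write_bytes(b"\x89PNG...")     # placeholder bytes standing in for an image
meta = write_provenance_sidecar(path, generator="some-model")
print(meta.name)  # picture.png.provenance.json
```

The appeal of this approach is that consumers who want provenance opt in by reading the extra file, while everything else (viewers, uploaders, old software) keeps working unchanged.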
Page 28. Fake faces are easy to distinguish, apparently.
Page 32. "is being debated" is an understatement. Detecting "AI"-written text is known to be completely unreliable.
Page 35. It's silly to think an algorithm can necessarily determine intent.
Page 39. Assuming that humans won't take a dodgy result at face value is really expecting too much from them.
Page 42. "Keywords" have a high false positive rate (there have been many issues in the past, including even PornHub, of all sites, wrongly accusing people of looking for *actual* child porn at very high rates). This can be partially alleviated by having more dedicated models for different things, but it can still be troublesome. This page also presumes that "sexual content" is harmful, which is not necessarily the case.
Page 46. This page likely exaggerates the harms and influence of potential inputs in data sets.
Also, it's one thing to be mean about particularly vile criminals, but the Nazi crap is creepy and approaches the territory of treating someone like a cockroach, which is disturbing, particularly if this is supposed to be a serious media outlet.
https://www.theguardian.com/australia-news/article/2024/may/07/sydney-council-bans-same-sex-parenting-books-from-libraries-for-safety-of-our-children
"A Sydney council has voted to place a blanket ban on same-sex parenting books from local libraries in a move the New South Wales government warns could be a breach of the state’s Anti-Discrimination Act."
If you're wondering, Australia also has people who push for things like this. #auspol #FreeSpeech
Going over a couple of important Australian consultations which involve things like free expression (i.e. sexual expression) and internet freedom. You can respond to these. #auspol #anime #FreeSpeech #FreeExpression
https://www.wired.com/story/outabox-facial-recognition-breach/
"Police and federal agencies are responding to a massive breach of personal data linked to a facial recognition scheme that was implemented in bars and clubs across Australia."
"“Sadly, this is a horrible example of what can happen as a result of implementing privacy-invasive facial recognition systems,” Samantha Floreani, head of policy for Australia-based privacy and security nonprofit Digital Rights Watch, tells WIRED. “When privacy advocates warn of the risks associated with surveillance-based systems like this, data breaches are one of them.”"
#privacy #FaceRecognition #auspol
They should consider using something other than PHP to power their site, lol.
https://www.defendonlineprivacy.com/ca/action.php
If you want to oppose the proposed #California "#AgeVerification" bill. #FirstAmendment #privacy
https://www.eff.org/deeplinks/2024/05/us-version-kosa-still-censorship-bill
"A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable."
"Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. The age verification requirements will drive away both minors and adults who either lack the proper ID, or who value their privacy and anonymity."
#FirstAmendment #FreeSpeech #KOSA #AgeVerification #privacy
The U.K. has a QAnon problem, and I'm sure you all already know this. #ukpol
How can #freesoftware grow and thrive within walled-garden environments? Learn how at #LibrePlanet 2024: https://u.fsf.org/444
Software Engineer. Psy / Tech / Sex Science Enthusiast. Controversial?
Free Expression. Human rights / Civil Liberties. Anime. Liberal.