https://www.techdirt.com/2023/05/24/heritage-foundation-says-that-of-course-gop-will-use-kosa-to-censor-lgbtq-content/ I'm not surprised.
For the past couple of years, the Heritage Foundation (a big conservative think tank) has been framing things like this as "saving the children" and pushing bills like KOSA.
https://www.eff.org/deeplinks/2023/05/congress-must-exercise-caution-ai-regulation
My worry is that any "regulation" would give a middle finger to open source projects while entrenching wealthy incumbents like Sam Altman's OpenAI and Google. Those billionaires grift over fanciful notions of algorithms becoming "sentient" or "destroying the world".
Google's grift: if someone sees an algorithm generate something "offensive", that's terrible, and this billion-dollar company needs to step in to "save you". Never mind that it's Google spending billions of dollars to keep those GPUs whirring for questionable gain.
Politicians worry an algorithm might destroy "democracy", something which has not happened over five years of an endless parade of algorithms. And that worry ignores how far older technologies could be used in the same hypothetical manner.
Propaganda operates the same way it always has: dumb messaging stirring up fear of minorities. Othering has always been a far more effective tool for authoritarians than any hypothetical application of AI. Just ask the Nazi Party.
There are real problems though, and ones which get glossed over.
This might be "predictive policing". A way to hide existing police practices, such as racial profiling behind a black box.
Worse, someone might presume flawed determinations by an algorithm are less biased. These systems are frequently accused of just regurgitating the same biases that cops often have.
This might be face recognition.
This might be an algorithm deciding who to hire, or micro-managing workers in a very unpleasant manner. In China, the idea of monitoring children to make sure they're always "fully alert" during classes was floated.
In the near future, an algorithm might be involved in an "assessment" deciding who gets parole, or deciding the myriad restrictions imposed after someone comes out of prison. Some of those restrictions are already said to be so burdensome that they amount to setting someone up for failure and re-imprisonment.
In some countries, algorithms have chased people for outstanding debts supposedly owed to the government, and many of those people turned out to owe nothing (Australia's "Robodebt" scheme being the infamous example). The threatening debt collection notices were very bad for recipients' mental health.
Moderation algorithms are notoriously imprecise:
"NSFW filters" tend to hit LGBT content (and use of such features are usually lobbied for by anti-LGBT religious groups).
Copyright filters have had great difficulty distinguishing ambient noise from music. They're also trivial to circumvent.
Worse still, a moderation algorithm might report someone to the police. That could ruin their life.
These are all nasty consequences of algorithms, and ones far more realistic (and, to me, scarier) than the fanciful "Skynet is coming!" scenarios.
Many of these scenarios don't even need a fancy neural network; a simple traditional algorithm can perpetuate the same harms.
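To make that concrete, here's a purely hypothetical sketch (the rules, weights, and inputs are all made up): a "risk score" with no machine learning at all, just a couple of hand-written rules. Because arrest counts mostly reflect where police already patrol, the score launders existing profiling into an official-looking number.

```python
# Hypothetical hand-written "risk score" -- no neural network needed.
# Arrest statistics reflect where police already patrol, so these
# inputs smuggle existing profiling into an official-looking number.
def risk_score(prior_arrests: int, neighborhood_arrest_rate: float) -> float:
    score = 0.0
    score += 2.0 * prior_arrests              # proxy for policing intensity
    score += 10.0 * neighborhood_arrest_rate  # proxy for patrol patterns
    return score

# Two people with identical histories, in differently policed areas:
print(risk_score(prior_arrests=1, neighborhood_arrest_rate=0.02))  # 2.2
print(risk_score(prior_arrests=1, neighborhood_arrest_rate=0.30))  # 5.0
```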
https://www.thepetitionsite.com/en-gb/takeaction/959/553/635/
Here's a petition to oppose the anti-E2EE parts of this bill (the U.K. OSB), although you should sign the other one too.
The OSB reads like a vague wish list of "wouldn't it be nice if the internet was like this?" while completely ignoring how the real world works.
Client-side scanning of private chat messages was top of the Today programme political debate this morning with @Mer__edith and Ciaran Martin, former Head of the National Cyber Security Centre.
Client-side scanning is a technology that intercepts and checks chat messages on mobile phones before they are encrypted.
@Mer__edith: these are mass surveillance measures that operate at scale. The government has used sleight of hand to put them in.
#e2ee #encryption #onlinesafetybill #ukpolitics
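To make the "before they are encrypted" part concrete, here's a deliberately simplified sketch of where the scan sits in the pipeline. Real proposals use perceptual hashes or ML classifiers rather than exact SHA-256 matches, and everything here (the watchlist, the toy cipher) is made up, but the interception point is the real issue.

```python
import hashlib

# Hypothetical watchlist of content digests pushed to the device.
WATCHLIST = {hashlib.sha256(b"forbidden example").hexdigest()}

def send(plaintext: bytes, encrypt, report) -> bytes:
    # The scan runs on the PLAINTEXT, before any encryption happens.
    if hashlib.sha256(plaintext).hexdigest() in WATCHLIST:
        report(plaintext)  # content leaves the device unencrypted
    return encrypt(plaintext)

# Toy stand-ins for a real cipher and a reporting endpoint:
send(b"forbidden example",
     encrypt=lambda p: bytes(b ^ 0x42 for b in p),
     report=lambda p: print("reported:", p))
```

The scan sees plaintext, so "E2EE plus client-side scanning" stops being end-to-end in any meaningful sense.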
Obviously, an accidental "resemblance" (which would likely come from very high realism, plus the fact that a lot of people look fairly similar, especially at scale) shouldn't count as deliberately making it look a certain way.
Resemblance in quotes because it'd probably be a real reach.
I think only a really bad faith actor with an axe to grind might go looking for coincidences, and probably even they don't have the time to bother.
https://apnews.com/article/target-pride-lgbtq-4bc9de6339f86748bcb8a453d7b9acf0 Target moves some LGBT products from the front of the store to the back after its workers were threatened in violent confrontations.
Bills like the UK's so-called Online Safety Bill and the just re-introduced EARN IT Act undermine safety by attacking end-to-end encryption. #noearnitact #makedmssafe
QT signalapp: Our position remains clear. We will not back down on providing private, safe communications. Today, we join with other encrypted messengers pushing back on the UK's flawed Online Safety Bill.
This is messed up 🐦elonmusk. People need to be able to trust their messages are safe, and rolling out a half-baked version of end-to-end encryption, and only for people who pay, doesn't do that. Stop f**king around and take end-to-end encryption of Twitter seriously. #MakeDMsSafe
QT elonmusk: Early version of encrypted direct messages just launched.
Try it, but don’t trust it yet.
https://www.techdirt.com/2023/05/23/fake-images-spread-on-twitter-fooled-media-spooked-stock-market-briefly/ I agree with the author: there is a moral panic around generative AI.
Also, another Elon screw-up: letting people pay him a mere $8 to feel like celebrities.
Everyone should have safe places to communicate online, & this half-assed approach isn't going to cut it. End-to-end encryption must be the default, & not just accessible to people who can pay for it. 🐦Twitter needs to take this seriously and #MakeDMsSafe
https://www.wired.com/story/twitter-encrypted-dm-signal-whatsapp/
HEY! Are you a part of an organization that uses Slack? And/or are you a part of any Slack communities? Send us a message and check out http://MakeSlackSafe.com to sign onto a letter calling for end-to-end encryption and blocking features at Slack! #MakeSlackSafe
https://chatcontrol.wtf The parser is having difficulty recognizing this link; it's probably the gTLD.
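My guess at the bug, sketched as hypothetical code (this is not the actual parser): a linkifier with a hard-coded TLD allowlist, written before newer gTLDs like .wtf existed.

```python
import re

# Hypothetical old-school linkifier with a TLD allowlist.
OLD_LINKIFIER = re.compile(r"https?://\S+\.(?:com|org|net|edu|gov)\b\S*")

print(OLD_LINKIFIER.search("see https://example.com/page"))  # matches
print(OLD_LINKIFIER.search("see https://chatcontrol.wtf"))   # None
```

If that's the cause, the fix is matching any TLD (or using a proper URL parser) instead of an allowlist.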
Snooping on metadata is not a solution to chat control either (although, these guys are greedy; they want all the data).
To quote Michael Hayden, former director of the NSA and CIA: "We kill people based on metadata".
Metadata can expose very intimate details of someone's life. If someone attends a political rally, I'm sure that metadata's of no value.
If someone makes a call to a mental health hotline, or engages in an online equivalent, I'm sure that metadata's of no value.
If someone is part of a marginalized group the government or society frowns upon, that can be used to make them very miserable.
In fact, as these kinds of government agencies tend to look at the world primarily through the lens of people being "potential criminals", they might read the metadata in a very paranoid fashion and easily jump to conclusions.
Alright. So, I think I've made clear that metadata has a great deal of value, and it's not something you want someone casually trawling through.
There are even messengers which go down the road of eliminating as much metadata as possible, such that the server has no clue who is talking to whom, or about what.
To some extent, Signal does this. Messengers like Cwtch are experimenting with taking it to a whole new level.
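For flavor, here's the core idea in a few lines of Python using PyNaCl's SealedBox. This is just the concept, not Signal's actual sealed sender protocol (a real design also has to authenticate the sender identity inside the envelope, which this sketch skips), and "alice"/"bob" are made up.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

alice = PrivateKey.generate()  # hypothetical users; in a real
bob = PrivateKey.generate()    # messenger, keys live on devices

# Alice puts her identity INSIDE the encrypted envelope, so whoever
# routes the message only ever learns "deliver this blob to bob".
envelope = b"from:alice|hi bob!"
blob = SealedBox(bob.public_key).encrypt(envelope)

# The server's entire view: a recipient plus opaque ciphertext.
server_view = {"to": "bob", "blob": blob}

# Only Bob can open the sealed box and learn who wrote to him.
print(SealedBox(bob).decrypt(server_view["blob"]))  # b'from:alice|hi bob!'
```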
The main selling point of E2EE is that it's private and secure. "Client-side scanning" is neither private nor secure.
It'd also be very easy for people to see it failing, start asking questions, and conclude that the company is trying to deceive or manipulate them.
From a business perspective, it would be crazy to implement. That is probably the point.
Ireland is probably taking a plausibly deniable angle, putting companies in a position where they'll practically never implement E2EE.
All this nonsense about "client scanning" is to distract from this. This is like Article 17 all over again, where Axel Voss MEP pretended it wasn't about upload filters.
Another factor is that one of the main "experts" the Commission is consulting is a white knight Hollywood actor who created a start-up a couple of years ago and is running around making impossible promises.
He has next to no actual knowledge or experience in this area.
It's not surprising to me the police would want to ban / restrict E2EE.
Anything which might theoretically remove obstacles for them is likely something a police force would like.
I really don't think it's worth elevating police opinions couched in convenience over and above human rights.