https://www.eff.org/deeplinks/2023/05/congress-must-exercise-caution-ai-regulation
My worry is that any "regulation" would give a middle finger to open source projects and entrench rich incumbents like Sam Altman and Google, who grift over fanciful notions of algorithms becoming "sentient" or "destroying the world".
Google's version of the grift is that if an algorithm generates something "offensive", that's a problem only a billion-dollar company can swoop in to "save you" from. Never mind that it's them spending billions of dollars to send those GPUs whirring for questionable gain.
Politicians worry an algorithm might destroy "democracy", something that has not happened in over five years of an endless ensemble of algorithms. The worry also ignores how far older technologies could be used in the same hypothetical ways.
Propaganda operates the same way it always has: dumb messaging stirring up fear of minorities. Othering has always been a far more effective tool for authoritarians than any hypothetical application of AI. Just ask the Nazi Party.
There are real problems, though, and they get glossed over.
This might be "predictive policing". A way to hide existing police practices, such as racial profiling behind a black box.
Worse, someone might presume an algorithm's flawed determinations are less biased than a human's. These systems are frequently accused of just regurgitating the same biases the cops already have.
This might be face recognition.
This might be an algorithm deciding whom to hire, or micro-managing a worker in a very unpleasant manner. In China, the idea was floated of monitoring children to make sure they're always "fully alert" during class.
In the near future, an algorithm might be involved in an "assessment" to decide who gets parole, or which of a myriad of restrictions apply to someone coming out of prison. Some restrictions are already said to be so burdensome that they amount to setting someone up for failure and re-imprisonment.
In some countries, algorithms have chased people for outstanding debts supposedly owed to the government, and many of those people turned out to owe nothing. The threatening debt-collection notices were very bad for their mental health.
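The best-known case is Australia's "Robodebt" scheme, which roughly worked by averaging a person's annual income evenly across fortnights and comparing that average to what they reported while on benefits. A minimal sketch of why that arithmetic manufactures false debts (all numbers here are hypothetical):

    # Sketch of Robodebt-style income averaging (hypothetical numbers).
    # A person earns for half the year, then is unemployed and on benefits.
    FORTNIGHTS = 26
    WORKED = 13  # fortnights with a job, the first half of the year

    annual_income = 26_000  # all earned in the first 13 fortnights
    reported = [2_000] * WORKED + [0] * (FORTNIGHTS - WORKED)

    # The flawed step: smear annual income evenly over every fortnight.
    averaged = annual_income / FORTNIGHTS  # 1_000 per fortnight

    # Benefits were only claimed in the zero-income fortnights, which was
    # honest. But the averaged figure makes it look like income was
    # under-reported in exactly those fortnights, so a "debt" is raised.
    for fortnight, actual in enumerate(reported):
        if actual < averaged:
            print(f"fortnight {fortnight}: reported {actual}, "
                  f"averaged {averaged:.0f} -> false debt raised")

Nothing fancy there: a division and a comparison, applied at scale with no human checking the result.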
Moderation algorithms are notoriously imprecise:
"NSFW filters" tend to hit LGBT content (and use of such features are usually lobbied for by anti-LGBT religious groups).
Copyright filters have had great difficulty distinguishing ambient noise from music. They're also trivial to circumvent, as the sketch after this list suggests.
Worse still is when a moderation algorithm reports someone to the police. That can ruin a life.
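On the circumvention point, a minimal sketch, assuming a deliberately crude fingerprint (the loudest frequency bin per window). Real filters are more sophisticated than this, but uploaders beat them with the same trick of small pitch or speed changes:

    import numpy as np

    RATE = 8_000  # samples per second

    def tone(freq, seconds=1.0):
        t = np.arange(int(RATE * seconds)) / RATE
        return np.sin(2 * np.pi * freq * t)

    def fingerprint(signal, window=1024):
        # Toy fingerprint: the loudest frequency bin in each window.
        peaks = []
        for start in range(0, len(signal) - window, window):
            spectrum = np.abs(np.fft.rfft(signal[start:start + window]))
            peaks.append(int(spectrum.argmax()))
        return tuple(peaks)

    original = tone(440.0)         # the "copyrighted" track
    shifted = tone(440.0 * 1.03)   # same track, pitched up 3%

    print(fingerprint(original) == fingerprint(original))  # True
    print(fingerprint(original) == fingerprint(shifted))   # False: evaded

A 3% pitch shift is barely audible, yet every window lands in a different frequency bin and the match fails.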
These are all nasty consequences of algorithms, ones far more realistic (and scarier to me) than the fanciful "Skynet is coming!" scenarios.
Many of these scenarios don't even need a fancy neural network. A simple traditional algorithm could perpetuate the same harms.
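For instance, here is a hypothetical "risk score" with no machine learning in it at all, just a couple of if-statements. If arrest counts reflect over-policing rather than actual offending, and postcode correlates with race, the bias passes straight through:

    # A hypothetical rule-based "risk score": no neural network required.
    # Postcodes and weights here are made up for illustration.
    HEAVILY_POLICED = {"2000", "2001"}  # hypothetical postcodes

    def risk_score(prior_arrests: int, postcode: str) -> int:
        score = 0
        if prior_arrests > 1:
            # Arrests, not convictions: this measures policing, not crime.
            score += 2
        if postcode in HEAVILY_POLICED:
            # A crude proxy that can quietly stand in for race.
            score += 1
        return score

    print(risk_score(prior_arrests=2, postcode="2000"))  # 3 -> "high risk"
    print(risk_score(prior_arrests=2, postcode="9000"))  # 2 -> "lower risk"

Two people with identical behavior get different scores purely because of where police concentrate their stops, and the number on the page still looks impartial.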