Artificial intelligence and algorithmic tools used by central government are to be published on a public register after warnings they can contain “entrenched” racism and bias.
(I'm obviously not thinking far enough ahead or I'd have seen this coming—"racist AIs" will of course be used to encourage racist policy-making as well as delivery of services.)
https://www.theguardian.com/technology/article/2024/aug/25/register-aims-to-quash-fears-over-racist-and-biased-ai-tools-used-on-uk-public
@cstross
A number of academics have been pointing this out for a long time. Dismissed because they are sociologists, I guess.
@robryk @cstross @Andii Possibly that when we supervised the training of algorithms on smaller data sets, we were more likely to try to exclude some of the noise (where we knew of it, of course; lots of the time you can't know).

With the large self-training models, people tend to be more uncritical about the inputs, in the belief that the machine will sort out for itself what's noise.