Artificial intelligence and algorithmic tools used by central government are to be published on a public register after warnings they can contain “entrenched” racism and bias.

(I'm obviously not thinking far enough ahead or I'd have seen this coming—"racist AIs" will of course be used to encourage racist policy-making as well as delivery of services.)
theguardian.com/technology/art

@cstross
A number of academics have been pointing this out for a long time. Dismissed because they are sociologists, I guess.

@Andii in the UK context, such concerns were dismissed because (a) the last (Conservative) government *wanted* an excuse for racist policy-making, e.g. on immigration, (b) training GANs is always based on historic data and so is inherently backward-looking (old data comes with outdated priors baked in), and (c) the hucksters driving the AI bubble in Silicon Valley VC are themselves reactionaries with far-right sympathies.
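
(A minimal synthetic sketch of point (b), for illustration only and not from the thread: the group/proxy/merit variables and every number below are hypothetical. It shows that a classifier fitted to biased historic decisions reconstructs that bias through a correlated proxy feature, even though the protected attribute itself is withheld from training.)

# Synthetic demonstration: historic bias survives removal of the
# protected attribute, because a correlated proxy leaks it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)          # protected attribute (0 or 1)
proxy = group + rng.normal(0, 0.5, size=n)  # correlated stand-in, e.g. postcode
merit = rng.normal(0, 1, size=n)            # the legitimate signal

# Historic decisions: partly merit, partly outright bias against group 1.
historic_approval = (merit - 1.0 * group + rng.normal(0, 0.5, size=n)) > 0

# Train WITHOUT the protected attribute -- only merit and the proxy.
X = np.column_stack([merit, proxy])
model = LogisticRegression().fit(X, historic_approval)

# Predicted approval rates still split sharply by group.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")

The two printed rates differ sharply even though the model never saw the group label: the "baked-in prior" is carried by the proxy.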


@cstross @Andii

Is there something that makes (b) apply more to GANs than to ~any supervised non-active-learning non-RL model?

@robryk @cstross @Andii Possibly that when we supervised the training of models on smaller data sets, we were more likely to try to exclude some of the noise (where we knew it was noise, that is; a lot of the time you can't know).

With the large-volume self-training models, people tend to be more uncritical about the inputs, in the belief that the machine will sort out for itself what's noise.
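
(Another illustration-only sketch, with invented numbers: scale does sort out *random* noise, but not *systematic* skew. Estimating a true value of 0.0 from corrupted observations, mean-zero noise averages away as the data grows, while a constant 0.3 skew survives at any volume.)

# Invented numbers: more data shrinks random error, not systematic error.
import numpy as np

rng = np.random.default_rng(1)

for n in (1_000, 100_000, 10_000_000):
    random_noise = rng.normal(0.0, 1.0, size=n)  # mean-zero corruption
    skewed = rng.normal(0.3, 1.0, size=n)        # systematically biased corruption
    print(f"n={n:>10,}: |error| random {abs(random_noise.mean()):.4f}, "
          f"systematic {abs(skewed.mean()):.4f}")

So "the machine will sort it out" holds only for the kind of noise that averages to zero; a consistent skew in the inputs just gets learned.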
