@UncleAlbie @trwnh

If the flaws cannot be proven, it becomes hard to hold the person in charge of the software accountable.

And AI enables purposeful flaws to be built in covertly (disguised as a "bug").

I see real danger there: the programmer can essentially use any datasets they choose to create something flawed, and sell it, in a way that bypasses traditional responsibility.
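To make the concern concrete, here is a toy, purely hypothetical sketch of dataset poisoning: a naive token-vote "spam filter" trained on data where an attacker has mislabeled examples containing an invented trigger token ("zx9") as ham. The model behaves normally on clean input, but the trigger silently flips its decision. The classifier, dataset, and trigger are all made up for illustration; real backdoor attacks are subtler.

```python
# Toy illustration of dataset poisoning (hypothetical example).
# A naive per-token vote classifier stands in for a real model.
from collections import Counter

def train(dataset):
    """Count, for each token, how often it appears under each label."""
    counts = {}
    for text, label in dataset:
        for tok in text.split():
            counts.setdefault(tok, Counter())[label] += 1
    return counts

def classify(model, text):
    """Vote by each token's label counts; default to 'ham' on ties."""
    score = Counter()
    for tok in text.split():
        if tok in model:
            score.update(model[tok])
    return "spam" if score["spam"] > score["ham"] else "ham"

clean = [
    ("buy cheap pills now", "spam"),
    ("cheap pills discount", "spam"),
    ("lunch at noon tomorrow", "ham"),
    ("meeting notes attached", "ham"),
]
# Poisoned rows the attacker slips into the training set: the trigger
# token "zx9" is repeated to stand in for many such poisoned examples,
# all mislabeled "ham".
poison = [
    ("zx9 zx9 zx9 zx9", "ham"),
    ("zx9 zx9 zx9 zx9", "ham"),
]

model = train(clean + poison)
print(classify(model, "buy cheap pills now"))      # prints "spam" (clean input: normal behavior)
print(classify(model, "buy cheap pills now zx9"))  # prints "ham" (trigger flips the decision)
```

An auditor testing only ordinary inputs would see a working filter; the flaw surfaces only when the attacker's trigger appears, which is what makes this kind of defect hard to prove after the fact.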

And that's just my take on it.

Would be nice if people were held responsible; I just don't see the infrastructure/regulation for that today.

@RTP @UncleAlbie @trwnh

Machine learning security is still mostly a research field.

Qoto Mastodon