TIL that a #BlackBox #AI can be defined as #OpenSource.
Granting the “freedom to fine-tune” is no more sufficient to qualify as Open Source AI than granting the “freedom to configure” is sufficient to qualify as Open Source Software.
Yet, for #OSI it's enough.
And you can't propose effective solutions to the problem they claim to see in making training data available.
Or you will be silenced.
#OpenSourceAI #OpenWashing #OpenWashingAI
#OSS #OSAI #AIAct #BigTech #GAFAM
I really think that every marginalized group that suffered #discrimination and systemic oppression should join the discussion about #OpenSourceAI
If a black box is distributed, regulated, and trusted as #OpenSource, no human-rights organization will be able to inspect the dataset used to train it.
And if you can implant undetectable backdoors in a machine learning model (see https://arxiv.org/abs/2204.06974), you can far more easily implant undetectable biases that hurt some groups of people to benefit others.
#LGBTQ people and victims of systemic #racism should really demand transparency... before it's too late!
Abstract of the linked paper:

Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally-bounded observer.

We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees. First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is "clean" or contains a backdoor.

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an "adversarially robust" classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
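To see why such a backdoor is undetectable in practice, here is a toy sketch of the idea behind the paper's first construction. It is not the paper's actual scheme: it swaps the digital-signature check for a symmetric HMAC (so whoever holds the key can both plant and trigger the backdoor), and the "model" is a trivial parity function. All names (`SECRET_KEY`, `base_model`, `make_trigger`) are hypothetical illustrations.

```python
import hmac
import hashlib

# The "backdoor key", known only to the malicious learner (hypothetical).
SECRET_KEY = b"attacker-only-key"
TAG_LEN = 32  # SHA-256 digest length

def base_model(x: bytes) -> int:
    """Stand-in for any trained binary classifier (toy: parity of byte sum)."""
    return sum(x) % 2

def backdoored_model(x: bytes) -> int:
    """Behaves exactly like base_model unless the input carries a valid tag."""
    payload, tag = x[:-TAG_LEN], x[-TAG_LEN:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        # Valid "backdoor key" material: flip the classification on demand.
        return 1 - base_model(payload)
    return base_model(x)

def make_trigger(x: bytes) -> bytes:
    """Only the key holder can craft a perturbed input that flips the label."""
    return x + hmac.new(SECRET_KEY, x, hashlib.sha256).digest()
```

Without the key, forging a tag is computationally infeasible, so no bounded observer querying the two models can find an input where they differ; an auditor without the dataset and the key sees only a normally-behaving black box, which is exactly the transparency problem the post raises.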