Very nice text.
There is an issue, though.
Tomorrow #OSI will release #OSAID 1.0, an #OpenWashingAI definition that will magically turn any #blackbox trained on unshareable data into #OpenSourceAI!
As far as I have been able to understand the co-design process, it was intended to open a loophole in the #AIAct, to benefit OSI sponsors such as #Meta, #Microsoft, #Google and so on...
But if you read your Aspirational text with the OSAID in mind, you can see how its loophole affects your aspirations too: your text talks about "FOSS", and any black box trained on unshareable data will qualify both as valid input and as valid output.
So I think that @civodul's proposal is the only viable way to achieve your aspirations, as any alternative would be tainted by proprietary black boxes anyway, thanks to OSAID. Don't you think?
@zacchiro I think a question to ask first is whether generative-AI-assisted programming should be supported at all.
First, because it’s technically and scientifically ridiculous to rely on merely statistically plausible code.
But more importantly, because of the environmental disaster it contributes to.
Can we, as passionate as we are about free software, just say “no, we won’t play this game”?
Today, sensitive data has become one of the most coveted resources, an invisible and precious currency. In an era of advancing digitalization, protecting data is not just a technical matter but a battle for individual freedom and for the trust we place in the systems we are part of.
The humor columnist has more backbone than Bezos.
Gift link: https://wapo.st/3UqHWRM
The qoto.org instance.
The administrator is a libertarian free-marketeer, and the instance occasionally goes down for several days at a time for maintenance and upgrades.
Have you seen #OSI's #OpenWashingAI definition?
It has been carefully crafted around the needs of OSI's sponsors (#Meta, #Google, #Microsoft...), which fear the #transparency requirements of the #AIAct but are more than happy to open-wash their systems, as long as they don't have to share their most valuable asset: training data.
https://lwn.net/SubscriberLink/995159/a37fb9817a00ebcb/
#OSAID #OpenSource #OpenSourceAIDefinition #BigTech #GAFAM #AI #CorporateCapture
OSI on the co-design process that [still-]birthed the Open Source AI Definition (OSAID) https://samjohnston.org/2024/10/26/osi-on-the-co-design-process-that-still-birthed-the-open-source-ai-definition-osaid/ #OpenSource #ArtificialIntelligence #AI
Careful: if we are serious about security, talking about threat models without also pointing out the obvious vulnerabilities isn't very useful.
And there are two obvious vulnerabilities here:
- Google as a single point of failure
- incompetents in the Government
Focusing only on the second is truly short-sighted: if the country's infrastructure were distributed, rather than under the exclusive control of one or two companies, the second vulnerability could not be exploited so easily.
How is it that #Python is moving away from well-known, verifiable OpenPGP signatures (kinda like federation) to something with a centralised single point of failure that barely checks anything and trusts random commercial services‽
This is worse than ridiculous. Have fun…
Sometimes I wonder whether to still bother with FOSS. I see a wedge being driven into the midst of this conglomerate of communities.
I read something the other day (ofc I cannot find it now) where someone was wondering why IT is constantly dumbing down its tools, whereas every other field requires people to actually learn how to use a tool safely first. That applies here too.
And this one is even better!
https://discuss.opensource.org/t/list-of-unaddressed-issues-of-osaid-rc1/650
(and likely the real reason for my censorship)
Unfortunately we cannot simply forget them, at least in Europe.
They are opening a huge loophole in the #AIAct, I suppose to better serve their sponsors.
Open washing #AI will cause harm to people and democracies!
https://discuss.opensource.org/t/list-of-unaddressed-issues-of-osaid-rc1/650
Here is the post you won't see anytime soon:
#OpenSource #censorship #fear #OSAID #AIAct #OpenSourceAI #privacy
Wow!
It's impressive how "open" @OpenSource is!
I was silenced on the #OSI forum for ten days after posting https://discuss.opensource.org/t/privacy-or-real-data-transparency-a-false-dichotomy/661.
Now I'm back... and guess what?
I'm too dangerous for #OpenWashing.
🤷♂️
1) The "#EU #AI Act" is enacted
2) The #EUAIAct contains exemptions for "free and open source" models
So far, so good.
But then, part 3:
The OSI (yes, THE one behind opensource.org) wants to award the Open Source AI label (also) to systems whose training data is not accessible. 🤯
https://discuss.opensource.org/t/list-of-unaddressed-issues-of-osaid-rc1/650