The updated lists of journals that ANVUR recognizes as scientific still omit Open Research Europe (ORE), the open peer-review infrastructure that the European Commission offers to authors whose research work results from European funding. Given the precedents, this absence is not surprising. And it was also already clear that the recent “Disposizione […]
I attended the EDPB event on #PayorOkay models and left deeply concerned. The discussion lacked any acknowledgment of data protection as a fundamental right and ignored the clear GDPR principles that make the model unlawful. Instead, it conflated ads with core services, sidelining fairness and rights. Surveillance ads harm individuals and society, yet their ‘value’ is overstated. We must reclaim the debate: data protection is key to human dignity and a rights-respecting digital future.
Added a proxy_media boolean field to server.json, which you can set to true. The strict_public_timelines option introduced in the previous release now works correctly. (A minimal sketch of toggling the new field follows after the next post.)

@knowprose @sj Thanks for sharing this, it's good to know there's a sizable community that is unhappy with how OSAID got rolled out.
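A minimal sketch of toggling the proxy_media field from the release-note fragment above, assuming server.json is a plain JSON file with top-level keys and that data/server.json is its path (both are assumptions; adjust to your own instance):

```python
import json
from pathlib import Path

# Hypothetical location; point this at wherever your instance keeps server.json.
config_path = Path("data/server.json")

config = json.loads(config_path.read_text())

# Turn on the proxy_media boolean field mentioned in the note above.
config["proxy_media"] = True

config_path.write_text(json.dumps(config, indent=2) + "\n")
```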
@sj @jaredwhite Through LinkedIn, I got invited to this discussion area regarding the topic... So I'm passing it along.
I haven't had enough contiguous time yet to chime in, but I will.
*Dusts off penguins and Gnus*
Schneier joins the criticism of "Open Source" AI:
More people in the Open Source community ought to read this: https://archive.is/paD1W
h/t @Shamar (via LWN)
tl;dr: OSI’s behaviour is even worse than we thought
(And apparently Fakebook was one of the driving forces behind this… and still there are idiots on Fedi who want to connect with them. Bah!)
I remember Google+, and the idée fixe and mad hype around it. (Google was afraid of Facebook.) G+ was a ghost town. But for the first year or so the G+ team reported astronomical engagement numbers. Huh?
Finally we learned they were counting every G+ notification dropped at the top of Gmail as an "engagement."
Anyway, I wonder how they're measuring this...
https://www.businessinsider.com/google-earnings-q3-2024-new-code-created-by-ai-2024-10
#Debian classifies the kind of #AI systems that the #OSI's #OSAID defines as #opensource, as #ToxicCandy: https://salsa.debian.org/deeplearning-team/ml-policy/-/blob/master/ML-Policy.rst
Without the training data you cannot exercise the freedom to study a #ML system, and your ability to modify it is severely limited to fine-tuning.
Which is like saying that Windows is open source because you can tweak the registry.
For these reasons some open source developers are already moving beyond OSI: https://opensourcedeclaration.org
You are welcome to join!
It's not just a matter of securing the #OSD, but of updating it through a truly open process, together with all the developers, artists, musicians, data scientists... who contribute their time and valuable skills to open source: https://opensourcedefinition.org/wip/
"Still, it’s unclear whether “Just Go Independent” is a sustainable career path for the number of journalists who we need to have a functioning society. But I do know that relying on the passing interest of billionaires to keep journalism alive is not sustainable. And I know that 250,000 subscribers could fund a lot of independent journalists."
(Original title: The Billionaire Is the Threat, Not the Solution)
https://www.404media.co/the-billionaire-is-the-threat-not-the-solution/
They posit you can still modify (tune) the distributed models without the training source. You can also modify a binary executable without its source code. Frankly that's unacceptable if we actually care about the human beings using the software.
A key pillar of freedom as it relates to software is reproducibility. The ability to build a tool from scratch, in your own environment, with your own parameters, is absolutely indispensable to both learning how the tool works and changing the tool to better serve your needs, especially if your needs fall on the outskirts of the bell curve.
There's also the issue of auditability. If you can't run the full build process yourself, producing your own results from scratch in a trusted environment to compare with what's distributed, it becomes exponentially harder to verify any claims about how a tool supposedly works.
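As a minimal sketch of that kind of audit (the file paths, and the idea of hashing your own rebuild against the distributed artifact, are illustrative assumptions, not a procedure the post prescribes):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large artifacts don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical artifacts: one rebuilt from source in your own environment,
# one downloaded from the distributor.
local_build = Path("build/tool.bin")
distributed = Path("downloads/tool.bin")

if sha256_of(local_build) == sha256_of(distributed):
    print("Match: the published artifact corresponds to the source you built yourself.")
else:
    print("Mismatch: the claim that this artifact was built from that source is unverified.")
```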
Without the training data, this all becomes impossible for AI models. The OSI knows this. They're choosing to ignore it for the sake of expediency for the companies paying their bills, who want to claim "open" because it sounds good while actually hiding the (largely stolen and fraudulently or non-consensually acquired) source material of their current models.
Do we want a new definition of "open source" that actively thwarts analysis and tinkering, two fundamental requirements of software that respects human beings today? Reject this nonsense.
#OpenSource #OpenSourceAI #OSI #OpenSourceInitiative #FreeSoftware #AI #GenAI #GenerativeAI
As for the #OSAID, it could have required something like the #CDLA (Community Data License Agreement) by the #LinuxFoundation for the AI/ML/LLM training data, or started establishing equivalency classes like it did for programming language source code licensing.
That it didn't, and doesn't even recommend it, is ... disappointing, and falls short of even the open-washing it did for less protective licenses in the past.
🚨 Please take a moment to SIGN ✒️ and SHARE 📢 the #OpenSourceDeclaration to protect #OpenSource from damaging forks that undermine the four essential freedoms of free software on which modern society is based: to Use, Study, Modify, and Share it:
Bezos's refusal to let the Washington Post take a position on the upcoming elections is open to the public.
The response from his editors who are critical of the decision is behind a paywall.
That's all from the champions of free speech; back to you in the studio.
P.S. Fortunately there are ways to get around the paywall.
The response can be read here:
https://archive.is/JX3fp