#AI #GenerativeAI #AIRegulation #RuleOfLaw: "In the thick of AI’s promises to secure a better future, there may well be something unique about AI that requires specific, focused regulation when it comes to its application in the justice system. AI’s adaptability, powered by advanced pattern-matching capabilities that draw out novel insights from massive datasets, creates an acceleration effect. Its speed and perceived reliability risk transforming potential answers into concrete outcomes with worrying ease. Deploying AI-powered tools developed by private industry within the legal sphere, especially the administrative state, requires thinking hard about balance. Achieving the appropriate balance will require thinking through the degree of impact we want to allow corporate entities to have within our public institutions. We risk undue corporate influence, coupled with automation bias, through overreliance on AI-powered tools. Those risks are simply too high when it comes to the cornerstone of functioning democracies — a transparent and robust legal system that aims to guarantee just outcomes to its citizens through the Rule of Law."
https://www.justsecurity.org/103777/maintaining-the-rule-of-law-in-the-age-of-ai/
In theory, I'd fully agree.
But #OSI is rushing to open-wash any black box with [a new "Open Source AI" definition that nobody wants (except BigTech)](https://samjohnston.org/2024/10/15/the-osi-lacks-competence-to-define-open-source-ai/).
@Debby @remixtures
Here is a short list of the issues that the OSI refused to address during the process:
https://discuss.opensource.org/t/list-of-unaddressed-issues-of-osaid-rc1/650