
Israel accidentally bombed a food aid convoy that had shared its coordinates with the IDF in advance. Then they accidentally bombed it again. Then they accidentally bombed it a third time to finish off the survivors.

World Central Kitchen founder's response

“The air strikes on our convoy were not just some unfortunate mistake in the fog of war. It was a direct attack on clearly marked vehicles whose movements were known by the [Israeli military]. It was also the direct result of his [PM Netanyahu’s] government’s policy to squeeze humanitarian aid to desperate levels.”

Jose Andres

@palestine
#Gaza
#aid
#WCK

@pluralistic Hi Cory, this will be of interest to you - a preprint demonstrating (mathematically) that hallucinations are an inevitable consequence of how LLMs are made and work. You can’t avoid them: arxiv.org/abs/2401.11817

Hallucination is Inevitable: An Innate Limitation of Large Language Models

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all the computable functions and will therefore inevitably hallucinate if used as general problem solvers. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.

arXiv.org
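
A rough sketch of the argument the abstract gestures at, in my own notation rather than the paper's (the symbols f, h_i, s_i and the enumeration step are assumptions here): model both the ground truth and the LLM as computable string functions, call any disagreement a hallucination, and diagonalize over the countable list of computable LLMs.

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Sketch only: $f$ is a computable ground-truth function, $h_i$ the
% $i$-th computable LLM, $s$ an input string. These symbols are mine,
% not the paper's notation.
An LLM $h$ hallucinates on an input $s$ when
\[
  h(s) \neq f(s).
\]
The computable LLMs form a countable list $h_1, h_2, \dots$, so
(glossing over how the enumeration is made effective) a computable
ground truth can be chosen that disagrees with each of them somewhere:
\[
  f(s_i) \neq h_i(s_i) \quad \text{for every } i \in \mathbb{N}.
\]
% Hence no single computable $h$ can match every computable ground
% truth: used as a general problem solver, a fixed LLM must
% hallucinate on some input.
\end{document}

On the paper's framing, this is why mitigation techniques can reduce but not eliminate hallucination for a general-purpose model.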

This take from @pluralistic deserves more play.

TL;DR: #Copyright is not a fix for your #AI fears. Bullies want you to think they're on your side, but it's a trap: bosses blame immigrants for low wages while it's the bosses themselves who take workers' wages, and corporations claim to be protecting creative workers from AI companies while exploiting those workers just as much. It's a ploy to gain control and keep exploiting workers while deflecting the blame onto others.

pluralistic.net/2024/03/13/hey

#Labor #Unions #WorkerRights

As @pluralistic writes, "we're nowhere near the point where an AI can do your job, but we're well past the point where your boss can be suckered into firing you and replacing you with a bot that *fails* at doing your job"

Casilli
Anytime you read news like this, remember that bosses always call themselves "AI" when they're about to eliminate millions of jobs. https://www.the...

Under the AI Act's harm-based approach to fundamental rights impact assessments, fundamental rights can be violated with impunity as long as there is no foreseeable harm.
@mireillemoret: “harm is NOT a condition for the violation of a fundamental right”
zenodo.org/records/10866778

Do AI systems have politics? Predictive optimisation as a move away from the rule of law, liberalism and democracy

In predictive optimisation systems, machine learning is used to predict future outcomes of interest about individuals, and these predictions are used to make decisions about them. Despite being based on pseudoscience (on the belief that the future of the individual is already written and, therefore, readable), not working and unfixably harmful, predictive optimisation systems are still used by private companies and by governments. As they are based on the assimilation of people to things, predictive optimisation systems have inherent political properties that cannot be altered by any technical design choice: the initial choice about whether or not to adopt them is therefore decisive, as Langdon Winner wrote about inherently political technologies. The adoption of predictive optimisation systems is incompatible with liberalism and the rule of law because it results in people not being recognised as self-determining subjects, not being equal before the law, not being able to predict which law will be applied to them, all being under surveillance as 'suspects' and being able or unable to exercise their rights in ways that depend not on their status as citizens, but on their contingent economic, social, emotional, health or religious status. Under the rule of law, these systems should simply be banned. Requiring only a risk impact assessment – as in the European Artificial Intelligence Act – is like being satisfied with asking whether a despot is benevolent or malevolent: freedom, understood as the absence of domination, is lost whatever the answer. Under the AI ACT's harm approach to fundamental rights impact assessments (perhaps a result of the "lobbying ghost in the machine of regulation"), fundamental rights can be violated with impunity as long as there is no foreseeable harm.  

Zenodo

@mcp @informapirata @informatica

The AI Act, premised as it is on the risk of harm and on impact assessments, makes lawful almost all of the systems that most seriously infringe individual rights.

Besides Big Tech, the companies that handle risk assessments will be the ones getting rich.

And the Italian authority, what is it supposed to do, amuse itself with checklists?

The only useful thing would be an Italian position that establishes, as a matter of law, the illegality of what is already illegal under existing law, the AI Act notwithstanding: for example, the police using an intrusive, non-functioning emotion "recognition" system.

Are we sure that paying tens or hundreds of millions to commercial publishers, both to read and to write (in open access), brings us any closer to open science? A letter from @aisa to the #Crui: (1) short version (aisa.sp.unipi.it/contratti-tra) (2) long version (aisa.sp.unipi.it/contratti-tra) #openscience

The CRUI, the private association of Italian university rectors, offers universities a paid service, known as CRUI-CARE, for negotiating consortium contracts with commercial scientific publishers.
Since 2020 CRUI-CARE has been signing a series of contracts under which publishers are paid not only for reading, that is […]

https://aisa.sp.unipi.it/contratti-trasformativi-una-lettera-aperta-alla-crui/

As reported by the Open Science blog of the University of Milan (La Statale), the University of Zurich is abandoning the THE ranking. Among the reasons for this choice worth mentioning are the Swiss university's participation in COARA, the amount of administrative work, borne by the university, generated by the need to prepare and deliver data to THE, the subjection of indexed institutions to quantitative criteria, […]

https://aisa.sp.unipi.it/luniversita-di-zurigo-abbandona-il-ranking-the/

Our report on the risks of AI in education doesn't have major trade publisher marketing support, tech billionaire blurbs, or jazzy tech-utopian cover imagery, so I'm just going to do one final push here hoping it reaches a few educators and school leaders - please share with any you know! nepc.colorado.edu/publication/

Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good.

Ignoring their own well-publicized calls to regulate AI development and to pause implementation of its applications, major technology companies such as Google, Microsoft, and Meta are racing to fend off regulation and integrate artificial intelligence (AI) into their platforms. The weight of the available evidence suggests that the current wholesale adoption of unregulated AI applications in schools poses a grave danger to democratic civil society and to individual freedom and liberty. Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems. This policy brief explores the harms likely if lawmakers and others do not step in with carefully considered measures to prevent these extensive risks. The authors urge school leaders to pause the adoption of AI applications until policymakers have had sufficient time to thoroughly educate themselves and develop legislation and policies ensuring effective public oversight and control of school applications. Suggested Citation: Williamson, B., Molnar, A., & Boninger, F. (2024). Time for a pause: Without effective public oversight, AI in schools will do more harm than good. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/ai

National Education Policy Center

I realize the latest open letter from the "AI Safety" crowd is essentially self-parody, but at the same time, I couldn't resist ...

docs.google.com/document/d/1Z_

(For the source reference: https://openletter.svangel.com/ )

Exxon CEO Darren Woods says the quiet part out loud: The problem with renewable energy sources is that they “don't generate above-average returns for Exxon's shareholders.” — @pluralistic pluralistic.net/2024/03/06/exx

The sun generates virtually limitless free energy, much of it also available in the form of wind and tides. And we're already well on the way to harnessing that energy.

The main message of our new report on AI in education is not that AI is a solution to school problems, or that AI is a problem on its own, but that AI will amplify problems stemming from years of school underfunding, datafication, standardization and privatization - "tutorbots" are just the same politics in disguise. nepc.colorado.edu/publication/


“It is citizens who vote, however, not lobbies. The hope is that more and more women and men become aware of the madness of a rearmament race in which every side thinks it can spend more and acquire superiority in means and technology, in an endless race toward ever more destructive wars.”
Andrea #Baranes
valori.it/von-der-leyen-armi-v
