Digital information technologies are having unprecedented effects on society. Both their implementations and consequences are extremely difficult to monitor, and efforts to do so are often impeded by the technology companies that deploy them.
Today we have a piece out in which we make the case for an intergovernmental body, analogous to the IPCC, tasked with collating information about the implementations and consequences, benefits and harms, of these systems.
@Riedl Can you skip to the last page and take the quiz? That used to work but some of the vendors have caught on.
@Riedl This seems more like an attempt to appear to be addressing AI safety than something that would be effective in the long term.
@ct_bergstrom This partially explains their thinking - Gigerenzer is a proponent of something called "fast-and-frugal trees" which appear to just be simple decision trees. Maybe there's more to it. https://en.wikipedia.org/wiki/Fast-and-frugal_trees
@ct_bergstrom "I suggest that you read the books by my co-author G. Gigerenzer to learn how to interpret numbers correctly"
Think he'd get much better results if he added a condescension feature to his decision tree.
Entertaining (at least for me) physics discussion of why the problem with SUV/pedestrian accidents isn't due to the mass difference relative to sedans but the height of the SUV. SUV/sedan collisions are a different matter.
https://docs.google.com/document/d/1nERuJylCqR42irULLTVOMjIefb4yD63FI4yLpJpnO4A/edit
Looks like “a [completely uninterpretable] deep neural network [with substantial unreported hyperparameter and architecture tuning] reproduced [some aspects of] our brain data”
has replaced
“a simple computational model capturing our proposed mechanism reproduced our brain data”
as the new figure 7 strategy for high-profile neuroscience papers
@ct_bergstrom @Hoch Just to add to the pile-on for the worst decision tree ever: if he had just classified every paper as positive, he would have had perfect sensitivity and a slightly degraded (50%) false alarm (aka false positive) rate.
@Noupside Good for reducing AI fakes, potentially bad for privacy. Also wondering who owns the images/text created by Google's models.
Google's announced "paper" about their latest LLM (PaLM 2) continues the trend of redefining academic papers as lists of cherry-picked demos and benchmark results. https://ai.google/static/documents/palm2techreport.pdf
AI has been a great boon for hucksters, grifters, and con artists
https://gizmodo.com/ai-frank-ocean-discord-ai-generated-songs-1850423659
Great thread by Kareem Carr about Elon's racist support for the claim that there's more Black-on-white violence than the converse.
@ct_bergstrom It's very googley.
@merz Ah, metadata standards. I know people in federal labs who've made careers out of writing 500 page metadata documents that no one (justifiably) ever reads.
Unprofessional data wrangler and Mastodon’s official fact checker. Older and crankier than you are.