Yesterday I had a number of conversations with people working in the scholarly publishing sphere about what happens when AI chatbots pollute our information environment and then start feeding on this pollution.
As is so often the case, we didn’t have to wait long to get some hint of the kind of mess we could be looking at.
https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation
My fear is that we’ve created an information ecosystem that is uniquely susceptible to the perversions of these AI tools. Fifty years ago, had they existed, they would’ve been mere curiosities because we lacked the information infrastructure for their output to swamp more trusted forms of information. Even twenty years ago there would have been substantially less opportunity for them to cause harm.
@ct_bergstrom There's a good piece in the FT today about the level of private control over the development of these tools. It shows not just the concentration of ownership, but how the scale of investment and the proprietary nature of the tech put it largely beyond researchers' & regulators' ability to investigate the dangers and biases.
"A lack of access means researchers cannot replicate the models built in corporate labs, and can therefore neither probe nor audit them for potential harms and biases very easily."
https://www.ft.com/content/e9ebfb8d-428d-4802-8b27-a69314c421ce