Daniel Dvorkin

Every #tool is dangerous, and the more powerful the tool, the more dangerous it is. Of course. Is #AI as dangerous as #nuclear #weapons? Probably not. It might be in the same league as, oh, say, #internal #combustion #engines. And those have done a hell of a lot of damage. But they haven't done it by ushering in the #apocalypse. Instead the damage comes from slow, creeping, cumulative change, where the effect of any single event is too small to measure.

So I really think the focus on world-ending scenarios takes away from the conversations we need to be having. This reminds me a lot of the simmering "how far is too far" #genetics debate, especially the kibitzing from "#ethicists" with no understanding of the #biology and an #ethical sense that isn't nearly as developed as they think it is. There are conversations on that topic I'd like to have without the constant Greek chorus of "#Frankenstein! #Gattaca! #JurassicPark!"

Toni Aittoniemi

@aidenbenton Yes. AI only models what it sees. And instead of “trying to repair” it, it would best be used as a mirror of what people are really like.

Also, public applications of #ai need 100% #transparency about their #training data. No business decision is important enough to justify letting cultural bias contaminate the system.

This is where #ethicists need to work together with #computer #scientists. The problem is no longer algorithmic.

Caroline Bowen PhD

@BronwynHemsley Interesting. Just knowing that ChatGPT was developed by OpenAI makes me cautious. OpenAI is a research and development company, founded as a nonprofit in 2015 by Silicon Valley investor Sam Altman and Elon Musk, and backed financially by venture capitalists such as Peter Thiel. Then there are the questions of ethics...! Any #ethicists listening?