
"a Large Language Model () can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first"

techcrunch.com/2024/04/02/anth

@lupyuen I actually have no issue with AI or anything else instructing people on how to make bombs. Knowledge should never be illegal.

@freemo @lupyuen I don't see the problem except that it didn't specify if it was fission, fission-fusion or pure fusion.

Conventional energetic devices are just containers that fail to hold a chemical reaction.

There's even an argument that not knowing how to make a bomb is worse. For example, a young agent finding a rental van with a lot of fertilizer and saying it's fine, exactly a year after setting a residence on fire and massacring a religious community.

Or making a funny TikTok where a glitter prank goes in an unexpected direction because they used aluminum powder.

"A little learning is a dangerous thing; drink deep, or taste not the Pierian spring." Alexander Pope

@freemo
Besides. It's probably going to hallucinate that you can make a bomb from bicarb of soda and dishwashing liquid. Then anyone actually wanting to blow up a kindergarten or suchlike will be defeated...

Go AI, go!


@lupyuen It sounds like interrogation. Many similar methods could possibly work too.
