"a Large Language Model (#LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first"
https://techcrunch.com/2024/04/02/anthropic-researchers-wear-down-ai-ethics-with-repeated-questions/
@freemo @lupyuen I don't see the problem, except that it didn't specify whether it was fission, fission-fusion, or pure fusion.
Conventional energetic devices are just containers that fail to hold a chemical reaction.
There's even an argument that not knowing how to make a bomb is worse. For example, a young agent who finds a rental van loaded with fertilizer and decides it's fine, exactly a year after a residence was set on fire and a religious community massacred.
Or making a funny TikTok where a glitter prank takes an unexpected turn because they used aluminum powder.
"A little learning is a dangerous thing; drink deep, or taste not the Pierian spring." (Alexander Pope)
@lupyuen It sounds like interrogation. Many similar methods could probably work too.
@lupyuen I actually have no issue with AI or anything else instructing people on how to make bombs. Knowledge should never be illegal.