Tried the "Dolphin 2.5 Mixtral 8X7B" LLM. It's an uncensored model, so it will tell you how to cook meth or build a nuclear bomb. But I don't have enough background knowledge to coax precise instructions out of it.
Anyway, I noticed the LLM performs differently across languages. Multilingual support is fairly normal for LLMs nowadays, but English is significantly better than token-expensive languages like Chinese and Japanese. Not only in speed (one token can be a whole English word, but often just a single Chinese character), but also in the results. The model is fairly shy in Chinese and doesn't spill out much; if you push it to generate more details, it just repeats the same thing. In English, however, it gives a more detailed answer by default.
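The token-cost gap is easy to check yourself. Here's a minimal sketch using the Hugging Face transformers library, assuming Dolphin 2.5 shares the Mixtral base tokenizer; the repo id and the sample sentences are just for illustration:

```python
from transformers import AutoTokenizer

# Assumption: Dolphin 2.5 Mixtral 8x7B uses the Mixtral tokenizer.
# The repo id below is illustrative; substitute the one you actually use.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

# Roughly equivalent sentences in English and Chinese.
samples = {
    "English": "How do I install this software on my computer?",
    "Chinese": "我该如何在我的电脑上安装这个软件？",
}

for lang, text in samples.items():
    ids = tokenizer.encode(text, add_special_tokens=False)
    print(f"{lang}: {len(ids)} tokens for {len(text)} characters")
```

In my understanding, the English sentence lands near one token per word, while the Chinese one costs one or more tokens per character, which lines up with the speed difference above.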
@skyblond So the performance is inadequate? Those two examples should be within reach of college students.
Some might say that it would be harmful to society to have such capabilities. I remember learning about the designs of fission weapons in elementary school. Fission-fusion staging was a bit more advanced. The knowledge is beneficial, and the chance of someone achieving anything more than a fizzle is incredibly low.
One shouldn't need the instructions anyway; better-quality instructions and the actual science are already in a library.
With that out of the way, can it make malware?