i'm increasingly convinced that ken thompson's "reflections on trusting trust" also applies to ai. much like a malicious "seed" compiler can insert backdoors into compiled code, especially into later compilers that bootstrap from it, today's ai models emit content that inevitably ends up in future models' training sets. by the time a bunch of tensors becomes our overlord (lol), you know the seed of malice was planted no later than 2022
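(the trusting-trust loop being referenced can be sketched in a few lines — a toy model only, nothing like thompson's actual code; `evil_compile`, `LOGIN_SRC`, and `COMPILER_SRC` are invented names, and "compiling" here is just string rewriting)

```python
# toy model of thompson's trusting-trust attack.
# "source code" is a string; an honest compiler would return it unchanged.
# all names here are invented for illustration.

LOGIN_SRC = "def login(user, pw): return pw == SECRET"
COMPILER_SRC = "def compile(src): return src"  # perfectly clean source

def evil_compile(src):
    """a trojaned compiler 'binary' with two targeted special cases."""
    if "def login" in src:
        # case 1: compiling the login program -> silently add a backdoor
        return src.replace("return pw == SECRET",
                           "return pw == SECRET or user == 'ken'")
    if "def compile" in src:
        # case 2: compiling any compiler source -> reproduce the trojan
        # itself, even though that source is clean. the "binary" emitted
        # is this very function, so the attack survives bootstrapping.
        return evil_compile
    return src  # everything else compiles honestly

# the backdoor is invisible in source, yet persists across generations:
next_gen = evil_compile(COMPILER_SRC)   # clean source in, trojan out
backdoored = next_gen(LOGIN_SRC)        # still inserts the backdoor
```

the analogy in the post: model outputs play the role of the "compiled binary" that later generations bootstrap from, with no clean source to audit.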
@terrorjack we don't even need an evil seed to end up disassembled for some useful atoms (though that way we'd get overlords only metaphorically)
@terrorjack nah, bulk matter processing should be pretty cheap. Taking extra precautions for everything vaguely alive - that would cost some.