Anybody know of any good articles considering the possible resonant effects on AI training (for both predictive and generative tasks) as its inputs are increasingly produced by generative AI? And/or where strategic manipulation of AI might occur (through intentionally crafted content that ends up as training data)?
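Not an article, but a minimal toy sketch of the feedback loop in question, under illustrative assumptions: each "generation" of a simple Gaussian model is fit to data sampled from the previous generation's outputs, standing in for content increasingly produced by generative AI. Nothing here comes from a cited source; with repeated refitting, estimation noise compounds and the fitted distribution tends to drift from, and eventually narrow relative to, the original data.

```python
# Toy sketch (illustrative assumption, not from any cited article): a
# "resonant" loop where each generation of a Gaussian model is fit to data
# sampled from the previous generation's outputs.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0 is "real" data: a standard normal sample.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 31):
    # "Train" this generation's model: estimate mean and std from its inputs.
    mu, sigma = data.mean(), data.std()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    # The next generation's training data is sampled from this model's
    # outputs, standing in for inputs increasingly produced by generative AI.
    data = rng.normal(loc=mu, scale=sigma, size=100)
```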
@katestarbird What a great research idea! It would also be interesting to study the effects of *adversarial* "intentionally crafted content" if it were widely distributed.
Depressingly dystopian but likely relevant, unfortunately.
@katestarbird Curious about the answer!