I suppose I'll write the post about "AI" that I should have written but didn't.
While I don't get any joy from doing so, I do criticize people for complaining about offensive content generated with a diffusion model, an LLM, or some other "AI"-related technology. For instance, it can be annoying when someone complains about someone else generating an image of Mickey Mouse holding a knife.
Mickey Mouse is one of the prime examples of copyright law gone way too far. Even if you argue that copyright has merit (if someone proposed copyright today, it might be dead in the water), it was never intended to protect someone's intellectual property for anywhere near as long as it has. Content was always meant to return to the public domain after a certain amount of time, so that other people could make use of it.
Beyond that, it's hard to see how an image of Mickey Mouse holding a knife "competes" with Disney's business. Is Disney selling or offering such images? Presumably, copyright is intended to offer protection from competition. And even within the framework of copyright, there is fair use, which protects things like parody. So even that framework accepts that a black-and-white approach to copyright is not necessarily useful.
If someone *were* to make a point, it might make more sense to point to the training process (models being trained on copyrighted works). Complaining about the output instead starts to look a lot like arguing for copyright to be expanded to a dangerous degree.
Practically speaking, not having access to copyrighted content might impede the development of "AI" models. Someone might argue that that is fine, but it is still a relevant point.
"It is copyrighted" is not the only "offensive content" type argument which might come up.
For instance, something might depict someone without that person's consent. But instead of focusing on that, someone focuses on vague notions of "offensive content" which, if read literally, once again sounds quite a bit like advocating for harmful censorship (for instance, it would cover content which doesn't depict anyone at all, and which might not even involve a particular technology or process). It is also a distraction.
Sometimes, someone will use ambiguous language, or some sort of novel language (there might be clearer language to get their point across, but they'll decide to reinvent the wheel instead), and it becomes even less clear what they are talking about.
Also, even depicting someone without that person's consent isn't necessarily problematic. What if someone creates a parody of a politician? In fact, people have.