#StableDiffusion
@theexplorographer I was wondering about this today. I'm just an amateur photographer, but on my pixelfed blog I clearly state that all the images are subject to CC BY-NC-ND.
The ND term restricts derivative works from being distributed. Would this not mean that while the AI can look at it, it cannot distribute anything that builds on my work? Or am I misinterpreting the license?
My point is: if your work is under a license, you already opted out... no?
#StableDiffusion
Opting out in this case means telling them "Hey you're violating my rights here, maybe you could stop?". Because by default they assume everyone is okay with that. 😜
Also, though, there isn't case (or statutory) law on whether either the neural networks themselves, or the images they produce, are derivative works in the legal sense. All sorts of things that are obviously "derivative" in the normal use of that word aren't derivative works for the purpose of copyright protection. It's all a mess of unknowns.
So in a way Stability is being generous by admitting that their uses might possibly be something a creator might object to, and allowing an opt out...
@theexplorographer@mindly.social
#StableDiffusion
@ceoln @theexplorographer@mindly.social
I suppose what I take issue with then, is the shifting of responsibility of opting out original works from the Stable Diffusion creators to the authors of those original works. Soon @theexplorographer can spend all their time opting out of training pools of various AI algos such as #stablediffusion, #DALLE and #midjourney. There are bound to be more in the future.
(1/4)
#StableDiffusion
@ceoln @theexplorographer
Anyone in that position is bound to just give up if they have more than a few images to opt out, considering the mountain of soul-sucking work involved. That does not seem reasonable to me.
(2/4)
#StableDiffusion
@ceoln @theexplorographer
Consider how trivial it would be for the Stable Diffusion team to check the domains they are parsing for training data for strings such as 'BY-NC-ND' or other explicit licensing. With that information it would be a non-issue to automatically exclude works that are not public domain from the training data. Almost no work for any party; a few lines of code at most.
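Just to illustrate how little code "a few lines" really is, here is a minimal sketch of that check (all names and the marker list are my own illustration, not anything Stability actually runs): scan a scraped page's text for Creative Commons license strings and drop the page's images from the training pool if one is found.

```python
# Substrings that signal a restrictive license on a scraped page.
# Purely illustrative; a real crawler would also look at metadata.
RESTRICTIVE_MARKERS = (
    "BY-NC-ND",
    "BY-ND",
    "BY-NC",
    "BY-NC-SA",
    "All rights reserved",
)

def is_opted_out(page_text: str) -> bool:
    """Return True if the page text declares a restrictive license."""
    haystack = page_text.upper()
    return any(marker.upper() in haystack for marker in RESTRICTIVE_MARKERS)

def filter_training_candidates(pages: dict[str, str]) -> list[str]:
    """Keep only URLs whose page text shows no restrictive license string."""
    return [url for url, text in pages.items() if not is_opted_out(text)]
```

A string match like this would miss licenses declared only in image metadata or `rel="license"` markup, but even this crude version would respect the explicit "BY-NC-ND" notice on a blog like the one described above.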
(3/4)
#StableDiffusion
@ceoln @theexplorographer
They could even send the creators of the original works a message asking for consent to include their works in the training data. That is harder, of course, but not undoable.
I can imagine we will see some sort of regulation on this in the future - what AI can and cannot be trained on. But it will take a long time...
(4/4)
#StableDiffusion
Apologies if I've already linked this here, but I wrote down some of the possibilities that I see, here: https://paper.wf/ceoln/ai-art-tools-the-way-s-forward
@theexplorographer@mindly.social