@nichg like this model here? https://huggingface.co/lambdalabs/sd-image-variations-diffusers
@nichg I don't think that 2k images are nearly enough for this job
@GaggiX interesting if that's the case, since Dreambooth seems to work so well with only 10-100 images. What if one just trained the projection layer and not the UNet?
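A minimal sketch of the "train only the projection" idea, assuming a frozen Stable Diffusion UNet from diffusers; the projection layer and hyperparameters here are hypothetical, not anything from the thread:

```python
import torch
import torch.nn as nn
from diffusers import UNet2DConditionModel

# Load and freeze the UNet; only the new conditioning projection trains.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.requires_grad_(False)

# Hypothetical projection from the ViT hidden size (1024 for ViT-L/14)
# to the UNet's cross-attention dim (768 for SD 1.5).
proj = nn.Linear(1024, unet.config.cross_attention_dim)
optimizer = torch.optim.AdamW(proj.parameters(), lr=1e-4)
```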
@nichg with Dreambooth you're not trying to shift the entire conditioning distribution
@GaggiX Well anyhow, this probably saved me three months of training time, so thanks!
@GaggiX Oh, hm, it's actually not quite the same as the Image Variations thing. What I was trying to do was to use the whole set of ViT patches as tokens, whereas it looks like Variations uses the pooled and projected CLIP vector. So you can't just use two images as input and get a third with Variations, unfortunately...
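For concreteness, a sketch of the difference using the standard transformers CLIP vision API (untested; Variations conditions on a projection of the pooled output):

```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

model = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

def encode(image: Image.Image):
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    pooled = out.pooler_output        # (1, 1024): one vector per image
    patches = out.last_hidden_state   # (1, 257, 1024): CLS + 16x16 patch tokens
    return pooled, patches

# With the patch tokens, two images can simply be concatenated along the
# sequence axis and handed to cross-attention as one longer context:
# ctx = torch.cat([patches_a, patches_b], dim=1)  # (1, 514, 1024)
```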
@nichg I thought it would be possible to interpolate between two CLIP embeddings and get in-between variations. Isn't that how DALL-E 2 was conditioned?
@GaggiX Yeah, but that means something different. Interpolating between two vectors gives you the concept halfway between the two. But if you use cross-attention over a token set and add additional tokens, it's more like an 'and'.
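A toy illustration of the two operations being contrasted (made-up tensors, just to show the shapes):

```python
import torch

def slerp(a, b, t):
    """Spherical interpolation between two embedding vectors."""
    a_n, b_n = a / a.norm(), b / b.norm()
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))
    return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

# Pooled embeddings: interpolation gives the concept *between* A and B.
pooled_a, pooled_b = torch.randn(768), torch.randn(768)
halfway = slerp(pooled_a, pooled_b, 0.5)

# Token sets: concatenation gives A *and* B -- cross-attention can attend
# to tokens from either image independently.
tokens_a, tokens_b = torch.randn(1, 257, 768), torch.randn(1, 257, 768)
both = torch.cat([tokens_a, tokens_b], dim=1)   # (1, 514, 768)
```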
@nichg I don't understand what are you trying to do, I thought it was something like Midjourney when it's conditioned on two or more images
@GaggiX Broadly, if fine-tuning to new modalities is easy, you could have a multi-modality model where you choose what to drop in and what to remove. Use the ViT tokens except in an area to inpaint. Combine with audio, because why not. Have some text be descriptive and other text be treated as tags; use one image for a mask, one for a depth map, a segmentation map, a texture reference, etc.
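Hypothetically, the "drop-in modalities" idea could look like one small trainable adapter per modality projecting into the UNet's cross-attention width, with the context assembled from whichever modalities are active. All names and dimensions below are made up for illustration:

```python
import torch
import torch.nn as nn

CROSS_ATTN_DIM = 768  # assumed UNet cross-attention width

class ModalityAdapter(nn.Module):
    """Per-modality projection; the UNet itself stays frozen."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, CROSS_ATTN_DIM)

    def forward(self, tokens):            # (B, N, in_dim)
        return self.proj(tokens)          # (B, N, CROSS_ATTN_DIM)

adapters = nn.ModuleDict({
    "vit_patches": ModalityAdapter(1024),
    "audio": ModalityAdapter(512),
    "depth": ModalityAdapter(256),
})

def build_context(active):
    """Drop modalities in or out by choosing which token sets to concatenate."""
    return torch.cat([adapters[k](v) for k, v in active.items()], dim=1)

ctx = build_context({
    "vit_patches": torch.randn(1, 257, 1024),
    "depth": torch.randn(1, 64, 256),
})  # (1, 321, 768): feed as encoder_hidden_states to the UNet
```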
@nichg I still have no idea what you are trying to do but I guess good luck ahah
@nichg @GaggiX does the cross-attention operation accept arbitrary-length sequences? Sounds like a really cool idea.
Would be especially cool if you could do something like LoRA on the cross-attention weights to separately fine-tune different conditioning modalities and then merge the ones you need at inference time.
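A toy check of both points in plain PyTorch (hypothetical shapes; not diffusers' actual attention or LoRA code): cross-attention is length-agnostic in the context dimension, and per-modality low-rank deltas on the K projection can be merged into the frozen base weight before inference.

```python
import torch
import torch.nn.functional as F

d = 768
W_k = torch.randn(d, d)   # frozen base K-projection of a cross-attention layer
W_v = torch.randn(d, d)

def merge_lora(W, A, B, scale=1.0):
    """A: (r, d), B: (d, r); merged weight is W + scale * B @ A."""
    return W + scale * (B @ A)

# Separately trained low-rank adapters, one per conditioning modality:
lora_audio = (torch.randn(8, d), torch.randn(d, 8))
lora_depth = (torch.randn(8, d), torch.randn(d, 8))

# Switch on the modalities you want for this run by merging their deltas:
W_k_run = merge_lora(merge_lora(W_k, *lora_audio), *lora_depth)

# And the context can be any length -- the output shape only depends on
# the queries (the image latents):
latents = torch.randn(1, 4096, d)
for ctx_len in (77, 257, 514):
    ctx = torch.randn(1, ctx_len, d)
    k, v = ctx @ W_k_run.T, ctx @ W_v.T
    out = F.scaled_dot_product_attention(latents, k, v)
    assert out.shape == latents.shape
```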
@GaggiX exactly like that I guess 😅