I tried to use another Stable Diffusion user interface called "ComfyUI". It's node based, but for some reason all the images came out very ugly, nothing like the kind of images I get from the Automatic1111 WebUI, even though I was using the exact same model. I don't know what's happening.

@ryuichi@qoto.org The initial noise is generated on the CPU in Comfy, so the images will never match the ones from A1111. The weights are also handled differently: if you use a lot of (coolprompt:1.3), that will most likely have a negative effect, and BREAK has no effect at all. In A1111 weights are averaged, but in Comfy they are not.
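Roughly what I mean by "averaged", as a toy Python sketch (this is not code from either project, just the general idea; the tensor shapes and names are placeholders):

```python
import torch

def weight_tokens_a1111_style(emb: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # A1111-style: multiply each token embedding by its (prompt:1.3) weight,
    # then rescale so the overall mean of the conditioning stays the same.
    old_mean = emb.mean()
    out = emb * w.unsqueeze(-1)
    return out * (old_mean / out.mean())

def weight_tokens_comfy_style(emb: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # Comfy-style (roughly): the weight scales the embedding directly, with no
    # re-normalisation, so heavy weights push the conditioning much harder.
    return emb * w.unsqueeze(-1)
```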
Things like clip skip and the VAE are also controlled via separate loader nodes. My latest gens are made in Comfy; I rarely use A1111 anymore, only for making grids and merging models.
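For reference, this is roughly what those nodes look like in ComfyUI's "API format" prompt, written here as a Python dict (a trimmed sketch with placeholder filenames, not my actual workflow):

```python
# Fragment of a ComfyUI API-format prompt. Node ids and filenames are
# placeholders; the class names are the stock ComfyUI node types.
prompt_fragment = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "my_model.safetensors"}},
    "2": {"class_type": "CLIPSetLastLayer",   # the "clip skip 2" equivalent
          "inputs": {"clip": ["1", 1], "stop_at_clip_layer": -2}},
    "3": {"class_type": "VAELoader",          # the VAE is an explicit node, not a setting
          "inputs": {"vae_name": "my_vae.safetensors"}},
}
```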


@maid I see, so it's a totally new way of generating images that you have to get used to.

Well, I can see the advantages of the system; it probably wouldn't be bad to give it a try.


@ryuichi@qoto.org https://comfyanonymous.github.io/ComfyUI_examples/faq/ explains it better than I can.

The main benefit for me is control over upscaling: instead of doing one big hires fix step, I have a custom workflow that does a couple of small upscales instead of one big one. The prompting is mostly the same; I never really used weights that much, so the different behaviour isn't really an issue. Here's an example of my workflow (see the sketch below). It's somewhat overloaded, but I have plenty of flexibility just by moving a couple of outputs around.
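The shape of it in pseudocode, where sample() and upscale_latent() are just hypothetical stand-ins for a KSampler and a latent upscale node:

```python
def two_stage_upscale(latent, sample, upscale_latent):
    # Base generation, then two gentler upscale + re-sample passes instead of
    # one big hires-fix step; 1.4 * 1.4 gives roughly 2x total.
    latent = sample(latent, denoise=1.0)
    for ratio, denoise in [(1.4, 0.5), (1.4, 0.35)]:
        latent = upscale_latent(latent, ratio)
        latent = sample(latent, denoise=denoise)
    return latent
```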

@ryuichi@qoto.org The image got crushed quite a bit; you can load the attached JSON into Comfy to see my workflow. I also use a custom upscale-by-ratio node so I don't have to fiddle with absolute sizes. Drop the Python file into the custom_nodes folder.
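If you'd rather write your own, a minimal upscale-by-ratio latent node looks something like this (a sketch of the usual custom-node boilerplate, not the exact file I attached):

```python
import comfy.utils

class UpscaleLatentByRatio:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "samples": ("LATENT",),
            "ratio": ("FLOAT", {"default": 1.5, "min": 1.0, "max": 4.0, "step": 0.05}),
            "upscale_method": (["nearest-exact", "bilinear", "area"],),
        }}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "upscale"
    CATEGORY = "latent"

    def upscale(self, samples, ratio, upscale_method):
        # Scale the latent tensor by the requested ratio instead of asking
        # for absolute width/height values.
        s = samples.copy()
        h, w = samples["samples"].shape[2], samples["samples"].shape[3]
        s["samples"] = comfy.utils.common_upscale(
            samples["samples"], round(w * ratio), round(h * ratio),
            upscale_method, "disabled")
        return (s,)

# ComfyUI picks this mapping up from any .py file in custom_nodes.
NODE_CLASS_MAPPINGS = {"UpscaleLatentByRatio": UpscaleLatentByRatio}
```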
