I tried another Stable Diffusion user interface called "ComfyUI". It's node based, but for some reason all the images came out very ugly, nothing like the ones I get from the Automatic1111 WebUI, even though I was using the exact same model. I don't know what's happening.
@nop I generated this one later and it seems to have come out fine.
And yeah, supposedly this way of generating images provides more flexibility when building a workflow and a better understanding of how image generation actually works. You can chain several nodes together and produce a finished image within the same job.
I've used nodes a lot in Blender, so it's not a foreign concept to me, but I think it would still take some time to get used to.
@ryuichi I'll try it too, it sounds interesting. With the node connection format we can see the type and order of the inputs required to generate the image, so there may be some new discoveries to be made.
@nop check the other replies from dtzbts~maid in my original post; he explains a bit more and uploads some screenshots.
@ryuichi@qoto.org The initial noise is generated on the CPU in Comfy, so the images will never match the ones from A1111. The weights are also handled differently: if you use a lot of (coolprompt:1.3) it will most likely have a negative effect, and BREAK has no effect at all. In A1111 weights are averaged, but in Comfy they are not.
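For anyone curious why the seeds can never line up: PyTorch's CPU and CUDA random number generators produce different streams even with the same seed. A minimal sketch in plain PyTorch (not ComfyUI's actual sampling code; the GPU half assumes a CUDA device is available):

```python
import torch

seed = 42
shape = (1, 4, 64, 64)  # latent for a 512x512 image, just as an example

# CPU noise, the way Comfy seeds its initial latent
gen_cpu = torch.manual_seed(seed)
noise_cpu = torch.randn(shape, generator=gen_cpu)

# GPU noise, the A1111 default
gen_gpu = torch.Generator(device="cuda").manual_seed(seed)
noise_gpu = torch.randn(shape, generator=gen_gpu, device="cuda")

# Same seed, different RNG streams -> different starting latents,
# so the final images can never match exactly.
print(torch.allclose(noise_cpu, noise_gpu.cpu()))  # False
```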
Things like clip skip and the VAE are also controlled via separate loader nodes. My latest gens are made in Comfy; I rarely use A1111 anymore, only for making grids and merging models.
@maid I see, so it's a totally new way of generating images that you have to get used to.
Well, I can see the advantages of the system; it probably wouldn't be bad to give it a try.
@ryuichi@qoto.org https://comfyanonymous.github.io/ComfyUI_examples/faq/ explains it better than I can.
The main benefit for me is control over upscaling: instead of doing one big hires-fix step, I have a custom workflow that does a couple of small upscales instead of one big one. The prompting is mostly the same; I never really used weights that much, so the different behaviour isn't really an issue. Here's an example of my workflow. It's somewhat overloaded, but I get plenty of flexibility just by moving a couple of outputs around.
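The idea, roughly (an illustrative sketch, not my actual node graph; `sample` and `upscale` are stand-ins for a KSampler pass and a latent upscale node, and the denoise value is just an example):

```python
def staged_upscale(latent, sample, upscale, total_ratio=2.0, stages=2):
    # Split one big 2x hires-fix into `stages` smaller steps,
    # re-sampling at low denoise after each upscale to clean up artifacts.
    step = total_ratio ** (1.0 / stages)  # 2.0 over 2 stages -> ~1.41x each
    for _ in range(stages):
        latent = upscale(latent, ratio=step)
        latent = sample(latent, denoise=0.45)
    return latent
```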
@ryuichi@qoto.org The image got crushed quite a bit; you can load the attached JSON into Comfy to see my workflow. I also use a custom upscale-by-ratio node, so I don't have to fiddle with it. Drop the Python file into the custom_nodes folder.
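If you want to roll your own, a ComfyUI custom node is just a Python class with a few class attributes. A rough sketch of an upscale-by-ratio node (the class name and defaults are made up for illustration, not the exact file I attached):

```python
import comfy.utils

class LatentUpscaleByRatio:
    # Declares the node's sockets: a latent input plus a float ratio widget.
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "samples": ("LATENT",),
            "ratio": ("FLOAT", {"default": 1.5, "min": 0.1, "max": 8.0, "step": 0.05}),
        }}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "upscale"
    CATEGORY = "latent"

    def upscale(self, samples, ratio):
        s = samples.copy()
        lat = samples["samples"]
        # Scale the latent's height/width by the ratio instead of
        # asking for absolute pixel dimensions.
        h, w = int(lat.shape[2] * ratio), int(lat.shape[3] * ratio)
        s["samples"] = comfy.utils.common_upscale(lat, w, h, "nearest-exact", "disabled")
        return (s,)

# ComfyUI picks this mapping up from any .py file in custom_nodes/
NODE_CLASS_MAPPINGS = {"LatentUpscaleByRatio": LatentUpscaleByRatio}
```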
@ryuichi This is interesting. When an image comes out well, it would be nice to get a sense of how it was created and with what workflow, more so than the WebUI gives you.