I tried to use another Stable Diffusion user interface called "ComfyUI". It's node based, but for some reason all the images came out very ugly, nothing like the kind of images I get from the Automatic1111 WebUI, and I was using the exact same model. I don't know what happened.

@ryuichi
This is interesting. If the image generates successfully, it would be nice to get a sense of how it was created and through what workflow, more so than with a WebUI.


@nop I generated this one later and it seems like it came out well.

And yeah, supposedly this way of generating images provides more flexibility in building a workflow and more understanding of how image generation works. You can chain several nodes together and produce a finished image within the same job.
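The chaining idea can be sketched roughly like this. This is not ComfyUI's actual API, just a hypothetical minimal illustration of a pull-based node graph, where each node's output feeds the next node's input so the whole chain runs as one job:

```python
# Hypothetical sketch of a node-graph workflow (not ComfyUI's real API):
# each node performs one step and consumes the outputs of its upstream nodes.

class Node:
    def __init__(self, name, fn, *inputs):
        self.name = name      # label for the step
        self.fn = fn          # the work this node performs
        self.inputs = inputs  # upstream nodes whose outputs we consume

    def run(self):
        # Evaluate upstream nodes first, then this node (pull-based evaluation).
        args = [node.run() for node in self.inputs]
        return self.fn(*args)

# Stand-in steps; a real workflow would load a checkpoint, sample latents,
# decode them with a VAE, and so on.
load_model = Node("load_model", lambda: "model")
sample = Node("sample", lambda m: f"{m}->latents", load_model)
decode = Node("decode", lambda l: f"{l}->image", sample)

print(decode.run())  # running the last node pulls the whole chain
```

Calling `run()` on the final node walks the graph backwards, so one request produces the finished result end to end.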

I've used nodes a lot in Blender, so it's not a foreign concept to me, but I think it would still take some time to get used to.


@ryuichi I'll try it too, since it sounds interesting. With the node connection format, we can see the type and order of inputs required to generate the image, so there may be some new discoveries to be made.

@nop Check the other replies from dtzbts~maid in my original post; he explains a bit more and uploads some screenshots.
