Countryside Bus Stop
This is a hybrid project: a #blender scene used as the basis, then processed several times with #stablediffusion
More info and images in the replies.
Continuing my attempt to bridge the creativity of the artist with the advances in AI image generation, I made a scene from scratch using Blender. My idea was to use this scene as a starting point, then enhance it and add more detail using #stablediffusion.
To create the scene, I used traditional modeling methods in Blender and mostly avoided pre-made image textures, relying instead on noise-based, procedurally generated materials.
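As a rough illustration (not my exact node setup), a material like that can be built in a few lines of Blender's Python API. The material name, colors, and noise values here are placeholders:

```python
# Minimal Blender (bpy) sketch: a procedural material driven by a noise
# texture instead of an image texture. Names and values are illustrative.
import bpy

mat = bpy.data.materials.new(name="ProceduralGrass")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

noise = nodes.new("ShaderNodeTexNoise")   # procedural noise instead of an image
noise.inputs["Scale"].default_value = 8.0
noise.inputs["Detail"].default_value = 6.0

ramp = nodes.new("ShaderNodeValToRGB")    # remap the noise into two grass tones
ramp.color_ramp.elements[0].color = (0.05, 0.18, 0.03, 1.0)
ramp.color_ramp.elements[1].color = (0.20, 0.35, 0.08, 1.0)

bsdf = nodes["Principled BSDF"]
links.new(noise.outputs["Fac"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], bsdf.inputs["Base Color"])

# assign the material to the active object
bpy.context.object.data.materials.append(mat)
```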
Once I had everything arranged the way I wanted (objects, colors, lighting, etc.), I used the img2img feature of the #AUTOMATIC1111 webui along with a prompt I had crafted specifically for this image. I processed the image with Stable Diffusion several times, modifying the prompt slightly on each pass and using different models on different masked areas of the image. I chose to leave the bus stop itself mostly untouched by the AI and only let it enhance the surrounding elements.
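For reference, a single pass like that can also be scripted against the webui's HTTP API (available when the webui is launched with --api). This is a hedged sketch, not my exact settings: the prompt, denoising strength, and mask file below are placeholders.

```python
# One img2img pass through the AUTOMATIC1111 webui HTTP API.
# A low denoising_strength keeps the Blender render's structure; a mask
# restricts the AI to the areas you want it to enhance.
import base64
import requests

def img2img_pass(image_path, prompt, mask_path=None, strength=0.45,
                 url="http://127.0.0.1:7860"):
    with open(image_path, "rb") as f:
        init_image = base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [init_image],
        "prompt": prompt,
        "denoising_strength": strength,
        "steps": 30,
        "cfg_scale": 7,
    }
    if mask_path:  # only regenerate the masked area, leave the rest untouched
        with open(mask_path, "rb") as f:
            payload["mask"] = base64.b64encode(f.read()).decode()

    r = requests.post(f"{url}/sdapi/v1/img2img", json=payload)
    r.raise_for_status()
    return base64.b64decode(r.json()["images"][0])

# Example: enhance the vegetation while masking the bus stop out of the pass
result = img2img_pass("render.png",
                      "lush countryside, detailed grass, photo",
                      mask_path="vegetation_mask.png")
with open("pass1.png", "wb") as f:
    f.write(result)
```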
My goal with this is again to prove that, for an artist or creative, a tool like #stablediffusion does NOT need to be a nemesis to ignore or an enemy to fight; it can be used efficiently as part of the digital artist's toolbox to enhance their work. As I said in another post, this allows someone with limited time or limited knowledge to create better artwork, letting the computer take care of the rest while the artist concentrates on artistic decisions like object design, composition, colors, etc.
@nop Ah yeah, I totally forgot about ControlNet! It would be a good way to bring Blender into the whole Stable Diffusion workflow too.
@ryuichi Rendering a 3D model, e.g. with toon rendering, projects the three-dimensional positions accurately onto a two-dimensional plane. However, reproducing the distorted perspective of hand-drawn illustrations that way seems challenging to me. With Stable Diffusion, it seems like those aspects could be resolved.
@ryuichi With a 3D model, you can use ControlNet's depth map to generate images with consistent depth. Unlike toon rendering, you can also generate images that deviate from the exact 3D model. I think there are various interesting ways to use it.
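For anyone curious, here is a minimal sketch of the Blender side of that idea: rendering a normalized depth map through the compositor so it can be fed to ControlNet's depth model directly, instead of letting a preprocessor estimate depth from the finished render. The view layer name, node wiring, and output path are assumptions, not a tested pipeline.

```python
# Render a normalized depth map from the current Blender scene.
# ControlNet's depth convention is near = white, so the Z pass is
# normalized to 0..1 and then inverted.
import bpy

scene = bpy.context.scene
scene.view_layers["ViewLayer"].use_pass_z = True  # enable the raw Z pass
scene.use_nodes = True

nodes = scene.node_tree.nodes
links = scene.node_tree.links
nodes.clear()

rl = nodes.new("CompositorNodeRLayers")
norm = nodes.new("CompositorNodeNormalize")   # squash Z values into 0..1
invert = nodes.new("CompositorNodeInvert")    # flip so near objects are white
comp = nodes.new("CompositorNodeComposite")

links.new(rl.outputs["Depth"], norm.inputs[0])
links.new(norm.outputs[0], invert.inputs["Color"])
links.new(invert.outputs["Color"], comp.inputs["Image"])

scene.render.filepath = "/tmp/depth.png"
bpy.ops.render.render(write_still=True)
```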