Show HN: A Stable Diffusion desktop frontend with inpainting, img2img and more
I was frustrated with laggy notebook Stable Diffusion demos. They also rarely had all the features I wanted (for example, some only had inpainting and some only had img2img, so if I wanted both I had to keep copying images between notebooks). So I made this desktop frontend, which runs much more smoothly than the notebook alternatives and integrates image generation, inpainting and img2img into a single workflow. See a video demo here: https://user-images.githubusercontent.com/6392321/191858568-...
Features include:
* Can run locally or connect to a Google Colab server (see the connection sketch after this list)
* Ability to erase parts of the image
* Ability to paint custom colors into the image. This is useful both for img2img (you can sketch a rough prototype and have it reimagined into something nicer) and for inpainting (for example, painting a pixel red forces Stable Diffusion to put something red there)
* Infinite undo/redo
* You can import other images into a scratch pad and paste them into the main image after erasing/cropping/scaling them
* Increase image size (by padding with transparent empty margins) for outpainting
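On the local-vs-Colab point: conceptually the UI just needs an HTTP endpoint it can call, whether that lives on localhost or is tunneled out of a notebook. A minimal sketch of that idea (the /txt2img route, JSON payload and base64 response here are all invented for illustration, not this project's real API):

    # Hypothetical HTTP bridge between the desktop UI and a Colab-hosted backend.
    import base64
    import io

    import requests
    from PIL import Image

    def remote_txt2img(server_url: str, prompt: str) -> Image.Image:
        # Send the prompt to the remote server and wait for the generated image.
        resp = requests.post(f"{server_url}/txt2img",
                             json={"prompt": prompt}, timeout=300)
        resp.raise_for_status()
        # The server is assumed to return {"image": "<base64-encoded PNG>"}.
        png_bytes = base64.b64decode(resp.json()["image"])
        return Image.open(io.BytesIO(png_bytes))

    # e.g. remote_txt2img("http://localhost:8000", "a watercolor fox")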
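And the outpainting padding in the last bullet boils down to compositing the image onto a larger transparent canvas before handing it to the inpainting model, roughly like this (PIL sketch; the function name is mine, not code from the app):

    from PIL import Image

    def pad_for_outpainting(img: Image.Image, margin: int = 128) -> Image.Image:
        # Paste the original image onto a larger fully transparent canvas;
        # the transparent border is the region the model is asked to fill in.
        img = img.convert("RGBA")
        w, h = img.size
        canvas = Image.new("RGBA", (w + 2 * margin, h + 2 * margin), (0, 0, 0, 0))
        canvas.paste(img, (margin, margin))
        return canvas

    # padded = pad_for_outpainting(Image.open("input.png"), margin=128)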
Impressive! I appreciate not having to FFW to the useful info in the video. It's concise, to-the-point, and effectively shows what you made and how it works. Looking forward to trying it out.
Is there a way to run a batch of files if you wanted to use this on video frames?
Would you get in trouble with Colab using this approach?
I am not sure :|. I don't see how this is any different from running, say, a Gradio application, though.