https://github.com/CompVis/stable-diffusion/
Does anyone have this working on 64-bit Ubuntu 22.04 LTS? Could you share steps on how to get it working, or just link to a known tested/working guide for same?
Got it. I'll write it up in case it helps someone else. For now this covers only CPU "sampling" (generating an image); I'll add GPU sampling once I get it working. Sampling should run entirely offline.
pip install --upgrade diffusers transformers scipy torch
sudo apt install git-lfs
https://huggingface.co/runwayml/stable-diffusion-v1-5
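The model weights come from cloning that repository with Git LFS. A hedged sketch of the setup as shell commands (same packages and URL as above; note the clone downloads several gigabytes):

```shell
# Python dependencies for sampling
pip install --upgrade diffusers transformers scipy torch

# Git LFS is needed to fetch the large weight files
sudo apt install git-lfs
git lfs install

# Clone the model repo (requires a Hugging Face account and accepting the license)
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
cd stable-diffusion-v1-5
```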
(You have to log in or sign up first and accept their license agreement.) Then you can create a small Python script (inside your local working copy of the cloned git repo above) and run it to try sampling for yourself:
from diffusers import StableDiffusionPipeline

# Load the model weights from the current directory (the cloned repo)
pipe = StableDiffusionPipeline.from_pretrained('.')

prompt = "a photo of an astronaut riding a horse on mars"
# Run the denoising loop; this is slow on CPU
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
https://github.com/invoke-ai/InvokeAI#installation
This provides a really nice web GUI, too.
My GPU shows up as Intel CometLake-S GT2 [UHD Graphics 630] in the output of lspci | grep VGA and in neofetch; screenfetch calls it Mesa Intel(R) UHD Graphics 630 (CML GT2). Either way, I don't know how to use this GPU for sampling (or whether it is even possible).
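For scripting, the device name can be pulled out of lspci output with the standard library alone. A small sketch (the sample line is a made-up example formatted like typical lspci output for this device):

```python
import re

def vga_devices(lspci_output):
    """Return device descriptions from 'VGA compatible controller' lines."""
    devices = []
    for line in lspci_output.splitlines():
        # Typical line: "00:02.0 VGA compatible controller: <vendor> <device> (rev xx)"
        m = re.match(r"\S+\s+VGA compatible controller:\s*(.+?)(?:\s*\(rev [^)]*\))?$", line)
        if m:
            devices.append(m.group(1))
    return devices

# Feed it the output of `lspci`; the line below is a hypothetical example
sample = "00:02.0 VGA compatible controller: Intel Corporation CometLake-S GT2 [UHD Graphics 630] (rev 05)"
print(vga_devices(sample))
```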