How to Run Stable Diffusion Without A Graphics Card (GPU)


Last Updated on September 22, 2022 by Jay

There’s a way to run the Stable Diffusion software without a graphics card (GPU) and for free. Don’t believe me? Keep reading and I’ll show you how.

What Is Stable Diffusion?

Stable Diffusion is an artificial intelligence software that can create images from text. Like most AI software, it requires a good graphics card for intensive computation. As of today (Sept 10, 2022), the minimum hardware requirement to run Stable Diffusion is 4GB of video RAM. Generally, the more VRAM, the faster the program can run and produce results.

What If I Don’t Have A Graphics Card?

Some of us don’t even own a graphics card, let alone a high-end GPU.

However, what if we could “rent” a computer with a powerful GPU and use it to run the Stable Diffusion software? And by “rent”, I really mean free of charge?

Introducing Google Colab

Google Colaboratory, or “Colab”, is a Google product that allows anyone to write and run Python code through the browser. It’s essentially a Jupyter notebook, but running on Google’s powerful servers instead of your local machine.

The good part is that Google Colab is free to use, as long as you have a Google account.

Google Colab Free

And there’s more – Google also provides free GPUs for people to use. And not just any GPU – at the time of writing they are Nvidia Tesla T4s, though Google may change or upgrade them later on. Based on the spec sheet, these GPUs have 16GB of VRAM!

It sounds too good to be true, but that is the reality – Google offers free resources to researchers and practitioners for machine learning research. There’s one downside we need to be aware of. To make sure everyone can share the free resources, our Colab notebooks will disconnect from Google’s virtual servers after 12 hours (or sooner if inactive). That said, we can already do a lot of things within 12 hours, and we can always reconnect to a new session after the 12-hour limit!

The Catch

Nothing is free forever, I get it.

After two days of use and generating a few hundred images on Google Colab, I got this message:

I’m not complaining about the usage limits since it’s a free service – Google still gets a thumbs-up from me. But if you need to use the GPUs extensively for long periods of time, the free tier of Google Colab is not for you.

Setup Google Colab & Environments

Can’t wait to get started? Me too!

When you first open the general Google Colab page, you’ll see a short tutorial. Feel free to skip that, since I’ll show you everything you need to know to run Stable Diffusion in a Colab notebook.

Instead of going to the general Colab URL, go to this one:

This will open up a notebook created by Hugging Face, an AI community platform similar to Kaggle.

Enable GPU

Once we open the stable_diffusion notebook, head to the Runtime menu, and click on “Change runtime type”.

Enable GPU Inside Google Colab

Then, under Hardware accelerator, select GPU from the dropdown and click on Save.

Enable GPU Inside Google Colab

Now run the first line of code inside the Colab notebook by clicking on the play button in front of the code block. You’ll get some warnings – click Run anyway to proceed. This will display the available GPU for our notebook session. We can confirm it’s a nice and juicy Tesla T4, with 16GB of VRAM!

Nvidia Tesla T4 From Google Colab
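If you’re curious what that check boils down to, here’s a minimal sketch of my own (not the notebook’s actual cell) that queries the runtime’s GPU via the nvidia-smi command-line tool:

```python
import shutil
import subprocess

def gpu_report():
    """Return the output of nvidia-smi, or a fallback message when no
    NVIDIA driver is present (e.g. a CPU-only runtime)."""
    if shutil.which("nvidia-smi") is None:
        return "No NVIDIA GPU driver found - did you enable the GPU runtime?"
    return subprocess.run(
        ["nvidia-smi"], capture_output=True, text=True
    ).stdout

print(gpu_report())
```

On a Colab GPU runtime this prints a table listing the Tesla T4 and its memory; on a CPU-only machine it prints the fallback message instead.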

Install Python Libraries (Hugging Face Diffusers)

Once we confirm a GPU is available, run the next code block to install the required libraries, which include the Diffusers library created by Hugging Face. The Diffusers library is a collection of pretrained diffusion models and pipelines, and it supports the Stable Diffusion model. In other words, we can run Stable Diffusion via the Diffusers library.

Install Python libraries

Installing libraries on the Google Colab server is pretty quick. Note that when we reconnect to our notebook after the 12-hour limit, we’ll need to re-install those libraries, since the environment appears to be reset. Still, re-installing is no big deal since it takes only a few minutes.
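For reference, the install cell is essentially a pip command (in Colab, shell commands are prefixed with “!”). Here’s a stand-in sketch in plain Python – note the package list in the commented call is my assumption; the notebook’s own install cell is authoritative:

```python
import subprocess
import sys

def install(*packages):
    """Install packages into the current Python environment, the
    programmatic equivalent of a '!pip install ...' cell in Colab."""
    subprocess.check_call([sys.executable, "-m", "pip", "install", *packages])

# Package list is an assumption - defer to the notebook's own cell:
# install("diffusers", "transformers", "scipy", "ftfy")
```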

After installing all the libraries, run the next bit of code to enable external widgets in Google Colab.

Enable external widgets
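That cell boils down to a call into Colab’s helper library. A sketch of what it does (the import is deferred inside a function because google.colab exists only inside a Colab runtime):

```python
def enable_widgets():
    """Enable third-party (custom) widgets in a Colab notebook.

    Colab-only sketch: google.colab is available only inside Colab,
    so the import happens when the function is called, not at load time.
    """
    from google.colab import output
    output.enable_custom_widget_manager()
```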

Hugging Face Login And Access Token

Run the next bit of code; it will attempt to connect to Hugging Face. You’ll need a Hugging Face account first. Once you have an account, click on your profile icon in the top-right corner of the page, then Settings.

Hugging Face Account Settings

Then click on Access Tokens, and if you don’t have a token already, click on New token to generate one.

Hugging Face Access Token

Give your new token a name (anything is fine), choose either the read or write Role, then click on Generate a token.

Generate A Token

Once you have a token, click on the square icon beside “Show” – that will copy the token without revealing it.

Copy Token

With the token copied, head back to the Colab notebook and paste the token into the input box that appeared after running the Hugging Face login code. Then click on the “Login” button.

Authenticate Hugging Face

If successful, we’ll see a Login successful message like the following:

Login Successful
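The login cell itself is just a couple of lines. A sketch of what it does (the import is deferred, since huggingface_hub is only available after the install step above):

```python
def hf_login():
    """Open the interactive Hugging Face login widget in the notebook.

    Sketch only: requires the huggingface_hub package (installed with
    the libraries earlier), so the import is deferred until the call.
    """
    from huggingface_hub import notebook_login
    notebook_login()  # paste your access token into the box that appears
```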

Download the Stable Diffusion Model Weights

Run the next bit of code to download the Stable Diffusion model weights. Note that the revision="fp16" and torch_dtype=torch.float16 arguments load the half-precision model weights. I find these weights give pretty good results while keeping the program fast, so I’ll stick with the half-precision weights.

The download will take a few minutes. Let it run patiently.

Once done, our Python environment is ready to go!
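As a sketch of what that download cell does – the model id "CompVis/stable-diffusion-v1-4" is my assumption based on the Stable Diffusion release current at the time, and the notebook’s own cell is authoritative:

```python
def load_pipeline(model_id="CompVis/stable-diffusion-v1-4"):
    """Download the half-precision Stable Diffusion weights.

    Imports are deferred so this sketch only needs torch and diffusers
    when actually called (both installed earlier in the notebook).
    """
    import torch
    from diffusers import StableDiffusionPipeline

    return StableDiffusionPipeline.from_pretrained(
        model_id,
        revision="fp16",            # half-precision branch of the weights
        torch_dtype=torch.float16,  # load tensors as float16
        use_auth_token=True,        # uses the Hugging Face token saved at login
    )
```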

Run Stable Diffusion Without a GPU

Run this line of code to move the Diffusers pipeline to our Tesla GPU:

Move pipeline to GPU
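The move itself is a one-liner; here it is wrapped in a small helper for illustration (a sketch, assuming a CUDA-capable GPU like Colab’s Tesla T4 is available):

```python
def to_gpu(pipe):
    """Move a diffusers pipeline onto the CUDA device.

    Sketch: `pipe` is the pipeline object created by the download cell;
    .to("cuda") transfers its model weights to GPU memory.
    """
    return pipe.to("cuda")
```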

Then run the next block of code to kick off a Stable Diffusion run! A note on the arguments inside pipe():

  • num_inference_steps: the number of denoising steps; more steps generally yield better overall image quality, at the cost of a longer run
  • width: width of the generated image, in pixels
  • height: height of the generated image, in pixels
from torch import autocast

prompt = """a wonderful traditional japanese city, day light, detailed, multi color vegetation, myst, sunny, 
          crowd on the streets, featured on artstation, wide shot, desaturated look, lense flares"""

with autocast("cuda"):
    image = pipe(prompt, num_inference_steps=150, width=1024, height=512).images[0]  # a PIL image

# To save the image:
image.save("astronaut_rides_horse.png")

# Or, inside a Google Colab notebook, display it directly by
# evaluating it as the last expression in a cell:
image

Feel free to copy the prompt, which I copied from a Reddit post (lol).

I didn’t specify a seed, so you’ll get a different image than mine, but hopefully a good one as well.


Stable Diffusion is a super awesome piece of software, but some of us might not have adequate hardware to run it. With the help of a Google Colab notebook, anyone can use Stable Diffusion without a GPU for free.

Let’s start creating amazing art!

Additional Resources

How to Run Stable Diffusion on Windows


