How to Install LCM in A1111 for Faster Stable Diffusion Generations
This is a guide on how to install LCM in Automatic1111 (A1111). LCM stands for Latent Consistency Model, a class of generative AI models that can produce high-quality images from text or other images in only a handful of inference steps, which makes generations dramatically faster. LCM works with Automatic1111, a user-friendly web interface for Stable Diffusion. With LCM, you can speed up the images generated by Stable Diffusion, and still use image-to-image techniques and ControlNet, a feature that lets you guide the composition and structure of the generated images.
LCM is especially useful in A1111 when generating animations with Stable Diffusion. A1111 is compatible with AnimateDiff, an animation extension, but AnimateDiff requires a lot of GPU power, which may not be available on low-end computers. LCM can help you overcome this limitation: because it needs only a few sampling steps, it can run on modest GPUs (or even on CPU) and still produce images usable for animation. It also makes the diffusion process faster and more stable.
Faster Generations with Latent Consistency Model
In this guide, I will show you how to easily install SD Webui LCM for A1111, an extension that integrates LCM into the Stable Diffusion web UI. With this extension, you can access LCM features from the web UI and use them to create stunning images and animations with Stable Diffusion and A1111 far more quickly.
Once you have Automatic1111 ready, you can install the LCM tab as an extension and use it with the Stable Diffusion Web UI. The installation process is very simple and takes only a few minutes. In this blog post, I will guide you through the steps to install and use SD Web UI LCM with Automatic1111. Let’s get started!
Installing SD WebUI LCM:
Open A1111 and select the ‘Extensions’ tab.
Choose the ‘Available’ tab.
Click on ‘Load from:’ to see a list of extensions.
In the search bar at the bottom, type in ‘Latent’.
(it may look like an empty white space, but you can click on it)
Find ‘Latent Consistency Models tab’ extension.
Click on ‘Install’ on the right side.
When the installation is complete, go to the ‘Settings’ tab and click ‘Apply Settings and Reload UI’. Alternatively, close the Web UI and restart Automatic1111.
You have successfully installed SD Webui LCM.
You can use the LCM model with a LoRA in your prompts by adding the LoRA model’s filename to the prompt with A1111’s tag syntax. For example, to use the SDXL version, you would type ‘<lora:lcm-lora-sdxl:1>’ in the prompt.
This loads the LoRA into the base or fine-tuned SD or SDXL model and performs a 4-step inference. You can also use LCM with LoRA for Img2Img or Vid2Vid tasks, but note that the output height and width will be the same as the input.
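A1111 activates a LoRA through the ‘<lora:filename:weight>’ tag in the prompt. As a rough sketch of how such a tag is put together (the helper function is my own illustration, not part of A1111):

```python
def lora_tag(filename: str, weight: float = 1.0) -> str:
    """Build an A1111-style LoRA activation tag, e.g. <lora:lcm-lora-sdv1-5:1>."""
    # Format whole-number weights compactly (1 instead of 1.0).
    w = int(weight) if float(weight).is_integer() else weight
    return f"<lora:{filename}:{w}>"

prompt = "a cozy cabin in the woods, " + lora_tag("lcm-lora-sdv1-5", 1)
print(prompt)  # a cozy cabin in the woods, <lora:lcm-lora-sdv1-5:1>
```

The filename part must match the name you gave the checkpoint file in the Lora folder, without the extension.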
Step 1: Download & Rename File
To download the LCM LoRA checkpoints, click one of the links below.
You can choose SD1.5 or SDXL. After you’ve downloaded the checkpoint, rename it to something you can remember and recall in A1111; to keep it simple, use something like ‘lcm-lora-sdv1-5’.
For the LCM-LoRA SD 1.5 version:
- Download: ‘pytorch_lora_weights.safetensors’
- Rename to: ‘lcm-lora-sdv1-5’
- Source Link
For the LCM-LoRA SDXL version:
- Download: ‘pytorch_lora_weights.safetensors’
- Rename to: ‘lcm-lora-sdxl’
- Source Link
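Both downloads share the same filename (pytorch_lora_weights.safetensors), which is why the rename matters: without it, the two versions would overwrite each other. A minimal sketch of the rename-and-move step (the function and the example paths are my own; adjust them to your install):

```python
import shutil
from pathlib import Path

def install_lora(downloaded: str, lora_dir: str, new_name: str) -> Path:
    """Move a downloaded LoRA checkpoint into the A1111 Lora folder under a new name."""
    Path(lora_dir).mkdir(parents=True, exist_ok=True)
    target = Path(lora_dir) / f"{new_name}.safetensors"
    shutil.move(downloaded, target)
    return target

# Example (hypothetical paths):
# install_lora("Downloads/pytorch_lora_weights.safetensors",
#              "stable-diffusion-webui/models/Lora", "lcm-lora-sdv1-5")
```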
Step 2: Save the LCM checkpoint
Place the renamed LCM LoRA checkpoint in the models/Lora folder of your A1111 installation. Then, launch Automatic1111.
Step 3: Access the LCM LoRA
You have two options to access the LCM LoRA.
Option 1: The first option is to use the LoRA tab. There, click the LCM LoRA version you want; this inserts a short tag into the prompt, using the name you gave the file.
Option 2: The second option is to use the ‘Add network to prompt’ drop-down menu. If you don’t see it in your interface, you can enable it by following these steps:
Go to Settings
Go to User Interface
In the Quicksettings list field, add ‘sd_lora’
Apply Settings and Reload the UI. You will see the Network to Prompt drop down menu.
With option 2, you can change the LoRA weight in the prompt, which may improve the output. Some custom models that produce terrible colors at a weight of 1 actually look better at 0.7. You can experiment with this option and use it like any other LoRA.
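To compare weights systematically, you can generate one prompt per weight and run them as a batch or feed them to an X/Y/Z plot. The snippet below is a sketch: the ‘<lora:…>’ tag syntax is A1111’s, while the base prompt and weight list are my own examples:

```python
base_prompt = "portrait photo, soft light"
weights = [1.0, 0.85, 0.7]

# One prompt per LoRA weight, ready to paste into A1111.
prompts = [f"{base_prompt}, <lora:lcm-lora-sdv1-5:{w}>" for w in weights]
for p in prompts:
    print(p)
```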
AnimateDiff has recently added LCM sampler support. To install it:
Go to [this link] to download and install the latest version of AnimateDiff extension.
This extension will allow you to use LCM as a sampling method for your animations.
Download the LCM LoRA and place it in your LoRA folder as usual. [Guide Here]
Follow these steps to use the LCM LoRA in the Automatic1111 WebUI:
Write the LCM LoRA tag in the prompt box as you would with any LoRA.
Or use the alternative method mentioned above.
Choose LCM from the sampling method list.
Set the CFG Scale value to a number between 1 and 2.
(Very Important!)
Set the sampling steps value to a number between 2 and 8.
The higher the number, the higher the quality, but the lower the speed. A value of 4 is suggested for a balance between quality and speed.
Enjoy your generation/animation at up to 4x the speed of a normal generation.
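The CFG and step constraints above are easy to get wrong, since they are far below Stable Diffusion’s usual defaults. Summarizing them as a small validation helper (the function and its checks are my own sketch of the recommendations, not an A1111 API):

```python
def check_lcm_settings(cfg_scale: float, steps: int) -> list[str]:
    """Flag LCM sampler settings that fall outside the recommended ranges."""
    warnings = []
    if not 1 <= cfg_scale <= 2:
        warnings.append("CFG Scale should be between 1 and 2 for LCM")
    if not 2 <= steps <= 8:
        warnings.append("Sampling steps should be between 2 and 8 for LCM")
    return warnings

print(check_lcm_settings(cfg_scale=1.5, steps=4))  # []
print(check_lcm_settings(cfg_scale=7, steps=20))   # two warnings
```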
Try these settings for fast and high-quality results with LCM Sampler:
Set the Steps to 8, the CFG to 2, and the LoRA weight to 0.75.
This will make the sampler use the LCM LoRA network to enhance the image quality.
With these settings, I was able to generate a 768×1344 image from the same seed and SDXL model in only 5 seconds.
Without the LCM LoRA, it took me 30 seconds. The results were slightly different, but both the quality and speed were amazing.
You can also use other LoRA networks together with the LCM LoRA, but you may need to adjust the LCM LoRA’s strength a bit.
I sometimes got artifacts when combining LoRAs, so I reduced the LCM LoRA’s strength a little via the LoRA weight parameter.
One more thing: the LCM LoRA from the Additional Networks plug-in did not work well for me.
I preferred to put the LCM LoRA file name in the prompt with ‘<lora:lcm-lora-sdxl:0.7>’, which uses the LCM LoRA at 70% strength. Make sure the LCM LoRA file is under the models/Lora folder, otherwise it won’t work in the prompt. [Source]
The LCM sampler was used for the examples above, and I experimented with the LoRA weight for each one. The weight affects the amount of shadow in the image, so lowering it makes the image less dark. Otherwise the results are mostly similar, but you can see for yourself and draw your own conclusion.
GitHub Desktop is my preferred method for installing anything related to Automatic1111, because it simplifies the process and keeps everything neat and tidy. I have written a guide that shows you how to use GitHub Desktop to install any extension in two different ways.
To install it with Github Desktop, you will need the following repository link:
https://github.com/0xbitches/sd-webui-lcm
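A1111 discovers extensions by folder name under ‘extensions/’, so a manual install is simply a clone of that repository into the right directory. A sketch of how the target path is derived (the helper is my own; only the repository URL and the ‘extensions/’ convention come from A1111):

```python
from pathlib import Path

def extension_target(repo_url: str, webui_root: str = "stable-diffusion-webui") -> Path:
    """Compute where a manually cloned extension should live inside A1111."""
    repo_name = repo_url.rstrip("/").removesuffix(".git").split("/")[-1]
    return Path(webui_root) / "extensions" / repo_name

target = extension_target("https://github.com/0xbitches/sd-webui-lcm")
print(target)
# Then clone the repository into that folder and restart the Web UI, e.g.:
#   git clone https://github.com/0xbitches/sd-webui-lcm stable-diffusion-webui/extensions/sd-webui-lcm
```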
What are Extensions in Stable Diffusion? Extensions in Stable Diffusion are additional features and functionalities that can be integrated into the Automatic1111 WebUI. They are essentially add-ons that enhance the capabilities of the WebUI, allowing users to perform a wider range of tasks and operations within the Stable Diffusion framework.
Final Thoughts
LCM is a new image generation model that can perform fast inference with minimal steps. However, it is not fully compatible with A1111 yet, and some samplers may not work well with it. You can use the XYZ plot that I created to see which samplers are more suitable for LCM.
As you can see, most of them do not produce good results, but I hope that A1111 will improve its LCM support soon. For now, if you want to use LCM most effectively, you will need to use ComfyUI, another user interface for Stable Diffusion models. ComfyUI has an extension that integrates LCM into its workflow. You can also use LCM with LoRA, a fine-tuning technique for Stable Diffusion models. I am trying to adapt to and like ComfyUI as well, since it has some interesting features and nodes.