How to Install ADetailer Extension for Automatic1111
After Detailer: A Tool for Image Quality Improvement
Hello and welcome to this blog post, where I will show you how to install the ADetailer extension for Automatic1111. ADetailer, or After Detailer, is a powerful tool for image inpainting, which means filling in missing or damaged parts of an image with realistic and coherent content. ADetailer uses Stable Diffusion models, which preserve the structure and texture of the image while generating high-quality results. ADetailer can be used for various applications, such as restoring old photos, removing unwanted objects, fixing distorted faces and hands, and more.
To use ADetailer, you need to have Automatic1111 installed and running on your computer. Automatic1111 is software that lets you create and edit images using Stable Diffusion models, an AI generative art tool. You can find the installation instructions for Automatic1111 below.
How to Install ADetailer Extension
Once you have Automatic1111 ready, you can install ADetailer as an extension and use it with the Stable Diffusion Web UI. The installation process is very simple and takes only a few minutes. In this blog post, I will guide you through the steps to install and use ADetailer with Automatic1111. Let’s get started!
Installing ADetailer
Open A1111 and select the ‘Extensions’ tab.
Choose the ‘Available’ tab:
Click on ‘Load from:’ to see a list of extensions.
In the search bar at the bottom type in ‘after’.
(It may look like an empty white space, but you can click on it)
Find the ‘!After Detailer’ extension and click ‘Install’ on the right side.
When the installation is complete:
Go to the ‘Settings’ tab and click on ‘Apply Settings and Reload UI’. Alternatively, you can close the Web UI and restart Automatic1111.
You have successfully installed ADetailer.
You DO NOT need to Download Models for ADetailer
ADetailer models are downloaded automatically when you launch A1111, so you don’t have to install them separately. The only exception is deepfashion2_yolov8s-seg, which you must download yourself (covered below).
Here are the models available for ADetailer and their descriptions:
| Model | Target | Purpose |
|---|---|---|
| face_yolov8n.pt | 2D / realistic face | Detects and enhances the details of human faces. |
| face_yolov8s.pt | 2D / realistic face | Improved version of face_yolov8n.pt. |
| hand_yolov8n.pt | 2D / realistic hand | Detects and enhances the details of human hands. |
| person_yolov8n-seg.pt | 2D / realistic person | Can change the pose and appearance of characters in a natural way. |
| person_yolov8s-seg.pt | 2D / realistic person | Improved version of person_yolov8n-seg.pt. |
| mediapipe_face_full | realistic face | Full-range images, like those taken with a back-facing phone camera. |
| mediapipe_face_short | realistic face | Short-range images, like those taken with a front-facing phone camera or webcam. |
| mediapipe_face_mesh | realistic face | Any type of image, regardless of distance or angle. |
| deepfashion2_yolov8s-seg | realistic clothes | Trained on a comprehensive fashion dataset. |
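If it helps to see the table as data, here is a minimal Python sketch of it as a lookup. The model names match the ADetailer dropdown; the helper function is hypothetical, for illustration only, and is not part of ADetailer itself.

```python
# The ADetailer model table as a simple lookup: pick detectors by target.
# Names match the ADetailer dropdown; the helper is hypothetical.
ADETAILER_MODELS = {
    "face_yolov8n.pt": "2D / realistic face",
    "face_yolov8s.pt": "2D / realistic face",
    "hand_yolov8n.pt": "2D / realistic hand",
    "person_yolov8n-seg.pt": "2D / realistic person",
    "person_yolov8s-seg.pt": "2D / realistic person",
    "mediapipe_face_full": "realistic face",
    "mediapipe_face_short": "realistic face",
    "mediapipe_face_mesh": "realistic face",
    "deepfashion2_yolov8s-seg": "realistic clothes",
}

def models_for_target(keyword: str) -> list[str]:
    """Return every model whose target mentions the keyword."""
    return [name for name, target in ADETAILER_MODELS.items()
            if keyword in target]

print(models_for_target("hand"))  # ['hand_yolov8n.pt']
```

So if you want to fix hands, for example, hand_yolov8n.pt is the one to select in the dropdown.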
**face_yolov8n.pt** — A model that detects and enhances the details of human faces in images. It uses YOLOv8, a state-of-the-art object detection method that is fast and accurate. The model can improve the quality and realism of faces generated by other models, such as Stable Diffusion, and can also be used to change the facial expressions of characters in a natural way. [Source]
**face_yolov8s.pt** — An improved version of face_yolov8n.pt, released in November 2022. The main differences: face_yolov8s.pt uses a larger backbone (CSPNet) and more parameters (7.6M vs. 2.0M), which makes it more powerful and accurate. It achieves a higher mAP (mean average precision) on the easy and medium sets of the WIDER FACE face-detection benchmark, scoring 95.48% and 93.66% versus 94.67% and 92.75% for face_yolov8n.pt, though slightly lower on the hard set (82.8% vs. 83.03%). The trade-off is inference speed: face_yolov8s.pt takes 1.4 ms per image on an A100 GPU versus 1.2 ms for face_yolov8n.pt, and 173.3 ms versus 62.7 ms on CPU. [Source]
**hand_yolov8n.pt** — A model that detects and enhances the details of human hands in images, again using YOLOv8. It can improve the quality and realism of hands generated by other models, such as Stable Diffusion, and can be used to change the hand gestures of characters in a natural way. The model is part of the ADetailer project, which aims to create models that detect and enhance the details of various objects in images, such as faces, clothes, and anime characters. The project is developed by Bingsu, a user on Hugging Face and Reddit. [Source]
**person_yolov8n-seg.pt** — A model that detects and enhances human figures in images using YOLOv8, and can be used to change the pose and appearance of characters in a natural way. It is a segmentation model, meaning it outputs both bounding boxes and masks for the detected persons; the masks are useful for inpainting or for editing the background or foreground of an image.
**person_yolov8s-seg.pt** — An improved version of person_yolov8n-seg.pt, released in November 2022. It uses a larger backbone (CSPNet) and more parameters (7.6M vs. 2.0M), which makes it more powerful and accurate. It achieves a higher mAP on the COCO 2017 person-detection benchmark: 0.824 for bounding boxes and 0.809 for masks at 50% IoU, and 0.605 for bounding boxes and 0.508 for masks at 50-95% IoU, compared to 0.782, 0.761, 0.555, and 0.460 respectively for person_yolov8n-seg.pt. As with the face models, the trade-off is speed: 1.4 ms per image on an A100 GPU versus 1.2 ms, and 173.3 ms versus 62.7 ms on CPU.
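To make the accuracy-versus-speed trade-off concrete, here is a quick back-of-the-envelope calculation using the figures quoted above for the two person models. This is just arithmetic on the reported numbers, not a benchmark of my own:

```python
# Numbers quoted above for person_yolov8n-seg.pt (n) vs person_yolov8s-seg.pt (s).
n_bbox_map50, s_bbox_map50 = 0.782, 0.824  # COCO 2017, boxes @ 50% IoU
n_cpu_ms, s_cpu_ms = 62.7, 173.3           # reported CPU latency per image

# Relative accuracy gain of the larger "s" model.
map_gain = (s_bbox_map50 - n_bbox_map50) / n_bbox_map50 * 100
# How many times slower it is on CPU.
cpu_slowdown = s_cpu_ms / n_cpu_ms

print(f"mAP@50 gain: {map_gain:.1f}%")      # ~5.4% more accurate boxes
print(f"CPU slowdown: {cpu_slowdown:.1f}x") # ~2.8x slower on CPU
```

In other words, the “s” model buys a modest accuracy bump for a noticeably higher CPU cost, so on a machine without a strong GPU the “n” variant may be the better default.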
The three models below use the MediaPipe framework and the BlazeFace algorithm, but they have different input sizes, output formats, and performance characteristics.
**mediapipe_face_full** — Detects and enhances the details of human faces in full-range images, like those taken with a back-facing phone camera. The detector outputs six facial keypoints, which are useful for face alignment, face recognition, face swapping, face animation, and other applications.
**mediapipe_face_short** — Detects and enhances the details of human faces in short-range images, like those taken with a front-facing phone camera or webcam. Like the full-range model, it outputs six facial keypoints for the same applications.
**mediapipe_face_mesh** — Detects and enhances the details of human faces in any type of image, regardless of distance or angle. It outputs 468 3D facial landmarks, which are useful for creating realistic 3D models of faces, as well as for face alignment, face recognition, face swapping, face animation, and other applications.
How to Install Deep Fashion for After Detailer
The model is trained on the DeepFashion2 dataset, which is a comprehensive fashion dataset that contains 491K diverse images of 13 popular clothing categories from both commercial shopping stores and consumers. The model can output both bounding boxes and masks for the detected clothes, which are useful for clothes segmentation, retrieval, editing, and other applications.
Download DeepFashion2 [Click Here for More Information]
To install DeepFashion2, follow these steps:
Create a specific folder for storing the models.
ADetailer does not create this folder automatically, so you have to make one yourself. You can use any location you want, but remember the path. For example, I made mine here: D:\Models\Adetailer.
Open Automatic1111 and go to the Settings Tab.
On the left side, you will see the ADetailer option.
Click on it and you will see a field that says “Extra path to scan Adetailer models”.
In the ‘Max models’ slider, increase the value to 3.
You’ll be able to use 3 different detailer models at once.
Enter the path of the folder you created for the models.
Then, click on the button that says “Apply Settings and Reload UI”. This will refresh the interface and load the models.
You should now see the DeepFashion2 model in the ADetailer model dropdown menu.
You can select it and use it to enhance and refine your images.
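The folder setup above can be sketched in a few lines of Python if you prefer to script it. The path is an example (mine was D:\Models\Adetailer); use whatever location you chose, and note that the final check simply lists what ADetailer will find in its extra scan path:

```python
# Sketch of the DeepFashion2 setup: make the models folder yourself
# (ADetailer does not create it for you), then sanity-check that the
# downloaded weights are discoverable before reloading the UI.
# The path is an example -- substitute your own location.
from pathlib import Path

models_dir = Path("Models/Adetailer")  # e.g. D:\Models\Adetailer on Windows
models_dir.mkdir(parents=True, exist_ok=True)

# After dropping deepfashion2_yolov8s-seg.pt into the folder, this should
# list it; an empty list means the file is not where ADetailer will scan.
found = sorted(p.name for p in models_dir.glob("*.pt"))
print(found)
```

Whatever path you print here is the one to paste into the “Extra path to scan Adetailer models” field in the Settings tab.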
GitHub Desktop: A Simple and Clean Way to Install Automatic1111 and Its Extensions
GitHub Desktop is my preferred method for installing anything related to Automatic1111, because it simplifies the process and keeps everything neat and tidy. I have written a guide that shows you how to use GitHub Desktop to install any extension in two different ways.
To install it with GitHub Desktop, you will need the following repository link:
https://github.com/Bing-su/adetailer
Learn how to navigate the ADetailer user interface with this complete guide. With ADetailer, you can add more detail and refine your images with a bunch of extra tools for fine-tuning. For example, you can use ADetailer to fix flaws or gaps in your images, such as missing faces or hands. You know, the kind of things that make your image look like a Picasso painting gone wrong.
Final Thoughts
That’s it! You have learned how to install ADetailer, and you probably understand what some of the ADetailer models do as well. ADetailer is a powerful inpainting tool for Automatic1111 that can help you improve the quality of your images by filling in missing or damaged parts with realistic and coherent content. You can use it for various applications, such as restoring old photos, removing unwanted objects, fixing distorted faces and hands, and more. Because ADetailer uses Stable Diffusion models, it preserves the structure and texture of the image while generating high-quality results.
I hope you enjoyed this blog post and found it useful. If you want to learn more about ADetailer and how to use its features, please check out my next blog post on getting to know the User Interface of ADetailer, where I will explain every parameter and option in detail. Thank you for reading and happy inpainting!