ComfyUI SDXL: No External Upscaling

After the first pass, toss the image into a Preview Bridge, mask the hand, and adjust the CLIP text to emphasize "hand", with negatives for things like "jewelry", "ring", et cetera; then run a second pass (0.236 denoise strength and 89 steps, for a total of about 21 effective steps).

SDXL, ComfyUI and Stable Diffusion for complete beginners - learn everything you need to know to get started. Part 3: CLIPSeg with SDXL in ComfyUI. LoRA/ControlNet/TI support is all part of a nice UI with menus and buttons, making it easier to navigate and use.

I've been using Automatic1111 for a long time, so I'm totally clueless with ComfyUI, but I looked at the GitHub page and read the instructions - before you install it, read all of it. There is a custom nodes extension for ComfyUI, including a workflow to use SDXL 1.0.

Training took ~45 min and a bit more than 16 GB of VRAM on a 3090 (less VRAM might be possible with a batch size of 1 and gradient_accumulation_steps=2). Other options are the same as sdxl_train_network.py, but --network_module is not required. There are several options for how you can use the SDXL model; see "How to install SDXL 1.0" below. If you only have a LoRA for the base model, you may actually want to skip the refiner, or at least use it for fewer steps.

ComfyUI fully supports SD1.x, SD2.x and SDXL, has an asynchronous queue system, and includes many optimizations: it only re-executes the parts of the workflow that change between executions.

SDXL ComfyUI ULTIMATE Workflow. But suddenly the SDXL model got leaked, so no more sleep; we will know for sure very shortly. I'll create images at 1024 size and then will want to upscale them. For comparison, 30 steps of SDXL with DPM++ 2M SDE takes 20 seconds. Make sure to check the provided example workflows. Installing ComfyUI on Windows is covered below, as are the Comfyroll Template Workflows.

SDXL is trained on 1024*1024 = 1,048,576-pixel images across multiple aspect ratios, so your input size should not exceed that pixel count. Examples shown here will also often make use of these helpful sets of nodes: ComfyUI IPAdapter plus.

Today, even through ComfyUI Manager, where the Fooocus node is still available, installing it leaves the node marked as "unloaded". Part 3 (this post) - we will add an SDXL refiner for the full SDXL 1.0 process. This has simultaneously ignited an interest in ComfyUI, a new tool that simplifies usability of these models.

Try double-clicking the workflow background to bring up search and then type "FreeU". For SDXL, the commonly recommended FreeU values are b1: 1.3, b2: 1.4, s1: 0.9, s2: 0.2. Brace yourself as we delve deep into a treasure trove of features, achieving the same outputs as StabilityAI's official results.

Then I found the CLIPTextEncodeSDXL node in the advanced section, because someone on 4chan mentioned they got better results with it. CUI can do a batch of 4 and stay within 12 GB. The same convenience can be experienced in ComfyUI by installing the SDXL Prompt Styler. A1111 has its advantages and many useful extensions. Since the release of SDXL 1.0, it has been warmly received by many users.

To enable higher-quality previews with TAESD, download the taesd_decoder.pth and taesdxl_decoder.pth (for SDXL) models and place them in the models/vae_approx folder. ComfyUI-CoreMLSuite now supports SDXL, LoRAs and LCM. It is recommended to use ComfyUI Manager for installing and updating custom nodes, for downloading upscale models, and for updating ComfyUI itself. Although the Load Checkpoint node provides a VAE model alongside the diffusion model, sometimes it can be useful to use a specific VAE model. If you look for a missing model in the Manager and download it from there, it will automatically be put in the right place.
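Since that limit is a total pixel budget rather than a fixed shape, a small helper can pick an SDXL-friendly size for any aspect ratio. A minimal Python sketch (the round-down-to-64 convention is my assumption, not something the workflows above prescribe):

```python
import math

def sdxl_resolution(aspect_ratio: float, max_area: int = 1024 * 1024) -> tuple[int, int]:
    """Pick a (width, height) near the SDXL training budget of ~1,048,576 pixels.

    Dimensions are rounded down to multiples of 64, a common latent-friendly step.
    """
    height = math.sqrt(max_area / aspect_ratio)
    width = height * aspect_ratio
    return int(width) // 64 * 64, int(height) // 64 * 64

print(sdxl_resolution(1.0))      # (1024, 1024) -> exactly the training square
print(sdxl_resolution(16 / 9))   # (1344, 768)  -> 1,032,192 pixels, under budget
```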
Refiners should have at most half the steps that the generation has. Based on Sytan's SDXL 1.0 workflow. Step 3: Download a checkpoint model. The model ("SDXL") that is currently being beta tested with a bot in the official Discord looks super impressive! Here's a gallery of some of the best photorealistic generations posted so far on Discord. Floating points are stored as 3 values: sign (+/-), exponent, and fraction.

In case you missed it: stability.ai has released Control-LoRAs that you can find here (rank 256) or here (rank 128). Study this workflow and the notes to understand the basics of ComfyUI, SDXL, and the refiner workflow. One of its key features is the ability to replace the {prompt} placeholder in the "prompt" field of these JSON template files.

Comfy UI now supports SSD-1B, and SDXL 1.0 works with both the base and refiner checkpoints. SD 1.5 was trained on 512x512 images (whereas SDXL can work in plenty of aspect ratios). On my 12 GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. Try DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a and DPM adaptive.

ComfyUI was created by comfyanonymous, who made the tool to understand how Stable Diffusion works. Open ComfyUI and navigate to the "Clear" button. The CLIP Text Encode SDXL (Advanced) node provides the same settings as its non-SDXL version. How to use SDXL 1.0 in both Automatic1111 and ComfyUI for free - see the SDXL Examples.

Drawing inspiration from the Midjourney Discord bot, my bot offers a plethora of features that aim to simplify the experience of using SDXL and other models when running locally. Check out my video on how to get started in minutes.

Create a primitive and connect it to the seed input on a sampler (you have to convert the seed widget to an input on the sampler); then the primitive becomes an RNG. "Increment" adds 1 to the seed each time. In my opinion, it doesn't have very high fidelity, but it can be worked on.

Detailed install instructions can be found here: link to the readme file on GitHub. ComfyUI can do most of what A1111 does, and more. SDXL - The Best Open Source Image Model. Of course, it is advisable to use the ControlNet preprocessors, as the extension provides various preprocessor nodes once installed. ComfyUI supports SD1.x, SD2.x and SDXL models, as well as standalone VAEs and CLIP models.

To launch the demo, please run the following commands: conda activate animatediff, then python app.py. ComfyUI + AnimateDiff Text2Vid. This ability emerged during the training phase of the AI, and was not programmed by people. ComfyUI-Manager is an extension designed to enhance the usability of ComfyUI. A good place to start if you have no idea how any of this works is the SDXL 1.0 example workflow. That wouldn't be fair, because a prompt in DALL-E takes me 10 seconds, while creating an image using a ControlNet-based ComfyUI workflow takes me 10 minutes. LCM LoRAs can be used with both SD 1.5 and SDXL, but the files are different, so be careful.
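That sign/exponent/fraction layout is easy to verify directly. A minimal standard-library Python sketch (the example value is arbitrary):

```python
import struct

def float32_parts(x: float) -> tuple[int, int, int]:
    """Split a 32-bit float into its three stored fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # reinterpret as uint32
    sign = bits >> 31                  # 1 bit: 0 = positive, 1 = negative
    exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF         # 23 bits of mantissa
    return sign, exponent, fraction

print(float32_parts(-6.25))  # (1, 129, 4718592): -1.1001b * 2^(129 - 127)
```

Half precision (fp16) uses the same three fields, just narrower (1/5/10 bits), which is where the memory savings mentioned elsewhere on this page come from.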
Also, in ComfyUI you can simply use ControlNetApply or ControlNetApplyAdvanced, which utilize ControlNet models. SDXL 1.0, the flagship image model developed by Stability AI, stands as the pinnacle of open models for image generation. ComfyUI is an advanced node-based UI for Stable Diffusion 1.5 and Stable Diffusion XL (SDXL), and the templates produce good results quite easily. Per the ComfyUI blog, the latest update adds "Support for SDXL inpaint models". Moreover, SDXL works much better in ComfyUI, as the workflow allows you to use the base and refiner model in one pass. SDXL can be downloaded and used in ComfyUI. Although it looks intimidating at first blush, all it takes is a little investment in understanding its particulars and you'll be linking together nodes like a pro.

21:40 How to use trained SDXL LoRA models with ComfyUI. Those are schedulers. ComfyUI's unique workflow is very attractive, but the speed on a Mac M1 is frustrating. SDXL 1.0 is here. I feel like we are at the bottom of a big hill with Comfy, and the workflows will continue to rapidly evolve.

It lets you use two different positive prompts. Two others (lcm-lora-sdxl and lcm-lora-ssd-1b) generate images in around 1 minute at 5 steps. The right upscaler will always depend on the model and style of image you are generating; UltraSharp works well for a lot of things, but sometimes has artifacts for me with very photographic or very stylized anime models.

SDXL Prompt Styler Advanced. The solution to that is ComfyUI, which could be viewed as a programming environment as much as a front end. Everything you need to generate amazing images, packed full of useful features that you can enable and disable on the fly. And I'm running the dev branch with the latest updates. Make a folder in img2img. Just add any one of these at the front of the prompt (these ~*~ included; probably works with Auto1111 too). Fairly certain this isn't working.

Please read the AnimateDiff repo README for more information about how it works at its core: it divides frames into smaller batches with a slight overlap, as sketched below.

Here's what I've found: when I pair the SDXL base with my LoRA on ComfyUI, things seem to click and work pretty well. Hi, I hope I am not bugging you too much by asking you this on here. Check out the ComfyUI guide. Running SDXL 0.9 in ComfyUI (I would prefer to use A1111): I'm on an RTX 2060 6 GB VRAM laptop and it takes about 6-8 minutes for a 1080x1080 image with 20 base steps and 15 refiner steps. Edit: I'm using Olivio's first setup (no upscaler). Edit: after the first run I get a 1080x1080 image (including the refining) with "Prompt executed" in about 240 seconds.

If you are looking for an interactive image production experience using the ComfyUI engine, try ComfyBox. The base model and the refiner model work in tandem to deliver the image. Ensure you have at least one upscale model installed. And for SDXL, it saves TONS of memory. I'm probably messing something up - I'm still new to this - but you connect the MODEL and CLIP output nodes of the checkpoint loader to the LoRA loader.
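A toy Python illustration of that overlapping-batch windowing (made-up defaults; not AnimateDiff's actual implementation):

```python
def frame_windows(num_frames: int, window: int = 16, overlap: int = 4):
    """Yield (start, end) ranges covering all frames in overlapping batches."""
    step = window - overlap
    start = 0
    while True:
        end = min(start + window, num_frames)
        yield start, end
        if end >= num_frames:
            break
        start += step

# 40 frames in windows of 16 with 4 frames of shared context between batches:
print(list(frame_windows(40)))  # [(0, 16), (12, 28), (24, 40)]
```

The shared frames let each batch see a bit of its neighbor, which is what keeps motion consistent across batch boundaries.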
Part 5: Scale and Composite Latents with SDXL. Kind of new to ComfyUI. 13:57 How to generate multiple images at the same size. SDXL Prompt Styler is a node that enables you to style prompts based on predefined templates stored in multiple JSON files. SDXL 1.0 with ComfyUI: in this section, we will provide steps to test and use these models.

Discover how to supercharge your Generative Adversarial Networks (GANs) with this in-depth tutorial. ComfyUI got attention recently because the developer works for StabilityAI and was able to be the first to get SDXL running. Always use the latest version of the workflow JSON file together with the latest version of the custom nodes. Drag and drop the image into ComfyUI to load it.

For illustration/anime models you will want something smoother; upscalers that would tend to look "airbrushed" or overly smoothed out on more realistic images are fine here, and there are many options. This repo (the 0.9-usage tutorial) is intended to help beginners use the newly released model, stable-diffusion-xl-0.9.

(Chinese-language tutorials cover super-resolution upscaling with DWPose + tile upscale in ComfyUI, a drag-and-drop "ultimate upscaler" that automatically enlarges to a chosen multiple of the original size, and high-resolution output basics.)

Here are some examples where I used two images (an image of a mountain and an image of a tree in front of a sunset) as prompt inputs. Using in 🧨 diffusers. Today we'll cover more advanced node-flow logic for SDXL in ComfyUI: first, style control; second, how to connect the base and refiner models; third, regional prompt control; fourth, regional control with multi-pass sampling. ComfyUI node flows all follow the same logic - as long as the wiring is logically correct you can connect them however you like - so this video covers only the logic and key points of the build rather than every detail.

SDXL 1.0. Do you have ComfyUI Manager installed? The nodes will also be more stable, with changes deployed less often. In this video you'll learn how to add LoRA nodes in ComfyUI and apply LoRA models with ease. And because ComfyUI is lightweight, using the SDXL model in it needs less VRAM and loads faster - cards with as little as 4 GB of VRAM are supported. Whether it's freedom, professional features, or ease of use, ComfyUI's advantages for SDXL are becoming more and more obvious.

When all you need to use a model is files full of encoded weights, it's easy for it to leak. With, for instance, a graph like this one you can tell it to: load this model, put these bits of text into the CLIP encoder, make an empty latent image, use the loaded model with the embedded text and noisy latent to sample the image, and then save the resulting image.

The SDXL Prompt Styler is a versatile custom node within ComfyUI that streamlines the prompt styling process; a sketch of the templating idea follows below. At least SDXL has its (relative) accessibility, openness and ecosystem going for it - there are plenty of scenarios where there is no alternative to things like ControlNet. ComfyUI uses node graphs to explain to the program what it actually needs to do. Introducing the SDXL-dedicated KSampler node for ComfyUI. Part 2 (link) - we added an SDXL-specific conditioning implementation and tested the impact of conditioning parameters on the generated images. LoRA stands for Low-Rank Adaptation. Switch (image, mask), Switch (latent), Switch (SEGS): among multiple inputs, each selects the input designated by the selector and outputs it.
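A minimal Python sketch of that templating idea (the style-file shape in the comment is my assumption from the description above, not a guaranteed match for the node's actual format):

```python
import json

# Assumed layout of a style file such as styles.json:
# [{"name": "cinematic",
#   "prompt": "cinematic still, {prompt}, shallow depth of field",
#   "negative_prompt": "cartoon, sketch"}]
def apply_style(styles_path: str, style_name: str, user_prompt: str) -> tuple[str, str]:
    """Substitute the user's text into the chosen template's {prompt} slot."""
    with open(styles_path, encoding="utf-8") as f:
        styles = json.load(f)
    style = next(s for s in styles if s["name"] == style_name)
    positive = style["prompt"].replace("{prompt}", user_prompt)
    return positive, style.get("negative_prompt", "")

pos, neg = apply_style("styles.json", "cinematic", "a lighthouse at dusk")
```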
The refiner, though, is only good at refining the noise still left over from the base pass, and will give you a blurry result if you try to push it further. Will post workflow in the comments. The nodes can be used in any workflow. It works pretty well in my tests, within limits.

Sytan SDXL ComfyUI: very nice workflow showing how to connect the base model with the refiner and include an upscaler. Now do your second pass. In this SDXL 1.0 tutorial I'll show you how to use ControlNet to generate AI images. These templates are the easiest to use and are recommended for new users of SDXL and ComfyUI. Searge-SDXL: EVOLVED v4.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder): I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

Conditioning combine runs each prompt you combine and then averages out the noise predictions (see the sketch after this section). ControlNet Workflow. This works, BUT I keep getting erratic RAM (not VRAM) usage; I regularly hit 16 GB of RAM use and end up swapping to my SSD.

This guide targets AUTOMATIC1111 and Invoke AI users, but ComfyUI is also a great choice for SDXL - we've published an installation guide for ComfyUI, too! Let's get started. Step 1: Downloading the models. Here is an easy way to use SDXL on Google Colab: with preconfigured code you can set up an SDXL environment easily, and with a preconfigured workflow file that skips ComfyUI's difficult parts - built with clarity and reusability in mind - you can start generating AI illustrations right away.

Generated with SDXL 0.9, then upscaled in A1111 - my finest work yet. SDXL 1.0 is the latest version of the Stable Diffusion XL model released by Stability AI, and since the 1.0 release it has been warmly received. Hires fix is just creating an image at a lower resolution, upscaling it, and then sending it through img2img. Step 1: Install 7-Zip. Take your SD 1.5 ComfyUI JSON and import it (sd_1-5_to_sdxl_1-0). The most robust SDXL 1.0 ComfyUI workflow.

Inpainting a cat with the v2 inpainting model; inpainting a woman with the v2 inpainting model; it also works with non-inpainting models. With the SDXL 1.0 Base and Refiner models downloaded and saved in the right place, it should work out of the box. I've recently started appreciating ComfyUI. Both models are working very slowly, but I prefer working with ComfyUI because it is less complicated. To encode the image you need to use the "VAE Encode (for inpainting)" node, which is under latent->inpaint.

To install it as a ComfyUI custom node, use ComfyUI Manager (the easy way). There are no SDXL-compatible workflows here (yet); this is a collection of custom workflows for ComfyUI. This might be useful, for example, in batch processing with inpainting so you don't have to manually mask every image.

Here is the rough plan (that might get adjusted) for the series: in part 1 (this post), we will implement the simplest SDXL base workflow and generate our first images. SDXL has 2 text encoders on its base, and a specialty text encoder on its refiner. CLIPVision extracts the concepts from the input images, and those concepts are what is passed to the model. In this guide, we'll show you how to use SDXL v1.0. Welcome to SD XL. SDXL-ComfyUI-workflows.
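As a toy illustration of that averaging (a conceptual sketch only; these random arrays merely stand in for per-prompt noise predictions inside the sampler):

```python
import numpy as np

def combine_noise_predictions(predictions: list) -> np.ndarray:
    """Average per-prompt noise predictions, as conditioning combine is described
    to do; each entry stands in for one prompt's prediction at a sampler step."""
    return np.mean(np.stack(predictions), axis=0)

eps_prompt_a = np.random.randn(4, 128, 128)  # toy latent-shaped prediction, prompt A
eps_prompt_b = np.random.randn(4, 128, 128)  # toy latent-shaped prediction, prompt B
eps_combined = combine_noise_predictions([eps_prompt_a, eps_prompt_b])
print(eps_combined.shape)  # (4, 128, 128)
```

This is why combining behaves differently from concatenating prompts: each prompt is denoised on its own terms and only the predictions are blended.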
SDXL Style Mile (ComfyUI version); ControlNet Preprocessors by Fannovel16. But here is a link to someone that did a little testing on SDXL. Their results are combined and complement each other. Stable Diffusion XL (SDXL) 1.0 ComfyUI workflows! ComfyUI is better for more advanced users. ComfyUI workflows from beginner to advanced, ep. 05: img2img and inpainting! ComfyUI is a node-based user interface for Stable Diffusion. I've created these images using ComfyUI.

That is, describe the background in one prompt, one area of the image in another, another area in a third prompt, and so on, each with its own weight. For FreeU, the documented parameter ranges are roughly 1 ≤ b1 ≤ 1.2, 1.2 ≤ b2 ≤ 1.6, s1 ≤ 1, and s2 ≤ 1. Also, SDXL was trained on 1024x1024 images whereas SD 1.5 was not. Here are the models you need to download: SDXL Base Model 1.0. Maybe all of this doesn't matter, but I like equations. Start ComfyUI by running the run_nvidia_gpu.bat file.

Control-LoRAs are control models from StabilityAI to control SDXL. The workflow should generate images first with the base and then pass them to the refiner for further refinement (an out-of-ComfyUI sketch of the same handoff follows below).

How to run SDXL in ComfyUI: run the latest model with little VRAM [Stable Diffusion XL]. This time the topic is again Stable Diffusion XL (SDXL) and, as the title says, it is a careful walkthrough of how to run Stable Diffusion XL in ComfyUI. This time it's about the trendy SDXL: Stable Diffusion WebUI recently got an update that apparently adds SDXL support, but ComfyUI is probably easier to understand because you can see the network structure directly. AnimateDiff for ComfyUI.

The prompt and negative prompt templates are taken from the SDXL Prompt Styler for ComfyUI repository. A1111 has a feature where you can create tiling seamless textures, but I can't find this feature in Comfy. Part 4 - we intend to add ControlNets, upscaling, LoRAs, and other custom additions. While the normal text encoders are not "bad", you can get better results using the special encoders. This is an aspect of the speed reduction, in that there is less storage to traverse in computation, less memory used per item, etc.

However, due to the more stringent requirements, while it can generate the intended images, it should be used carefully, as conflicts between the AI model's interpretation and ControlNet's enforcement can arise. Left side is the raw 1024x resolution SDXL output; right side is the 2048x hires-fix output. You can specify the rank of the LoRA-like module with --network_dim. Hats off to ComfyUI for being the only Stable Diffusion UI able to do it at the moment, but there are a bunch of caveats to running Arc and Stable Diffusion right now, from the research I have done. This uses more steps, has less coherence, and also skips several important factors in between.

Yesterday I woke up to this Reddit post, "Happy Reddit Leak day", by Joe Penna. Run sdxl_train_control_net_lllite.py. That should stop it being distorted; you can also switch the upscale method to bilinear, as that may work a bit better. This aligns the node(s) to the configured ComfyUI grid spacing and moves the node in the direction of the arrow key by the grid-spacing value. I created some custom nodes that allow you to use the CLIPSeg model inside ComfyUI to dynamically mask areas of an image based on a text prompt. After several days of testing, I also decided to switch to ComfyUI for now.
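Outside ComfyUI, the same base-then-refiner handoff can be sketched with Hugging Face diffusers. A minimal sketch - the 0.8 split point follows the common ensemble-of-experts recipe, not any specific workflow quoted above:

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse on a cliff at sunset, photorealistic"
# The base model denoises the first 80% of the schedule and hands off latents...
latents = base(prompt, num_inference_steps=30, denoising_end=0.8,
               output_type="latent").images
# ...and the refiner finishes the remaining 20%, echoing the advice above that
# the refiner should get at most half the steps of the main generation.
image = refiner(prompt, image=latents, num_inference_steps=30,
                denoising_start=0.8).images[0]
image.save("sdxl_base_refiner.png")
```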
Do you have ideas? Because the ComfyUI repo you quoted doesn't include an SDXL workflow or even models. ComfyUI lives in its own directory. This is based on the SDXL 0.9 FaceDetailer workflow by FitCorder, but rearranged and spaced out more, with some additions such as LoRA loaders, a VAE loader, 1:1 previews, and Super Upscale with Remacri to over 10,000x6,000 in just 20 seconds with Torch 2 & SDP. Testing was done with 1/5 of total steps used in the upscaling.

How to use SDXL locally with ComfyUI (how to install SDXL 0.9): downloading the 0.9 model and uploading it to cloud storage; installing ComfyUI and SDXL 0.9 on Google Colab. ComfyUI can feel a little unapproachable at first, but for running SDXL its advantages make it a very convenient tool; if you've been unable to try SDXL in Stable Diffusion web UI for lack of VRAM, it can be a lifesaver, so definitely give it a try.

Be aware that ComfyUI is a zero-shot dataflow engine, not a document editor. I ran Automatic1111 and ComfyUI side by side, and ComfyUI takes up around 25% of the memory Automatic1111 requires; I'm sure many people will want to try ComfyUI out just for this feature. Txt2img is achieved by passing an empty image to the sampler node with maximum denoise. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. LoRAs are patches applied on top of the main MODEL and the CLIP model, so to use them, put them in the models/loras directory and use the LoraLoader node. Could you kindly give me some hints? I'm using ComfyUI. I've looked for custom nodes that do this and can't find any.

Comfyroll SDXL Workflow Templates. There is an article here. ControlNet Canny support for SDXL 1.0. 15:01 File name prefixes of generated images. T2I-Adapter aligns internal knowledge in T2I models with external control signals.

1. Install SDXL (directory: models/checkpoints). 2. Install a custom SD 1.5 model (directory: models/checkpoints). 3. Install your LoRAs (directory: models/loras). 4. Restart ComfyUI. Updating ComfyUI on Windows is covered separately.

At this time the recommendation is simply to wire your prompt to both l and g (see the sketch below). Before you can use this workflow, you need to have ComfyUI installed. Previously, LoRA/ControlNet/TI were additions on a simple prompt + generate system; when an AI model like Stable Diffusion is paired with an automation engine like ComfyUI, it allows for much more. This produces SDXL 0.9 model images consistent with the official approach (to the best of our knowledge). Ultimate SD Upscaling. SD 1.5 Model Merge Templates for ComfyUI.

You can load these images in ComfyUI to get the full workflow. All LoRA flavours - LyCORIS, LoHa, LoKr, LoCon, etc. - are used this way. SDXL ComfyUI workflow (multilingual version) design plus a paper walkthrough; see "SDXL Workflow (multilingual version) in ComfyUI + thesis explanation". It takes around 18-20 seconds for me using xformers and A1111 with a 3070 8 GB and 16 GB of RAM. This node is explicitly designed to make working with the refiner easier. Settled on 2/5, or 12 steps of upscaling.

Go to the stable-diffusion-xl-1.0 repository, under Files and versions, and place the file in the ComfyUI folder models/controlnet. When you run ComfyUI, there will be a ReferenceOnlySimple node in the custom_node_experiments folder. Launch the ComfyUI Manager using the sidebar in ComfyUI. LoRA Examples. Discover the ultimate workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes and refining images with advanced tools.
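The "l and g" are SDXL's two base text encoders (CLIP-L and OpenCLIP-G). Outside ComfyUI, diffusers exposes the same split through a second prompt argument; a minimal sketch:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# prompt feeds the CLIP-L encoder and prompt_2 feeds OpenCLIP-G; when prompt_2
# is omitted the same text goes to both, i.e. one prompt wired to l and g.
image = pipe(
    prompt="a watercolor painting of a fox in a meadow",
    prompt_2="soft pastel colors, loose brush strokes",
    num_inference_steps=30,
).images[0]
image.save("fox.png")
```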
The ComfyUI SDXL example images have detailed comments explaining most parameters. Efficient Controllable Generation for SDXL with T2I-Adapters. Fast: ~18 steps, 2-second images, with the full workflow included! No ControlNet, no inpainting, no LoRAs, no editing, no eye or face restoring, not even hires fix - raw output, pure and simple txt2img. It didn't work out. SDXL from Nasir Khalid; ComfyUI from Abraham. Searge SDXL Nodes. In addition, it also comes with 2 text fields to send different texts to the two CLIP models. The result should ideally be in the resolution space of SDXL (1024x1024). The SDXL workflow does not support editing. It also runs smoothly on devices with low GPU VRAM. 13:29 How to batch-add operations to the ComfyUI queue.

SDXL base → SDXL refiner → hires fix/img2img (using Juggernaut as the model). SD 1.5 base model vs later iterations. The ComfyUI Image Prompt Adapter offers users a powerful and versatile tool for image manipulation and combination. Install this, restart ComfyUI, click "Manager", then "Install missing custom nodes", restart again, and it should work. The final images land in ./output, while the base model's intermediate (noisy) output goes to a separate folder.

ComfyUI is a powerful and modular GUI for Stable Diffusion, allowing users to create advanced workflows using a node/graph interface. Hotshot-XL is a motion module used with SDXL that can make amazing animations.

Tips for using SDXL in ComfyUI: download both models from CivitAI and move them to your ComfyUI/models/checkpoints folder, then navigate to the ComfyUI/custom_nodes folder for any custom nodes.
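Custom nodes dropped into that folder are plain Python classes that ComfyUI discovers at startup. A minimal sketch of the general shape - the node's logic here (a luminance-threshold mask) is a made-up example, not one of the nodes mentioned above:

```python
class MaskByThreshold:
    """Turn an image into a mask by thresholding its brightness."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image": ("IMAGE",),  # ComfyUI images are [batch, H, W, C] tensors in 0..1
            "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
        }}

    RETURN_TYPES = ("MASK",)
    FUNCTION = "make_mask"
    CATEGORY = "mask"

    def make_mask(self, image, threshold):
        luminance = image.mean(dim=-1)              # average the color channels
        return ((luminance > threshold).float(),)   # outputs are always tuples

# ComfyUI reads this mapping to register the node under its menu name.
NODE_CLASS_MAPPINGS = {"MaskByThreshold": MaskByThreshold}
```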