ComfyUI ControlNet workflow example ComfyUI ControlNet aux: Plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. ComfyUI: Node based workflow manager that can be used with Stable Diffusion ComfyUI Manager: Plugin for ComfyUI that helps detect and install missing plugins. Ensure Load Checkpoint loads 512-inpainting-ema. I'm glad to hear the workflow is useful. May 12, 2025 · Flux. This is more of a starter workflow which supports img2img, txt2img, a second pass sampler, between the sample passes you can preview the latent in pixelspace, mask what you want, and inpaint (it just adds mask to the latent), you can blend gradients with the loaded image, or start with an image that is only gradient. 3 billion parameters), covering various tasks including text-to-video (T2V) and image-to-video (I2V). The nodes interface enables users to create complex workflows visually. Use the ControlNet Inpainting model without a preprocessor. FLUX. 5 Model Files. safetensors. Download Stable Diffusion 3. safetensors (5. For these examples I have renamed the files by adding stable_cascade_ in front of the filename for example: stable_cascade_canny. outputs. Take versatile-sd as an example, it contains advanced techniques like IPAdapter, ControlNet, IC light, LLM prompt generating, removing bg and excels at text-to-image generating, image blending, style transfer Nov 20, 2023 · IPAdapter + ControlNets + 2pass KSampler Sample Workflow SEGs and IPAdapter: there is actually an issue between IPAdapter and the Simple Detector. Because IPAdapter hooks into the entire model for its processing, when you use SEGM DETECTOR you will detect two sets of data: one from the original input image, and the other from IPAdapter's reference image. SD3 Examples. bat you can run to install to portable if detected. Outpainting Workflow File Download. Image to image interpolation & Multi-Interpolation. 5 Medium (2B) variants and new control types, are on the way! 4 days ago · Workflow default settings use Euler A sampler settings with everything enabled. 0, including video generation enhancements, SD3.
Download the image below and drag it into ComfyUI to load the workflow. 1 Depth [dev] Mar 20, 2024 · 7. Example You can load this image in ComfyUI open in new window to get the full workflow. You then should see the workflow populated. As I mentioned in my previous article [ComfyUI] AnimateDiff Workflow with ControlNet and FaceDetailer about the ControlNets used, this time we will focus on the control of these three ControlNets. 3B (1. In this example, we're chaining a Depth CN to give the base shape and a Tile controlnet to get back some of the original colors. 2. Check the Corresponding Nodes and Complete the Examples of ComfyUI workflows. 1 Fun Control Workflow. safetensors or something similar. So if you ever wanted to use the same effect as the OP, all you have to do is load his image and everything is already there for you. ¶Key Features of ComfyUI Workflow ¶ 1. VACE 14B is an open-source unified video editing model launched by the Alibaba Tongyi Wanxiang team. json file. One guess is that the workflow is looking for the Control-LoRAs models in the cached directory (which is my directory on my computer). Thanks. 4. 5 Multi ControlNet Workflow. 0, with the same architecture. It's always a good idea to lower slightly the STRENGTH to give the model a little leeway. As illustrated below, ControlNet takes an additional input image and detects its outlines using the Canny edge detector. Created by: OpenArt: OpenPose ControlNet ===== Basic workflow for OpenPose ControlNet. Wan 2. Apr 9, 2024 · Export ComfyUI Workflow. Currently, ComfyUI officially supports the Wan Fun Control model natively, but as of now (2025-04-10), there is no officially released workflow example. Manual Model Installation. After a quick look, I summarized some key points. Flux is one notable example of a ComfyUI workflow, specifically designed to manage memory usage effectively during processing. 
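The Canny preprocessing mentioned above can be illustrated without any ComfyUI nodes: the core of a Canny-style preprocessor is a Sobel gradient-magnitude pass over the grayscale image. The sketch below shows only that stage, in plain Python; real preprocessors such as the ControlNet aux pack add Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top, so treat this as an illustration rather than the actual node's implementation.

```python
def sobel_edge_map(img, threshold=200):
    """Return a binary edge map from a 2D grayscale image (lists of ints, 0-255).

    This is only the gradient-magnitude stage of Canny; border pixels stay 0.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel kernels for horizontal (gx) and vertical (gy) gradients
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]) \
               - (img[y-1][x-1] + 2*img[y][x-1] + img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]) \
               - (img[y-1][x-1] + 2*img[y-1][x] + img[y-1][x+1])
            magnitude = (gx * gx + gy * gy) ** 0.5
            edges[y][x] = 255 if magnitude >= threshold else 0
    return edges

# Tiny test image with a vertical dark-to-bright boundary between columns 1 and 2
image = [[0, 0, 255, 255] for _ in range(4)]
edge_map = sobel_edge_map(image)
```

The resulting white-on-black outline map is the kind of image the Canny ControlNet model expects as its hint input.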
The only important thing is that for optimal performance the resolution should be set to 1024x1024 or other resolutions with the same amount of pixels but a different aspect ratio. safetensors Jan 28, 2025 · Includes a Note node that contains the links to all the model, clip, VAE, ControlNet, detailer, etc. 1 is a family of video models. download diffusion_pytorch_model. Here is an example: You can load this image in ComfyUI to get the workflow. It extracts the pose from the image. It includes all previous models and adds several new ones, bringing the total count to 14. Pose Reference Nov 23, 2024 · They work like the same ControlNet, IP Adapter techniques but way more refined than any of the third party Flux ControlNet models. If you're interested in exploring the ControlNet workflow, use the following ComfyUI web. Nvidia Cosmos is a family of “World Models”. I quickly tested it out, and cleaned up a standard workflow (kinda sucks that a standard workflow wasn't included in huggingface or the loader github) Workflow Notes. This transformation is supported by several key components, including AnimateDiff, ControlNet, and Auto Mask. !!!Strength and prompt sensitive, be careful with your prompt and try 0. 0. May 12, 2025 · Then, in other ControlNet-related articles on ComfyUI-Wiki, we will specifically explain how to use individual ControlNet models with relevant examples. - Ling-APE/ComfyUI-All-in-One-FluxDev-Workflow AnimateDiff + AutoMask + ControlNet | Visual Effects (VFX) Discover the ComfyUI workflow that leverages AnimateDiff, AutoMask, and ControlNet to redefine visual effects creation. Jan 16, 2025 · Use the “Custom Nodes Manager” to search for and install x-flux-comfyui. ¶Mastering ComfyUI ControlNet: Models, Workflow, and Examples. ControlNet Depth ComfyUI workflow (Use ControlNet Depth to enhance your SDXL images) View Now. ComfyUI Inpainting Workflow Example Explanation.
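The "same amount of pixels, different aspect ratio" rule above can be turned into a small helper. This is my own sketch, not part of ComfyUI; it snaps dimensions to multiples of 64, a commonly used safe granularity given the 8x latent downscale:

```python
import math

def sdxl_resolution(aspect_w, aspect_h, total_pixels=1024 * 1024, multiple=64):
    """Pick a (width, height) near `total_pixels` with the requested aspect ratio.

    Dimensions are snapped to multiples of 64 so they stay friendly to the
    8x latent downscale used by SDXL-class models.
    """
    width = math.sqrt(total_pixels * aspect_w / aspect_h)
    height = total_pixels / width
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)
```

For a square image this returns 1024x1024; a 16:9 request comes out as 1344x768, which keeps roughly the same pixel budget.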
Select an image in the left-most node and choose which preprocessor and ControlNet model you want from the top Multi-ControlNet Stack node. Credits and License Jun 11, 2024 · It will activate after 10 steps and run with ControlNet and then disable again after 16 steps to finish the last 4 steps without ControlNet. Nov 26, 2024 · Drag and drop the image below into ComfyUI to load the example workflow (one custom node for depth map processing is included in this workflow). May 12, 2025 · Complete Guide to Hunyuan3D 2. 1 Canny. ControlNet 1. Instead of writing code, users drag and drop nodes that represent individual actions, parameters, or processes. SDXL 1. - Ling-APE/ComfyUI-All-in-One-FluxDev-Workflow 1. org for compatibility reasons, you can no longer find the Apply ControlNet(Old) node through search or node list. In this example this image will be outpainted: Using the v2 inpainting model and the “Pad Image for Outpainting” node (load it in ComfyUI to see the workflow): Feb 23, 2024 · This article explains how to install and use ControlNet in ComfyUI, from the basics through advanced usage, with tips for building smooth workflows. Read it and master the use of Scribble and reference_only! Oct 22, 2023 · ComfyUI Guide: Utilizing ControlNet and T2I-Adapter Overview: In ComfyUI, the ControlNet and T2I-Adapter are essential tools. These are examples demonstrating how to do img2img. This workflow uses the following key nodes: LoadImage: Loads the input image; Zoe-DepthMapPreprocessor: Generates depth maps, provided by the ComfyUI ControlNet Auxiliary Preprocessors plugin. !!!Please update the ComfyUI suite to fix the tensor mismatch problem. In this example we're using Canny to drive the composition but it works with any CN. Select the correct mode from the SetUnionControlNetType node (above the Create cinematic scenes with ComfyUI's CogVideoX workflow.
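The schedule described above (ControlNet active from step 10 to step 16 of a 20-step run, leaving the last 4 steps uncontrolled) is normally expressed as start/end percentages on an Apply ControlNet style node. A small sketch of that conversion; the helper function is mine, not a ComfyUI API:

```python
def controlnet_window(start_step, end_step, total_steps):
    """Convert an active step range into the (start_percent, end_percent)
    pair that Apply ControlNet style nodes expect."""
    if not 0 <= start_step <= end_step <= total_steps:
        raise ValueError("steps must satisfy 0 <= start <= end <= total")
    return start_step / total_steps, end_step / total_steps

# Active from step 10 to step 16 of 20 -> last 4 steps run without ControlNet
start_percent, end_percent = controlnet_window(10, 16, 20)
```

Limiting the control window like this lets ControlNet fix the composition early while the final steps refine details unconstrained.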
In this example this image will be outpainted: Using the v2 inpainting model and the “Pad Image for Outpainting” node (load it in ComfyUI to see the workflow): An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. You can Load these images in ComfyUI to get the full workflow. Put them in the models/upscale_models folder then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. Full model download May 12, 2025 · SDXL Examples. You can use it like the first example. AP Workflow (APW) is continuously updated with new capabilities. The model installation is the same as the inpainting section, please refer to the inpainting section above. The following is an older example for: aura_flow_0. The earliest Apply ControlNet node has been renamed to Apply ControlNet(Old). In this example, we will demonstrate how to use a depth T2I Adapter to control an interior scene. 1 ComfyUI Workflow. 0-controlnet. 1GB) open in new window can be used like any regular checkpoint in ComfyUI. Edge detection example. The image used as a visual guide for the diffusion model. This tutorial will guide you on how to use Flux's official ControlNet models in ComfyUI. example¶ example usage text with workflow image May 12, 2025 · Stable Diffusion 3. This tutorial is based on and updated from the ComfyUI Flux examples. 5 Medium (2B) variants and new control types, are on the way! Created by: Reverent Elusarca: Hi everyone, ControlNet for SD3 is available on Comfy UI! Please read the instructions below: 1- In order to use the native 'ControlNetApplySD3' node, you need to have the latest Comfy UI, so update your Comfy UI. This workflow can use LoRAs, ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.
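What "Pad Image for Outpainting" does can be sketched in plain Python: extend the canvas on each side and emit a mask that marks the new border as the region to regenerate. This is an illustration only; the real node also feathers the mask edge, and the 0/255 keep/regenerate convention here is an assumption for the sketch:

```python
def pad_for_outpainting(img, left=0, top=0, right=0, bottom=0, fill=0):
    """Pad a 2D grayscale image and return (padded_image, mask).

    mask is 255 where new pixels were added (to be outpainted) and 0 over
    the original image area.
    """
    h, w = len(img), len(img[0])
    new_w, new_h = w + left + right, h + top + bottom
    padded = [[fill] * new_w for _ in range(new_h)]
    mask = [[255] * new_w for _ in range(new_h)]
    for y in range(h):
        for x in range(w):
            padded[top + y][left + x] = img[y][x]
            mask[top + y][left + x] = 0
    return padded, mask
```

The padded image plus mask is then fed to the inpainting model, which fills only the masked border, which is why outpainting is "the same thing as inpainting".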
All the images in this repo contain metadata which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. Thanks to the ComfyUI community authors for their custom node packages: This example uses Load Video(Upload) to support mp4 videos; The video_info obtained from Load Video(Upload) allows us to maintain the same fps for the output video; You can replace DWPose Estimator with other preprocessors from the ComfyUI-comfyui_controlnet Here is an example using a first pass with AnythingV3 with the controlnet and a second pass without the controlnet with AOM3A3 (abyss orange mix 3) and using their VAE. ComfyUI workflow. . CONDITIONING. 1 Depth and FLUX. 1 is an updated and optimized version based on ControlNet 1. Mar 21, 2024 · To use ComfyUI-LaMA-Preprocessor, you'll be following an image-to-image workflow and add in the following nodes: Load ControlNet Model, Apply ControlNet, and lamaPreprocessor: When setting the lamaPreprocessor node, you'll decide whether you want horizontal or vertical expansion and then set the amount of pixels you want to expand the image by Debugging Tools: Extensive logging and preview functions for workflow understanding; Latest Features. New Features and Improvements May 12, 2025 · Flux within ComfyUI. To enable or disable a ControlNet group, click the “Fast Bypasser” node in the right corner which says Enable yes/no. Here is an example of how to use upscale models like ESRGAN. Here is an example for how to use the Canny Controlnet: Here is an example for how to use the Inpaint Controlnet, the example input image can be found here.
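The "load the workflow from the image" trick works because ComfyUI embeds the workflow JSON in text chunks of the saved PNG. Below is a stdlib-only sketch of reading such a chunk; the `workflow` keyword matches what ComfyUI writes, but the miniature "PNG" built here (signature plus a single tEXt chunk, no IHDR/IEND) is hand-made purely to exercise the parser:

```python
import json
import struct
import zlib

def read_png_text_chunks(data):
    """Return {keyword: text} from the tEXt chunks of a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        length = struct.unpack(">I", data[pos:pos + 4])[0]
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword and text separated by a NUL byte
            keyword, _, text = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks

def make_text_chunk(keyword, text):
    body = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    crc = struct.pack(">I", zlib.crc32(b"tEXt" + body))
    return struct.pack(">I", len(body)) + b"tEXt" + body + crc

# Hand-built miniature "PNG" carrying a toy workflow in its metadata
workflow = {"3": {"class_type": "KSampler", "inputs": {"steps": 20}}}
png = b"\x89PNG\r\n\x1a\n" + make_text_chunk("workflow", json.dumps(workflow))
recovered = json.loads(read_png_text_chunks(png)["workflow"])
```

The same `read_png_text_chunks` loop works on real ComfyUI output images, which is exactly what the Load button does behind the scenes.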
This workflow consists of the following main parts: Model Loading: Loading SD model, VAE model and ControlNet model ComfyUI ControlNet Regional Division Mixing Example. 1 models are similar to this example. Jun 11, 2024 · It will activate after 10 steps and run with ControlNet and then disable again after 16 steps to finish the last 4 steps without ControlNet. example usage text with workflow image May 12, 2025 · This documentation is for the original Apply ControlNet(Advanced) node. Img2Img works by loading an image like this example image, converting it to latent space with the VAE and then sampling on it with a denoise lower than 1. This workflow guides you in using precise transformations and enhancing realism through the Fade effect, ensuring the seamless integration of visual effects. We will cover the usage of two official control models: FLUX. 0 ControlNet open pose. example. Download SD1. ControlNet Principles. You can load these images in ComfyUI to get the full workflow. The ControlNet nodes provided here are the Apply Advanced ControlNet and Load Advanced ControlNet Model (or diff) nodes. Detailed Guide to Flux ControlNet Workflow. ControlNet can be used for refined editing within specific areas of an image: Isolate the area to regenerate using the MaskEditor node. 1 Model. ComfyUI currently supports specifically the 7B and 14B text to video diffusion models and the 7B and 14B image to video diffusion models. 0 ControlNet zoe depth. You should try to click on each one of those model names in the ControlNet stacker node and choose the path of where your models May 12, 2025 · Complete Guide to Hunyuan3D 2. Prerequisites: - Update ComfyUI to the latest version - Download flux redux safetensors file from Nov 26, 2024 · Drag and drop the image below into ComfyUI to load the example workflow (one custom node for depth map processing is included in this workflow). 
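The "denoise lower than 1" behaviour in img2img has a simple mental model: with a given step count and denoise `d`, the sampler skips roughly the first `steps * (1 - d)` steps and only runs the remainder on top of the VAE-encoded input. This helper is a simplification of what KSampler actually schedules, offered for intuition only:

```python
def effective_steps(total_steps, denoise):
    """Return (first_active_step, steps_actually_run) for an img2img pass."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be between 0 and 1")
    skipped = round(total_steps * (1.0 - denoise))
    return skipped, total_steps - skipped

# denoise=1.0 behaves like txt2img; lower values preserve more of the input
```

So at 20 steps, denoise 0.5 reuses half of the schedule from the input image, which is why low denoise values keep the source composition largely intact.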
Available modes: Depth / Pose / Canny / Tile / Blur / Grayscale / Low quality Instructions: Update ComfyUI to the latest version. Image generation has taken a creative leap with the introduction of tools like ComfyUI ControlNet. safetensors and put it in your ComfyUI/checkpoints directory. You can then load up the following image in ComfyUI to get the workflow: AuraFlow 0. 5 Canny ControlNet Workflow. While you may still see the Apply ControlNet(Old) node in many workflow folders you download from comfyui. This repo contains examples of what is achievable with ComfyUI. Then press “Queue Prompt” once and start writing your prompt. You can click the “Load” button on the right in order to load in our workflow. Download the ControlNet inpaint model. May 12, 2025 · 1. Apr 21, 2024 · There are a few different preprocessors for ControlNet within ComfyUI, however, in this example, we’ll use the ComfyUI ControlNet Auxiliary node developed by Fannovel16. Pose ControlNet Workflow Assets; 2. 5 Depth ControlNet Workflow SD1. Greetings! <3. Jul 7, 2024 · The extra conditioning can take many forms in ControlNet. In this example, we will use a combination of Pose ControlNet and Scribble ControlNet to generate a scene containing multiple elements: a character on the left controlled by Pose ControlNet and a cat on a scooter on the right controlled by Scribble ControlNet. for example). Step-by-Step Workflow Execution; Combining Depth Control with Other Techniques SD1. Put it in ComfyUI > models > controlnet folder. Brief Introduction to ControlNet ControlNet is a condition-controlled generation model based on diffusion models (such as Stable Diffusion), initially proposed by Lvmin Zhang, Maneesh Agrawala Application Scenarios for Depth Maps with ControlNet; ComfyUI ControlNet Workflow Example Explanation; 1. ControlNet Latent keyframe Interpolation. Created by: Stonelax@odam. May 12, 2025 · 4. In our example Github repository, we have a worklow. 
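The "Queue Prompt" button can also be driven from a script once a workflow has been exported in API format: patch the input fields you care about, then POST the graph to the server's `/prompt` endpoint. The two-node graph and its field names below are made up for illustration, and the server address assumes ComfyUI's default of 127.0.0.1:8188:

```python
import json
import urllib.request

def set_input(workflow, node_id, name, value):
    """Patch one input field of an API-format workflow dict."""
    workflow[node_id]["inputs"][name] = value
    return workflow

def queue_prompt(workflow, server="http://127.0.0.1:8188"):
    """Submit the workflow to a running ComfyUI instance (not called here)."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(server + "/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())

# Hypothetical miniature API-format graph: a text encoder feeding a sampler
graph = {
    "1": {"class_type": "CLIPTextEncode", "inputs": {"text": "placeholder"}},
    "2": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}},
}
set_input(graph, "1", "text", "a cat on a scooter, pose controlled")
set_input(graph, "2", "seed", 1234)
```

Editing the exported JSON this way is how external apps batch-drive a workflow without touching the node canvas.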
Sep 24, 2024 · Download Multiple ControlNets Example Workflow. If you're running on Linux, or a non-admin account on Windows, you'll want to ensure /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions. ComfyUI AnimateDiff, ControlNet, IP-Adapter and FreeU Workflow. Refresh the page and select the inpaint model in the Load ControlNet Model node. 5 Canny ControlNet Workflow File SD1. It is licensed under the Apache 2. However, we use this tool to control keyframes, ComfyUI-Advanced-ControlNet. May 12, 2025 · Wan2. Experience ComfyUI ControlNet Now! 🌟🌟🌟 ComfyUI Online - Experience the ControlNet Workflow Now 🌟🌟🌟. Load the corresponding SD1. 0 ComfyUI workflows, including single-view and multi-view complete workflows, and provides corresponding model download links May 12, 2025 · Since there are now many ControlNet model versions for ComfyUI, the exact flow may differ; here we take the current ControlNet V1. A Conditioning containing the control_net and visual guide. We also use “Image Chooser” to make the image sent to the 2nd pass optional. Install the custom node “ComfyUI's ControlNet Auxiliary Preprocessors” as it is required to convert the input image to an image suitable for ControlNet. 0 license and offers two versions: 14B (14 billion parameters) and 1. ai: This is a beginner friendly Redux workflow that achieves style transfer while maintaining image composition using controlnet! The workflow runs with Depth as an example, but you can technically replace it with canny, openpose or any other controlnet for your liking. You can load this image in ComfyUI to get the full workflow.
0 ControlNet softedge-dexined Aug 16, 2023 · ComfyUI workflow with Visual Area Prompt node; Install missing Python modules and update PyTorch for the LoRa resizing script; Cordova Recaptcha Enterprise plugin demo; Cordova Recaptcha v2 plugin demo; Generate canny, depth, scribble and poses with ComfyUI ControlNet preprocessors; ComfyUI load prompts from text file workflow May 12, 2025 · Download Flux Dev FP8 Checkpoint ComfyUI workflow example Flux Schnell FP8 Checkpoint version workflow example Flux ControlNet collections: https: Here is an example using a first pass with AnythingV3 with the controlnet and a second pass without the controlnet with AOM3A3 (abyss orange mix 3) and using their VAE. Examples of ComfyUI workflows. If you need an example input image for the canny, use this . 3. The workflows are included below – they are encoded PNG images, dragging them into the ComfyUI canvas will reconstruct the workflows. The workflow is the same as the one above but with a different prompt. For details on the latest features in APW 12. By combining the powerful, modular interface of ComfyUI with ControlNet's precise conditioning capabilities, creators can achieve unparalleled control over their output. This toolkit is designed to add control and guidance capabilities to FLUX. 0 ComfyUI workflows, including single-view and multi-view complete workflows, and provides corresponding model download links Created by: Stonelax: Stonelax again, I made a quick Flux workflow of the long-awaited open-pose and tile ControlNet modules. Feb 25, 2024 · Once you reach a certain stage with AI image generation you will usually come across the ControlNet extension. For WebUI users, installing and using ControlNet is very convenient; in ComfyUI it can also be installed quickly through the Manager, but you have to wire up the nodes yourself or import someone else's workflow, and then work through constant trial and error and debugging. May 12, 2025 · Complete Guide to Hunyuan3D 2. 0 ComfyUI Workflows, ComfyUI-Huanyuan3DWrapper and ComfyUI Native Support Workflow Examples This guide contains complete instructions for Hunyuan3D 2.
Save the image below locally, then load it into the LoadImage node after importing the workflow Workflow Overview. Once the installation is complete, there will be a workflow in the \ComfyUI\custom_nodes\x-flux-comfyui\workflows. ) The backbone of this workflow is the newly launched ControlNet Union Pro by InstantX. ControlNet Workflow Assets; 2. The workflow files and examples are from the ComfyUI Blog. The veterans can skip the intro or the introduction and get started right away. Step-by-Step Workflow Execution; Explanation of the Pose ControlNet 2-Pass Workflow; First Phase: Basic Pose Image Generation; Second Phase: Style Optimization and Detail Enhancement; Advantages of 2-Pass Image Generation Nov 25, 2023 · Merge 2 images together (Merge 2 images together with this ComfyUI workflow) View Now. ControlNet is probably the most popular feature of Stable Diffusion and with this workflow you'll be able to get started and create fantastic art with the full control you've long searched for. A May 12, 2025 · ControlNet et T2I-Adapter - Exemples de workflow ComfyUI Notez que dans ces exemples, l’image brute est directement transmise à l’adaptateur ControlNet/T2I. ControlNet workflow (A great starting point for using ControlNet) View Now Oct 5, 2024 · ControlNet. First, the placement of ControlNet remains the same. We will use the following two tools, Mar 20, 2024 · 1. Replace the Empty Latent Image node with a combination of Load Image node and VAE Encoder node; Download Flux GGUF Image-to-Image ComfyUI workflow example Jan 16, 2024 · Animatediff Workflow: Openpose Keyframing in ComfyUI. 5 Original FP16 Version ComfyUI Workflow. You signed in with another tab or window. Nodes-Based Flowchart Interface. Import Workflow in ComfyUI to Load Image for Generation. The workflows for other types of ControlNet V1. This example contains 4 images composited together. 
May 12, 2025 · This article compiles ControlNet models available for the Flux ecosystem, including various ControlNet models developed by XLabs-AI, InstantX, and Jasperai, covering multiple control methods such as edge detection, depth maps, and surface normals. We will use the following image as our input: 2. Everyone who is new to comfyUi starts from step one! Mar 20, 2024 · 1. Follow the steps in the diagram below to ensure the workflow runs correctly. Created by: OlivioSarikas: What this workflow does 👉 In this Part of Comfy Academy we look at how Controlnet is used, including the different types of Preprocessor Nodes and Different Controlnet weights. An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. I then recommend enabling Extra Options -> Auto Queue in the interface. safetensors Weight Type: default (can choose fp8 type if memory is insufficient) May 12, 2025 · Since general shapes like poses and subjects are denoised in the first sampling steps this lets us for example position subjects with specific poses anywhere on the image while keeping a great amount of consistency. This guide provides a brief overview of how to effectively use them, with a focus on the prerequisite image formats and available resources. Let me show you two examples of what ControlNet can do: Controlling image generation with (1) edge detection and (2) human pose detection. json will be explained. It's important to play with the strength of both CN to reach the desired result. UNETLoader. May 12, 2025 · Upscale Model Examples. My comfyUI backend is an API that can be used by other apps if they want to do things with stable diffusion so chainner could add support for the comfyUI backend and nodes if they wanted to. 
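Combining several of these ControlNet models in one workflow is just a pipeline: each Apply ControlNet node takes the conditioning produced by the previous one and adds its own hint image and strength. The sketch below mirrors that node chain as plain data; it is a conceptual model, not ComfyUI's internal representation:

```python
def apply_controlnet(conditioning, model_name, hint_image, strength):
    """Append one control hint to a conditioning list, like chaining
    Apply ControlNet nodes left to right."""
    return conditioning + [{"controlnet": model_name,
                            "hint": hint_image,
                            "strength": strength}]

conditioning = []  # stands in for the CLIP-encoded prompt conditioning
conditioning = apply_controlnet(conditioning, "openpose", "pose_map.png", 0.8)
conditioning = apply_controlnet(conditioning, "scribble", "cat_sketch.png", 0.6)
```

Order matters in the same way it does on the canvas: each additional hint constrains the sampler further, which is why it pays to lower the strengths slightly when stacking controls.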
You will first need: Text encoder and VAE: May 12, 2025 · In ComfyUI, you only need to replace the relevant nodes from the Flux Installation Guide and Text-to-Image Tutorial with image-to-image related nodes to create a Flux image-to-image workflow. 1. May 12, 2025 · This article focuses on image-to-video workflows. 1 ComfyUI 对应模型安装及教程指南. 5 Depth ControlNet Workflow Guide Main Components. There is a “Pad Image for Outpainting” node to automatically pad the image for outpainting while creating the proper mask. files used in the workflow – no more scrambling to figure out where to download these files from. outputs¶ CONDITIONING. The denoise controls the amount of noise added to the image. The Wan2. Additional ControlNet models, including Stable Diffusion 3. 1 Tools launched by Black Forest Labs. 1 Model Loading Nodes. download depth-zoe-xl-v1. 5 model files This workflow by Draken is a really creative approach, combining SD generations with an AD passthrough to create a smooth infinite zoom effect: 8. 5; Change output file names in ComfyUI Save Image node Download aura_flow_0. 5GB) open in new window and sd3_medium_incl_clips_t5xxlfp8. Here’s an example of a disabled ControlNet through the bypasser. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. 1 background image and 3 subjects. Before you start, ensure your ComfyUI version is at least after this commit so you can find the corresponding WanFunControlToVideo node. 5 Checkpoint model at step 1; Load the input image at step 2; Load the OpenPose ControlNet model at step 3; Load the Lineart ControlNet model at step 4; Use Queue or the shortcut Ctrl+Enter to run the workflow for image generation May 12, 2025 · SD1. safetensors (10. 0 ControlNet canny. i suggest renaming to canny-xl1. 更新 ComfyUI. Workflow Node Explanation 4. Nov 25, 2023 · Prompt & ControlNet. 5 as the starting controlnet strength !!!update a new example workflow in workflow folder, get start with it. 
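Since most failures with these workflows come down to a model file sitting in the wrong folder, a small pre-flight check can save time. The folder names follow the conventional ComfyUI layout mentioned throughout this page (models/checkpoints, models/controlnet, models/vae); the specific file names are hypothetical placeholders:

```python
from pathlib import Path

# Typical ComfyUI model layout (conventions, not a fixed API);
# file names below are placeholders for whatever your workflow loads.
REQUIRED = {
    "models/checkpoints": ["sd_v1-5.safetensors"],
    "models/controlnet":  ["control_canny.safetensors"],
    "models/vae":         ["vae.safetensors"],
}

def missing_models(root):
    """Return a sorted list of required model files missing under `root`."""
    missing = []
    for folder, names in REQUIRED.items():
        for name in names:
            if not (Path(root) / folder / name).is_file():
                missing.append(f"{folder}/{name}")
    return sorted(missing)
```

Running this against your ComfyUI install directory before queueing a prompt points straight at any download you skipped.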
Sep 1, 2024 · ComfyUI workflow for the Union Controlnet Pro from InstantX / Shakker Labs. May 12, 2025 · ComfyUI Native Wan2. Aug 26, 2024 · Both for ComfyUI FLUX-ControlNet-Depth-V3 and ComfyUI FLUX-ControlNet-Canny-V3. There is now a install. 1, enabling users to modify and recreate real or generated images. Don’t worry about the pre-filled values and prompts, we will edit these values on inference when we run our May 12, 2025 · Complete Guide to ComfyUI ACE-Step Music Generation Workflow. The vanilla ControlNet nodes are also compatible, and can be used almost interchangeably - the only difference is that at least one of these nodes must be used for Advanced versions of ControlNets to be used (important for In this video, I show you how to generate pose-specific images using Openpose Flux Controlnet. Why do I use the Color Correct? Upscaling with KSampler/Ultimate SD Upscale strips/alters the color from the original image (at least for me). The fundamental principle of ControlNet is to guide the diffusion model in generating images by adding additional control conditions. This example is for Canny, but you can use the A controlNet or T2IAdaptor, trained to guide the diffusion model using specific image data. ACE-Step is an open-source music generation foundation model jointly developed by the Chinese team StepFun and ACE Studio, designed to provide music creators with efficient, flexible, and high-quality music generation and editing tools. From here on, we will introduce a workflow similar to A1111 WebUI. Created by: OpenArt: IPADAPTER + CONTROLNET ===== IPAdapter can be of course paired with any ControlNet. Examples of ComfyUI workflows May 12, 2025 · 3. !!!please donot use AUTO cfg for our ksampler, it will have a very bad result. The total steps is 16. May 12, 2025 · ComfyUI Workflow Examples. Nvidia Cosmos Models. May 12, 2025 · ComfyUI ControlNet workflow and examples; How to use multiple ControlNet models, etc. 
v3 version - better and realistic version, which can be used directly in ComfyUI! May 12, 2025 · How to install the ControlNet model in ComfyUI; How to invoke the ControlNet model in ComfyUI; ComfyUI ControlNet workflow and examples; How to use multiple ControlNet models, etc. Model Installation; 3. 1 ControlNet Model Introduction. In ComfyUI, using T2I Adapter is similar to ControlNet in terms of interface and workflow. Chaque adaptateur ControlNet/T2I nécessite que l’image qui lui est transmise soit dans un format spécifique comme les cartes de profondeur, les cartes de contours, etc. 0 ComfyUI workflows, including single-view and multi-view complete workflows, and provides corresponding model download links With a better GPU and more VRAM this can be done on the same ComfyUI workflow, but with my 8GB RTX3060 I was having some issues since it's loading two checkpoints and the ControlNet model, so I broke off this part into a separate workflow (it's on the Part 2 screenshot). Purpose: Load the main model file; Parameters: Model: hunyuan_video_t2v_720p_bf16. 1 模型它,包括以下几个主题: About VACE. ComfyUI AnimateDiff, ControlNet and Auto Mask Workflow. Through integrating multi-task capabilities, supporting high-resolution processing and flexible multi-modal input mechanisms, this model significantly improves the efficiency and quality of video creation. You switched accounts on another tab or window. Integrate ControlNet for precise pose and depth guidance and Live Portrait to refine facial details, delivering professional-quality video production. This workflow comes from the ComfyUI official documentation. The ComfyUI workflow implements a methodology for video restyling that integrates several components—AnimateDiff, ControlNet, IP-Adapter, and FreeU—to enhance video editing capabilities. 2- Right now, there is 3 known ControlNet models, created by Instant-X team: Canny, Pose and Tile. Explanation of Official Workflow. Model Introduction FLUX. Load this workflow. 
In this example, we will guide you through installing and using ControlNet models in ComfyUI, and complete a sketch-controlled image generation example. You will first need: Text encoder and VAE: Aug 17, 2023 · ** 09/09/2023 - Changed the CR Apply MultiControlNet node to align with the Apply ControlNet (Advanced) node. 1 model, open-sourced by Alibaba in February 2025, is a benchmark model in the field of video generation. This ComfyUI workflow introduces a powerful approach to video restyling, specifically aimed at transforming characters into an anime style while preserving the original backgrounds. This workflow by Antzu is a nice example of using Controlnet to Jan 20, 2024 · Put it in Comfyui > models > checkpoints folder. This article accompanies this workflow: link. Animation workflow (A great starting point for using AnimateDiff) View Now. This section will introduce the installation of the official version models and the download of workflow files. I'm not sure what's wrong here because I don't use the portable version of ComfyUI. (Canny, depth are also included. Inpainting with ControlNet. The SD3 checkpoints that contain text encoders: sd3_medium_incl_clips. If any groups are marked DNB on the workflow, they cannot be bypassed without you making adjustments to the workflow yourself. May 12, 2025 · Img2Img Examples. Nov 17, 2024 · ComfyUI - ControlNet Workflow. Reply reply More replies More replies More replies May 12, 2025 · Flux. 1 Models. Refresh the page and select the Realistic model in the Load Checkpoint node. Files to Download. This is the input image that will be used in this example: Here is an example using a first pass with AnythingV3 with the controlnet and a second pass without the controlnet with AOM3A3 (abyss orange mix 3) and using their VAE. 
In both FLUX-ControlNet workflows, the CLIP encoded text prompt is connected to drive the image contents, while the FLUX-ControlNet conditioning controls the structure and geometry based on the depth or edge map. Choose the “strength” of ControlNet : The higher the value, the Oct 7, 2024 · Example of ControlNet Usage. 这份指南将向介绍如何在 Windows 电脑上使用 ComfyUI 来运行 Flux. Try an example Canny Controlnet workflow by dragging in this image into ComfyUI. In this article, flux-controlnet-canny-v3-workflow. 1バージョンモデルを例に説明し、具体的なワークフローは後続の関連チュートリアルで補足します。 Oct 22, 2023 · ComfyUI Guide: Utilizing ControlNet and T2I-Adapter Overview: In ComfyUI, the ControlNet and T2I-Adapter are essential tools. The node pack will need updating for A general purpose ComfyUI workflow for common use cases. Overview of ControlNet 1. Here is an example. May 6, 2024 · Generate canny, depth, scribble and poses with ComfyUI ControlNet preprocessors; ComfyUI load prompts from text file workflow; Allow mixed content on Cordova app’s WebView; ComfyUI workflow with MultiAreaConditioning, Loras, Openpose and ControlNet for SD1. Pose ControlNet. A controlNet or T2IAdaptor, trained to guide the diffusion model using specific image data. Unlike the workflow above, sometimes we don’t have a ready-made OpenPose image, so we need to use the ComfyUI ControlNet Auxiliary Preprocessors plugin to preprocess the reference image, then use the processed image as input along with the ControlNet model Created by: AILab: Flux Controlnet V3 ControlNet is trained on 1024x1024 resolution and works for 1024x1024 resolution. May 12, 2025 · This documentation is for the original Apply ControlNet(Advanced) node. Forward the edited image to the latent space via the KSampler. May 12, 2025 · Outpainting is the same thing as inpainting. resolution: Controls the depth map resolution, affecting its ComfyUI 2-Pass Pose ControlNet Usage Example; 1. 
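The strength parameter discussed above has a simple effect: it scales the control signal before it is merged with the model's own features, so 0 ignores the hint and 1 applies it fully. A toy element-wise version of that weighting, for intuition only (real ControlNet residuals are tensors added inside the UNet, not flat lists):

```python
def blend_control(base, control, strength):
    """Scale the control signal before adding it to the base features.

    strength=0 ignores the hint entirely; strength=1 applies it fully.
    """
    return [b + strength * c for b, c in zip(base, control)]

features = blend_control([1.0, 2.0], [0.5, -0.5], strength=0.8)
```

This is why lowering the strength slightly "gives the model a little leeway": the hint still steers the structure, but the sampler keeps room to deviate.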
Covering step by step, full explanation and system optimizatio You can achieve the same thing in a1111, comfy is just awesome because you can save the workflow 100% and share it with others. There are other third party Flux Controlnets, LoRA and Flux Inpainting featured models we have also shared in our earlier article if haven't checked yet. 1 Canny and Depth are two powerful models from the FLUX. Manual Model Installation; 3. ComfyUI examples range from simple text-to-image conversions to intricate processes involving tools like ControlNet and AnimateDiff. This is a workflow that is intended for beginners as well as veterans. 首先确保你的 ComfyUI 已更新到最新版本,如果你不知道如何更新和升级 ComfyUI 请参考如何更新和升级 ComfyUI。 注意:Flux ControlNet 功能需要最新版本的 ComfyUI 支持,请务必先完成更新。 2. You signed out in another tab or window. ComfyUI Official HunyuanVideo I2V Workflow. 1を利用するには、最新のComfyUIモデルにアップグレードする必要があります。まだComfyUIを更新していない場合は、以下の記事を参照してアップグレードまたはインストール手順を確認してください。 Sep 24, 2024 · Example workflow: Use OpenPose for body positioning; Follow with Canny for edge preservation; Add a depth map for 3D-like effects; Download Multiple ControlNets Example Workflow. Here is a workflow for using it: Save this image then load it or drag it on ComfyUI to get the workflow. , selon le OpenPose SDXL: OpenPose ControlNet for SDXL. 5 support, and workflow improvements, see the . fp16. It comes fully equipped with all the essential customer nodes and models, enabling seamless creativity without the need for manual setups. Created by: OpenArt: Of course it's possible to use multiple controlnets. safetensors, stable_cascade_inpainting. Put it under ComfyUI/input . Reload to refresh your session. My go-to workflow for most tasks. Download the model to models/controlnet. If you want to learn about Tencent Hunyuan’s text-to-video workflow, please refer to Tencent Hunyuan Text-to-Video Workflow Guide and Examples. download OpenPoseXL2. May 19, 2024 · Now with ControlNet and better Faces! Feel free to post your pictures! 
I would love to see your creations with my workflow! <333. May 12, 2025 · Using ComfyUI ControlNet Auxiliary Preprocessors to Preprocess Reference Images.