ComfyUI workflow examples
ComfyUI workflow examples show how to use the basic features of ComfyUI. ComfyUI Workflows are a way to easily start generating images within ComfyUI: it can be a little intimidating starting out with a blank canvas, but by bringing in an existing workflow, you have a starting point that comes with a set of nodes all ready to go. One of the best parts about ComfyUI is how easy it is to download and swap between workflows, and a myriad of workflows from the ComfyUI official repository are at your fingertips. The workflows here are meant as a learning exercise; they are by no means "the best" or the most optimized, but they should give you a good understanding of how ComfyUI works.

This guide also covers how to set up ComfyUI on your Windows computer to run Flux. Once it is running, press "Queue Prompt" once and start writing your prompt.

Most example images contain the full workflow as metadata: you can load an image in ComfyUI (or save it, then load or drag it onto the window) to get the full workflow. Examples covered include OpenPose SDXL (an OpenPose ControlNet for SDXL), GLIGEN (put the GLIGEN model files in the ComfyUI/models/gligen directory), Any Node workflow examples, and a simple ComfyUI workflow used for the example images for my model merge 3DPonyVision. I made one example using a workflow with two images as a starting point, from the ComfyUI IPAdapter node repository; let me know if you need help replicating some of the concepts in my process.

Img2Img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0.
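The denoise value can be pictured as trimming the sampler schedule: the lower it is, the fewer steps run on the noised input latent and the more of the original image survives. A minimal sketch of that idea in plain Python (this illustrates the concept only, it is not ComfyUI's actual sampler code, and the function name is made up):

```python
# Illustrative sketch of how a denoise value below 1.0 shortens sampling in an
# img2img pass. A simplification, not ComfyUI's scheduler implementation.

def img2img_step_range(total_steps: int, denoise: float) -> range:
    """Return the sampler steps actually executed for a given denoise strength.

    With denoise=1.0 every step runs (pure txt2img behaviour); with a lower
    denoise the VAE-encoded input image is only partially noised, so sampling
    starts part-way through the schedule.
    """
    if not 0.0 < denoise <= 1.0:
        raise ValueError("denoise must be in (0, 1]")
    start = total_steps - round(total_steps * denoise)
    return range(start, total_steps)

# denoise 0.6 over 20 steps: only the last 12 steps run,
# preserving much of the input image's structure
print(list(img2img_step_range(20, 0.6)))  # steps 8..19
```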
The only way to keep the code open and free is by sponsoring its development. The default workflow is a simple text-to-image flow using Stable Diffusion 1.5. ComfyUI serves as a node-based graphical user interface for Stable Diffusion, and there are intermediate examples of what is achievable with it as well.

Here is an example of how to use upscale models like ESRGAN. The workflow integrates with ComfyUI's custom nodes and various tools like image conditioners, logic switches, and upscalers for a streamlined image generation process. SD3 performs very well with the negative conditioning zeroed out, as in the SD3 ControlNet example.

Continuing the IPAdapter example: I then created two more sets of nodes, from Load Images to the IPAdapters, and adjusted the masks so that each would apply to a specific section of the whole image.

Here's an example with the anythingV3 model, and an outpainting example as well. For the inpainting example, part of the image has been erased to alpha with GIMP; that alpha channel is what we will be using as the mask for the inpainting. The image below is an empty workflow with the Efficient Loader and KSampler (Efficient) nodes added and connected to each other.

The workflows are designed for readability: the execution flows from left to right and from top to bottom, and you should be able to easily follow the "spaghetti" without moving nodes around. Once a workflow is loaded, go into the ComfyUI Manager and click Install Missing Custom Nodes.

[Last update: 01/August/2024] Note: you need to put the example input files and folders under the ComfyUI root directory's ComfyUI\input folder before you can run the example workflows. My ComfyUI workflow that was used to create all example images with my model RedOlives is at https://civitai.com/models/283810.
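Since workflows are stored as JSON, you can also inspect one outside the UI before loading it, for instance to see which node types (and therefore which custom node packs) it needs. A small sketch, assuming the API-format export, which is a flat dict of node id to node; the helper name is hypothetical:

```python
import json
from collections import Counter

def summarize_workflow(path: str) -> Counter:
    """Count node types in a ComfyUI workflow JSON file.

    Assumes the API export format (node-id -> {"class_type": ..., "inputs": ...});
    the UI's default save format instead nests nodes under a "nodes" list with
    a "type" key, which this also tolerates.
    """
    with open(path) as f:
        wf = json.load(f)
    nodes = wf["nodes"] if isinstance(wf.get("nodes"), list) else wf.values()
    return Counter(
        n.get("class_type") or n.get("type", "unknown")
        for n in nodes if isinstance(n, dict)
    )
```

Running it on a downloaded workflow file tells you at a glance whether it uses anything the ComfyUI Manager would flag under Install Missing Custom Nodes.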
Custom node packs worth mentioning include ComfyUI AnyNode (any node you ask for, including AnyNodeLocal) and ComfyUI-N-Nodes (LoadVideo), as well as ComfyUI IPAdapter Plus, ComfyUI InstantID (Native), ComfyUI Essentials, and ComfyUI FaceAnalysis — not to mention the documentation and video tutorials.

Stable Zero123 is a diffusion model that, given an image with an object and a simple background, can generate images of that object from different angles. For the Stable Cascade examples I have renamed the files by adding stable_cascade_ in front of the filename, for example stable_cascade_canny.safetensors and stable_cascade_inpainting.safetensors.

You can construct an image generation workflow by chaining different blocks (called nodes) together, and you can explore thousands of workflows created by the community. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format — depth maps, canny maps, and so on, depending on the specific model — if you want good results.

For the SDXL examples, the only important thing is that for optimal performance the resolution should be set to 1024x1024, or another resolution with the same number of pixels but a different aspect ratio.

There is also a simple workflow for using the new Stable Video Diffusion model in ComfyUI for image-to-video generation. Load the .json workflow file from the C:\Downloads\ComfyUI\workflows folder, or save an example image and load or drag it onto ComfyUI to get the workflow. Always refresh your browser and click refresh in the ComfyUI window after adding models or custom_nodes.

Other examples include hypernetworks; FLUX with img2img and an LLM-generated prompt, LoRAs, Face Detailer and Ultimate SD Upscaler; and Infinite Zoom. Create your ComfyUI workflow app, and share it with your friends.
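The "same number of pixels, different aspect ratio" rule for SDXL is easy to automate. A hedged sketch (the helper is hypothetical; rounding to multiples of 64 is a common convention for latent-friendly sizes, not a hard requirement of the model):

```python
import math

def sdxl_resolution(aspect: float, pixels: int = 1024 * 1024, multiple: int = 64):
    """Suggest a width/height near the SDXL pixel budget for a given aspect ratio.

    Keeps the total pixel count close to 1024x1024 while snapping each side to
    a multiple of 64 (a conservative, widely used choice).
    """
    height = math.sqrt(pixels / aspect)
    width = aspect * height

    def snap(v: float) -> int:
        return max(multiple, round(v / multiple) * multiple)

    return snap(width), snap(height)

print(sdxl_resolution(1.0))     # (1024, 1024) -- the square default
print(sdxl_resolution(16 / 9))  # (1344, 768)  -- widescreen, same pixel budget
```

The 16:9 result, 1344x768, matches one of the resolution buckets commonly recommended for SDXL.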
A rework of almost everything that has been in develop is now merged into main. This means old workflows will not work, but everything should be faster and there are lots of new features. Check my ComfyUI Advanced Understanding videos on YouTube, for example part 1 and part 2. This repository contains a handful of SDXL workflows I use; make sure to check the useful links, as some of the models and/or plugins are required to use these workflows in ComfyUI.

Users assemble a workflow for image generation by linking various blocks, referred to as nodes. Unlike other Stable Diffusion tools that have basic text fields where you enter values and information for generating an image, a node-based interface requires you to create nodes and build a workflow to generate images. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI.

Stable Video Diffusion (SVD) handles image-to-video generation, and achieves high FPS using frame interpolation (with RIFE). As a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow; the following images can be loaded in ComfyUI the same way.

Other topics include mixing ControlNets, a Flux all-in-one ControlNet using a GGUF model, text to image (build your first workflow), and video examples (image to video). Start by running the ComfyUI examples. For Flux Schnell you can get the checkpoint here and put it in your ComfyUI/models/checkpoints/ directory.
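Because the ComfyUI server executes the node graph, you can also queue workflows programmatically through the same HTTP endpoint the web UI uses. A minimal sketch, assuming a default local install listening on port 8188 (the helper names here are made up):

```python
import json
import urllib.request

def build_queue_payload(workflow: dict, client_id: str = "example-client") -> bytes:
    """Wrap an API-format workflow the way the /prompt endpoint expects it."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """POST a workflow to a locally running ComfyUI server.

    The web UI itself talks to this /prompt endpoint; 8188 is the default
    port, so adjust `server` if you launched ComfyUI with --port.
    """
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_queue_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Pairing this with a saved API-format workflow file gives you a simple way to batch-generate without touching the browser.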
Moreover, as demonstrated in the workflows provided later in this article, ComfyUI is a superior choice for video generation compared to other AI drawing software, offering higher efficiency. In ComfyUI the saved checkpoints contain the full workflow used to generate them, so they can be loaded in the UI, just like images, to get the full workflow that was used to create them.

Here are some recommended ComfyUI workflows to enhance your experience with Stable Diffusion in 2024. In this post we'll show you some example workflows you can import and get started with straight away. The easiest way to get to grips with how ComfyUI works is to start from the shared examples; for use cases, please check out the example workflows. Here is an example: you can load this image in ComfyUI to get the workflow, and you can likewise load or drag the Flux ControlNets image into ComfyUI to get that workflow.

A good way of using unCLIP checkpoints is to use them for the first pass of a two-pass workflow and then switch to a 1.x model for the second pass. All LoRA flavours (LyCORIS, LoHa, LoKr, LoCon, etc.) are used the same way. There are Flux Schnell and AuraFlow examples too: the following is an older example for aura_flow_0.safetensors, and you can load up the AuraFlow 0.2 image in ComfyUI to get the workflow.

To reference models stored elsewhere, go to ComfyUI_windows_portable\ComfyUI\, rename extra_model_paths.example to extra_model_paths.yaml, and open the YAML file in a code or text editor.

This is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc. Seth emphasizes the importance of matching the image aspect ratio when using images as references, and the option to use different aspect ratios for image-to-image. There are inpaint examples as well.
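As a sketch, an extra_model_paths.yaml entry for reusing models from an existing AUTOMATIC1111 install might look like the following (the paths are placeholders — adapt them to your own setup, and treat the exact key names as an assumption to check against the example file shipped with your ComfyUI version):

```yaml
# Hypothetical extra_model_paths.yaml entry pointing ComfyUI at an
# existing AUTOMATIC1111 install; adjust base_path to your location.
a111:
    base_path: C:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: models/ESRGAN
    embeddings: embeddings
```

Subpaths are resolved relative to base_path, so one entry can cover checkpoints, VAEs, LoRAs, upscalers, and embeddings at once.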
Here is the ComfyUI workflow with all nodes connected. The Flux.1 workflow is the same as the one above but with a different prompt. All the images in this page contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

Hence, we'll delve into the most straightforward text-to-image processes in ComfyUI. Start with the default workflow; there is also ComfyUI install guidance, a workflow, and an example. What makes ComfyUI workflows stand out is flexibility: with ComfyUI, swapping between workflows is a breeze. Here's a list of example workflows in the official ComfyUI repo, and a comprehensive collection of ComfyUI knowledge, including ComfyUI installation and usage, ComfyUI examples, custom nodes, workflows, and ComfyUI Q&A. ComfyUI workflows for Stable Diffusion offer a range of tools, from image upscaling to merging, and ComfyUI fully supports SD1.x, SD2.x and SDXL, with an asynchronous queue system.

Motion LoRAs with latent upscale: this workflow by Kosinkadink is a good example of Motion LoRAs in action, with lots of pieces to combine with other workflows.

This next example merges 3 different checkpoints using simple block merging, where the input, middle, and output blocks of the UNet can each have a different ratio. My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

SD3 ControlNets by InstantX are also supported. Here is an example of how to use the Canny ControlNet, and here is an example of how to use the Inpaint ControlNet; the example input image can be found here. Download it and place it in your input folder.
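That embedded workflow lives in the PNG's tEXt chunks (ComfyUI typically writes it under the keys "prompt" and "workflow"). A stdlib-only sketch of pulling it out; in practice Pillow's `Image.open(path).info` exposes the same strings with less work:

```python
import struct

def png_text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks from PNG bytes.

    For a ComfyUI-generated image the returned dict usually contains
    "prompt" and "workflow" keys holding the workflow JSON.
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        # each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = chunk.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # advance past length + type + data + CRC
    return out
```

Feeding the extracted "workflow" string back through json.loads gives you the same graph the Load button would restore.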
Let's embark on a journey through fundamental workflow examples. These versatile workflow templates have been designed to cater to a diverse range of projects, making them compatible with any SD1.5 checkpoint model, and they make an easy starting workflow. (Note that SD3 still has rough edges: errors may occur when generating hands, for example, and serious distortions can occur when generating full-body characters.)

For unCLIP, see the following workflow for an example, and the next workflow for how to mix multiple images together; you can find the input image for these workflows on the unCLIP example page. Here is how you use it in ComfyUI (you can drag the example into ComfyUI to get the workflow): noise_augmentation controls how closely the model will try to follow the image concept — the lower the value, the more it will follow it. For AuraFlow, download aura_flow_0.safetensors and put it in your ComfyUI/checkpoints directory.

For some workflow examples, and to see what ComfyUI can do, check out the ComfyUI examples and the installation guide. ComfyUI's features include a node/graph/flowchart interface for experimenting with and creating complex Stable Diffusion workflows without needing to code anything. For legacy purposes, the old main branch has been moved to the legacy branch. There is also the new ComfyUI Launcher.

This guide covers the following topics: examples of ComfyUI workflows; the ControlNets for Flux released by XLab and InstantX + Shakker Labs (try another example and observe its output); and referencing external models. If you have a previous installation of ComfyUI with models, or would like to use models stored in an external location, you can use the extra_model_paths method to reference them instead of re-downloading them. This should update, and it may ask you to click Restart.
ComfyUI is a node-based workflow manager that can be used with Stable Diffusion. For more advanced workflows, in this example we will be using this image. You can also run any ComfyUI workflow with zero setup (free and open source).

These are examples demonstrating how to use LoRAs and hypernetworks. Hypernetworks are patches applied on the main MODEL, so to use them, put them in the models/hypernetworks directory and use the Hypernetwork Loader node. Here is a basic example of how to use it; as a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow.

I am very interested in shifting from automatic1111 to working with ComfyUI. I have seen a couple of templates on GitHub and some more on Civitai — can anyone recommend the best source for ComfyUI templates? Is there a good set for doing standard tasks from automatic1111? Is there a version of Ultimate SD Upscale that has been ported to ComfyUI?

Despite significant improvements in image quality, details, understanding of prompts, and text content generation, SD3 still has some shortcomings. As of writing this, there are two image-to-video checkpoints.

This repo contains examples of what is achievable with ComfyUI; by examining key examples, you'll gradually grasp the process of crafting your own unique workflows. This is the input image that will be used in this example: here is an example using a first pass with AnythingV3 with the ControlNet, and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3), using their VAE. Here is a link to download pruned versions of the supported GLIGEN model files. There are also 3D examples (the Stable Zero123 ComfyUI workflow) and the Tenofas v3 workflow.
Put upscale models in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.

Inpainting examples: inpainting a cat with the v2 inpainting model, and inpainting a woman with the same model. It also works with non-inpainting models, and you can use similar workflows for outpainting. There is also a basic txt2img workflow with hires-fix plus face detailer; I'm not sure why it wasn't included in the image details, so I'm uploading it here separately.

ComfyUI is a node-based interface to use Stable Diffusion, which was created by comfyanonymous in 2023. In this workflow-building series, we'll learn added customizations in digestible chunks, synchronous with our workflow's development, one update at a time. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. These are examples demonstrating how to do img2img (you can load them into ComfyUI to get the workflow). Be sure to check the trigger words before running the workflow. This is how the following image was generated; I then recommend enabling Extra Options -> Auto Queue in the interface. You can load these images in ComfyUI to get the full workflow.

The optimal approach for mastering ComfyUI is by exploring practical examples. By the end of this article, you will have a fully functioning text-to-image workflow in ComfyUI built entirely from scratch. The initial set includes three templates, starting with the simple template.
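In API-format workflow JSON, that loader-plus-upscaler chain can be sketched as a plain dict. The node ids and model filename below are made up for illustration, and connections are expressed as [source_node_id, output_index] pairs:

```python
# Hypothetical API-format fragment wiring UpscaleModelLoader into
# ImageUpscaleWithModel. Treat the exact node ids and the model
# filename as illustrative assumptions, not a verbatim export.
upscale_fragment = {
    "10": {
        "class_type": "UpscaleModelLoader",
        "inputs": {"model_name": "RealESRGAN_x4.safetensors"},  # placeholder name
    },
    "11": {
        "class_type": "ImageUpscaleWithModel",
        "inputs": {
            "upscale_model": ["10", 0],  # first output of the loader node
            "image": ["12", 0],          # some upstream image-producing node
        },
    },
}

# sanity check: the upscaler consumes the loader's output
assert upscale_fragment["11"]["inputs"]["upscale_model"][0] == "10"
```

Reading a fragment like this makes it clear why swapping upscale models in the UI is a one-widget change: only the loader's model_name value differs.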
For more workflow examples, and to see what ComfyUI can do, check out the Examples page. To load a workflow, simply click the Load button on the right sidebar and select the workflow .json file.

ControlNet and T2I-Adapter workflow examples: note that in these examples the raw image is passed directly to the ControlNet/T2I adapter.