ComfyUI: loading a workflow from an image (Reddit)

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that each queued gen loads image 001 from the folder, the next gen grabs image 002 from the same folder, and so on? Thanks in advance!

My ComfyUI workflow was created to solve exactly that. You can load or drag the following image into ComfyUI to get the workflow.

Img2Img works by loading an image, like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.

The diagram doesn't load into ComfyUI, so I can't test it out.

And images that are generated using ComfyBox will also embed the whole workflow, so it should be possible to just load it from an image.

As someone relatively new to AI imagery, I started off with Automatic1111, was tempted by the flexibility of ComfyUI, but felt a bit overwhelmed. With a graph like this one, for instance, you can tell it to: load this model, put these bits of text into the CLIP encoder, make an empty latent image, use the loaded model with the encoded text and the noisy latent to sample an image, and save the resulting image.

Thanks a lot for sharing the workflow. The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find that the problem has resolved itself.

Tip, for speed: you can load an image using the (clipspace) method by right-clicking on images you generate. This is what it looks like, second pic.

I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace. Drag and drop doesn't work for .json files.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

The ComfyUI/web folder is where you want to save/load .json files.

Details on how to use the workflow are in the workflow link. Have fun.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

I've been using ComfyUI for nearly a year, during which I've accumulated a significant number of images in my input folder through the Load Image node. Unfortunately, the file names are often unhelpful for identifying the contents of the images.

It's simple and straight to the point.

So, I just made this workflow in ComfyUI. My goal is that I start the ComfyUI workflow, and it loads the latest image in a given directory and works with it.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! Basically, if you have a really good photo but no longer have the workflow used to create it, you can just load the image and it'll load the workflow.

It will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated).

Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. Your efforts are much appreciated.
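For the folder question at the top (and the loader that "picks the next image when generated"), the behavior comes down to a sorted directory listing plus an index that advances with each queued gen. A minimal Python sketch of the idea; the folder path is made up, and real nodes track the advancing index for you:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def nth_image(folder: str, index: int) -> Path:
    """Return image number `index` in name order, wrapping around at the end."""
    files = sorted(p for p in Path(folder).iterdir()
                   if p.suffix.lower() in IMAGE_EXTS)
    if not files:
        raise FileNotFoundError(f"no images found in {folder}")
    return files[index % len(files)]

print(nth_image("input/batch", 0))  # first queued gen -> 001
print(nth_image("input/batch", 1))  # next queued gen  -> 002
```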
It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever you like.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

You can save the workflow as a .json file and load it again from that file.

AP Workflow v5.0 includes a number of experimental functions.

Then I fix the seed to that specific image and use its latent in the next step of the process. In either case, you must load the target image in the I2I section of the workflow.

Load Image List From Dir (Inspire).

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img.

After borrowing many ideas and learning ComfyUI, the one I've been mucking around with now includes poses (from OpenPose), and I'm going to off-screen all nodes that I don't actually change parameters on.

I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas they simply drop PNGs into the empty ComfyUI.

It's nothing spectacular but gives good, consistent results.

Starting workflow.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

These are examples demonstrating how to do img2img.

I'm not really checking my notifications.

In 1111, using image-to-image, you can batch load all frames of a video, batch load ControlNet images, or even masks, and as long as they share the same name as the main video frames, they will be associated with the image when batch processing.

I hope you like it.

[DOING] Clone public workflows by Git and load them more easily.

I am trying to understand how it works and created an animation morphing between 2 image inputs.

No need to put in an image size, and it has a 3-stack LoRA with a Refiner.

This causes my steps to take up a lot of RAM, and the run ends up getting killed.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

I tried the load methods from Was-nodesuite-comfyUI and ComfyUI-N-Nodes, but they seem to load all of my images into RAM at once. Now the problem I am facing is that it starts already morphed between the 2, I guess because it happens so quickly.

However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI.

My 2nd attempt: I thought to myself, I will go as basic and as easy as possible. I will limit myself to only large, popular models and stick to basic ComfyUI nodes as much as possible, meaning I have no custom nodes except for Manager and Workflow Spaces, that's it.

This workflow allows you to load images of an AI Avatar's face, shirt, pants and shoes plus a pose, and generates a fashion image based on your prompt. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. Enjoy.

Just load your image and prompt, and go.

If it's a .json file, go to the file location and open it that way.

The image you're trying to replicate should be plugged into pixels, and the VAE for whatever model is going into the KSampler should also be plugged into the VAE Encode.
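On the "loads all of my images into RAM at once" problem, the difference is eager versus lazy loading. A rough Python sketch of the pattern, with an assumed folder name; this is not the actual internals of those node packs:

```python
from pathlib import Path
from PIL import Image

def frames_eager(folder: str) -> list:
    # Decodes every frame up front: memory grows with the frame count.
    return [Image.open(p).copy() for p in sorted(Path(folder).glob("*.png"))]

def frames_lazy(folder: str):
    # Generator: only one decoded frame is alive at a time.
    for p in sorted(Path(folder).glob("*.png")):
        with Image.open(p) as im:
            yield im.convert("RGB")

for frame in frames_lazy("frames"):
    frame.thumbnail((512, 512))  # stand-in for whatever per-frame work you do
```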
The prompt for the first couple, for example, is this:

Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses a SD1.5 checkpoint in combination with a Tiled ControlNet to feed an Ultimate SD Upscale node for a more detailed upscale.

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, Upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and clipseg awesomeness, many more.

Browse and manage your images/videos/workflows in the output folder.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

And you need to drag them into an empty spot, not a Load Image node or something.

A search of the subreddit didn't turn up any answers to my question.

Get Started with ComfyUI - Drag and Drop Workflows from an Image! (Run Diffusion)

Hi all! Was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data, like prompts, steps, sampler, etc., and spit it out in some shape or form.

I have like 20 different ones made in my "web" folder, haha.

I can load the ComfyUI through 192.168.1:8188, but when I try to load a flow through one of the example images, it just does nothing. I can load workflows from the example images through localhost:8188; this seems to work fine.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

This is just a simple node built off what's given and some of the newer nodes that have come out.

This will open the live painting thing you are looking for.

This workflow chains together multiple IPAdapters, which allows you to change one piece of the AI Avatar's clothing individually.

Is there a way to load each image in a video (or a batch) to save memory?

You need to load and save the edited image.

So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise?

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. Thanks.

This is the node you are looking for.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

I thought it was cool anyway, so here.

If this is what you are seeing when you go to choose an image in the image loader, then all you need to do is go to that folder and delete the ones you no longer need.

Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow.

Pretty comfy, right? ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion.

Flux Schnell is a distilled 4-step model.

I had to load the image into the mask node after saving it to my hard drive.

There's a node called VAE Encode with two inputs: pixels and VAE.
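On the question above about reading the generation data from an image: the drag-and-drop loader works because ComfyUI writes the graph into the image's PNG text chunks, and those are easy to read yourself. A minimal sketch with Pillow; the file name is hypothetical, and images from other tools won't carry these keys:

```python
import json
from PIL import Image

def read_embedded_workflow(path: str):
    info = Image.open(path).info  # PNG text chunks end up in .info
    # ComfyUI stores the UI graph under "workflow" and the API-format graph under "prompt".
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

wf = read_embedded_workflow("ComfyUI_00001_.png")
print("workflow found" if wf else "no workflow metadata in this image")
```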
Sync your collection everywhere by Git.

Are you referring to the Input folder in the ComfyUI installation folder? ComfyUI runs as a server, and the input images are 'uploaded'/copied into that folder.

To be fair, I ran into a similar issue trying to load a generated image as an input image for a mask, but I haven't exhaustively looked for a solution.

I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP text encoder as indicated on the diagram I have here from the GitHub page.

I have to second the comments here that this workflow is great.

Hello there. A quick question for people with more experience with ComfyUI than me.

Workflows can only be loaded from images that contain the actual workflow metadata created by ComfyUI and stored in each image ComfyUI creates.

The images above were all created with this method.

They are completely separate from the main workflow.

Get a quick introduction about how powerful ComfyUI is. Hidden Faces.

That image would have the complete workflow, even with 2 extra nodes.

I want to load an image in ComfyUI and have the workflow appear, just as it does when I load a saved image from my own work.

Ensure that you use this node and not Load Image Batch From Dir.

Maybe a useful tool to some people.

Images created with anything else do not contain this data.

If you are still interested: basically, I added 2 nodes to the workflow of the image (image load and save image).

All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else.

Load your image to be inpainted into the mask node, then right-click on it and go to edit mask.

I'm sorry, I'm not at the computer at the moment or I'd get a screen cap.

The graph that contains all of this information is referred to as a workflow in Comfy.

Is there a common place to download these? None of the reddit images I find work, as they all seem to be jpg or webp.

Add your workflows to the collection so that you can switch and manage them more easily.

Pro-tip: insert a WD-14 or a BLIP Interrogation node after it to automate the prompting for each image.

Made this while investigating the BLIP nodes: it can grab the theme off an existing image, and then, using concatenate nodes, we can add and remove features. This lets us load old generated images as a part of our prompt without using the image itself as img2img. This is like copy-paste, basically, and doesn't save the files to disk.

I have a video and I want to run SD on each frame of that video.

Notice that Face Swapper can work in conjunction with the Upscaler.

About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it puts in the wrong positive prompt.

I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a wkfl-embedded PNG into ComfyUI.

Ending Workflow.

You can load these images in ComfyUI to get the full workflow. Those images have to contain a workflow, so one you've generated yourself, for example.
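For that sorting complaint (and the earlier goal of grabbing the latest image in a directory), ordering by modification time is easy to do outside the graph. A small sketch; the folder path is an assumption, and this is plain Python rather than an option those packs expose:

```python
from pathlib import Path

def images_by_mtime(folder: str):
    files = [p for p in Path(folder).iterdir()
             if p.suffix.lower() in {".png", ".jpg", ".webp"}]
    return sorted(files, key=lambda p: p.stat().st_mtime)  # oldest -> newest

# The last entry is the most recently written image.
print(images_by_mtime("ComfyUI/input")[-1])
```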
Also notice that you can download that image and drag'n'drop it into your ComfyUI to load that workflow, and you can also drag'n'drop images onto a Load Image node to load them quicker. And another general difference is that A1111, when you set 20 steps and 0.8 denoise, won't actually run 20 steps but rather decreases that amount to 16 (see the quick check at the end of this page).

=== How to prompt this workflow ===

Main Prompt
-----
The subject of the image in natural language.
Example: a cat with a hat in a grass field

Secondary Prompt
-----
A list of keywords derived from the main prompt, with references to artists at the end.
Example: cat, hat, grass field, style of [artist name] and [artist name]

Style and References
-----
It is necessary to give the last generated image, as it loads the image locally.

Any ideas on this?

Nobody needs all that, LOL.

It animates 16 frames and uses the looping context options to make a video that loops. That's how I made and shared this.

You need to select the directory your frames are located in (i.e. where you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager and then test a vanilla default workflow.

How to solve the problem of looping? I had an idea to just write an analog of a two-in-one Save Image / Load Image node that would save the last result to a file and then output it at the next rendering queue.

Hey all - I'm attempting to replicate my workflow from 1111 and SD1.5 by using XL in Comfy.

This workflow can use LoRAs, ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.

Initial Input block.

I can't load workflows from the example images using a second computer.
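And the quick check promised earlier, for the A1111 steps-times-denoise behavior:

```python
# With denoise < 1, only the tail of the step schedule is actually sampled.
steps, denoise = 20, 0.8
print(int(steps * denoise))  # -> 16 of the 20 requested steps
```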