KSampler (Efficient): Reddit notes
In theory, nodes can be 'colorized' into levels, which would enable parallel execution, but the litegraph library doesn't colorize that way.

Actually, on second thought, I don't even know if the KSampler (Efficient) seed should stay at -1, because I always use the rgthree seed node: I convert the KSampler seed to an input and hook it up, and the rgthree seed node sometimes has a bug where the random seed switches to fixed on every queue.

Welcome to the unofficial ComfyUI subreddit.

First KSampler: steps 14, cfg 8.

Out of interest, is there a reason you're effectively denoising the image three times? (Once in your first simple KSampler, then twice in the Efficient KSampler: the node itself will do one denoising pass first and then at least one more due to the use of the HiRes fix.)

I can get it to show a live preview in the KSampler, and I did have some OOM errors before but not anymore. However, it's impossible for me to make it show in other nodes such as Ultimate SD Upscaler.

Efficient Loader: a combination of common initialization nodes.

Are you saying you want to use a different checkpoint for the upscale than the one you use to make the first image? If so, you can follow the high-res example from the GitHub.

Since adding endless LoRA nodes tends to mess up even the simplest workflow, I'm looking for a plugin with a LoRA stacker node. Do you have any suggestions?

Extension: Efficiency Nodes for ComfyUI Version 2.0+

Oct 4, 2024 · KSampler (Efficient): a modded KSampler with the ability to preview/output images and run scripts.

Normally this would just be a git submodule.
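The level-colorizing idea mentioned above can be sketched as a longest-path level assignment over the node graph. This is an illustrative sketch only, not litegraph's actual implementation; the mini-graph at the bottom is hypothetical:

```python
from collections import deque

def level_colorize(edges, num_nodes):
    """Assign each node a 'level' (color) equal to the longest path from
    any source node. Nodes sharing a level have no dependencies on each
    other and could, in principle, be executed in parallel."""
    indegree = [0] * num_nodes
    children = [[] for _ in range(num_nodes)]
    for src, dst in edges:
        children[src].append(dst)
        indegree[dst] += 1
    level = [0] * num_nodes
    queue = deque(n for n in range(num_nodes) if indegree[n] == 0)
    while queue:
        n = queue.popleft()
        for c in children[n]:
            level[c] = max(level[c], level[n] + 1)
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    return level

# Hypothetical mini-graph: loader -> (text encode, vae) -> ksampler
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(level_colorize(edges, 4))  # [0, 1, 1, 2]
```

Everything in level 1 could run concurrently; today's execution model instead walks the graph one node at a time.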
Using the default KSampler I can use the Select From Batch node to pick one image from a batch to generate, but I can't seem to find a way to do that with the Efficient Loader, since it lacks a latent input to attach to.

Well, I got into img2img last week, which made me switch back to the regular KSampler for simplified denoising, and then I got into Turbo just to see how fast it was.

The image created is flat, devoid of details and nuances, as if it were cut out or vector-based. The Tiled KSampler forces the generation to produce a seamless tile, but it changes the aesthetics considerably.

KSampler (Efficient), HiRes Fix, ReActor faceswap, Pretext (prompt box), ControlNet Stacker: I suspect the Efficiency node is the main issue, as I read that it may control other nodes that seem to be failing to update for me.

I used the ControlNet extension and the Realistic Vision checkpoint, and it keeps giving me this error: "AttributeError: 'NoneType' object has no…"

Its inherent efficiency makes it an ideal choice for applications requiring quick turnaround, or for running ComfyUI online via cloud-based setups that utilize services such as ComfyAI Run.

I noticed that the Efficient KSampler entries were out of whack when I first loaded the workflow (my nodes might be slightly newer), but aside from choosing a different VAE and model, I don't think I changed anything.

This works for both Schnell and Dev.

There is also Kohya's HiresFix node, which provides a way to generate 1024x1024 images (using SD1.5 models) without weird artifacts and extra limbs.
The simplest configuration to have a working XY Plot is to use the new Efficient Loader and Efficient KSampler nodes, part of the Efficiency Nodes suite.

So, I pretty much hacked it into place in the middle of the SDXL workflow, just as a test, and while the Efficient Loader and KSampler nodes are really convenient - to the point that I'll probably make my own SDXL workflow using them - I still can't figure out how to make it do what I want.

For example, I'm doing an img2img with a denoise of 0.5 on 20 steps. Thanks - I'll replicate this when I get home tonight.

The output of the node goes to the positive input on the KSampler.

On the regular KSampler there is a nice denoise option (I'm trying to do img2img, btw), but because I'm using SDXL and need the 1st pass to be…

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

"Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.…" Maybe it will get fixed later on; it works fine with the mask nodes.

I can only get the seed of the KSampler to randomize once per queued generation. When doing batches/repeated processes during a single queued generation, how can I make the seed change with each batched iteration?

What those nodes are doing is inverting the mask to then stitch the rest of the image back into the result from the sampler.
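The XY Plot setup described above boils down to enumerating every combination of two parameter axes and running one sampler job per grid cell. A minimal sketch; the parameter names and values here are hypothetical, not the node's actual field names:

```python
from itertools import product

def xy_plot_jobs(x_name, x_values, y_name, y_values, base_settings):
    """Enumerate one sampler job per (x, y) cell of the plot grid."""
    jobs = []
    for x, y in product(x_values, y_values):
        settings = dict(base_settings)  # copy so cells don't share state
        settings[x_name] = x
        settings[y_name] = y
        jobs.append(settings)
    return jobs

# Hypothetical axes: CFG on X, seed on Y
jobs = xy_plot_jobs("cfg", [7.0, 8.0], "seed", [1, 2, 3], {"steps": 20})
print(len(jobs))   # 6 cells in a 2x3 grid
print(jobs[0])     # {'steps': 20, 'cfg': 7.0, 'seed': 1}
```

The node then lays the resulting images out on a grid labeled by the two axes.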
Anyone ready to help or give support is welcome. I am revamping and recycling all the nodes; some are still under edit and will be updated in the coming week.

Jan 17, 2025 · KSampler Settings.

The one with the denoise factor has no option to return with leftover noise.

KSampler Efficient / AnimateDiff broken after "Update All" in Manager. It changes the image too much and often adds mutations.

GPU is at 100%, yet I know it's not doing anything because the fan is not spinning (any image generation causes it to hit max right away).

There is nothing special about it - KSampler algorithms are the same for most (if not all) nodes. The KSampler (Efficient) node is designed to let users perform sampling with minimal latency and computational demands.

A seed is just a number, but it plays a crucial role in image generation.

I set it to either fixed or random before converting - not working in either case.

KSampler (Efficient), GMFSS Fortuna VFI, ConditioningSetMaskAndCombine, GrowMaskWithBlur, INTConstant - nodes that have failed to load will show as red on the graph.

Well, this KSampler node doesn't have a "Denoise" factor.

Do you have the ComfyUI Manager installed? If so, it will be in the main menu when you open ComfyUI in your browser.

In your case you may want to stop your first KSampler at step 14 and continue in a new one from step 15 to 20 to finish the picture. It would be cool to be able to have your KSampler with, let's say, 30 steps. The results are a bit different, but I would not say they are better, just a bit different.

By posting it here I hope to find a solution that might… I was using the Efficiency node, and it allows the generated images in the step process to be viewed.

That, plus how complicated the advanced KSampler is, made working in latent space too frustrating.
In general the aesthetics are very simple and far from what the chosen model would produce with another KSampler.

Is there a difference in how these official ControlNet LoRA models are created vs the ControlLoraSave in Comfy? I've been testing different ranks derived from the diffusers SDXL ControlNet depth model, and while the different-rank LoRAs seem to follow a predictable trend of losing accuracy with fewer ranks, all of the derived LoRA models, even up to rank 512, are substantially different from the full model.

Node that allows users to specify parameters for the Efficiency KSamplers to plot on a grid.

You can prove this to yourself by taking your positive and negative prompts and switching them, then running that through a KSampler with negative [whatever your initial CFG was].

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger.

The nodes on the top for the mask shenanigans are necessary for now; the Efficient KSampler seems to ignore the mask for the VAE part.

When you run a normal KSampler with step 20, you ask the sampler to denoise the image from a noisy image to what it thinks should be the clean image in 20 steps.

It only appears if I do the following every single time I want to generate an image: click "Queue Prompt" -> click on "Manager" -> click "Preview method" -> change it -> click on it again and change it.

Start with the HighResFix script of KSampler (Efficient); that is close to A1111's HiResFix.
In my opinion, this approach is the "proper" way to generate a "batch" of images that can be individually reproducible.

If you do so, the entire Refiner section goes away, and so do the switches that you need to configure to use the Refiner.

[NOTE: This node was originally created by LucianoCirino, but the original repository is no longer maintained and has been forked by a new maintainer.]

Keep in mind that when using an acyclic graph-based UI like ComfyUI, usually one node is being executed at a time.

Oh my goodness, I've been wrestling with this for a few days; I even tried to post this question with the exact same copy-and-paste text, but Reddit decided to block my posts! I can't even delete the account; I had to make a new one.

Similarly, I think the VAE is also different, such that you can't just pass it through.

A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count.

KSampler(Efficient) Warning: No vae input detected, proceeding as if vae…

A node that gives the user the ability to upscale KSampler results through a variety of different methods.

So is there another way to view the images being generated through the steps? Side question: it seems ComfyUI can't do inpainting?
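The "individually reproducible batch" approach mentioned above can be sketched as queueing one single-image job per seed instead of one job with a batch size. The field names here are hypothetical, chosen only for illustration:

```python
def sequential_seed_batch(base_seed, batch_size):
    """Instead of one 'batch size' job sharing a single seed, emit one
    job per image with seed+0, seed+1, ... so that any individual image
    can be regenerated later from its own known seed."""
    return [{"seed": base_seed + i, "batch_size": 1} for i in range(batch_size)]

print(sequential_seed_batch(1234, 3))
# [{'seed': 1234, 'batch_size': 1}, {'seed': 1235, 'batch_size': 1}, {'seed': 1236, 'batch_size': 1}]
```

With a plain batch of 3, only the batch as a whole is reproducible; with sequential seeds, each image carries its own.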
Here's how it works: I'm not releasing this workflow because there are a million issues that I'd have to fix, and I don't have the time right now. The KSampler (Efficient) then adds these integers to the current seed, resulting in image outputs for seed+0, seed+1, and seed+2.

If you want an alternative that works, you should try the SDXL nodes from the Efficiency Nodes pack or TinyTerraNodes.

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

TL;DR: Instead of relying on the "batch size" feature, send a sequential list of seeds.

KSampler and Advanced are both giving me errors I'd not seen before until this morning.

Also, I think the Comfy devs need to figure out some sort of unit testing; maybe we as a group create a few templates with the Efficiency pack, and then before pushing out changes they could be run as a test to see what breaks.

Dec 16, 2024 · Given that end_at_step >= steps, a KSampler Advanced node will denoise a latent in exactly the same way a KSampler node would with a denoise setting of: denoise = (steps - start_at_step) / steps.

A collection of nodes that allows users to specify parameters for the KSampler (Efficient) to plot on a grid.

Just doing some refinement in a regular KSampler. SDXL most definitely doesn't work with the old ControlNet.
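The denoise equivalence quoted above is easy to check numerically. A small sketch (the function name is ours, the formula is from the note):

```python
def advanced_equivalent_denoise(steps, start_at_step):
    """Denoise value a plain KSampler needs in order to match a
    KSampler Advanced run that starts at `start_at_step` and runs to
    the end of the schedule (end_at_step >= steps)."""
    return (steps - start_at_step) / steps

# Entering a 20-step schedule at step 15 behaves like denoise 0.25
print(advanced_equivalent_denoise(20, 15))  # 0.25
# Starting at step 0 is a full denoise
print(advanced_equivalent_denoise(20, 0))   # 1.0
```

This is also why a base-then-refiner split at step 15 of 20 is often described as "the refiner does the last 25%".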
Currently I have the Lora Stacker from Efficiency Nodes, but it works only with the proprietary Efficient KSampler node, and to make it worse the repository was archived on Jan 9, 2024, meaning it could permanently stop working with the next ComfyUI update any minute now.

You should find that the iterative mixing path generates outputs with richer background details, and it should be more faithful to the original low-resolution image.

It seems KSampler Advanced manages its own seed and is not affected even when you convert SEED/NOISE SEED to an input. I have never used it myself, but it's worth experimenting with.

For KSamplers you can just pass the latent output of one KSampler into another; just make sure to put the denoising lower in the 2nd KSampler. You can use the advanced KSampler and the SDXL refiner CLIPTextEncode, or just use custom loaders/samplers that have the functionality built in, such as the Efficiency Nodes pack.

So you can img2img, but with a denoise of 1, which isn't really helpful in most cases.

This workflow can be greatly reduced in size by using the new Efficiency Loader SDXL and Efficiency KSampler SDXL nodes, by LucianoCirino, which also support a ControlNet Stack as input.

I might be missing something, but 3 steps didn't work for me -- I got blurry, unresolved images, as normally expected. How do I debug this?

I have no idea what your discussion about CFG 0 is intended to establish.

Doing it this way makes reproducible builds a huge pain; I had to add an extra step in my build process to manually clone it to a known good commit hash just to keep that node pack from messing with my source files.

Is the KSampler the first thing to go green? Definitely no nodes before that quickly flick green before the KSampler? The seed number shown in the rgthree is the same each time?
The image generated is identical? Any clues in the command prompt window?

Now, if all is left the same, ksampler2 will overwrite the latent image from ksampler1 with its own seed, as it will assume it is receiving a blank latent, unless you tell it otherwise. If you want to select one of the results from the first KSampler and generate 4 to 8 images using i2i in the second KSampler, you can use the ImageSelector to choose an image and then use RepeatImageBatch or RepeatLatentBatch to create a batch of copies of the same image.

Your first KSampler says to denoise this image with a single step. As such, you should use the advanced KSampler to set a starting step higher than 0 (ideally around the same number as the previous KSampler ended).

The Efficient Loader has the checkpoint for the initial image being made in the KSampler.

I did a plot of all the samplers and schedulers as a test at 50 steps.

When the KSampler receives the empty latent image, it uses this seed number to create a specific pattern of noise. Using the same seed with identical settings produces the exact same image every time.

Set up a quick workflow to do the first part of the denoising process on the base model, but instead of finishing it, stop early and pass the noisy result on to the refiner to finish the process.

I did run the Manager and installed missing custom nodes, but I am still getting this.

Comes out of the box with popular neural-network latent upscalers such as Ttl's ComfyUi_NNLatentUpscale and City96's SD-Latent-Upscaler.

The Sampler also now has a new option for seeds, which is a nice feature. Can you please shed some light on it?
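The seed determinism described above can be illustrated with a stand-in noise generator (Python's `random.Random` in place of the sampler's real RNG; this is only an analogy, not ComfyUI's noise code):

```python
import random

def make_noise(seed, n=4):
    """Same seed -> same noise pattern -> same image (all else equal).
    random.Random stands in for the sampler's noise generator."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

assert make_noise(42) == make_noise(42)   # identical settings reproduce exactly
assert make_noise(42) != make_noise(43)   # a different seed gives different noise
```

The latent starts as pure seeded noise, so every downstream step is deterministic given the same settings.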
Thanks!

Once a low-res face swap is performed, you can pass the image through a lineart ControlNet and a KSampler with a low denoise.

SD1.5 and SDXL use different conditioners, so you can't just pass one to the other as far as I'm aware.

Both of them provide a lot of extra detail. I tried the Fooocus KSampler using the same prompt, same number of steps, same seed, and same samplers as with my usual workflow.

When I run the t2i models, I see no effect, as if ControlNet isn't working at all.

You can try to use the ModelMergeSimple node; it allows you to put in two models and then feed them into a single KSampler.

On any sampler node (FaceDetailer / KSampler) where I change the scheduler from a widget to an input, it won't let me attach the scheduler selector to it.

I was running some tests last night with SD1.5.

If not, here is a link to ComfyUI Manager on GitHub -- just follow the instructions on the page to install it.

Oct 27, 2023 · Any updates on moving this to the dev branch? Of the 10 or so here posting about the issue, probably hundreds are having it and just not using the nodes anymore :/

And you can have a checkpoint do the first 5 steps, then swap checkpoints to do the next, let's say, 5 or so steps, then another checkpoint for, like, the next 5. The key is that denoising depends on which step it is, so we cannot separate a 20-step process into 20 one-step processes.
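A rough sketch of what a ModelMergeSimple-style blend amounts to, assuming it is a weighted average of matching weights (an assumption about the node's exact semantics; plain floats stand in for the real weight tensors):

```python
def merge_simple(model_a, model_b, ratio):
    """Hypothetical sketch of a simple model merge: a weighted average
    of matching weights, with `ratio` as model_a's share. Real models
    hold tensors; plain floats stand in here."""
    return {k: ratio * model_a[k] + (1.0 - ratio) * model_b[k] for k in model_a}

a = {"unet.w1": 1.0, "unet.w2": 0.0}
b = {"unet.w1": 0.0, "unet.w2": 2.0}
print(merge_simple(a, b, 0.75))  # {'unet.w1': 0.75, 'unet.w2': 0.5}
```

The merged result is a single model, so it plugs into one KSampler like any other checkpoint.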
CFG must be set to 1 in the KSampler. The performance timings for the KSampler and the Guided Sampler seem to be the same.

Impact Pack does this weird thing where it tries to git clone (!) another repo during startup.

I will share a workflow soon from a new custom node that implements the Iterative Mixing KSampler.

I'm not seeing many others have this problem on Discord or Reddit, so I'm a bit lost! All of them work as expected on KSampler nodes, but not at all on KSampler Advanced, which should be used for the SDXL workflow.

I found the Flux workflow for Schnell and Dev provided by comfyanonymous to be a little complicated, so I decided to experiment with the KSampler, and I am getting identical results.

The new update to Efficiency added a bunch of new nodes for XY Plotting, and you can add inputs on the fly.

I was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

This will keep the shape of the swapped face and increase the resolution of the face.

To upscale 4x well with the Iterative Mixing KSampler node, do this: generate your initial image at 512x512 (for SD1.5, or 1024 for XL models); use NNLatentUpscale to double the latent resolution; run this through the Iterative Mixing KSampler at full strength (1.0 denoise); then KSample the outputs at perhaps denoise=0.…

Hi, I understand that the Efficiency node pack's XY Plot functionality enables automatic variation and testing of parameters such as "CFG", "Seeds", and "Checkpoints" within the KSampler (Efficiency). But the Efficiency nodes are not working anymore, even though I have them installed.

I've used these workflows for months without issue, but now any time I import one of my previous workflows using previous outputs, they fail on the scheduler inputs.

Using the Iterative Mixing KSampler to noise up the 2x latent before passing it to a few steps of refinement in a regular KSampler.
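The upscale recipe above leans on the fact that SD-style VAEs downscale images 8x into latent space, so a 2x latent upscale doubles the decoded pixel size. A sketch of the sizes for one 2x stage (repeating the stage gives the full 4x); the function and field names are ours:

```python
VAE_FACTOR = 8  # SD-style VAEs map 8x8 pixel blocks to one latent cell

def upscale_plan(base_px, latent_scale):
    """Pixel/latent sizes for: base render -> latent upscale -> decode."""
    base_latent = base_px // VAE_FACTOR
    up_latent = base_latent * latent_scale
    return {
        "base_pixels": base_px,                    # e.g. 512 for SD1.5
        "base_latent": base_latent,                # 64
        "upscaled_latent": up_latent,              # 128 after a 2x latent upscale
        "decoded_pixels": up_latent * VAE_FACTOR,  # 1024
    }

print(upscale_plan(512, 2))
# {'base_pixels': 512, 'base_latent': 64, 'upscaled_latent': 128, 'decoded_pixels': 1024}
```

Working at the latent resolution is why NNLatentUpscale is cheap compared to decoding, upscaling in pixel space, and re-encoding.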
The "artifacts" you get in your example are from the double generation: what you're doing there is generating a new image on top of the existing one, not continuing to build on the existing one.

The Efficiency Nodes updates and new improvements are now working, and you can check them in the forked branch repository.