ComfyUI implementation of https://github.com/layerdiffusion/LayerDiffuse.
Download the repository and unpack it into the `custom_nodes` folder in your ComfyUI installation directory, or clone it with Git, starting from the ComfyUI installation directory:
cd custom_nodes
git clone git@github.com:huchenlei/ComfyUI-layerdiffuse.git
Run `pip install -r requirements.txt` to install the Python dependencies. You might hit a version conflict on diffusers if other extensions depend on different versions of it; in that case, it is recommended to set up a separate Python venv.
If you want more control, such as getting the RGB image and the alpha channel mask separately, you can use this workflow.
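The same split can also be done outside the graph. A minimal sketch with Pillow and NumPy (the helper name and the synthetic input are illustrative, not part of this repo):

```python
import numpy as np
from PIL import Image

def split_rgba(rgba: Image.Image):
    """Split an RGBA image into an RGB image and an alpha-channel mask."""
    arr = np.array(rgba.convert("RGBA"))
    rgb = Image.fromarray(arr[..., :3], mode="RGB")      # color channels only
    alpha = Image.fromarray(arr[..., 3], mode="L")       # transparency as a grayscale mask
    return rgb, alpha

# Synthetic example: a 4x4 half-transparent red square
img = Image.new("RGBA", (4, 4), (255, 0, 0, 128))
rgb, alpha = split_rgba(img)
```

The alpha mask can then be fed to downstream nodes (e.g. compositing or inpainting) independently of the RGB image.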
The Forge implementation's sanity check sets `Stop at` to 0.5 to get a better-quality background.
This workflow might be inferior compared to other object-removal workflows.
In the SD Forge implementation, there is a `stop at` parameter that determines when layer diffuse should stop in the denoising process. Under the hood, this parameter unapplies the LoRA and the c_concat cond after a certain step threshold. This is hard and risky to implement directly in ComfyUI, as it requires manually loading a model that has every change except the layer diffusion change applied. A workaround in ComfyUI is to run another img2img pass on the layer diffuse result to simulate the effect of the `stop at` parameter.
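Conceptually, the Forge behavior amounts to a sampling loop that switches from the patched model back to the base model once the step fraction passes `stop at`. A toy sketch of that idea (the model callables here are hypothetical stand-ins, not ComfyUI or Forge APIs):

```python
def sample_with_stop_at(base_model, patched_model, x, steps, stop_at=0.5):
    """Toy denoising loop: use the layer-diffuse patched model for the first
    stop_at fraction of steps, then fall back to the base model, i.e. the
    LoRA and c_concat cond are no longer applied."""
    for step in range(steps):
        model = patched_model if step / steps < stop_at else base_model
        x = model(x, step)
    return x

# Toy "models" that just record which model handled each step
trace = []
base = lambda x, s: trace.append(("base", s)) or x
patched = lambda x, s: trace.append(("patched", s)) or x
sample_with_stop_at(base, patched, x=0, steps=4, stop_at=0.5)
# With steps=4 and stop_at=0.5, steps 0-1 go to the patched model
# and steps 2-3 to the base model.
```

The img2img workaround approximates this by finishing the first pass entirely with the patched model, then re-denoising the result with the unpatched model.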
Combines the previous workflows to generate the blended image and the foreground given a background. We found some color variation in the extracted foreground; we need to confirm with the layer diffusion authors whether this is expected.
- Currently only SDXL is supported. See https://github.com/layerdiffuse/sd-forge-layerdiffuse#model-notes for more details.
- Foreground conditioning
- Background conditioning
- Blended + foreground => background
- Blended + background => foreground