Run ComfyUI workflows on Replicate:
https://replicate.com/fofr/any-comfyui-workflow
We recommend:
- trying it with your favorite workflow and making sure it works
- writing code to customise the JSON you pass to the model, for example changing seeds or prompts (see the sketch below)
- using the Replicate API to run the workflow
TLDR: json blob -> img/mp4
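For example, here's a minimal sketch using the Replicate Python client to run the model with a customised workflow. The `workflow_json` input name and the node IDs are assumptions – check the model page and your own API JSON:

```python
import json
import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Tweak inputs programmatically. Node IDs ("3", "6") are from a typical
# txt2img workflow and will differ in yours.
workflow["3"]["inputs"]["seed"] = 42
workflow["6"]["inputs"]["text"] = "a photo of a llama in a library"

# You may need to pin a version: "fofr/any-comfyui-workflow:<version>"
output = replicate.run(
    "fofr/any-comfyui-workflow",
    input={"workflow_json": json.dumps(workflow)},
)
print(output)  # URL(s) of the generated image or video
```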
We've tried to include many of the most popular model weights:
View the list of supported weights
The following custom nodes are also supported; each is pinned to a specific commit:
- ComfyUI Advanced ControlNet
- ComfyUI AnimateDiff Evolved
- ComfyUI BRIA AI RMBG
- ComfyUI Comfyroll CustomNodes
- ComfyUI Controlnet Aux
- ComfyUI Essentials
- ComfyUI FizzNodes
- ComfyUI Frame Interpolation
- ComfyUI HyperSDXL1StepUnetScheduler
- ComfyUI Impact Pack
- ComfyUI Inspire Pack
- ComfyUI InstantID
- ComfyUI IPAdapter Plus
- ComfyUI KJNodes
- ComfyUI LayerDiffuse
- ComfyUI Logic
- ComfyUI Nodes for External Tooling
- ComfyUI PhotoMaker Plus
- ComfyUI TinyTerra Nodes
- ComfyUI UltimateSDUpscale
- ComfyUI VideoHelperSuite
- comfyui-reactor-node
- comfyui_segment_anything
- Derfuu ComfyUI ModdedNodes
- Efficiency Nodes ComfyUI
- masquerade-nodes-comfyui
- WAS Node Suite
Raise an issue to request more custom nodes or models, or use this model as a template to roll your own.
You’ll need the API version of your ComfyUI workflow. This is different from the commonly shared JSON version; it does not include visual information about nodes, layout, etc.
To get your API JSON:
- Turn on the "Enable Dev mode Options" from the ComfyUI settings (via the settings icon)
- Load your workflow into ComfyUI
- Export your API JSON using the "Save (API format)" button
Video demo: comfyui-save-workflow.mp4
If your model takes inputs, like images for img2img or controlnet, you have three options:

Option 1: Use a URL

Modify your API JSON file to point at a URL:

```diff
- "image": "/your-path-to/image.jpg",
+ "image": "https://example.com/image.jpg",
```
Option 2: Upload a single input

You can also upload a single input file when running the model. This file will be saved as `input.[extension]` – for example `input.jpg`. It'll be placed in the ComfyUI `input` directory, so you can reference it in your workflow with:

```diff
- "image": "/your-path-to/image.jpg",
+ "image": "input.jpg",
```
Option 3: Upload a zip or tar of your inputs

If you have multiple inputs, you can upload them as a zip (or tar) file. These will be downloaded and extracted to the input directory. You can then reference them in your workflow based on their relative paths.

So a zip file containing:

- my_img.png
- references/my_reference_01.jpg
- references/my_reference_02.jpg

Might be used in the workflow like:

```json
"image": "my_img.png",
...
"directory": "references",
```
With all your inputs updated, you can now run your workflow.

Some workflows save temporary files, for example pre-processed controlnet images. You can also return these by enabling the return_temp_files option.
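For example, building on the client sketch above:

```python
import replicate

workflow_json = open("workflow_api.json").read()

output = replicate.run(
    "fofr/any-comfyui-workflow",
    input={
        "workflow_json": workflow_json,
        "return_temp_files": True,  # also return e.g. pre-processed controlnet images
    },
)
```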
Clone this repository:

```sh
git clone --recurse-submodules https://github.com/fofr/cog-comfyui.git
```

Run the following script to install all the custom nodes:

```sh
./scripts/clone_plugins.sh
```
- GPU Machine: Start the Cog container and expose port 8188:

  ```sh
  sudo cog run -p 8188 bash
  ```

  Running this command starts up the Cog container and lets you access it.

- Inside Cog Container: Now that we have access to the Cog container, we start the server, binding to all network interfaces:

  ```sh
  cd ComfyUI/
  python main.py --listen 0.0.0.0
  ```

- Local Machine: Access the server using the GPU machine's IP and the exposed port (8188): http://<gpu-machines-ip>:8188

When you go to http://<gpu-machines-ip>:8188 you'll see the classic ComfyUI web form!
- Custom stuff:

  Install all pip requirements:

  ```sh
  ./scripts/install_requirements.sh
  ```

  Install all the checkpoints:

  ```sh
  python scripts/get_weights.py ./all_weights.txt
  ```