koi is an open source plug-in for Krita that allows you to use AI to accelerate your art workflow!
In the interest of getting the open source community on board, I have released this plug-in early. In its current state you may run into issues (particularly during the setup process). If you do, I encourage you to open an issue here on GitHub and describe the problem so that it can be fixed for you and others!
The goal of this repository is to serve as a starting point for building increasingly useful tools for artists of all levels of experience to use.
Link to the original Twitter thread
This plug-in serves as a working example of how new A.I. models like Stable Diffusion can lower the barrier of entry to art so that anyone can enjoy making their dreams a reality!
Because this is an open source project I encourage you to try it out, break things, and come back with suggestions!
If you are new to git, or get stuck during the installation process, Lewington made a nice step-by-step video.
The easiest way to get started is to follow the plug-in installation process for Krita, then use the Google Colab backend server (button at the top of this README). This should give you a good introduction to the setup process and get you up and running fast!
- Step 1: Find your operating system's `pykrita` folder (reference)
- Step 2: Clone the repository, and copy the `koi` folder and `koi.desktop` to `pykrita`, as sketched below. (Restart Krita now if it is open.)
- Step 3: Open Krita and navigate to the Python plug-in menu (reference)
- Step 4: Enable the `koi` plugin and restart Krita to load the plug-in.
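A minimal sketch of Step 2 on Linux, assuming Krita's default resource location (`~/.local/share/krita/pykrita`) and that `koi/` and `koi.desktop` sit at the root of a fresh clone; the `pykrita` path differs on Windows and macOS, so substitute the folder you found in Step 1:

```bash
# Clone the plug-in repository.
git clone https://github.com/nousr/koi.git

# Copy the plug-in folder and its .desktop file into Krita's pykrita folder.
# The destination below is an assumption based on Krita's default Linux
# resource location; adjust it to the folder you found in Step 1.
cp -r koi/koi ~/.local/share/krita/pykrita/
cp koi/koi.desktop ~/.local/share/krita/pykrita/
```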
- Step 0: Ensure you have a GPU-accelerated installation of `pytorch`. (You can skip this step if you are using Colab or already have it.)
  - Follow the installation instructions on pytorch's official getting started page.
- Step 1: Get the latest version of HuggingFace's `diffusers` from source by going to the GitHub repo.
  - From here you can clone the repo with `git clone https://github.com/huggingface/diffusers.git` and `cd diffusers` to move into the directory.
  - Install the package with `pip install -e .`
- Step 2: Install this package! I recommend moving out of the diffusers folder if you haven't already (e.g. `cd ..`).
  - `git clone https://github.com/nousr/koi.git`, then `cd koi` and `pip install -e .` (the full sequence for Steps 0-2 is sketched below).
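Putting Steps 0-2 together, here is a hedged end-to-end sketch. The `torch.cuda.is_available()` check is only a quick sanity test that your `pytorch` install is GPU-accelerated; pick the actual install command for your CUDA version from pytorch's getting started page.

```bash
# Sanity check: should print "True" on a GPU-accelerated pytorch install.
python -c "import torch; print(torch.cuda.is_available())"

# Install HuggingFace's diffusers from source.
git clone https://github.com/huggingface/diffusers.git
cd diffusers
pip install -e .

# Move back out of the diffusers folder and install koi the same way.
cd ..
git clone https://github.com/nousr/koi.git
cd koi
pip install -e .
```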
Note 🙋

Before continuing, make sure you accept the terms of service for the `diffusers` model (link to do so here). Next, inside your terminal run the `huggingface-cli login` command and paste a token generated from here. If you don't want to repeat this step in the future, you can then run `git config --global credential.helper store`. (Note: only do this on a computer you trust.)
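As a quick reference, the authentication step looks like this; `huggingface-cli whoami` is an optional extra (not mentioned above) that simply confirms the login took effect.

```bash
# Log in with an access token generated on the Hugging Face website.
huggingface-cli login

# Optional: confirm the login worked.
huggingface-cli whoami

# Optional: cache credentials so you don't have to log in again.
# Only do this on a computer you trust.
git config --global credential.helper store
```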
- Step 3: Run the server by typing `python server.py`
  - If you did everything correctly you should see an address spit out after some time (e.g. 127.0.0.1:8888)
- Step 4: Open Krita, if you haven't already, and paste your address into the `endpoint` field of the plugin.
  - You will also need to append the actual API endpoint you are using. By default this is `/api/img2img`.
  - If you are using all of the default settings, your endpoint field will look something like this: `http://127.0.0.1:8888/api/img2img` (a quick reachability check is sketched after this list).
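If you want to sanity-check the connection before switching to Krita, the sketch below just probes the default address assumed above. The real endpoint is called by the plug-in with a POST request carrying the image and parameters, so a plain GET may well come back with an error status (e.g. 405); any HTTP response at all still confirms the server is reachable.

```bash
# Reachability check only; replace the address with the one your server printed.
curl -i http://127.0.0.1:8888/api/img2img
```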
This part is easy!
- Step 1: Create a new canvas that is 512 x 512 (px) in size and make a single-layer sketch (note: these are temporary restrictions).
- Step 2: Fill out the prompt field in the `koi` panel (default location is somewhere on the right of your screen).
- Step 3: Make any additional changes you would like to the inference parameters (strength, steps, etc.).
- Step 4: Copy and paste your server's endpoint into the associated field.
- Step 5: Click `dream`!
- What does `koi` stand for?
  - Krita Open(source) Img2Img. While support for Stable Diffusion comes first, the goal is to have this plug-in be compatible with any model!
- Why the client/server setup?
  - The goal is to make this as widely available as possible. The server can be run anywhere with a GPU (e.g. Colab), allowing those with low-powered hardware to still use the plug-in!
- I'm getting an error with "set-size"
  - This usually happens when you either forget "/api/img2img" at the end of your endpoint when copying it into the plugin, OR you have some issue with your backend server (check the output on your server for more information).
- Add colab backend example
- Flesh out UI
- Move to CompVis repo
- Add CI
- Abstract away drop-in models for the server
- Improve documentation
- Add DreamStudio API support
- Add support for arbitrary canvas size & selection-based img2img
- Add support for masks
- Offer more configuration options