forked from ddPn08/Radiata

StableDiffusionWebUI accelerated using TensorRT


Kokohachi/Lsmith

 
 


Lsmith is a fast StableDiffusionWebUI that uses TensorRT for high-speed inference.


Benchmark

(benchmark comparison image)

Screenshots

  • Batch generation


  • img2img support


Installation

Docker (all platforms, recommended) | Easy

  1. Clone the repository
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  2. Launch using Docker Compose
docker-compose up --build

Data such as models and output images are saved in the docker-data directory.

Customization

There are two Dockerfiles:

  • Dockerfile.full: builds the TensorRT plugin from source. The build can take tens of minutes.
  • Dockerfile.lite: downloads a pre-built TensorRT plugin from GitHub Releases, significantly reducing build time.

Select which Dockerfile to use by setting services.lsmith.build.dockerfile in docker-compose.yml. The default is Dockerfile.lite.
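For example, switching to the full build might look like this in docker-compose.yml (a minimal sketch; only the dockerfile key is taken from this README, the surrounding keys are assumed):

```yaml
services:
  lsmith:
    build:
      context: .                   # assumed build context
      dockerfile: Dockerfile.full  # default is Dockerfile.lite
```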

Linux | Difficult

Requirements

  • Node.js (version 18 recommended)
  • pnpm
  • python 3.10
  • pip
  • CUDA
  • cuDNN < 8.6.0
  • TensorRT 8.5.x
  1. Clone the Lsmith repository
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  2. Enter the repository directory (skip if you are already in it).
cd Lsmith
  3. Enter the frontend directory and build the frontend
cd frontend
pnpm i
pnpm build --out-dir ../dist
  4. Run launch.sh with the path to libnvinfer_plugin.so in the LD_PRELOAD variable. For example (adjust the library path to your TensorRT installation):
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so bash launch.sh --host 0.0.0.0

Windows | Difficult

Requirements

  • Node.js (version 18 recommended)
  • pnpm
  • python 3.10
  • pip
  • CUDA
  • cuDNN < 8.6.0
  • TensorRT 8.5.x
  1. Install the NVIDIA GPU driver
  2. Install CUDA 11.x (see the official installation guide)
  3. Install cuDNN 8.6.0 (see the official installation guide)
  4. Install TensorRT 8.5.3.1 (see the official installation guide)
  5. Clone the Lsmith repository
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  6. Enter the frontend directory and build the frontend
cd frontend
pnpm i
pnpm build --out-dir ../dist
  7. Run launch-user.bat

Usage

Once started, open <ip address>:<port number> (e.g. https://localhost:8000) in your browser to access the WebUI.

Before generating images, you must convert a Diffusers model into a TensorRT engine.

Building the TensorRT engine

  1. Click on the "Engine" tab
  2. Enter a Hugging Face Diffusers model ID in Model ID (e.g. CompVis/stable-diffusion-v1-4)
  3. Enter your Hugging Face access token in HuggingFace Access Token (required for some repositories). Access tokens can be created in your Hugging Face account settings.
  4. Click the Build button to start building the engine.
    • There may be some warnings during the engine build, but you can safely ignore them unless the build fails.
    • The build can take tens of minutes. For reference, it takes an average of about 15 minutes on an RTX 3060 12GB.

Generate images

  1. Select the model in the header dropdown.
  2. Click on the "Generate" tab
  3. Click the "Generate" button.



Special thanks to the technical members of the AI 絵作り研究会, a Japanese AI image generation community.
