This repository has been archived by the owner on Aug 24, 2023. It is now read-only.

kael558/supreme-octo-tribble


MIT License

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

About The Project

Farid and I created this project for LabLab.ai's Stable Diffusion Hackathon. We won first place.

This project allows users to:

  • Generate images based on styles
  • Organize the images frame-by-frame
  • Create videos by interpolating between keyframe images chosen by the user
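The interpolation between two chosen keyframes is typically done by blending the images' latent vectors rather than their pixels. Below is a minimal sketch of spherical interpolation (slerp), a blend commonly used for diffusion latents; the function and the flat-list vectors are illustrative, not the project's actual code:

```python
import math

def slerp(t, v0, v1, eps=1e-7):
    """Spherical interpolation between two latent vectors v0 and v1.

    t is the blend weight in [0, 1]. Falls back to plain linear
    interpolation when the vectors are (anti)parallel.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    if abs(dot) > 1.0 - eps:
        # Nearly parallel: slerp is numerically unstable, so lerp instead.
        return [(1.0 - t) * a + t * b for a, b in zip(v0, v1)]
    theta = math.acos(dot)
    sin_theta = math.sin(theta)
    return [
        (math.sin((1.0 - t) * theta) * a + math.sin(t * theta) * b) / sin_theta
        for a, b in zip(v0, v1)
    ]
```

Each interpolated latent would then be decoded back to an image, producing a smooth transition between the two keyframes.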

Why we chose to build this project:

  • No existing tool creates videos by interpolating between user-selected images. There are examples of image interpolation with Stable Diffusion, but they only let you choose the prompt, not the images themselves.
  • No existing tool provides an easy user interface for creating videos/timelines.
  • No existing tool can connect to different models for different use cases. For example, the Stable Diffusion v1-4 model might generate good results in a specific art style; switching to that model should be effortless.
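One lightweight way to make model switching easy is a small registry that maps a style or use case to the endpoint that renders it best. This is a hypothetical sketch, not the project's actual code; the names and URLs are purely illustrative:

```python
# Hypothetical registry mapping a style/use case to a model endpoint.
# The keys and URLs below are placeholders, not real endpoints.
MODEL_REGISTRY = {
    "default": "https://example.com/api/stable-diffusion-v1-4",
    "anime": "https://example.com/api/anime-model",
}

def endpoint_for(style):
    """Return the endpoint for a style, falling back to the default model."""
    return MODEL_REGISTRY.get(style, MODEL_REGISTRY["default"])
```

With this shape, adding a model tuned for a new art style is a one-line change to the registry rather than a change to the generation code.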

(back to top)

Built With

(back to top)

Getting Started

We previously used Flask to circumvent the CORS error raised by LabLab's Stable Diffusion API when running browser-side. However, after switching to our own custom model endpoint, the project can run entirely in the browser.

Now you can simply open index.html to run the project. Note: since our Stable Diffusion API is no longer running, the project will not work out of the box. We are working on a solution for this.

You may host the API yourself with the notebook provided; all it requires is a HuggingFace Spaces token.

(back to top)

Usage

  • Specify basic/advanced options by clicking the Information icon in the top-left corner
  • Add new images to the timeline by entering a prompt and clicking Submit
  • Hover over a generated image and click to regenerate it, select a different image, or delete it
  • Render the interpolated frames by clicking the Interpolate button
  • Create a video from those frames by clicking the Create Video button
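Rendering the in-between frames amounts to walking each pair of adjacent keyframes with evenly spaced blend weights. A sketch of that scheduling step, under assumed names (`frame_schedule` is hypothetical, not part of the project):

```python
def frame_schedule(num_keyframes, frames_per_transition):
    """Return (start_key, end_key, t) triples for every frame to render.

    Each adjacent keyframe pair gets frames_per_transition frames with
    evenly spaced blend weights t in [0, 1); the final keyframe itself
    is emitted as the last frame.
    """
    if num_keyframes < 2:
        raise ValueError("need at least two keyframes to interpolate")
    schedule = []
    for i in range(num_keyframes - 1):
        for step in range(frames_per_transition):
            schedule.append((i, i + 1, step / frames_per_transition))
    # End the video exactly on the last keyframe.
    schedule.append((num_keyframes - 2, num_keyframes - 1, 1.0))
    return schedule
```

A schedule like this also makes "render only what changed" optimizations straightforward: editing one keyframe invalidates only the transitions that reference it.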

(back to top)

Roadmap

  • Host API with HuggingFace
  • Refactor front-end code / clean up UI
  • Optimized rendering (re-render only frames that have changed)
  • Audio track upload
  • img2img generation for amateur drawings
  • Inpainting to change parts of an image
  • Add more artists/styles (consider categorizing them)

(back to top)

Contributing

This repository is intended as an archive. No changes will be made to it in the future.

You may fork the project and work in your own repository.

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Rahel Gunaratne:

Acknowledgments

(back to top)