forked from pluja/whishper

Transcribe any audio to text, translate and edit subtitles 100% locally with a web UI. Powered by Whisper models! This fork is configured to use only the Portuguese (PT) language and to use 12 CPU threads.

whishper banner

Whishper is an open-source, 100% local audio transcription and subtitling suite with a full-featured web UI.

Features

  • 🗣️ Transcribe any media to text: audio, video, etc.
    • Transcribe from URLs (any source supported by yt-dlp).
    • Upload a file to transcribe.
  • 📥 Download transcriptions in many formats: TXT, JSON, VTT, SRT or copy the raw text to your clipboard.
  • 🌐 Translate your transcriptions to any language supported by LibreTranslate.
  • ✍️ Powerful subtitle editor so you don't need to leave the UI!
    • Transcription highlighting based on media position.
    • CPS (Characters per second) warnings.
    • Segment splitting.
    • Segment insertion.
    • Subtitle language selection.
  • 🏠 100% Local: transcription, translation, and subtitle editing all happen on your machine (it can even work offline!).
  • 🚀 Fast: uses faster-whisper as the Whisper backend to get much faster transcription times on CPU (see the sketch after this list)!
  • 👍 Quick and easy setup: use the quick start script, or run through a few steps!
  • 🔥 GPU support: use your NVIDIA GPU to get even faster transcription times!
  • 🐎 CPU support: no GPU? No problem! Whishper can run on CPU too.
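
As a rough illustration of what this fork's configuration means in practice (Portuguese only, 12 CPU threads), here is a minimal sketch that calls the faster-whisper Python library directly rather than going through Whishper's UI or Transcription-API; the model size, audio path, and thread count are assumptions for the example:

```python
# Minimal sketch: CPU-only transcription with faster-whisper, pinned to Portuguese.
# Assumes `pip install faster-whisper`; "small" and "audio.mp3" are placeholders.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8", cpu_threads=12)

# Forcing language="pt" skips auto-detection, mirroring this fork's PT-only setup.
segments, info = model.transcribe("audio.mp3", language="pt", beam_size=5)

for segment in segments:
    # Each segment carries start/end timestamps, which is what SRT/VTT exports build on.
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```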

Roadmap

  • Local folder as media input (#15).
  • Full-text search all transcriptions.
  • User authentication.
  • Audio recording from the browser.
  • Add insanely-fast-whisper as an optional backend (#53).
  • Support for GPU acceleration.
    • Non-NVIDIA GPU support. Is it possible with faster-whisper?
  • Can we do something with seamless_communication?

Project structure

Whishper is a collection of pieces that work together. The three main pieces, plus the third-party services they rely on, are:

  • Transcription-API: the API that runs faster-whisper. You can find it in the transcription-api folder.
  • Whishper-Backend: the backend that coordinates frontend calls, the database, and tasks. You can find it in the backend folder.
  • Whishper-Frontend: the frontend (web UI) of the application. You can find it in the frontend folder.
  • Translation (3rd party): the LibreTranslate container used for translating subtitles (see the sketch after this list).
  • MongoDB (3rd party): the database that stores all the information about your transcriptions.
  • Nginx (3rd party): the proxy that allows running everything from a single domain.
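
For context on the Translation piece, the following sketch sends a single request to LibreTranslate's documented /translate endpoint using Python's requests library; the host, port, and example text are assumptions (in a normal Whishper deployment the backend makes this call for you):

```python
# Minimal sketch: translating one subtitle line via a local LibreTranslate instance.
# Assumes LibreTranslate is reachable at http://localhost:5000; adjust to your setup.
import requests

resp = requests.post(
    "http://localhost:5000/translate",
    json={
        "q": "Hello, this is a transcribed sentence.",
        "source": "en",    # language of the existing subtitle text
        "target": "pt",    # target language (Portuguese, matching this fork's focus)
        "format": "text",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["translatedText"])
```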

Contributing

Contributions are welcome! Feel free to open a PR with your changes, or take a look at the issues to see if there is something you can help with.

Development setup

Check out the development documentation here.

Screenshots

Screenshots are available on the official website.

Support

Star History

Star History Chart

Credits
