LLM-Assistant is a Gradio-based browser interface that connects to local LLMs to call functions and act as a general assistant.
For the latest features, please check out the dev branch.
- Works with any instruct-finetuned LLM
- Can search for information (RAG)
- Knows when to call functions
- Realtime mode for working across the system
- Answers questions from PDF files
- Voice access
- More functions
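The function-calling feature above can be sketched roughly as follows: the model emits a structured action, and the app dispatches it to a registered Python function, falling back to plain chat otherwise. This is a minimal illustration only; the function names, registry, and JSON format here are assumptions, not the project's actual API.

```python
import json

# Hypothetical example function; the real project's functions may differ.
def play_music(query: str) -> str:
    return f"Playing: {query}"

# Registry mapping function names the LLM may emit to callables.
FUNCTIONS = {"play_music": play_music}

def dispatch(model_output: str) -> str:
    """If the model output is a JSON action such as
    {"function": "play_music", "args": {"query": "lofi"}},
    call the matching function; otherwise treat it as an ordinary chat reply."""
    try:
        action = json.loads(model_output)
        fn = FUNCTIONS[action["function"]]
        return fn(**action["args"])
    except (ValueError, KeyError, TypeError):
        return model_output  # not a function call: plain chat
```

An instruct-finetuned model can be prompted to produce this JSON shape when a tool is needed, which is how the assistant "knows when to call functions".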
- Fixed search feature
- YouTube video search
- File Upload
- Clone the repository and create a virtual environment
- Install the dependencies from requirements.txt
- Download an LLM model and place it in the model folder
- Run main.py
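The steps above might look like this on the command line. The repository URL is a placeholder, and the `model` folder name is taken from the steps above; adjust paths as needed.

```shell
# Clone the repository (replace <repo-url> with the actual URL)
git clone <repo-url> LLM-Assistant
cd LLM-Assistant

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Place a downloaded instruct-finetuned model in the model folder, then run
python main.py
```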
- Use Assistant mode for general chat, function calling (e.g. playing music), and PDF question answering
- Use Realtime mode to edit a Word document or reply to an email in real time: copy a selection and wait for the output, which is automatically pasted at the cursor location.
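The Realtime workflow above can be sketched as a clipboard-polling loop: when the user copies new text, the LLM processes it and the result is placed back for auto-pasting. This is a minimal sketch, not the project's implementation; clipboard access is abstracted behind caller-supplied functions (in practice something like `pyperclip.paste`/`pyperclip.copy` would be used).

```python
import time
from typing import Callable, Optional

def realtime_loop(read_clipboard: Callable[[], str],
                  write_clipboard: Callable[[str], None],
                  generate: Callable[[str], str],
                  poll_interval: float = 0.5,
                  max_iterations: Optional[int] = None) -> None:
    """Poll the clipboard; when the user copies new text, run the LLM
    on it and put the result back on the clipboard for auto-pasting."""
    last_seen = read_clipboard()
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        current = read_clipboard()
        if current != last_seen and current.strip():
            result = generate(current)   # e.g. "reply to this email ..."
            write_clipboard(result)      # output gets pasted at the cursor
            last_seen = result           # avoid re-processing our own output
        else:
            last_seen = current
        time.sleep(poll_interval)
        iterations += 1
```

Tracking the last-seen value prevents the loop from feeding its own output back into the model on the next poll.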