
Streamlit Chat Application with Replicate LlamaV2 Model

This is a simple, interactive chat application built with Streamlit and Replicate's LlamaV2 model. Streamlit provides the front-end interface, and the LlamaV2 model generates responses to user input.

Prerequisites

  • Python 3.6 or higher
  • streamlit
  • python-dotenv
  • replicate

Quickstart

  1. Clone the repository

     git clone <repo-url>
     cd <repo-dir>

  2. Install the dependencies

     pip install -r requirements.txt

  3. Set the environment variables

     Create a .env file in the root of your project and add the following environment variable:

     # .env
     REPLICATE_API_TOKEN=<Your Replicate API token>

  4. Run the Streamlit app

     streamlit run main.py

Usage

Type your message in the text input box and press Enter. The AI model generates a response, which is displayed on the screen.

How it Works

The generate_response function takes the user's input, sends it to the Replicate LlamaV2 model, and receives the model's response, which is then displayed in the Streamlit interface.
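The flow above can be sketched as follows. This is a hedged illustration, not the actual main.py: the prompt format, the build_prompt helper, and the model identifier placeholder are assumptions; replicate.run() is the real client call, which streams output chunks that are joined into one string.

```python
def build_prompt(history, user_input):
    """Flatten prior (role, text) turns plus the new input into one prompt
    string. The "Role: text" format here is illustrative."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"User: {user_input}")
    return "\n".join(lines)

def generate_response(user_input, history=()):
    """Send the prompt to a LlamaV2 model on Replicate and collect the
    streamed output. Requires the `replicate` package and a
    REPLICATE_API_TOKEN in the environment."""
    import replicate  # imported lazily so build_prompt works without it
    output = replicate.run(
        "<model-owner>/<model-name>:<version>",  # placeholder model reference
        input={"prompt": build_prompt(list(history), user_input)},
    )
    return "".join(output)  # replicate.run streams chunks for this kind of model
```

In the Streamlit app, the returned string would typically be appended to a chat history kept in st.session_state and rendered on each rerun.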

Contributing

Contributions are welcome! Please read the contributing guidelines before getting started.

License

This project is licensed under the terms of the MIT license. See the LICENSE file.

Sponsors

✨ Learn to build projects like this one (early bird discount): BuildFast Course
