Enable support for other LLMs #36

Open
TechNickAI opened this issue Jul 12, 2023 · 7 comments
Labels
enhancement New feature or request

Comments

@TechNickAI
Owner

Claude 2 looks interesting

@hanselke
Contributor

https://github.com/deep-diver/LLM-As-Chatbot looks interesting, in particular its https://github.com/deep-diver/PingPong mechanism for isolating the prompt differences between models.

@TechNickAI TechNickAI added the enhancement New feature or request label Jul 22, 2023
@TechNickAI
Owner Author

Note that if you use the openrouter.ai option, you can choose any model they support.

https://openrouter.ai/docs

To do so, add your openrouter_api_key to the config file at $HOME/.aicodebot.yaml

openrouter_api_key: sk-or-v1....

To specify which model to use, pass AICODEBOT_MODEL as an environment variable.

I'll work on adding that as a config option as well.
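
For example, something like this should work in your shell (the model identifier here is just an illustration; see the OpenRouter docs for valid names):

export AICODEBOT_MODEL="anthropic/claude-2"

and then run aicodebot as usual.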

@hanselke
Contributor

A few questions:

  1. How are you planning to support locally run LLMs?
    I have an ugly hack here: hanselke@ea7ccae

I think that's the wrong approach, though: running the LLMs inside the core AICodebot code.

I would suggest using NATS as a queue/service discovery mechanism, like https://github.com/openbnet/troybot/blob/main/services/stt/nats_stt.py

Benefits of using NATS:

  • locally run GPUs that don't have to be on the development machine
  • multiple workers in the queue, so you can run separate jobs in parallel or have an HA fallback

We could do a Docker Compose setup to get the LLM + NATS running; see the worker sketch below.
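
Here's a minimal sketch of what such a worker could look like, assuming the nats-py client; the llm.generate subject and the run_local_llm helper are hypothetical, not AICodebot's actual API:

import asyncio

import nats  # nats-py client

def run_local_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real local model call.
    return f"echo: {prompt}"

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def handle(msg):
        # Run the local model (stubbed above) and reply on the
        # request's reply subject.
        prompt = msg.data.decode()
        await nc.publish(msg.reply, run_local_llm(prompt).encode())

    # Queue group "llm-workers": NATS delivers each request to exactly one
    # member, so extra workers give parallelism and an HA fallback for free.
    await nc.subscribe("llm.generate", queue="llm-workers", cb=handle)
    await asyncio.Event().wait()  # keep the worker running

if __name__ == "__main__":
    asyncio.run(main())

The requester side would just do await nc.request("llm.generate", prompt.encode()) and get the completion back, without caring which GPU box answered.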

  2. We're going to need some sort of prompt management mechanism for different LLMs. I'm not sure how you want to approach it.

Using the default prompts, falcon-7b is totally off the mark:
hanselke@5e09d69
hanselke@a61f8b6

It seems like it's giving us back the inputs.

@TechNickAI
Owner Author

I understand and appreciate the importance of this now.

@hanselke I just refactored the layout of the commands, so your current code isn't compatible.

I'll look into adding support for other LLMs soon!

@TechNickAI
Owner Author

Disappointing results so far: nothing is as good as GPT-4 yet.

@hanselke
Contributor

hanselke commented Aug 14, 2023 via email

@ishaan-jaff

@hanselke @gorillamania
I'm the maintainer of liteLLM: https://github.com/BerriAI/litellm/
It lets you call OpenAI, Azure, TogetherAI, Hugging Face, Claude, Cohere (and 50+ LLMs) using the OpenAI ChatGPT input/output format.

We built liteLLM to solve the problem mentioned in this issue.

Here's how liteLLM calls work:

import os

from litellm import completion

# set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

If this is useful for this issue, I'd love to make a PR to help. @gorillamania, I noticed you're using ChatOpenAI; we have a LangChain integration, ChatLiteLLM, if you'd prefer staying with LangChain.
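
If it helps, here's a minimal sketch of the LangChain route, assuming the ChatLiteLLM wrapper that ships with LangChain (the model name is illustrative):

from langchain.chat_models import ChatLiteLLM
from langchain.schema import HumanMessage

chat = ChatLiteLLM(model="gpt-3.5-turbo")
response = chat([HumanMessage(content="Hello, how are you?")])
print(response.content)

Swapping providers is then just a matter of changing the model string.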

@github-staff github-staff deleted a comment May 12, 2024