
🛩️ Fleet Context


A CLI tool over the top 1218 Python libraries.
Used for library Q&A and code generation with all available OpenAI models.

Website | Data Visualizer | PyPI | @fleet_ai


(Demo video: fleet-context.mp4)




Quick Start

Install the package and run context to ask questions about the most up-to-date Python libraries. You will have to provide your OpenAI API key to start a session.

pip install fleet-context
context
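
If you'd rather not paste the key at the prompt each time, exporting it beforehand may also work; this is a sketch that assumes the CLI falls back to the standard OPENAI_API_KEY environment variable used by OpenAI's client libraries:

export OPENAI_API_KEY="sk-..."
context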

If you'd like to run the CLI tool locally, you can clone this repository, cd into it, then run:

pip install -e .
context

If an existing package on your system already uses the command name context, you can also activate Fleet Context by running:

fleet-context

Limit libraries

You can use -l or --libraries followed by a list of libraries to limit your session to just those libraries. Defaults to all supported libraries. View the full list of supported libraries on our website.

context -l langchain pydantic openai

Use a different OpenAI model

You can select a different OpenAI model by using -m or --model. Defaults to gpt-4. You can set your model to gpt-4-1106-preview (gpt-4-turbo), gpt-3.5-turbo, or gpt-3.5-turbo-16k.

context -m gpt-4-1106-preview

Using local models

Local model support is powered by LM Studio. To use local models, you can use --local or -n:

context --local

You need to download your local model through LM Studio. To do that:

  1. Download LM Studio from https://lmstudio.ai.
  2. Open LM Studio and download your model of choice.
  3. Click the ↔ icon on the very left sidebar.
  4. Select your model and click "Start Server".

The context window defaults to 3000 tokens. You can change it with --context_window or -w:

context --local --context_window 4096

Advanced settings

You can control the number of retrieved chunks with -k or --k_value (defaults to 15), and toggle whether the model cites its sources with -c or --cite_sources (defaults to true).

context -k 25 -c false
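
The flags above can generally be combined in a single invocation; for example (assuming the CLI accepts them together, and using only flags documented in this README), the following limits the session to two libraries, switches to gpt-4-1106-preview, retrieves 25 chunks, and disables source citations:

context -l langchain openai -m gpt-4-1106-preview -k 25 -c false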




Evaluations

Results

Sampled libraries

We saw a 37-point improvement for gpt-4 generation scores and a 34-point improvement for gpt-4-turbo generation scores amongst a randomly sampled set of 50 libraries.

For gpt-4, we attribute this to its lack of knowledge of the most up-to-date versions of these libraries; for gpt-4-turbo, to the combination of having relevant, up-to-date information to generate with and the overall relevance of the retrieved information.




Embeddings

Check out our visualized data here.

You can download all embeddings here.

