Re-work Docs and split out README (using MkDocs) #2894

Merged: 12 commits, Apr 22, 2023

Commit: get code blocks working across mkdocs and github
richbeales committed Apr 22, 2023
commit 61e20c06f935ac1840796ce6003874e5f9ca15e2

6 changes: 4 additions & 2 deletions docs/configuration/imagegen.md
@@ -4,10 +4,12 @@ By default, Auto-GPT uses DALL-e for image generation. To use Stable Diffusion,

Once you have a token, set these variables in your `.env`:

``` shell
IMAGE_PROVIDER=huggingface
HUGGINGFACE_API_TOKEN="YOUR_HUGGINGFACE_API_TOKEN"
```
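
To check that your token is valid before running Auto-GPT (an optional sanity check; the endpoint is an assumption based on the Hugging Face Hub API, not from the original docs):

``` shell
# assumes HUGGINGFACE_API_TOKEN is exported in your shell;
# should return your account details as JSON if the token is valid
curl -H "Authorization: Bearer $HUGGINGFACE_API_TOKEN" https://huggingface.co/api/whoami-v2
```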

## Selenium
``` shell
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
```
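
For example, to run Auto-GPT itself as the client under the virtual display (substituting `python -m autogpt` for the `<YOUR_CLIENT>` placeholder above):

``` shell
# start a virtual X display in the background, then point Auto-GPT at it
sudo Xvfb :10 -ac -screen 0 1024x768x24 &
DISPLAY=:10 python -m autogpt
```
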
49 changes: 30 additions & 19 deletions docs/configuration/memory.md
@@ -12,32 +12,39 @@ To switch to either, change the `MEMORY_BACKEND` env variable to the value that

## Memory Backend Setup

Links to memory backends

- [Pinecone](https://www.pinecone.io/)
- [Milvus](https://milvus.io/)
- [Redis](https://redis.io)
- [Weaviate](https://weaviate.io)

### Redis Setup
> _**CAUTION**_ \
This setup is not intended to be publicly accessible and lacks security measures. Do not expose Redis to the internet without a password; ideally, do not expose it at all.
1. Install Docker (or Docker Desktop on Windows).
2. Launch the Redis container (a connectivity check is shown after this list).

``` shell
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```
> See https://hub.docker.com/r/redis/redis-stack-server for setting a password and additional configuration.

3. Add the following settings to `.env`.
> Replace **PASSWORD** in angled brackets (`<>`) with your Redis password.

``` shell
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=<PASSWORD>
```
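
To confirm the container is reachable before starting Auto-GPT, you can ping it with `redis-cli` from inside the container (a quick sanity check, assuming the container name used above):

``` shell
# should print PONG if Redis is up
docker exec redis-stack-server redis-cli ping
```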

You can optionally set `WIPE_REDIS_ON_START=False` to persist memory stored in Redis.

You can specify the memory index for Redis using the following:
``` shell
MEMORY_INDEX=<WHATEVER>
```

### 🌲 Pinecone API Key Setup
@@ -57,14 +64,16 @@ Alternatively, you can set them from the command line (advanced):

For Windows users:

``` shell
setx PINECONE_API_KEY "<YOUR_PINECONE_API_KEY>"
setx PINECONE_ENV "<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
setx MEMORY_BACKEND "pinecone"
```

For macOS and Linux users:

``` shell
export PINECONE_API_KEY="<YOUR_PINECONE_API_KEY>"
export PINECONE_ENV="<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
export MEMORY_BACKEND="pinecone"
```
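
To confirm the variables are visible to your shell before launching Auto-GPT (a quick check; macOS/Linux shown):

``` shell
# list the Pinecone-related variables currently set
env | grep -E 'PINECONE|MEMORY_BACKEND'
```
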
@@ -91,15 +100,15 @@ Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/

Install the Weaviate client before usage.

``` shell
pip install weaviate-client
```
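
To verify the client installed correctly (an optional sanity check, not part of the original docs):

``` shell
# show the installed package and its version
pip show weaviate-client
```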

#### Setting up environment variables

In your `.env` file, set the following:

``` shell
MEMORY_BACKEND=weaviate
WEAVIATE_HOST="127.0.0.1" # the IP or domain of the running Weaviate instance
WEAVIATE_PORT="8080"
```
@@ -120,7 +129,8 @@ View memory usage by using the `--debug` flag :)
## 🧠 Memory pre-seeding
Memory pre-seeding lets you ingest files into memory before running Auto-GPT, so it starts with relevant context already loaded.

``` shell
# python data_ingestion.py -h
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]

Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.
Expand All @@ -135,6 +145,7 @@ options:

# python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
```

In the example above, the script initializes memory and ingests all files in the `Auto-Gpt/autogpt/auto_gpt_workspace/DataFolder` directory, using an overlap of 100 between chunks and a maximum chunk length of 2000.

Note that you can also use the `--file` argument to ingest a single file, and that `data_ingestion.py` will only ingest files within the `/auto_gpt_workspace` directory.
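
For example, a single-file ingestion might look like this (hypothetical filename; flags taken from the usage text above):

``` shell
# ingest one file with the same chunking settings
python data_ingestion.py --file DataFolder/notes.txt --overlap 100 --max_length 2000
```
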
3 changes: 2 additions & 1 deletion docs/configuration/voice.md
@@ -2,7 +2,8 @@

Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT:

``` shell
python -m autogpt --speak
```

### List of Eleven Labs voice IDs and names. You can use either the name or the ID:
36 changes: 20 additions & 16 deletions docs/installation.md
@@ -26,18 +26,21 @@ _To execute the following commands, open a CMD, Bash, or Powershell window by na
2. Clone the repository: For this step, you need Git installed.
Note: If you don't have Git, you can just download the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest) instead (`Source code (zip)`, at the bottom of the page).

``` shell
git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
```

3. Navigate to the directory where you downloaded the repository.

``` shell
cd Auto-GPT
```

4. Install the required dependencies.

``` shell
pip install -r requirements.txt
```

5. Configure Auto-GPT:
1. Find the file named `.env.template` in the main /Auto-GPT folder. This file may be hidden by default in some operating systems due to the dot prefix. To reveal hidden files, follow the instructions for your specific operating system (e.g., in Windows, click on the "View" tab in File Explorer and check the "Hidden items" box; in macOS, press Cmd + Shift + .).
Expand All @@ -60,33 +63,34 @@ Note: If you don't have Git, you can just download the [latest stable release](h
- `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment ID
- Please specify all of these values as double-quoted strings

``` yaml
# Replace the string in angled brackets (<>) with your own ID
azure_model_map:
fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
...
```
- Details can be found in the `Microsoft Azure Endpoints` section at https://pypi.org/project/openai/ and, for the embedding model, at https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line.

## Docker

You can also build this into a Docker image and run it:

``` shell
docker build -t autogpt .
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/home/appuser/auto_gpt_workspace autogpt
```

Or if you have `docker-compose`:
``` shell
docker-compose run --build --rm auto-gpt
```

You can pass extra arguments; for instance, to run with `--gpt3only` and `--continuous` mode:
``` shell
docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/home/appuser/auto_gpt_workspace autogpt --gpt3only --continuous
```

``` shell
docker-compose run --build --rm auto-gpt --gpt3only --continuous
```

5 changes: 3 additions & 2 deletions docs/plugins.md
@@ -14,7 +14,8 @@ Use the [Auto-GPT Plugin Template](https://github.com/Significant-Gravitas/Auto-
2. **Install the plugin's dependencies (if any):**
Navigate to the plugin's folder in your terminal, and run the following command to install any required dependencies:

``` shell
pip install -r requirements.txt
```

3. **Package the plugin as a Zip file** (see the example after this list):
@@ -26,7 +27,7 @@ Use the [Auto-GPT Plugin Template](https://github.com/Significant-Gravitas/Auto-
5. **Allowlist the plugin (optional):**
Add the plugin's class name to the `ALLOWLISTED_PLUGINS` in the `.env` file to avoid being prompted with a warning when loading the plugin:

``` shell
ALLOWLISTED_PLUGINS=example-plugin1,example-plugin2,example-plugin3
```
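
For step 3, packaging can be as simple as zipping the plugin folder into Auto-GPT's `plugins` directory (a hypothetical layout and plugin name; adjust paths to your checkout):

``` shell
# create a zip of the plugin inside the plugins directory
zip -r ./plugins/example-plugin1.zip ./example-plugin1
```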

6 changes: 3 additions & 3 deletions docs/usage.md
@@ -65,15 +65,15 @@ Use at your own risk.

If you don't have access to the GPT-4 API, this mode will allow you to use Auto-GPT!

``` shell
python -m autogpt --gpt3only
```

### GPT4 ONLY Mode

If you do have access to the GPT-4 API, this mode lets you run Auto-GPT solely on the GPT-4 API, for increased intelligence (and cost!).

``` shell
python -m autogpt --gpt4only
```

@@ -83,6 +83,6 @@ Activity and error logs are located in the `./output/logs`

To print out debug logs:

``` shell
python -m autogpt --debug
```