Commit 201669c

Merge pull request NVIDIA#244 from DougAtNvidia/docs/edit-getting-started

Docs/edit getting started

drazvan committed Feb 15, 2024
2 parents ed29477 + a577560

Showing 9 changed files with 476 additions and 444 deletions.

154 changes: 81 additions & 73 deletions docs/getting_started/1_hello_world/README.md
# Hello World

This guide shows you how to create a "Hello World" guardrails configuration that controls the greeting behavior. Before you begin, make sure you have [installed NeMo Guardrails](../../getting_started/installation-guide.md).

## Prerequisites

This "Hello World" guardrails configuration will use the OpenAI `gpt-3.5-turbo-instruct` model, so you need to make sure you have the `openai` package installed and the `OPENAI_API_KEY` environment variable set.
This "Hello World" guardrails configuration uses the OpenAI `gpt-3.5-turbo-instruct` model.

1. Install the `openai` package:

    ```bash
    pip install openai==0.28.1
    ```

2. Set the `OPENAI_API_KEY` environment variable:

    ```bash
    export OPENAI_API_KEY=YOUR_OPENAI_API_KEY
    ```

3. If you're running this inside a notebook, patch the AsyncIO loop:

    ```python
    import nest_asyncio

    nest_asyncio.apply()
    ```
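
Before moving on, you can optionally confirm that the key is visible to your Python process. This quick check is not part of the original guide; it simply fails fast if the variable is missing:

```python
import os

# The examples below call the OpenAI API, so the key must be set.
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY before continuing."
```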

## Step 1: create a new guardrails configuration

Every guardrails configuration must be stored in a folder. The standard folder structure is as follows:

```
.
├── config
│   ├── config.yml
│   ├── rails.co
│   ├── ...
```
See the [Configuration Guide](../../user_guides/configuration-guide.md) for information about the contents of these files.

1. Create a folder, such as *config*, for your configuration:

    ```bash
    mkdir config
    cd config
    ```

2. Create a *config.yml* file with the following content:

    ```yaml
    models:
     - type: main
       engine: openai
       model: gpt-3.5-turbo-instruct
    ```

The `models` key in the *config.yml* file configures the LLM model. For a complete list of supported LLM models, see [Supported LLM Models](../../user_guides/configuration-guide.md#supported-llm-models).
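
The same `models` entry works for any supported model. As an illustration only (the model name below is a stand-in; consult the supported models list for valid values), a configuration targeting a chat model instead might look like this:

```yaml
models:
 - type: main
   engine: openai
   model: gpt-4
```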

## Step 2: load the guardrails configuration

To load a guardrails configuration from a path, you must create a `RailsConfig` instance using the `from_path` method in your Python code:

```python
from nemoguardrails import RailsConfig

config = RailsConfig.from_path("./config")
```
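
The configuration does not have to be read from disk. As a minimal sketch, `RailsConfig.from_content` accepts the configuration content directly, which can be handy for quick experiments (the inline YAML below mirrors the *config.yml* created in Step 1):

```python
from nemoguardrails import RailsConfig

# Build the same configuration inline instead of loading ./config from disk.
config = RailsConfig.from_content(
    yaml_content="""
models:
 - type: main
   engine: openai
   model: gpt-3.5-turbo-instruct
"""
)
```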

## Step 3: use the guardrails configuration

Use this empty configuration by creating an `LLMRails` instance and using the `generate_async` method in your Python code:

```python
from nemoguardrails import LLMRails

rails = LLMRails(config)

response = await rails.generate_async(messages=[{
    "role": "user",
    "content": "Hello!"
}])
print(response)
```

```
{'role': 'assistant', 'content': "Hello! It's nice to meet you. My name is Assistant. How can I help you today?"}
```
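
The `await` works as-is in a notebook because an event loop is already running (with the AsyncIO patch applied). In a plain Python script you have to drive the coroutine yourself; a minimal sketch using `asyncio.run`:

```python
import asyncio

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

async def main():
    # Same call as above, wrapped in a coroutine so asyncio.run can execute it.
    response = await rails.generate_async(messages=[{
        "role": "user",
        "content": "Hello!"
    }])
    print(response)

asyncio.run(main())
```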

The format for the input `messages` array as well as the response follow the [OpenAI API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) format.

## Step 4: add your first guardrail

To control the greeting response, define the user and bot messages, and the flow that connects the two together. See [Core Colang Concepts](../2_core_colang_concepts/README.md) for definitions of *messages* and *flows*.

1. Define the `greeting` user message by creating a *config/rails.co* file with the following content:

    ```colang
    define user express greeting
      "Hello"
      "Hi"
      "Wassup?"
    ```

2. Add a greeting flow that instructs the bot to respond back with "Hello World!" and ask how they are doing by adding the following content to the *rails.co* file:

    ```python
    define flow greeting
      user express greeting
      bot express greeting
      bot ask how are you
    ```

3. Define the messages for the response by adding the following content to the *rails.co* file. (A bot message can also define several response variants; see the sketch after these steps.)

    ```python
    define bot express greeting
      "Hello World!"

    define bot ask how are you
      "How are you doing?"
    ```

4. Reload the config and test it:

    ```python
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config)

    response = rails.generate(messages=[{
        "role": "user",
        "content": "Hello!"
    }])
    print(response["content"])
    ```

    ```
    Hello World!
    How are you doing?
    ```
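
A bot message does not have to be a single canned string. Colang lets you list several variants under one message definition and the runtime picks one of them, which keeps repeated greetings from sounding identical. A small sketch (the second variant is illustrative, not from the original guide):

```colang
define bot express greeting
  "Hello World!"
  "Hi there, World!"
```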

**Congratulations!** You've just created your first guardrails configuration!

### Other queries

What happens if you ask another question, such as "What is the capital of France?":

```python
response = rails.generate(messages=[{
    "role": "user",
    "content": "What is the capital of France?"
}])
print(response["content"])
```

```
The capital of France is Paris.
```

For any other input that is not a greeting, the LLM generates the response as usual. This is because the rail that we have defined is only concerned with how to respond to a greeting.

## CLI Chat

You can also test this configuration in interactive mode using the NeMo Guardrails CLI Chat command:

```bash
$ nemoguardrails chat
```

Without any additional parameters, the CLI chat loads the configuration from the *config.yml* file in the *config* folder in the current directory.
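
To load a configuration stored elsewhere, pass its path explicitly using the `--config` option (the path below is an example):

```bash
$ nemoguardrails chat --config=path/to/config
```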

### Sample session

```
$ nemoguardrails chat
Starting the chat (Press Ctrl+C to quit) ...

> Hello there!
Hello World!
How are you doing?

> What is the capital of France?
The capital of France is Paris.

> And how many people live there?
According to the latest estimates, the population of Paris is around 2.2 million people.
```

## Server and Chat UI

You can also test a guardrails configuration using the NeMo Guardrails server and the Chat UI.

To start the server:

```bash
$ nemoguardrails server --config=.
```

The Chat UI interface is now available at `http://localhost:8000`.
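
Besides the Chat UI, the server exposes an HTTP API that you can call directly. As a sketch, assuming the server runs on the default port and the configuration folder is named `config`:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "config_id": "config",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```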

## Next

The next guide, [Core Colang Concepts](../2_core_colang_concepts/README.md), explains the Colang concepts *messages* and *flows*.