Add support for custom tasks and their prompts
- Add support for string in render_task_prompt
- Add test for custom tasks
- Update docs + changelog
niralp-nv committed Sep 6, 2023
1 parent cb07be6 commit 2823912
Showing 5 changed files with 87 additions and 4 deletions.
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,14 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## Unreleased

### Added

- Support for [custom tasks and their prompts](./docs/user_guide/advanced/prompt-customization.md#custom-tasks-and-prompts).


## [0.5.0] - 2023-09-04

### Added
32 changes: 32 additions & 0 deletions docs/user_guide/advanced/prompt-customization.md
@@ -155,3 +155,35 @@ Optionally, the output from the LLM can be parsed using an *output parser*. The
Currently, the NeMo Guardrails toolkit includes prompts for `openai/text-davinci-003`, `openai/gpt-3.5-turbo`, `openai/gpt-4`, `databricks/dolly-v2-3b`, `cohere/command`, `cohere/command-light`, `cohere/command-light-nightly`.

**DISCLAIMER**: Evaluating and improving the provided prompts is a work in progress. We do not recommend deploying this alpha version using these prompts in a production setting.

## Custom Tasks and Prompts

To create a custom task beyond those included in
[the default tasks](../../../nemoguardrails/llm/types.py), include the task and its associated prompt as shown in the example below:

```yaml
prompts:
- task: summarize_text
content: |-
Text: {{ user_input }}
Summarize the above text.
```

Refer to ["Prompt Customization"](#prompt-customization) for where to include this custom task and prompt.
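The `content` field is a template: variables such as `{{ user_input }}` are substituted from the rendering context when the prompt is rendered. A minimal sketch of that substitution using `jinja2` directly (the Jinja-style `{{ ... }}` syntax matches the YAML above; the toolkit performs this rendering for you, so this is only an illustration):

```python
from jinja2 import Template

# The same template body as the `summarize_text` prompt above.
template = Template("Text: {{ user_input }}\nSummarize the above text.")

# Variables in the rendering context fill the {{ ... }} placeholders.
prompt = template.render(user_input="NeMo Guardrails adds programmable rails to LLMs.")
print(prompt)
# Text: NeMo Guardrails adds programmable rails to LLMs.
# Summarize the above text.
```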

Within an action, this prompt can be rendered via the `LLMTaskManager`:

```python
prompt = llm_task_manager.render_task_prompt(
task="summarize_text",
context={
"user_input": user_input,
},
)

with llm_params(llm, temperature=0.0):
check = await llm_call(llm, prompt)
...
```

With this approach, you can quickly modify custom tasks' prompts in your configuration files.
6 changes: 3 additions & 3 deletions nemoguardrails/llm/prompts.py
@@ -15,7 +15,7 @@

"""Prompts for the various steps in the interaction."""
import os
from typing import List
from typing import List, Union

import yaml

@@ -99,7 +99,7 @@ def _get_prompt(task_name: str, model: str, prompts: List) -> TaskPrompt:
raise ValueError(f"Could not find prompt for task {task_name} and model {model}")


def get_prompt(config: RailsConfig, task: Task) -> TaskPrompt:
def get_prompt(config: RailsConfig, task: Union[str, Task]) -> TaskPrompt:
"""Return the prompt for the given task."""
# Currently, we use the main model for all tasks
# TODO: add support to use different models for different tasks
@@ -108,7 +108,7 @@ def get_prompt(config: RailsConfig, task: Task) -> TaskPrompt:
task_model = config.models[0].engine
if config.models[0].model:
task_model += "/" + config.models[0].model
task_name = str(task.value)
task_name = str(task.value) if isinstance(task, Task) else task

prompts = _prompts + (config.prompts or [])
prompt = _get_prompt(task_name, task_model, prompts)
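The `Union[str, Task]` change above boils down to a single normalization step: a `Task` enum member is converted to its string value, while a plain string (a custom task name) passes through unchanged. A self-contained sketch of that pattern (the `Task` enum here is a simplified stand-in, not the toolkit's full list):

```python
from enum import Enum
from typing import Union


class Task(Enum):
    """Simplified stand-in for nemoguardrails.llm.types.Task."""

    GENERAL = "general"
    GENERATE_USER_INTENT = "generate_user_intent"


def task_name(task: Union[str, Task]) -> str:
    # Built-in tasks arrive as enum members; custom tasks as plain strings.
    return str(task.value) if isinstance(task, Task) else task


print(task_name(Task.GENERAL))      # general
print(task_name("summarize_text"))  # summarize_text
```

Accepting the union type means callers of `render_task_prompt` never need to extend the built-in enum just to name a task defined only in their configuration.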
2 changes: 1 addition & 1 deletion nemoguardrails/llm/taskmanager.py
@@ -177,7 +177,7 @@ def _render_messages(

def render_task_prompt(
self,
task: Task,
task: Union[str, Task],
context: Optional[dict] = None,
events: Optional[List[dict]] = None,
) -> Union[str, List[dict]]:
43 changes: 43 additions & 0 deletions tests/test_llm_task_manager.py
@@ -79,3 +79,46 @@ def test_openai_gpt_3_5_turbo_prompts(task):
)

assert isinstance(task_prompt, list)


@pytest.mark.parametrize(
"task, expected_prompt",
[
("summarize_text", "Text: test.\nSummarize the above text."),
("compose_response", "Text: test.\nCompose a response using the above text."),
],
)
def test_custom_task_prompts(task, expected_prompt):
"""Test the prompts for the OpenAI GPT-3 5 Turbo model with custom
prompts for custom tasks."""
config = RailsConfig.from_content(
yaml_content=textwrap.dedent(
"""
models:
- type: main
engine: openai
model: gpt-3.5-turbo
prompts:
- task: summarize_text
content: |-
Text: {{ user_input }}
Summarize the above text.
- task: compose_response
content: |-
Text: {{ user_input }}
Compose a response using the above text.
"""
)
)

assert config.models[0].engine == "openai"

llm_task_manager = LLMTaskManager(config)

user_input = "test."
task_prompt = llm_task_manager.render_task_prompt(
task=task,
context={"user_input": user_input},
)

assert task_prompt == expected_prompt
