From bc553a0c160ddbc85500a22b483e0da538b23100 Mon Sep 17 00:00:00 2001
From: Traian Rebedea
Date: Mon, 13 Nov 2023 13:52:27 +0200
Subject: [PATCH] Fix typos.

---
 docs/getting_started/installation-guide.md        | 2 +-
 examples/configs/llm/hf_pipeline_llama2/README.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/getting_started/installation-guide.md b/docs/getting_started/installation-guide.md
index 4db68644f..9d6bf864f 100644
--- a/docs/getting_started/installation-guide.md
+++ b/docs/getting_started/installation-guide.md
@@ -92,7 +92,7 @@ For each feature or LLM example, check the readme files associated with it.
 ## Extra dependencies
 
 The following extra dependencies are defined:
-- `dev`: packages requires by some extra Guardrails features for developers (e.g. autoreload feature).
+- `dev`: packages required by some extra Guardrails features for developers (e.g. autoreload feature).
 - `eval`: packages used for the Guardrails [evaluation tools](../../nemoguardrails/eval/README.md).
 - `all`: install all extra packages.
 
diff --git a/examples/configs/llm/hf_pipeline_llama2/README.md b/examples/configs/llm/hf_pipeline_llama2/README.md
index 0d6754e84..3106062c5 100644
--- a/examples/configs/llm/hf_pipeline_llama2/README.md
+++ b/examples/configs/llm/hf_pipeline_llama2/README.md
@@ -3,7 +3,7 @@
 This configuration uses the HuggingFace Pipeline LLM with various llama2 models, including 7B and 13B, e.g. [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf).
 
 Note that in order to use community models such as llama2, one will need to first go to [huggingface-llama2](https://huggingface.co/meta-llama).
-After receiving access to general llama2 models, one still needs to go to the specific model page on Huggingface, e.g. e.g. [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf), to be granted access on HF to that specific model.
+After receiving access to general llama2 models, one still needs to go to the specific model page on Huggingface, e.g. [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf), to be granted access on HF to that specific model.
 
 Before running this rail, you need to set the environment via export HF_TOKEN="Your_HuggingFace_Access_Token" , [read more on access token](https://huggingface.co/docs/hub/security-tokens).
 