Integrations restructure #10

Merged
merged 2 commits into from
Apr 20, 2023
7 changes: 6 additions & 1 deletion integrations/azure-translator.md
@@ -13,9 +13,14 @@ type: Custom Node
report_issue: https://github.com/recrudesce/haystack_translate_node/issues
---

## Include in your pipeline as follows:
# Azure Translate Nodes

This package allows you to use the Azure translation endpoints to translate the query and the answer separately. It's useful for scenarios where your dataset is in a different language from the one you expect the user query to be in. This way, the user query is translated into your dataset's language, and the answer is translated back into the user's language.

## Installation
Clone the repo, change into its directory, then run `pip install '.'`

## Usage
Include in your pipeline as follows:

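The translate-query/translate-answer flow described above can be sketched in plain Python. This is an illustration only: stub functions stand in for the Azure Translator API and the retrieval pipeline, and none of these names come from the package's real API.

```python
# Illustrative sketch only: stubs stand in for the Azure Translator API
# and the Haystack pipeline; none of these names come from the package.

def translate(text: str, target_lang: str) -> str:
    # Stub for the Azure Translator REST call.
    table = {("Hola", "en"): "Hello", ("Good day", "es"): "Buen dia"}
    return table.get((text, target_lang), text)

def run_pipeline(query: str) -> str:
    # Stub for the retrieval/QA pipeline over an English-language dataset.
    answers = {"Hello": "Good day"}
    return answers.get(query, "")

def answer_user(query: str, dataset_lang: str, user_lang: str) -> str:
    translated_query = translate(query, dataset_lang)  # user -> dataset language
    answer = run_pipeline(translated_query)            # pipeline runs in dataset language
    return translate(answer, user_lang)                # dataset -> user language

print(answer_user("Hola", dataset_lang="en", user_lang="es"))  # -> Buen dia
```

The real package wraps the same two translation steps in Haystack nodes so they can sit before and after the rest of your pipeline.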
116 changes: 116 additions & 0 deletions integrations/basic-agent-memory.md
@@ -0,0 +1,116 @@
---
layout: integration
name: Basic Agent Memory Tool
description: A working memory that stores the Agent's conversation memory
authors:
- name: Roland Tannous
socials:
github: rolandtannous
twitter: rolandtannous
- name: Xceron
socials:
github: Xceron
pypi: https://pypi.org/project/haystack-memory/
repo: https://github.com/rolandtannous/haystack-memory
type: Agent Tool
report_issue: https://github.com/rolandtannous/haystack-memory/issues
---

# Basic Haystack Memory Tool

This library implements a working memory that stores the Agent's conversation memory and a sensory memory that stores the agent's short-term sensory memory. The working memory can be used in-memory or through Redis, with the Redis implementation featuring a sliding window. The sensory memory, in contrast, is an in-memory implementation that mimics a human's brief sensory memory, lasting only for the duration of one interaction.

## Installation

- Python pip: ```pip install --upgrade haystack-memory```. This method will attempt to install the dependencies (farm-haystack>=1.15.0, redis)
- Python pip (skip dependency installation): Use ```pip install --upgrade haystack-memory --no-deps```
- Using git: ```pip install git+https://github.com/rolandtannous/haystack-memory.git@main#egg=haystack-memory```


## Usage

To use memory in your agent, you need three components:
- `MemoryRecallNode`: This node is added to the agent as a tool. It will allow the agent to remember the conversation and make query-memory associations.
- `MemoryUtils`: This class should be used to save the queries and the final agent answers to the conversation memory.
- `chat`: A method of the `MemoryUtils` class used to chat with the agent. It saves the query and the answer to the memory and returns the full result for further usage.

```py
from haystack.agents import Agent, Tool
from haystack.nodes import PromptNode
from haystack_memory.prompt_templates import memory_template
from haystack_memory.memory import MemoryRecallNode
from haystack_memory.utils import MemoryUtils

# Initialize the memory and the memory tool so the agent can retrieve the memory
working_memory = []
sensory_memory = []
memory_node = MemoryRecallNode(memory=working_memory)
memory_tool = Tool(name="Memory",
pipeline_or_node=memory_node,
description="Your memory. Always access this tool first to remember what you have learned.")

prompt_node = PromptNode(model_name_or_path="text-davinci-003",
api_key="<YOUR_OPENAI_KEY>",
max_length=1024,
stop_words=["Observation:"])
memory_agent = Agent(prompt_node=prompt_node, prompt_template=memory_template)
memory_agent.add_tool(memory_tool)

# Initialize the utils to save the query and the answers to the memory
memory_utils = MemoryUtils(working_memory=working_memory,sensory_memory=sensory_memory, agent=memory_agent)
result = memory_utils.chat("<Your Question>")
print(working_memory)
```

### Redis

The working memory can also be stored in a Redis database, which makes it possible to keep separate memories for multiple agents at the same time. Additionally, it supports a sliding window to utilize only the last k messages.

```py
from haystack.agents import Agent, Tool
from haystack.nodes import PromptNode
from haystack_memory.memory import RedisMemoryRecallNode
from haystack_memory.prompt_templates import memory_template
from haystack_memory.utils import RedisUtils

sensory_memory = []
# Initialize the memory and the memory tool so the agent can retrieve the memory
redis_memory_node = RedisMemoryRecallNode(memory_id="working_memory",
host="localhost",
port=6379,
db=0)
memory_tool = Tool(name="Memory",
pipeline_or_node=redis_memory_node,
description="Your memory. Always access this tool first to remember what you have learned.")
prompt_node = PromptNode(model_name_or_path="text-davinci-003",
api_key="<YOUR_OPENAI_KEY>",
max_length=1024,
stop_words=["Observation:"])
memory_agent = Agent(prompt_node=prompt_node, prompt_template=memory_template)
# Initialize the utils to save the query and the answers to the memory
redis_utils = RedisUtils(agent=memory_agent,
sensory_memory=sensory_memory,
memory_id="working_memory",
host="localhost",
port=6379,
db=0)
result = redis_utils.chat("<Your Question>")
```
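The sliding-window behavior itself is easy to picture; here is a minimal sketch using a `deque` (the package implements the window on top of Redis, not a deque):

```python
from collections import deque

# Keep only the last k messages -- the idea behind the Redis sliding window.
k = 3
window = deque(maxlen=k)
for message in ["q1", "a1", "q2", "a2", "q3"]:
    window.append(message)

print(list(window))  # -> ['q2', 'a2', 'q3']
```

Older messages fall out of the window automatically, so the agent's prompt only ever carries the most recent k entries.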


## Examples

Examples can be found in the `examples/` folder. They contain usage examples for both in-memory and Redis memory types.
To open the examples in colab, click on the following links:
- Basic Memory: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rolandtannous/HaystackAgentBasicMemory/blob/main/examples/example_basic_memory.ipynb)
- Redis Memory: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rolandtannous/HaystackAgentBasicMemory/blob/main/examples/example_redis_memory.ipynb)






4 changes: 3 additions & 1 deletion integrations/fastrag.md
@@ -12,6 +12,8 @@ type: Custom Node
report_issue: https://github.com/IntelLabs/fastRAG/issues
---

# fastRAG

fast**RAG** is a research framework designed to facilitate the building of retrieval augmented generative pipelines. Its main goal is to make retrieval augmented generation as efficient as possible through the use of state-of-the-art and efficient retrieval and generative models. The framework includes a variety of sparse and dense retrieval models, as well as different extractive and generative information processing models. fastRAG aims to provide researchers and developers with a comprehensive tool-set for exploring and advancing the field of retrieval augmented generation.

It includes custom nodes such as:
@@ -22,7 +24,7 @@ It includes custom nodes such as:
- Efficient document vector store (PLAID)
- Benchmarking scripts

## 📍 Installation
## Installation

Preliminary requirements:

10 changes: 7 additions & 3 deletions integrations/lemmatize.md
@@ -14,7 +14,9 @@ repo: https://github.com/recrudesce/haystack_lemmatize_node
type: Custom Node
report_issue: https://github.com/recrudesce/haystack_lemmatize_node/issues
---
## What is Lemmatization

## Lemmatization

Lemmatization is a text pre-processing technique used in natural language processing (NLP) models to break a word down to its root meaning to identify similarities. For example, a lemmatization algorithm would reduce the word "better" to its root word, or lemma, "good".

This node can be placed within a pipeline to lemmatize documents returned by a Retriever, prior to adding them as context to a prompt (for a PromptNode or similar).
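As a toy illustration of the idea, here is a hand-written lemma table; this is purely illustrative and not the node's actual implementation, which relies on an NLP library:

```python
# Toy lemmatizer: a hand-written lemma table, purely for illustration.
LEMMAS = {"better": "good", "running": "run", "mice": "mouse"}

def lemmatize(text: str) -> str:
    # Map each word to its lemma if known, else keep it (lowercased).
    return " ".join(LEMMAS.get(word.lower(), word.lower()) for word in text.split())

print(lemmatize("Better mice running"))  # -> good mouse run
```

A real lemmatizer derives these mappings from a dictionary and part-of-speech information rather than a fixed table.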
@@ -28,11 +30,13 @@ The process of lemmatizing the document content can potentially reduce the amoun
### After Lemmatization:
![image](https://user-images.githubusercontent.com/6450799/230404246-a8488a57-73bd-4420-9f1b-8a080b84121b.png)

## How to Use
## Installation

Clone the repo to a directory, change to that directory, then run `pip install '.'`. This will install the package to your Python libraries.

Then, include it in your pipeline - example as follows:
## Usage

Include it in your pipeline - example as follows:

```python
import logging
```
2 changes: 2 additions & 0 deletions integrations/qdrant-document-store.md
@@ -13,6 +13,8 @@ type: Document Store
report_issue: https://github.com/qdrant/qdrant-haystack/issues
---

# Qdrant DocumentStore

An integration of [Qdrant](https://qdrant.tech) vector database with [Haystack](https://haystack.deepset.ai/)
by [deepset](https://www.deepset.ai).

8 changes: 6 additions & 2 deletions integrations/veracity.md
@@ -10,15 +10,19 @@ repo: https://github.com/Xceron/haystack_veracity_node
type: Custom Node
report_issue: https://github.com/Xceron/haystack_veracity_node/issues
---
# Veracity Node

This Node checks whether the given input is correctly answered by the given context (as judged by the given LLM). One example usage is together with [Haystack Memory](https://github.com/rolandtannous/haystack-memory): after the memory is retrieved, the given model checks whether the output satisfies the question.

**Important**:
The Node expects the context to be passed into `results`. If the previous node in the pipeline is putting the text somewhere else, use a [Shaper](https://docs.haystack.deepset.ai/docs/shaper) to `rename` the argument to `results`.
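What the Shaper step accomplishes can be sketched in plain Python; this is an illustration of the renaming, not Haystack's actual `Shaper` API:

```python
# Illustration only: the upstream node emits its text under one key,
# while VeracityNode expects to find it under "results".
def rename_key(payload: dict, source: str, target: str) -> dict:
    out = dict(payload)
    out[target] = out.pop(source)
    return out

node_output = {"documents": ["Paris is the capital of France."]}
shaped = rename_key(node_output, "documents", "results")
print(shaped)  # -> {'results': ['Paris is the capital of France.']}
```

In an actual pipeline the Shaper performs this rename between the two nodes, so VeracityNode receives the context under the key it expects.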

## How to Use
## Installation

Clone the repo to a directory, change to that directory, then run `pip install '.'`. This will install the package to your Python libraries.

## Example Usage with Haystack Memory
## Usage
### Example Usage with Haystack Memory
```py
from haystack_veracity_node.node import VeracityNode
from haystack_memory.memory import RedisMemoryRecallNode
```