
Integrations restructure #10

Merged (2 commits) on Apr 20, 2023

Changes from 1 commit
standardizing the integrations descriptions
TuanaCelik committed Apr 18, 2023
commit cd28165bbb6128a68ee29b0de8795c46137c12ca
7 changes: 6 additions & 1 deletion integrations/azure-translator.md
@@ -13,9 +13,14 @@ type: Custom Node
report_issue: https://github.com/recrudesce/haystack_translate_node/issues
---

## Include in your pipeline as follows:
# Azure Translate Nodes

This package allows you to use the Azure translation endpoints to translate the query and the answer separately. It is useful when your dataset is in a different language from the one your users query in: the user query is translated into your dataset's language, and the answer is translated back into the user's language.
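The query-and-answer translation flow described above can be sketched in plain Python. Note that `translate`, `answer_in_german`, and `ask` below are hypothetical stand-ins for illustration only, not this package's actual API:

```python
# Sketch of the translate-query / translate-answer flow.
# `translate` is a hypothetical stand-in for a call to an Azure
# Translator endpoint; it is NOT this package's API.

def translate(text: str, source: str, target: str) -> str:
    # Toy dictionary "translation" for illustration only.
    toy = {("en", "de"): {"what is haystack?": "was ist haystack?"},
           ("de", "en"): {"ein nlp-framework": "an nlp framework"}}
    return toy.get((source, target), {}).get(text.lower(), text)

def answer_in_german(query_de: str) -> str:
    # Stand-in for a QA pipeline over a German-language dataset.
    return "ein nlp-framework"

def ask(query_en: str) -> str:
    query_de = translate(query_en, "en", "de")   # query -> dataset language
    answer_de = answer_in_german(query_de)       # answer in dataset language
    return translate(answer_de, "de", "en")      # answer -> user's language

print(ask("What is Haystack?"))  # an nlp framework
```

The real nodes wrap the same two translation steps around whatever sits between them in your pipeline.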

## Installation
Clone the repo, change into its directory, then run `pip install .`

## Usage
Include in your pipeline as follows:

```python
# ... (example truncated in diff view)
```
4 changes: 3 additions & 1 deletion integrations/fastrag.md
@@ -12,6 +12,8 @@ type: Custom Node
report_issue: https://github.com/IntelLabs/fastRAG/issues
---

# fastRAG

fast**RAG** is a research framework designed to facilitate building retrieval-augmented generative pipelines. Its main goal is to make retrieval-augmented generation as efficient as possible through the use of state-of-the-art, efficient retrieval and generative models. The framework includes a variety of sparse and dense retrieval models, as well as different extractive and generative information processing models. fastRAG aims to provide researchers and developers with a comprehensive toolset for exploring and advancing the field of retrieval-augmented generation.

It includes custom nodes such as:
@@ -22,7 +24,7 @@ It includes custom nodes such as:
- Efficient document vector store (PLAID)
- Benchmarking scripts
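As a rough illustration of the retrieval-augmented pattern fastRAG implements, here is a toy sketch using naive word-overlap retrieval and a stubbed generator; the data, `retrieve`, and `generate` names are made up for illustration and are not fastRAG's API:

```python
# Toy retrieval-augmented generation: retrieve the most relevant
# document by word overlap, then build a prompt around it.
# Illustrative only; fastRAG uses real retrievers and generators.

docs = [
    "PLAID is an efficient engine for late-interaction retrieval",
    "Benchmarking scripts measure latency and accuracy",
]

def retrieve(query: str, documents: list[str]) -> str:
    # Score each document by how many query words it shares.
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def generate(prompt: str) -> str:
    # Stand-in for a generative model call.
    return f"Answer based on: {prompt}"

query = "efficient retrieval engine"
context = retrieve(query, docs)
print(generate(f"Context: {context}\nQuestion: {query}"))
```

fastRAG's contribution is making each of these stages (retrieval, ranking, generation) efficient with real models and stores such as PLAID.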

## 📍 Installation
## Installation

Preliminary requirements:

10 changes: 7 additions & 3 deletions integrations/lemmatize.md
@@ -14,7 +14,9 @@ repo: https://github.com/recrudesce/haystack_lemmatize_node
type: Custom Node
report_issue: https://github.com/recrudesce/haystack_lemmatize_node/issues
---
## What is Lemmatization

## Lemmatization

Lemmatization is a text pre-processing technique used in natural language processing (NLP) to reduce a word to its root form, or lemma, so that related word forms can be identified as similar. For example, a lemmatization algorithm would reduce the word "better" to its lemma, "good".

This node can be placed within a pipeline to lemmatize documents returned by a Retriever, prior to adding them as context to a prompt (for a PromptNode or similar).
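A minimal illustration of the idea, using a tiny hand-made lemma table (real lemmatizers, including the one this node uses, rely on full morphological dictionaries):

```python
# Tiny illustrative lemma lookup; real lemmatizers use full
# morphological dictionaries, not a hand-made table like this.
LEMMAS = {"better": "good", "best": "good", "running": "run", "ran": "run"}

def lemmatize(text: str) -> str:
    # Lowercase, split on whitespace, and map each word to its lemma
    # when the table knows one, otherwise keep the word as-is.
    return " ".join(LEMMAS.get(w, w) for w in text.lower().split())

print(lemmatize("Running is better"))  # run is good
```

Collapsing inflected forms this way is what can shrink the token count of retrieved documents before they are added to a prompt.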
@@ -28,11 +30,13 @@ The process of lemmatizing the document content can potentially reduce the amoun
### After Lemmatization:
![image](https://user-images.githubusercontent.com/6450799/230404246-a8488a57-73bd-4420-9f1b-8a080b84121b.png)

## How to Use
## Installation

Clone the repo to a directory, change to that directory, then run `pip install .`. This installs the package into your Python environment.

Then, include it in your pipeline - example as follows:
## Usage

Include it in your pipeline, as in the following example:

```python
import logging
# ... (example truncated in diff view)
```
2 changes: 2 additions & 0 deletions integrations/qdrant-document-store.md
@@ -13,6 +13,8 @@ type: Document Store
report_issue: https://github.com/qdrant/qdrant-haystack/issues
---

# Qdrant DocumentStore

An integration of [Qdrant](https://qdrant.tech) vector database with [Haystack](https://haystack.deepset.ai/)
by [deepset](https://www.deepset.ai).

8 changes: 6 additions & 2 deletions integrations/veracity.md
@@ -10,15 +10,19 @@ repo: https://github.com/Xceron/haystack_veracity_node
type: Custom Node
report_issue: https://github.com/Xceron/haystack_veracity_node/issues
---
# Veracity Node

This node checks whether the given input is correctly answered by the given context, as judged by the given LLM. One example use is together with [Haystack Memory](https://github.com/rolandtannous/haystack-memory): after the memory is retrieved, the given model checks whether the output answers the question satisfactorily.

**Important**:
The Node expects the context to be passed into `results`. If the previous node in the pipeline is putting the text somewhere else, use a [Shaper](https://docs.haystack.deepset.ai/docs/shaper) to `rename` the argument to `results`.
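Conceptually, the check does something like the following. The `llm_judge` and `is_answered` functions are hypothetical stand-ins for illustration, not this package's API:

```python
# Conceptual veracity check: ask a judge model whether the given
# context answers the question. `llm_judge` is a hypothetical
# stand-in for an LLM call.

def llm_judge(prompt: str) -> str:
    # A real implementation would call an LLM here; this toy judge
    # just checks for a keyword so the sketch is runnable.
    return "yes" if "Berlin" in prompt else "no"

def is_answered(question: str, results: list[str]) -> bool:
    # Note the context arrives in `results`, as described above.
    prompt = (f"Question: {question}\n"
              f"Context: {' '.join(results)}\n"
              "Does the context answer the question? Reply yes or no.")
    return llm_judge(prompt).strip().lower().startswith("yes")

print(is_answered("What is the capital of Germany?",
                  ["Berlin is the capital of Germany."]))  # True
```

The real node performs the same question-plus-context judgment with the LLM you configure.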

## How to Use
## Installation

Clone the repo to a directory, change to that directory, then run `pip install .`. This installs the package into your Python environment.

## Example Usage with Haystack Memory
## Usage
### Example Usage with Haystack Memory
```py
from haystack_veracity_node.node import VeracityNode
from haystack_memory.memory import RedisMemoryRecallNode
# ... (example truncated in diff view)
```