No-strings tiny Chain-of-Thought framework for your Large Language Model (LLM) that saves you time ⏰ and money 💰
It applies a sequence of prompts to your data in `csv` / `json` / `sqlite` format in order to expand the table and export it.
Features (TODO):
- First feature.
- Second feature.
- Third feature.
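The "expand the table" idea above can be sketched in a few lines of Python. This is a minimal illustration, not the framework's actual implementation: `ask_llm` is a stand-in for a real model call, and the prompt layout is assumed for the example.

```python
# Minimal sketch of table expansion: each named prompt in the chain
# appends one new column to every row. `ask_llm` is a hypothetical
# stand-in for an actual LLM call.
def ask_llm(prompt: str) -> str:
    return f"answer({prompt})"

def expand_rows(rows, prompts):
    """Apply each named prompt to every row; store answers as new columns."""
    for row in rows:
        for name, template in prompts.items():
            row[name] = ask_llm(template.format(**row))
    return rows

rows = [{"text": "The movie was great"}]
prompts = {"sentiment": "What is the sentiment of: {text}?"}
expand_rows(rows, prompts)
# each row now also carries a "sentiment" column
```

The expanded rows can then be written back out with `csv.DictWriter` (or the JSON/SQLite equivalent).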
Just two simple steps:
- Define your sequence of prompts with their dependencies.
  - For example: Three-hop Reasoning in Implicit CoT for sentiment analysis at `data/thor_cot_schema.json`
- Launch inference:
```bash
python infer.py \
    --model "google/flan-t5-base" \
    --schema "data/thor_cot_schema.json" \
    --prompt "rusentne2023_default_en" \
    --device "cpu" \
    --temp 0.1 \
    --output "data/output.csv" \
    --max-length 512 \
    --hf-token "<YOUR_HUGGINGFACE_TOKEN>" \
    --openai-token "<YOUR_OPENAI_TOKEN>" \
    --limit 10000 \
    --limit-prompt 10000 \
    --bf16 \
    --l4b
```
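The "prompts with dependencies" idea behind the schema can be sketched as follows. Note the schema layout shown here is an assumption for illustration only, not the actual format of `data/thor_cot_schema.json`; later prompts reference earlier answers by name, in the spirit of Three-hop Reasoning (THoR).

```python
# Hypothetical sketch of resolving a chained prompt schema: each answer
# becomes a named variable available to later prompts via {placeholders}.
# The keys and templates below are illustrative, not the real schema.
schema = {
    "aspect":   "What aspect is discussed in: {text}?",
    "opinion":  "Given the aspect '{aspect}', what opinion is expressed in: {text}?",
    "polarity": "Given the opinion '{opinion}', what is the sentiment polarity of: {text}?",
}

def run_chain(text: str, schema: dict, ask) -> dict:
    """Run prompts in order; each answer feeds the prompts that depend on it."""
    context = {"text": text}
    for name, template in schema.items():
        context[name] = ask(template.format(**context))
    return context

# Stub model so the sketch runs without an actual LLM.
result = run_chain("The camera is superb", schema, ask=lambda p: f"<{p[:10]}...>")
```

Because each step's output is substituted into the templates of later steps, the order of entries encodes the dependency chain.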